      Key Features

      GMEM

      Generalized Memory Management (GMEM) provides an optimized memory management solution for AI-oriented OSs and a centralized mechanism for managing interconnected heterogeneous memory. GMEM reworks the memory management architecture of the Linux kernel: its logical mapping system hides the differences in how the CPU and accelerators access memory addresses, and the Remote Pager memory message interaction framework provides a device access abstraction layer. Within the unified address space, GMEM automatically migrates data to the OS or the accelerator when the data needs to be accessed or paged. GMEM APIs are consistent with native Linux memory management APIs and offer high usability, performance, and portability.

      • Logical mapping system: GMEM high-level APIs in the kernel allow the accelerator driver to directly obtain memory management functions and create logical page tables. The logical page tables decouple the high-layer memory management logic from the CPU hardware layer, abstracting it so that it can be reused by various accelerators.

      • Remote Pager: This framework provides the message channel, process management, memory swap, and memory prefetch modules for interaction between the host and accelerator. The remote_pager abstraction layer simplifies device adaptation, enabling third-party accelerators to easily access the GMEM system.

      • User APIs: Users can allocate unified virtual memory directly through the OS memory map (mmap) call. GMEM adds the MMAP_PEER_SHARED flag to the mmap system call for allocating unified virtual memory. The libgmem user-mode library provides the hmadvise API with memory prefetch semantics to help users optimize accelerator memory access efficiency. A minimal allocation sketch follows.
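      The following C sketch illustrates how unified virtual memory might be allocated with the MMAP_PEER_SHARED flag described above. The flag value used here is an assumption for illustration (on a real openEuler 23.09 system it would come from a GMEM/libgmem header), and hmadvise is only referenced in a comment because its exact signature is not given in this document.

          /* Hedged sketch: allocating GMEM unified virtual memory via mmap.
           * MMAP_PEER_SHARED is the flag described above; the value below is a
           * placeholder for illustration, not the real kernel definition. */
          #include <stdio.h>
          #include <string.h>
          #include <sys/mman.h>

          #ifndef MMAP_PEER_SHARED
          #define MMAP_PEER_SHARED 0x8000000  /* illustrative placeholder value */
          #endif

          int main(void)
          {
              size_t len = 1UL << 21;  /* request 2 MiB of unified virtual memory */
              void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS | MMAP_PEER_SHARED, -1, 0);
              if (buf == MAP_FAILED) {
                  perror("mmap");
                  return 1;
              }

              memset(buf, 0, len);  /* the CPU touches the pages first */
              /* libgmem's hmadvise() could prefetch this range to the accelerator;
               * its signature is not shown here and would come from libgmem. */

              munmap(buf, len);
              return 0;
          }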

      Native Support for Open Source Large Language Models (LLaMA and ChatGLM)

      The two model inference frameworks, llama.cpp and chatglm-cpp, are implemented in C/C++. They allow users to deploy and use open source large language models on CPUs by means of model quantization. llama.cpp supports the deployment of multiple open source LLMs, such as LLaMA, LLaMA 2, and Vicuna. chatglm-cpp supports the deployment of multiple open source Chinese LLMs, such as ChatGLM-6B, ChatGLM2-6B, and Baichuan-13B.

      • Both frameworks are implemented in C/C++ based on GGML.

      • They reduce memory usage and enable efficient CPU inference through int4/int8 quantization, an optimized KV cache, and parallel computing.

      Features in the openEuler Kernel 6.4

      openEuler 23.09 runs on Linux kernel 6.4 and inherits the competitive advantages of community versions and the innovative features released in the openEuler community.

      • Tidal affinity scheduling: The system dynamically adjusts CPU affinity based on the service load. When the load is light, it prefers a subset of CPUs to enhance resource locality. When the load is heavy, it brings additional CPU cores online to improve the QoS.

      • CPU QoS priority-based load balancing: CPU QoS isolation is enhanced in online and offline hybrid deployments, and QoS load balancing across CPUs is supported to further reduce QoS interference from offline services.

      • Simultaneous multithreading (SMT) expeller free of priority inversion: This feature resolves the priority inversion problem in the SMT expeller feature and reduces the impact of offline tasks on the quality of service (QoS) of online tasks.

      • Multiple priorities in a hybrid deployment: Each cgroup can have a cpu.qos_level value that ranges from -2 to 2. You can set qos_level_weight to assign a different priority to each cgroup and allocate CPU resources to each cgroup based on its CPU usage. This feature also supports wakeup preemption. (See the configuration sketch after this list.)

      • Programmable scheduling: The programmable scheduling framework based on eBPF allows the kernel scheduler to dynamically expand scheduling policies to meet performance requirements of different loads.

      • NUMA-aware spinlock: The lock transfer algorithm is optimized for the multi-NUMA system based on the MCS spinlock. The lock is preferentially transferred within the local NUMA node, greatly reducing cross-NUMA cache synchronization and ping-pong. As a result, the overall lock throughput is increased and service performance is improved.

      • TCP compression: The data of specified ports can be compressed at the TCP layer before transmission. After the data is received, it is decompressed and transferred to the user mode. TCP compression accelerates data transmission between nodes.

      • Kernel live patches: Kernel live patches fix bugs in kernel functions without a system restart by dynamically replacing functions at run time. Live patches on openEuler work by modifying instructions: they jump directly to the new function without lookup and redirection, so patching is highly efficient, whereas live patches in the Linux mainline kernel are based on ftrace.

      • Sharepool shared memory: This technology allows multiple processes to access the same memory area to share data.

      • Memcg asynchronous reclamation: This optimized mechanism asynchronously reclaims memory in the memcg memory subsystem when the system load is low to avoid memory reclamation delay when the system load becomes heavy.

      • filescgroup: The filescgroup subsystem manages the number of files (that is, the number of handles) opened by a process group and provides easy-to-call APIs for resource management. Compared with the rlimit method, filescgroup offers better control over resource application and release, dynamic adjustment of resource usage, and group-level control of file handles. (See the configuration sketch after this list.)

      • Cgroup writeback for cgroup v1: Cgroup writeback provides a flexible method to manage the writeback behavior of the file system cache. The main functions of cgroup writeback include cache writeback control, I/O priority control, and writeback policy adjustment.

      • Core suspension detection: If the performance monitoring unit (PMU) stops counting, the hard lockup detector cannot detect a system hang. The core suspension detection feature enables each CPU to check whether adjacent CPUs are suspended, so the system can self-heal even when some CPUs hang because interrupts are disabled.
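      The following C sketch shows one way the cgroup interfaces mentioned above might be configured for a hybrid deployment. The cpu.qos_level range (-2 to 2), qos_level_weight, and the filescgroup handle limit come from the descriptions above, but the exact file names and cgroup mount paths used here are assumptions for illustration and may differ on a real openEuler 23.09 system.

          /* Hedged sketch: writing hybrid-deployment QoS settings and a
           * file-handle limit through cgroup v1 interface files. Paths and
           * file names are illustrative assumptions, not verified interfaces. */
          #include <stdio.h>

          static int write_value(const char *path, const char *val)
          {
              FILE *f = fopen(path, "w");
              if (!f) {
                  perror(path);
                  return -1;
              }
              fputs(val, f);
              return fclose(f);
          }

          int main(void)
          {
              /* Multiple priorities in a hybrid deployment: mark the "offline"
               * group as lowest priority (documented range: -2 to 2). */
              write_value("/sys/fs/cgroup/cpu/offline/cpu.qos_level", "-2");

              /* qos_level_weight tunes CPU allocation per priority; the file
               * name used here is an assumption. */
              write_value("/sys/fs/cgroup/cpu/offline/cpu.qos_level_weight", "1");

              /* filescgroup: cap the number of file handles the group may open;
               * the files.limit name is an assumption. */
              write_value("/sys/fs/cgroup/files/offline/files.limit", "4096");

              return 0;
          }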

      Embedded

      openEuler 23.09 Embedded is equipped with an embedded virtualization base that is available in the Jailhouse virtualization solution or the OpenAMP lightweight hybrid deployment solution. You can select the solution that best suits your services. This version also supports the Robot Operating System (ROS) Humble version, which integrates core software packages such as ros-core, ros-base, and simultaneous localization and mapping (SLAM) to meet the ROS 2 runtime requirements.

      • Southbound ecosystem: openEuler Embedded Linux supports AArch64 and x86-64 chip architectures and related hardware such as RK3568, Hi3093, RK3399, RK3588, Raspberry Pi 4B, and x86-64 industrial computers. It preliminarily supports AArch32 and RISC-V chip architectures based on QEMU simulation.

      • Embedded elastic virtualization base: The converged elastic base of openEuler Embedded is a collection of technologies used to enable multiple OSs or runtimes to run on a system-on-a-chip (SoC). These technologies include bare metal, embedded virtualization, lightweight containers, LibOS, trusted execution environment (TEE), and heterogeneous deployment.

      • Mixed criticality deployment framework: The mixed-criticality (MICA) deployment framework is built on the converged elastic base. The unified framework masks the differences between the technologies used in the underlying converged elastic base, enabling Linux to be deployed together with other OSs.

      • Northbound ecosystem: More than 350 common embedded software packages can be built using openEuler. The ROS 2 Humble version is supported, which contains core software packages such as ros-core, ros-base, and SLAM. The ROS SDK is provided to simplify embedded ROS development. The soft real-time capability based on Linux kernel 5.10 allows for response to soft real-time interrupts within microseconds. DSoftBus and HiChain for point-to-point authentication of OpenHarmony have been integrated to implement interconnection between openEuler-based embedded devices and between openEuler-based embedded devices and OpenHarmony-based devices.

      • UniProton: This hard RTOS features ultra-low latency and flexible MICA deployments. It is suited for industrial control because it supports both microcontroller units and multi-core CPUs.

      SysCare

      SysCare is system-level hotfix software that provides security patches and hot fixing for OSs. It can fix system errors without restarting hosts. By combining kernel-mode and user-mode hot patching, SysCare takes over system repair, allowing users to focus on core services. In the future, OS hot upgrade will be provided to further free O&M personnel and improve O&M efficiency.

      Patches built in containers:

      • eBPF is used to monitor the compiler process. In this way, hot patch change information can be obtained in pure user mode without creating character devices, and users can compile hot patches in multiple containers concurrently.

      • Users can install different RPM packages (syscare-build-kmod or syscare-build-ebpf) to use the kernel module (ko) or eBPF implementation. The syscare-build process automatically adapts to the corresponding underlying implementation.

      GCC for openEuler

      GCC for openEuler is developed based on the open source GCC 12.3 and supports features such as automatic feedback-directed optimization (FDO), software and hardware collaboration, memory optimization, SVE, and vectorized math libraries.

      • The default language standards are upgraded to C17/C++17, and more hardware architecture features, such as the Armv9-A architecture and x86 AVX512-FP16, are supported.

      • GCC for openEuler supports structure optimization and instruction selection optimization, fully utilizing the hardware features of the Arm architecture to achieve higher operating efficiency. In benchmark tests such as SPEC CPU 2017, GCC for openEuler delivers much better performance than GCC 10.3 from the upstream community.

      • GCC for openEuler also supports automatic FDO to greatly improve the performance of the MySQL database at the application layer.

      A-Ops

      The amount of data generated by IT infrastructure and applications increases 2- to 3-fold every year. Big data and machine learning technologies are maturing, driving the development of efficient, intelligent O&M systems that help enterprises reduce costs and improve efficiency. A-Ops is an intelligent O&M framework that provides basic capabilities such as CVE management, exception detection (database scenarios), and quick troubleshooting to reduce O&M costs.

      • Intelligent patch management: Supports the patch service, kernel hot fixes, intelligent patch inspection, and hybrid management of cold and hot patches.

      • Exception detection: Detects network I/O delays, packet loss, interruption, and high disk I/O loads in MySQL and openGauss service scenarios.

      • Configuration source tracing: Provides cluster configuration collection and baseline capabilities to implement manageable and controllable configurations. The configuration of the entire cluster is checked and compared with the baseline in real time to quickly identify unauthorized configuration changes and locate faults.

      A-Ops gala

      The gala project will fully support fault diagnosis in Kubernetes scenarios, including application drill-down analysis, observable microservice and DB performance, cloud-native network monitoring, cloud-native performance profiling, process performance diagnosis, and minute-level diagnosis of five types of OS issues (network, drive, process, memory, and scheduling).

      • Easy deployment of the Kubernetes environment: gala-gopher can be deployed as a DaemonSet, and a gala-gopher instance is deployed on each worker node. gala-spider and gala-anteater are deployed as containers on the Kubernetes management node.

      • Application drill-down analysis: Diagnoses subhealth problems in cloud native scenarios and demarcates problems between applications and the cloud platform in minutes.

      • Full-stack monitoring: Provides application-oriented refined monitoring for cross-software stacks, including the language runtime (JVM), glibc, system call, and kernel (TCP, I/O, and scheduling), and allows users to view the impact of system resources on applications in real time.

      • Full-link monitoring: Provides network traffic topology (TCP and RPC) and software deployment topology information, and builds a system 3D topology based on the information to accurately show the resource scope on which applications depend and quickly identify the fault radius.

      • Causal AI model: Provides visualized root cause derivation to demarcate faults to resource nodes in minutes.

      • Observable microservice and DB performance: Provides non-intrusive microservice, DB, and HTTP1.x access performance observation, including the throughput, latency, and error rate; and supports refined API observation and HTTP TRACE to view abnormal HTTP requests.

      • Observable PostgreSQL access performance: Observes performance in terms of the throughput, latency, and error rate; and supports refined SQL access observation and slow SQL trace to show SQL statements of slow SQL queries.

      • Cloud-native application performance profiling: Provides a non-intrusive and zero-modification cross-stack profiling analysis tool and can connect to the common UI front end of Pyroscope.

      • Cloud-native network monitoring: Provides TCP, socket, and DNS monitoring for Kubernetes scenarios for more refined network monitoring.

      • Process performance diagnosis: Provides process-level performance problem diagnosis for middleware (such as MySQL and Redis) in cloud native scenarios, monitors process performance KPIs and process-related system-layer metrics (such as I/O, memory, and TCP), and detects process performance KPI exceptions and system-layer metrics that affect the KPIs.

      sysMaster

      sysMaster is a collection of ultra-lightweight and highly reliable service management programs. It provides an innovative implementation of PID 1 to replace the conventional init process. Written in Rust, sysMaster is equipped with fault monitoring, second-level self-recovery, and quick startup capabilities, which help improve OS reliability and service availability. In version 0.5.0, sysMaster can manage system services in container and VM scenarios.

      • Added the devMaster component to manage device hot swap.

      • Added the live update and hot reboot functions to sysMaster.

      • Allowed PID 1 to run on a VM.

      utsudo

      utsudo uses Rust to reimplement sudo, delivering a more efficient, secure, and flexible privilege escalation tool. The involved modules include the common utilities, overall framework, and function plugins.

      • Access control: Restricts the commands that can be executed by users as required, and specifies the required authentication method.

      • Audit log: Records and traces the commands and tasks executed by each user using utsudo.

      • Temporary privilege escalation: Allows common users to temporarily escalate their privileges to those of the superuser by entering their passwords, so that they can execute specific commands or tasks.

      • Flexible configuration: Allows users to set arguments such as command aliases, environment variables, and execution parameters to meet complex system management requirements.

      utshell

      utshell is a new shell that inherits the usage habits of Bash. It interacts with users through the command line, responding to user operations by executing commands and providing feedback. In addition, it can execute automated scripts to facilitate O&M.

      • Command execution: Runs commands deployed on the user's machine and sends return values to the user.

      • Batch processing: Automates task execution using scripts.

      • Job control: Executes multiple user commands concurrently as background jobs, and manages and controls these concurrent tasks.

      • Historical records: Records the commands entered by users.

      • Command aliases: Allows users to create aliases for commands to customize their operations.

      migration-tools

      Developed by UnionTech Software Technology Co., Ltd., migration-tools is oriented to users who want to quickly, smoothly, stably, and securely migrate services to the openEuler OS. migration-tools consists of the following modules:

      • Server module: It is developed on the Python Flask web framework. As the core of migration-tools, it receives task requests, processes execution instructions, and distributes the instructions to each Agent.

      • Agent module: It is installed in the OS to be migrated to receive task requests from the Server module and perform migration.

      • Configuration module: It reads configuration files for the Server and Agent modules.

      • Log module: It records logs during migration.

      • Migration evaluation module: It provides evaluation reports such as basic environment check, software package comparison analysis, and ABI compatibility check before migration, providing a basis for users' migration work.

      • Migration function module: It provides migration with a few clicks, displays the migration progress, and checks the migration result.

      DDE

      DDE focuses on delivering polished user interaction and visual design. DDE is powered by independently developed core technologies for desktop environments and provides login, screen locking, desktop and file manager, launcher, dock, window manager, control center, and more functions. As one of the preferred desktop environments, DDE features a user-friendly interface, elegant interaction, high reliability, and privacy protection. You can use DDE to work more creatively and efficiently or enjoy media entertainment while keeping in touch with friends.

      Kmesh

      Based on the programmable kernel, Kmesh offloads service governance to the OS, thus shortening the inter-service communication latency to only 1/5 of the industry average.

      • Kmesh can connect to a mesh control plane (such as Istio) that complies with the Dynamic Resource Discovery (xDS) protocol.

      • Traffic orchestration: Polling and other load balancing policies, L4 and L7 routing support, and backend service policies available in percentage mode are supported.

      • Sockmap for mesh acceleration: Take the typical service mesh scenario as an example. When a sockmap is used, the eBPF program takes over the communication between service containers and Envoy containers, shortening the communication path to achieve mesh acceleration. The eBPF program can also accelerate the communication between pods on the same node, as sketched below.
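      The following eBPF C fragment is a minimal, hedged sketch of the generic sockmap redirection technique that this kind of acceleration relies on; it is not Kmesh source code. The map layout and the key scheme are illustrative assumptions.

          /* Hedged sketch of sockmap-based redirection: an sk_msg program
           * short-circuits data between sockets stored in a BPF sockhash map
           * instead of sending it through the full kernel network stack. */
          #include <linux/bpf.h>
          #include <bpf/bpf_helpers.h>

          struct {
              __uint(type, BPF_MAP_TYPE_SOCKHASH);
              __uint(max_entries, 65536);
              __type(key, __u32);
              __type(value, __u32);
          } sock_map SEC(".maps");   /* sockets are added by a sock_ops program (not shown) */

          SEC("sk_msg")
          int msg_redirect(struct sk_msg_md *msg)
          {
              __u32 key = msg->remote_port;   /* illustrative key scheme */

              /* Redirect the message to the peer socket stored under this key. */
              return bpf_msg_redirect_hash(msg, &sock_map, &key, BPF_F_INGRESS);
          }

          char LICENSE[] SEC("license") = "GPL";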

      RISC-V QEMU

      openEuler 23.09 is released with support for the RISC-V architecture. It aims to provide basic support for upper-layer applications and is highly customizable, flexible, and secure. It provides a stable and reliable operating environment for RISC-V computing platforms, facilitating the installation and verification of upper-layer applications and promoting the enrichment and quality improvement of the RISC-V software ecosystem.

      • The OS kernel is updated to version 6.4.0, which is consistent with mainstream architectures.

      • It features a stable base, including core functions such as processor management, memory management, task scheduling, and device drivers, as well as common utilities.

      For details about the usage, see Starting openEuler for RISC-V in QEMU.

      DIM

      Dynamic Integrity Measurement (DIM) measures key data (such as code segments) in memory during program running and compares the measurement result with the reference value to determine whether the data in memory has been tampered with. In this way, attacks can be detected and countermeasures can be taken.

      • Measures user-mode processes, kernel modules, and code segments in kernel memory.

      • Extends measurements to the PCR register of the TPM 2.0 chip for remote attestation.

      • Configures measurements and verifies measurement signatures.

      • Generates and imports measurement baseline data using tools, and verifies baseline data signatures.

      • Supports the SM3 algorithm.

      Kuasar

      Kuasar is a container runtime that supports unified management of multiple types of sandboxes and multiple mainstream sandbox isolation technologies. Based on the Kuasar container runtime combined with the iSulad container engine and the StratoVirt virtualization engine, openEuler builds lightweight, full-stack, self-developed secure containers for cloud native scenarios, delivering ultra-low overhead and ultra-fast startup as key competitive strengths.

      Kuasar 0.1.0 supports the StratoVirt lightweight VM sandbox and StratoVirt secure container instances created through Kubernetes+iSulad.

      • Compatible with the Kubernetes ecosystem when the iSulad container engine interconnects with the Kuasar container runtime.

      • Secure container sandboxes based on the StratoVirt lightweight VM sandbox.

      • StratoVirt secure containers for precise resource restriction and management.

      sysBoost

      sysBoost is a tool for optimizing the system microarchitecture for applications. The optimization involves assembly instructions, code layout, data layout, memory huge pages, and system calls.

      • Binary file merging: Only full static merging is supported. Applications and their dependent dynamic libraries are merged into one binary file, and segment-level reordering is performed. Multiple discrete code segments or data segments are merged into one to improve application performance.

      • sysBoost daemon: sysBoost registers with systemd to enable out-of-the-box optimization. systemd will start the sysBoost daemon after the system is started. Then, the sysBoost daemon reads the configuration file to obtain the binary files to be optimized and the corresponding optimization methods.

      • RTO binary file loading kernel module: This binary loading module is added to automatically load the optimized binary file when the kernel loads binary files.

      • Huge page pre-loading of binary code or data segments: sysBoost provides the huge page pre-loading function. After binary optimization is complete, sysBoost immediately loads the content to the kernel as a huge page. When an application is started, sysBoost maps the pre-loaded content to the user-mode page table in batches to reduce page faults and memory access delay of the application, thereby improving the application startup speed and running efficiency.

      CTinspector

      CTinspector is a language VM running framework developed by China Telecom e-Cloud Technology Co., Ltd. based on the eBPF instruction set. The CTinspector running framework enables application instances to be quickly expanded to diagnose network performance bottlenecks, storage I/O hotspots, and load balancing, improving the stability and timeliness of diagnosis during system running.

      • CTinspector uses a packet VM of the eBPF instruction set. The minimum size of the packet VM is 256 bytes, covering all VM components, including registers, stack segments, code segments, data segments, and page tables.

      • The packet VM supports independent migration. That is, the code in the packet VM can invoke the migrate kernel function to migrate the packet VM to a specified node.

      • The packet VM also supports resumable execution. That is, after being migrated to another node, the packet VM can continue to execute the next instruction from the position where it has been interrupted on the previous node.

      CVE-ease

      CVE-ease is an innovative Common Vulnerabilities and Exposures (CVE) platform developed by China Telecom e-Cloud Technology Co., Ltd. It collects various CVE information released by multiple security platforms and notifies users of the information through multiple channels, such as email, WeChat, and DingTalk. The CVE-ease platform aims to help users quickly learn about and cope with vulnerabilities in the system. In addition to improving system security and stability, users can view CVE details on the CVE-ease platform, including vulnerability description, impact scope, and fixing suggestions, and select a fixing solution as required.

      CVE-ease has the following capabilities:

      • Dynamically tracks CVEs on multiple platforms in real time and integrates the information into the CVE database.

      • Extracts key information from the collected CVE information and updates changed CVE information in real time.

      • Automatically maintains and manages the CVE database.

      • Queries historical CVE information based on various conditions in interactive mode.

      • Reports historical CVE information in real time through WeCom, DingTalk, and email.

      PilotGo

      The PilotGo O&M management platform is a plugin-based O&M management tool developed by the openEuler community. It adopts a lightweight, modular design in which functional modules can be iterated and evolved independently while the stability of core functions is ensured. Plugins are used to enhance platform functions and remove barriers between different O&M components, implementing global status awareness and automation.

      PilotGo has the following core functional modules:

      • User management: Manages users by group based on the organizational structure, and imports existing platform accounts, facilitating migration.

      • Permission management: Supports RBAC-based permission management, which is flexible and reliable.

      • Host management: Provides visualized front-end status display, software package management, service management, and kernel parameter optimization.

      • Batch management: Concurrently performs O&M operations in a stable and efficient manner.

      • Log audit: Traces and records user and plugin change operations, facilitating issue backtracking and security audit.

      • Alarm management: Detects platform exceptions in real time.

      • Real-time exception detection: Extends platform functions and associates plugins to realize automation and reduce manual intervention.

      CPDS

      The wide application of cloud native technologies makes modern application deployment environments increasingly complex. The container architecture provides flexibility and convenience, but also brings more monitoring and maintenance challenges. Container Problem Detect System (CPDS) is developed to ensure the reliability and stability of containerized applications.

      • Cluster information collection: Node agents are implemented on host machines to monitor key container services using systemd, initv, eBPF, and other technologies. Cross-NS agents are configured on nodes and containers in non-intrusive mode to keep track of the application status, resource consumption, key system function execution status, and I/O execution status of containers. The collected information also covers the network, kernel, and drive LVM information of the nodes.

      • Cluster exception detection: Raw data from each node is collected to detect exceptions based on exception rules and extract key information. Then, the detection results and raw data are uploaded online and saved permanently.

      • Fault/Sub-Health diagnosis on nodes and service containers: Nodes and service containers are diagnosed based on exception detection data. Diagnosis results are saved permanently and can be displayed on the UI for users to view real-time and historical diagnosis data.

      EulerMaker Build System

      EulerMaker is a package build system. It converts source code into binary packages and allows developers to assemble and tailor scenario-specific OSs based on their requirements. It provides incremental/full build, package layer tailoring, and image tailoring capabilities.

      • Incremental/Full build: Analyzes the impact based on software changes and dependencies, obtains the list of packages to be built, and delivers parallel build tasks based on the dependency sequence.

      • Build dependency query: Provides a software package build dependency table for a project, and filters and collects statistics on a software package's dependencies and the packages that depend on it.

      • Layer tailoring: In a build project, developers can select and configure layer models to tailor patches, build dependencies, installation dependencies, and compilation options for software packages.

      • Image tailoring: Developers can configure the repository source to generate ISO, QCOW2, and container OS images, and tailor the list of software packages for the images.
