
      Key Features

      AI

      AI is redefining OSs by powering intelligent development, deployment, and O&M. openEuler supports general-purpose architectures such as Arm, x86, and RISC-V, as well as next-generation AI hardware such as NVIDIA and Ascend processors. openEuler is also equipped with extensive AI capabilities that have made it a preferred choice for diversified computing power.

      • openEuler for AI: openEuler offers an efficient development and runtime environment that containerizes software stacks of AI platforms with out-of-the-box availability.

        • openEuler supports the TensorFlow and PyTorch frameworks and the software development kits (SDKs) of major computing architectures, such as Compute Architecture for Neural Networks (CANN) and Compute Unified Device Architecture (CUDA), making it easy to develop and run AI applications.

        • Environment setup is further simplified by containerizing software stacks. openEuler provides three types of container images:

        1. SDK images: Use openEuler as the base image and install the SDK of a computing architecture, for example, Ascend CANN and NVIDIA CUDA.

        2. AI framework images: Use the SDK image as the base and install AI framework software, such as PyTorch and TensorFlow.

        3. Model application images: Provide a complete set of toolchains and model applications.

        For more information, see the openEuler AI Container Image User Guide.

      • AI for openEuler: AI makes openEuler more intelligent. EulerCopilot, an intelligent Q&A platform, is developed using foundation models and openEuler data. It assists in code generation, problem analysis, and system O&M.

        • EulerCopilot: EulerCopilot is accessible via web or shell.

          1. Web: Provides basic OS knowledge, openEuler data, O&M methods, and project introductions and usage guidance.

          2. Shell: Delivers a user-friendly, natural-language command line experience.

        For more information, see the EulerCopilot User Guide.

      Embedded

      openEuler 22.03 LTS SP3 Embedded is equipped with an embedded virtualization base that is available in the Jailhouse virtualization solution or the OpenAMP lightweight hybrid deployment solution. You can select the most appropriate solution to suit your services. openEuler 22.03 LTS SP3 Embedded supports the Robot Operating System (ROS) Humble version, which integrates core software packages such as ros-core, ros-base, and simultaneous localization and mapping (SLAM) to meet the ROS 2 runtime requirements.

      • Southbound ecosystem: Currently, openEuler Embedded supports AArch64 and x86-64 architectures. In 22.03 LTS SP3, RK3588 chips are supported. In the future, Loongson and Phytium processors will be supported.

      • Embedded elastic virtualization base: The elastic virtualization base of openEuler Embedded is a collection of technologies used to enable multiple OSs to run on a system-on-a-chip (SoC). These technologies include bare metal, embedded virtualization, lightweight containers, LibOS, trusted execution environment (TEE), and heterogeneous deployment.

      • Mixed criticality deployment framework: The mixed-criticality (MICA) deployment framework is built on the converged elastic base. The unified framework masks the differences between the technologies used in the underlying elastic virtualization base, enabling Linux to be deployed together with other OSs.

      • Northbound ecosystem: More than 350 common embedded software packages can be built using openEuler. The ROS 2 Humble version is supported, which contains core software packages such as ros-core, ros-base, and SLAM. The ROS SDK is provided to simplify embedded ROS development. The soft real-time capability allows soft real-time interrupts to be responded to within microseconds. DSoftBus and the HiChain point-to-point authentication module from OpenHarmony have been integrated to enable interconnection among openEuler-based embedded devices, as well as between openEuler-based embedded devices and OpenHarmony-based devices. iSulad containers are supported so that openEuler or other OS containers can be deployed on embedded devices, simplifying application porting and deployment.

      • UniProton: This hard real-time operating system (RTOS) features ultra-low latency and flexible MICA deployments. It is suited for industrial control because it supports both microcontroller units and multi-core CPUs.

      What's New in the openEuler Kernel

      openEuler 22.03 LTS SP3 runs on Linux kernel 5.10. It inherits the competitive advantages of community versions and innovative features released in the openEuler community.

      • Dynamic memory isolation and release: Memory pages can be dynamically isolated and de-isolated. When pages are isolated, their data is safely migrated first.

      • Online CPU inspection: Faulty cores are detected and isolated before faults escalate, helping prevent silent data corruption, a common cause of data loss.

      • Adaptive provisioning of computing power: To ensure consistency and reliability of certain applications (such as cloud desktop systems) running on multi-core servers, computing power is dynamically provisioned based on load changes.

      • Power-aware scheduling: At the service layer, memory access bandwidth, CPU load, and other information are collected to ensure sufficient resources for critical threads. A physical topology is introduced so that the P-state control mechanism extends to new dimensions, further reducing power consumption beyond the limits of single-die frequency and voltage regulation. This feature minimizes power consumption when the service load is low.

      • Enhanced core isolation: CPUs are classified into housekeeping and non-housekeeping CPUs. The former run background processes such as periodic system clock maintenance, while the latter run service processes. Background processes and interrupts are all allocated to housekeeping CPUs to prevent noise from affecting service processes. This enhanced core isolation improves service performance and is especially important for HPC workloads.

      • Performance monitor unit (PMU) indicators: When multiple services share node resources, indicators such as PSI are used to measure system contention, service throughput, and delay. These indicators are essential to locating system resource bottlenecks, understanding the resource demand of specific service processes, and dynamically adjusting resource allocation. This improves the quality of online services and system health.

      • KVM TDP MMU: In Linux kernel 5.10 and later, KVM can scale to match demand for memory virtualization. This feature is contributed by Intel to the openEuler community. Compared with the traditional KVM memory management unit (MMU), the two dimensional paging MMU, or TDP MMU, offers more efficient handling of concurrent page faults and better support for large-scale VM deployments, such as those with multiple vCPUs and large memory. In addition, the new Extended Page Tables (EPT) and Nested Page Tables (NPT) traversal interface boosts host memory utilization by removing the dependency on the rmap data structure that is typical in traditional memory virtualization solutions.

      NestOS

      NestOS is a cloud OS incubated in the openEuler community. It combines rpm-ostree and Ignition technologies over a dual-rootfs, atomic-update design, and uses nestos-assembler for quick integration and building. NestOS is compatible with Kubernetes and OpenStack, reduces container overhead, and provides extensive cluster components for large-scale containerized environments.

      • Out-of-the-box availability: integrates popular container engines such as iSulad, Docker, and Podman to provide lightweight and tailored OSs for the cloud.
      • Easy configuration: uses the Ignition utility to install and configure a large number of cluster nodes with a single configuration (see the sketch after this list).
      • Secure management: runs rpm-ostree to manage software packages and works with the openEuler software package source to ensure secure and stable atomic updates.
      • Hitless node updating: uses Zincati to provide automatic node updates and reboot without interrupting services.
      • Dual rootfs: uses dual root file systems for active/standby switchovers to ensure integrity and security during system operation.
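
      The "Easy configuration" item above relies on Ignition configurations. The snippet below is an illustrative sketch of generating a minimal Ignition-style configuration for a node; the spec version and field names follow the upstream Ignition 3.x schema, so confirm the exact version NestOS expects against its documentation.

      ```python
      import json

      # Illustrative only: build a minimal Ignition-style configuration that creates
      # a user with an SSH key. The spec version and fields follow the upstream
      # Ignition 3.x schema; verify the version NestOS expects in its documentation.
      config = {
          "ignition": {"version": "3.3.0"},
          "passwd": {
              "users": [
                  {
                      "name": "core",
                      "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"],
                  }
              ]
          },
      }

      with open("node.ign", "w") as f:
          json.dump(config, f, indent=2)
      ```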

      SysCare

      SysCare is system-level hotfix software that provides security patches and hot fixes for OSs. It can fix system errors without restarting hosts. SysCare combines kernel-mode and user-mode hot patching to take over system repair, freeing users to focus on their business. It covers hot patch creation, hot patch lifecycle management, and integration of kernel hot patches and user-mode hot patches for ELF files. The following features are added in openEuler 22.03 LTS SP3:

      • Configures the dependencies of hot patches when they are created.
      • Manages multiple user-mode patches.
      • Detects conflicts between user-mode hot patches.
      • Forcibly overwrites user-mode hot patches when conflicts occur.

      GCC for openEuler

      GCC for openEuler is a high-performance compiler oriented to the openEuler ecosystem for various scenarios. It is developed on the open source GNU Compiler Collection (GCC) and inherits its capabilities. GCC for openEuler optimizes C, C++, and Fortran programs in terms of instructions, memory, and automatic vectorization to adapt to and unleash the computing power of hardware platforms such as Kunpeng, Phytium, and LoongArch. New capabilities of GCC for openEuler include:

      • Multiple GCC versions now support OpenMP, including the gcc-toolset-12 package series, which is based on GCC 12.3.0. Fortran supports OpenMP 4.5, while C/C++ supports some OpenMP 5.0 specifications.
      • Last-level cache (LLC) allocation is optimized. By analyzing memory multiplexing on the main execution paths in a program, GCC for openEuler determines and sorts hot data. Then, prefetch instructions are inserted to pre-allocate data to the LLC, reducing LLC misses.
      • Optimizations for CPUBench help intelligently identify and reduce instructions while boosting performance.

      A-Ops

      A-Ops is an intelligent O&M platform that covers data collection, health checks, and fault diagnosis and rectification. Released with openEuler 22.03 LTS SP3, Apollo is an intelligent patch management framework that integrates core functions such as vulnerability scanning, CVE fixing (with cold or hot patches), and hot patch rollback. Apollo periodically downloads and synchronizes security advisories and sets scheduled tasks to scan for vulnerabilities.

      Apollo enables the intelligent management of kernel patches.

      • Hot patch source management: When openEuler vulnerabilities are released through a security advisory, the software package used for fixing the vulnerabilities is also released in the update repository. By default, after openEuler is installed, the cold patch update repository of the corresponding OS version is provided. Users can also configure the update repository of cold or hot patches.
      • Vulnerability scanning: Manual or periodic cluster scans can be performed to check the impact of CVEs on a cluster, and cold or hot patches are provided for fixes.
      • Hybrid patch management: Cold and hot patches can be applied independently or together to implement silent incorporation of hot patches on the live network and reduce hot patch maintenance costs.
      • Hot patch lifecycle management: Supports hot patch removal, rollback, and query.
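
      As a conceptual sketch of the vulnerability scanning step, the snippet below compares installed package versions against advisory data and reports whether a cold or hot patch is available. The advisory format, package data, and version comparison are simplified assumptions for illustration; Apollo itself works with real openEuler security advisories and RPM version semantics.

      ```python
      # Conceptual sketch only: flag installed packages affected by advisories.
      # Data formats and the version comparison are simplified assumptions, not
      # the Apollo implementation.
      installed = {"kernel": "5.10.0-153", "openssl": "1.1.1k-5"}

      advisories = [
          {"cve": "CVE-XXXX-0001", "package": "openssl", "fixed_in": "1.1.1k-6", "hot_patch": True},
      ]

      def is_older(installed_ver, fixed_ver):
          # Naive split-and-compare for illustration; real tools use RPM version rules.
          return installed_ver.split("-") < fixed_ver.split("-")

      for adv in advisories:
          current = installed.get(adv["package"])
          if current and is_older(current, adv["fixed_in"]):
              fix = "hot patch" if adv["hot_patch"] else "cold patch"
              print(f"{adv['cve']}: {adv['package']} {current} is affected; a {fix} is available")
      ```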

      Gazelle

      Gazelle is a high-performance user-mode protocol stack. Based on the Data Plane Development Kit (DPDK), it directly reads and writes NIC packets in user mode, transmits the packets through shared hugepage memory, and uses the lwIP protocol stack, thereby greatly improving the network I/O throughput of applications and accelerating the network for databases. With Gazelle, high performance and universality are achieved at the same time. In openEuler 22.03 LTS SP3, support for the UDP protocol and related interfaces is added to Gazelle to enrich the user-mode protocol stack.

      • Available in single VLAN, bond4, and bond6 modes, and supports NIC self-healing after network cables are reconnected.
      • A single-instance Redis application on Kunpeng 920 VMs supports over 5,000 connections, improving performance by more than 30%.
      • The TCP_STREAM and TCP_RR tests of netperf (packet length less than 1,463 bytes) are supported.
      • Logs of the LStack, lwIP, and gazellectl modules of Gazelle are refined for more accurate fault locating.

      OCI Runtime for iSulad

      Open Container Initiative (OCI) is a lightweight and open governance project dedicated to creating an open industry standard for container formats and runtimes. Developed with the support of the Linux Foundation, it aims to let any container runtime that supports the OCI Runtime Specification use OCI images to run containers. iSulad is a lightweight container engine that is compatible with mainstream container ecosystems, supports standard southbound OCI APIs, and can connect to multiple OCI runtimes, such as runc and Kata Containers.

      As OCI has matured over the last few years, container runtimes that comply with the OCI Runtime Specification have been adopted in an expanding range of application scenarios. runc is the first reference implementation of the OCI Runtime Specification. In the current openEuler version:

      • The interconnection between iSulad and OCI Runtime is optimized, known defects are rectified, and the isula top and isula attach interfaces are added.
      • runc is set as the default runtime for iSulad.
      • After the default runtime is switched to runc, the dependency library of isulad-shim for connecting to the OCI runtime is changed to an independent, tailored static tool library. The switchover prevents breakdowns of existing processes caused by tool library upgrades and reduces the memory overhead of containers.

      Distributed Data Management

      The distributed data management system is a data management capability ported from the OpenHarmony community. This system encapsulates over 100 universal APIs that use DSoftBus dynamic networking to provide a range of data synchronization capabilities, such as strong and weak consistency, for each device node on the network.

      • Feature Description

        • Relational database: manages data based on a relational model. It uses SQLite as the underlying persistent storage engine and supports all SQLite features.

        • KV Store: a key–value (KV) database that runs on SQLite. It manages KV pairs and distributes data across multiple devices and applications (see the conceptual sketch after this section).

        • Distributed Data Object: an object-oriented in-memory data management framework that implements data object collaboration for the same application among multiple devices.

        • Distributed Data Service: synchronizes data between trusted devices, delivering a consistent access experience on different devices.

        • DSoftBus: discovers and connects devices at the network link layer.

        • SQLite: an open source component that provides native SQLite capabilities.

      • Containerized DSoftBus

        Migrating legacy service software to containers can remove barriers to modernization. In openEuler 22.03 LTS SP3, DSoftBus and its dependencies can be deployed as a container, and multi-client support is enabled, which greatly simplifies service installation, deployment, and testing and improves compatibility with service software.
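
      The KV Store described above layers a key-value interface over SQLite. The snippet below is a minimal, self-contained illustration of that layering idea only; it is not the distributed data management API, and cross-device synchronization is out of scope.

      ```python
      import sqlite3

      # Conceptual sketch: a minimal key-value layer over SQLite, illustrating how
      # a KV store can use SQLite as its persistent engine. This is not the
      # distributed data management API; device-to-device sync is not shown.
      class KVStore:
          def __init__(self, path=":memory:"):
              self.conn = sqlite3.connect(path)
              self.conn.execute(
                  "CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)"
              )

          def put(self, key, value):
              with self.conn:  # implicit transaction
                  self.conn.execute(
                      "INSERT OR REPLACE INTO kv (key, value) VALUES (?, ?)", (key, value)
                  )

          def get(self, key, default=None):
              row = self.conn.execute(
                  "SELECT value FROM kv WHERE key = ?", (key,)
              ).fetchone()
              return row[0] if row else default

      store = KVStore()
      store.put("device.name", "edge-node-01")
      print(store.get("device.name"))
      ```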

      Memory Overcommitment

      Memory overcommitment is an efficient method to increase the available memory space for cloud native containers.

      • Cgroup memory policies

        • Proactive memory reclamation: The type of reclaimed memory pages can be specified, for example, file pages and anonymous pages.
        • Watermark-based reclamation: Minimum, low, and high watermarks can be configured for passive reclamation. Asynchronous reclamation can be performed in the background to avoid impact on existing services.
        • Memory deduplication: All the memory space used by processes in a container can be included in KSM deduplication, without requiring applications to call the madvise API to mark memory areas beforehand.
        • Swap space: For each independent container, you can configure the swap backend devices (such as zram and storage devices), the maximum swap space, and proactive swap-in, and you can enable or disable swap.
      • Basic optimizations

        • Memory compression: Secondary compression with zram leverages multiple compression algorithms to increase the compression ratio and compression/decompression speed.

        • Memory reclamation: TLB refresh is optimized in unmap and migration processes to accelerate memory reclamation and reduce lock conflicts. Transparent huge page swap is optimized as well.

      • Optimal decision-making based on the PSI mechanism

        • PSI is available in cgroup v1 and v2.

        • Memory is proactively reclaimed using the PSI negative feedback mechanism, to improve decisions that are based on service load and cluster information. This design maintains service performance and reliability during memory overcommitment.
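
      As a minimal sketch of how PSI data can be consumed, the snippet below reads the upstream PSI interface (assuming /proc/pressure/memory is enabled) and parses the stall averages that an overcommitment controller could feed into its reclamation decisions.

      ```python
      # Minimal sketch, assuming the upstream PSI interface at /proc/pressure/memory
      # is available: parse memory pressure stall information.
      def read_memory_pressure(path="/proc/pressure/memory"):
          pressure = {}
          with open(path) as f:
              # Lines look like: "some avg10=0.12 avg60=0.05 avg300=0.01 total=12345"
              for line in f:
                  kind, *fields = line.split()
                  pressure[kind] = {
                      key: float(value)
                      for key, value in (field.split("=") for field in fields)
                  }
          return pressure

      if __name__ == "__main__":
          stats = read_memory_pressure()
          # A sustained "some" avg10 above a chosen threshold suggests memory contention.
          print("some avg10 =", stats["some"]["avg10"])
      ```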

      DIM

      Dynamic Integrity Measurement (DIM) enables timely detection of attacks and supports troubleshooting measures. It measures key memory data, such as code segments, while programs are running and compares the results with reference values to determine whether data in memory has been tampered with.

      • DIM provides the following features:

        • Measures user-mode processes, kernel modules, and code segments in the kernel memory.
        • Extends measurements to the PCR register of the TPM 2.0 chip for remote attestation.
        • Configures measurements and verifies measurement signatures.
        • Generates and imports measurement baseline data using tools, and verifies baseline data signatures.
        • Supports the SM3 algorithm.
      • DIM consists of two software packages: dim_tools and dim.

        • dim_tools: provides the dim_gen_baseline command-line tool, which generates a code segment measurement baseline in a specified format by parsing an Executable and Linkable Format (ELF) binary file (a simplified illustration follows this list).

        • dim: provides the dim_core and dim_monitor kernel modules. The former is the core module that parses and imports the measurement and baseline configurations provided by users, obtains target measurement data from memory, and performs measurements. The latter protects the code segments and key data of dim_core to prevent invalid measurements caused by tampering with dim_core.
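
      The snippet below is a simplified illustration of the baselining idea behind dim_gen_baseline: hash the code segment of an ELF binary to obtain a reference value. It is not the real tool or its record format, it relies on the third-party pyelftools package, and SHA-256 is used only as a stand-in for SM3.

      ```python
      import hashlib

      from elftools.elf.elffile import ELFFile  # pip install pyelftools

      # Simplified illustration only: hash the .text (code) section of an ELF binary
      # to produce a baseline that later measurements could be compared against.
      # SHA-256 stands in for SM3 here; the real tool has its own record format.
      def code_segment_baseline(path):
          with open(path, "rb") as f:
              elf = ELFFile(f)
              text = elf.get_section_by_name(".text")
              if text is None:
                  raise ValueError(f"{path} has no .text section")
              return hashlib.sha256(text.data()).hexdigest()

      if __name__ == "__main__":
          print(code_segment_baseline("/usr/bin/bash"))
      ```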

      Secure Boot

      Secure Boot relies on public and private key pairs to sign and verify components in the boot process. In a typical boot process, each component verifies the digital signature of the next component. If the verification succeeds, the next component runs; if it fails, the boot stops.
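
      Conceptually, each stage performs a signature check like the sketch below. The use of the Python cryptography package and RSA with PKCS#1 v1.5 padding is an assumption for illustration; the real shim and GRUB implementations verify signatures differently.

      ```python
      # Conceptual sketch of the per-stage check, not the actual shim/GRUB code:
      # verify the next component's RSA signature before handing over control.
      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives import hashes, serialization
      from cryptography.hazmat.primitives.asymmetric import padding

      def verify_next_stage(public_key_pem, image_bytes, signature):
          key = serialization.load_pem_public_key(public_key_pem)
          try:
              key.verify(signature, image_bytes, padding.PKCS1v15(), hashes.SHA256())
              return True   # verification passed: the next component may run
          except InvalidSignature:
              return False  # verification failed: the boot stops
      ```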

      • Feature Description

        • The Signatrust platform generates and manages public and private key pairs and certificates, and provides the signing service for EulerMaker to build openEuler software packages.

        • The Signatrust platform signs code of the EFI components (shim, GRUB, vmlinux) for Secure Boot when the software packages are built by EulerMaker.

        • Signature verification is performed during system boot to ensure system components are safe and secure.

      • Constraints

        • The Signatrust platform can only sign components built in the openEuler community, but cannot sign files developed by external projects or custom user files.

        • The Signatrust platform supports only the RSA algorithm.

      secDetector

      secDetector is an intrusion detection system designed for OSs. It provides intrusion detection and response for critical infrastructure and reduces development costs while enhancing detection and response for third-party security tools.

      secDetector consists of the detection feature cases, exception detection probes, and attack blocking module. The exception detection probes collect OS attack events that match the MITRE ATT&CK patterns. There are eight types of exception detection probes that can detect advanced persistent threats (APTs): file operation, process management, network access, program behavior, memory tampering, resource consumption, account management, and device operation. The technical implementation architecture of secDetector consists of the SDK, service, detection feature cases, and detection framework (core).

      • The secDetector SDK is provided as a user-mode dynamic library deployed in the security awareness services that require secDetector. The SDK communicates with the secDetector service to complete operations such as subscription, unsubscription, and message reading.

      • The secDetector service is a user-mode service application. It manages and maintains the subscriptions of the security awareness services and maintains the probe running statuses.

      • The detection feature cases correspond to a series of exception detection probes, which are in different forms. For example, each probe for detecting kernel exceptions is available in a kernel module (.ko file).

      • The detection framework (core) is the base framework for case management, and provides common functional units required by workflows. The kernel exception detection framework is carried by a kernel module (.ko file).

      EulerMaker

      EulerMaker is a package build system that converts source code into binary packages. It enables developers to assemble and tailor scenario-specific OSs through incremental/full build, gated build, layered tailoring, and image tailoring capabilities.

      • Incremental/Full build: Analyzes the impact of the changes to software and dependencies, obtains the list of packages to be built, and delivers parallel build tasks based on the dependency sequence.
      • Build dependency query: Provides a software package build dependency table in a project, and collects statistics on software package dependencies.
      • Layered tailoring: Overlays configuration layer models based on SPEC or YAML to tailor the software package version, patches, build and installation dependencies, compilation options, and build process to your project (a conceptual sketch of layer overlaying follows this list).
      • Image tailoring: Developers can configure the repository source to generate ISO, QCOW2, and container OS images, and tailor the list of software packages for the images.
      • Local task reproduction: Reproduces a build task locally using commands, facilitating build problem locating.
      • Easy project creation: Creates projects from YAML configurations and adds packages in batches, greatly simplifying user operations.
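
      The layered tailoring capability overlays configuration layers, with later layers overriding or extending earlier ones. The sketch below illustrates that overlay idea on plain dictionaries; the keys are made up and do not reflect the actual EulerMaker configuration schema.

      ```python
      from copy import deepcopy

      # Conceptual sketch of layered configuration overlay: later layers override
      # or extend earlier ones. The keys below are illustrative only; they are not
      # the EulerMaker schema.
      def overlay(base, layer):
          merged = deepcopy(base)
          for key, value in layer.items():
              if isinstance(value, dict) and isinstance(merged.get(key), dict):
                  merged[key] = overlay(merged[key], value)
              else:
                  merged[key] = value
          return merged

      base_layer = {"version": "1.0", "build": {"options": ["-O2"]}, "patches": []}
      project_layer = {"build": {"options": ["-O2", "-g"]}, "patches": ["fix-bug.patch"]}

      print(overlay(base_layer, project_layer))
      ```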

      DPUDirect

      DPUDirect creates a collaborative operating environment for services, enabling them to be easily offloaded and ported between hosts and data processing units (DPUs). DPUDirect builds a cross-host collaboration framework at the OS layer of the host and DPU, providing a consistent runtime view for the management-plane processes offloaded to the DPU and the service processes on the host. In this way, applications are unaware of the offload. Only a small amount of management-plane service code needs to be adapted, ensuring software compatibility and evolution while reducing component maintenance costs.

      • File system collaboration supports cross-host file system access and provides a consistent file system view for host and DPU processes. It also supports special file systems such as proc, sys, and dev.
      • IPC collaboration enables imperceptible communication between host and DPU processes. It supports FIFO and UNIX domain sockets for cross-host communication.
      • Mounting collaboration performs the mount operation in a specific directory on the host, which can adapt to the container image overlay scenario. The offloaded management-plane process can construct a working directory for the service process on the host, providing a unified cross-node file system view.
      • epoll collaboration supports epoll operations for cross-host access of remote common files and FIFO files, and supports read and write blocking operations.
      • Process collaboration uses the rexec tool to remotely start executable files. The rexec tool takes over the input and output streams of the remotely started processes and monitors their status to ensure lifecycle consistency of the processes at both ends.

      Live VM Migration with vDPA NIC Passthrough

      The kernel-mode vHost Data Path Acceleration (vDPA) framework provides a device virtualization solution that performs equivalently to passthrough. The vDPA framework unifies the architecture for diverse hardware forms, such as intelligent NICs and DPUs, and supports live migration across different hardware vendors.

      Extended vDPA and vHost APIs are used to live migrate VMs across vDPA devices from the same vendor, addressing the basic live migration requirements of vDPA passthrough VMs. Further, code to support cross-vendor live migration is embedded to meet future requirements.

      Lustre Server Software Package

      Lustre is an open source parallel file system designed for high scalability, performance, and availability. Lustre runs on Linux and provides POSIX-compliant UNIX file system interfaces.

      • High scalability and performance: A Lustre system is scalable in terms of the number of client nodes, drive storage capacity, and bandwidth. The scalability and performance depend on the available drives, network bandwidth, and server throughput. The following lists the main features.
        • Client scalability: Up to 100,000 clients are supported. A typical production environment usually has 10,000 to 20,000 clients.
        • Client performance: The I/O performance of a single client is 90% of the network bandwidth. The aggregated I/O performance reaches 50 million IOPS, with an I/O bandwidth of up to 50 TB/s.
        • OSS scalability: A single OSS can manage up to 32 OSTs, each capable of storing 500 million objects, or 1,024 TB. A maximum of 1,000 OSSs and 4,000 OSTs are supported in a Lustre system.
        • OSS performance: A single OSS can deliver 1.5 million IOPS, with an I/O bandwidth of 15 GB/s. The aggregated I/O performance reaches 50 million IOPS, with an I/O bandwidth of up to 50 TB/s.
        • MDS scalability: A single MDS can manage up to four MDTs. A single MDT supports 4 billion files of up to 16 TB when LDISKFS is used as the backend file system, or 64 billion files of up to 64 TB when ZFS is used as the backend file system.
        • MDS performance: 1 million creation operations or 2 million metadata stat operations can be performed within a second.
        • File system scalability: The maximum size of a single file in the LDISKFS backend is 32 PB. An aggregated Lustre system can contain up to 1 trillion files, or 512 PB of data.

      DDE

      Deepin Desktop Environment (DDE) was originally developed for Uniontech OS and has been used in the desktop, server, and dedicated device versions of Uniontech OS.

      DDE focuses on delivering high-quality user interaction and visual design. DDE is powered by independently developed core technologies for desktop environments and provides login, screen locking, desktop, file manager, launcher, dock, window manager, control center, and additional functions. Thanks to its user-friendly interface, excellent interactivity, high reliability, and strong privacy protection, it is one of the most popular desktop environments among users.

      FangTian Window Engine

      The FangTian window engine delivers fundamental display technologies to build a foundation for openEuler's desktop environments. FangTian hosts display services such as window management, graphic drawing and compositing, and screen delivery.

      • Feature Description

        • Window management creates, moves, zooms, arranges, and destroys windows. An independent window policy module is used to support various scenarios on multiple device types, such as mobile phones and PCs.
        • Window display provides capabilities such as buffer allocation and swapping, vertical synchronization, rendering, compositing, and screen display. The data-driven interfaces and unified architecture realize high performance and low memory usage.
        • FT is a display protocol that enables the ArkUI framework to interact with FangTian. It provides unified rendering and data-driven interfaces to lower rendering load, reduce data from cross-process interactions, and enhance application animation performance.
        • ArkUI is a declarative UI development framework for OpenHarmony applications. It is derived from OpenHarmony and has been adapted to openEuler, allowing ArkUI-based OpenHarmony applications to run on openEuler as well.
      • Highlights

        • Linux application support: Native Wayland and OpenHarmony applications can run simultaneously.

        • High-performance display of OpenHarmony applications: 50 application windows can be displayed at 60 FPS.

      • Constraints

        • Only x86_64 applications are supported. The functions of some ArkUI controls are not enabled.

        • Wayland protocol compatibility does not apply to protocol extensions.

      sysMaster

      sysMaster is a collection of ultra-lightweight and highly reliable service management programs. sysMaster manages processes, containers, and VMs centrally and provides fault monitoring and self-healing mechanisms to help deal with Linux initialization and service management challenges. All these features make sysMaster an excellent choice for server, cloud computing, and embedded scenarios.

      • New features

        • The devMaster component manages device hot swap.

        • Live updates and hot reboots are supported.

        • sysMaster can now run as PID 1 in VMs.

      • Constraints

        • Only available for 64-bit OSs.

        • sysMaster configuration files must be in TOML format.

        • sysMaster can run only in system containers and VMs.

      migration-tools

      migration-tools, developed by UnionTech Software Technology Co., Ltd., is positioned to meet demand for smooth, stable, and secure migration to the openEuler OS.

      • Server module: the core of migration-tools. This module is developed on the Python Flask web framework. It receives task requests, processes execution instructions, and distributes the instructions to each Agent (a minimal sketch of such an endpoint follows this list).

      • Agent module: installed in the OS to be migrated to receive task requests from the Server module and perform migration.

      • Configuration module: reads configuration files for the Server and Agent modules.

      • Log module: records logs during migration.

      • Migration assessment module: provides assessment reports covering basic environment checks, software package comparison and analysis, and pre-migration compatibility checks.

      • Migration function module: provides quick migration, displays the migration progress, and checks the migration result.
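
      Since the Server module is built on Flask, a task-dispatch endpoint could look roughly like the sketch below. The route names, payload fields, and dispatch logic are hypothetical and only illustrate the request-handling pattern; they are not the actual migration-tools API.

      ```python
      from flask import Flask, jsonify, request

      # Hypothetical sketch of a Flask-based task endpoint. Routes, payload fields,
      # and the in-memory queue are assumptions for illustration, not the actual
      # migration-tools Server module.
      app = Flask(__name__)
      pending_tasks = []

      @app.route("/tasks", methods=["POST"])
      def create_task():
          task = request.get_json(force=True)  # e.g. {"agent": "10.0.0.5", "action": "migrate"}
          pending_tasks.append(task)
          return jsonify({"status": "accepted", "queued": len(pending_tasks)}), 202

      @app.route("/tasks", methods=["GET"])
      def list_tasks():
          return jsonify(pending_tasks)

      if __name__ == "__main__":
          app.run(host="0.0.0.0", port=8080)
      ```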

      utshell

      utshell is a new shell that introduces new features and inherits the usability of Bash. It enables interaction through command lines, such as responding to user operations to execute commands and providing feedback, and can execute automated scripts to facilitate O&M.

      • Command execution: Runs commands on user machines and returns their results.

      • Job control: Executes, manages, and controls multiple user commands as background jobs.

      • Batch processing: Automates task execution using scripts.

      • Command aliases: Allows users to create aliases for commands to customize their operations.

      • Historical records: Records the commands entered by users.

      utsudo

      sudo is one of the most commonly used utilities on Unix-like and Linux OSs. It enables users to run specific commands with the privileges of the superuser. utsudo is developed to address the security and reliability issues common in sudo. utsudo uses Rust to deliver more efficient, secure, and flexible privilege escalation. The tool consists of modules such as the common utility, overall framework, and function plugins.

      • Access control: Limits the commands that can be executed by users, and specifies the required authentication method.
      • Audit log: Records and traces all commands and tasks executed by each user.
      • Temporary privilege escalation: Allows common users to temporarily escalate to a super user for executing privileged commands or tasks.
      • Flexible configuration: Allows users to set arguments such as command aliases, environment variables, and execution parameters to meet system requirements.

      i3

      i3 is a tiling window manager that lets you manage window layouts from the keyboard in a session or across multiple monitors. For more details, see the upstream documentation.

      Trusted Platform Control Module

      The trusted platform control module (TPCM) is a base and core module that can be integrated into a trusted computing platform to establish and ensure a trust source. As one of the innovations in Trusted Computing 3.0 and the core of active immunity, TPCM implements active control over the entire platform. The overall system design consists of the protection module, computing module, and trusted management center software.

      • Overall system design

        • Trusted management center: This centralized management platform, provided by a third-party vendor, formulates, delivers, maintains, and stores protection policies and reference values for trusted computing nodes.

        • Protection module: This module operates independently of the computing module and provides trusted computing protection functions that feature active measurement and active control to implement security protection during computing. The protection module consists of the TPCM main control firmware, TCB, and TCM.

        • Computing module: This module includes hardware, an OS, and application layer software.

      • Constraints

        • Supported server: TaiShan 200 server (model 2280)

        • Supported BMC card: BC83SMMC

      safeguard

      safeguard helps protect the Linux kernel and OS by using eBPF to intercept and audit security operations. It uses the Go language and the libbpfgo library to implement top-level control. (A generic eBPF tracing sketch follows this section.)

      • File safeguarding

        • Traces file system activities, including file open, close, read, write, and delete.

        • Modifies the behavior of file systems through the interception of certain file operations and custom security policies.

        • Security policies

          • Operations on files can be intercepted or redirected through eBPF. For example, read and write operations on sensitive files can be intercepted, and access to certain files can be redirected.

          • Access control can be customized. eBPF checks the identity, permissions, and environment of a user who requests access to a file, and allows or denies the request based on custom rules.

          • Audit and monitoring can be customized. For example, eBPF records information about operations on certain files, such as the operator, time, and action, and outputs the information to the logs.

      • Process safeguarding

        • Traces process life cycles, such as process creation and termination.

        • Modifies the behavior of processes, such as injecting or modifying some system calls or implementing custom scheduling policies.

      • Network safeguarding

        • Traces network activities, such as sending, receiving, forwarding, and discarding network packets.

        • Modifies the behavior of networks through filtering and rewriting of network packets and custom routing policies.
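
      For readers unfamiliar with eBPF-based tracing, the sketch below shows a generic file-open trace using the BCC Python bindings (a deliberate substitution: safeguard itself is implemented in Go with libbpfgo). It only illustrates the underlying technique of attaching a probe and streaming events; it is not safeguard's code, and it requires root privileges and an eBPF-capable kernel.

      ```python
      from bcc import BPF  # BCC Python bindings; safeguard itself uses Go + libbpfgo

      # Generic illustration of eBPF-based file activity tracing: attach a kprobe
      # to the openat syscall and print a trace line for every call.
      bpf_program = """
      #include <uapi/linux/ptrace.h>

      int trace_openat(struct pt_regs *ctx) {
          bpf_trace_printk("openat called\\n");
          return 0;
      }
      """

      b = BPF(text=bpf_program)
      b.attach_kprobe(event=b.get_syscall_fnname("openat"), fn_name="trace_openat")
      b.trace_print()  # stream trace output until interrupted
      ```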
