Key Features

AI

AI is redefining OSs by powering intelligent development, deployment, and O&M. openEuler supports general-purpose architectures like Arm, x86, and RISC-V, as well as next-gen AI processors such as NVIDIA GPUs and Ascend NPUs. openEuler is also equipped with extensive AI capabilities that have made it a preferred choice for diversified computing power.

  • OS for AI: openEuler offers an efficient development and runtime environment that containerizes software stacks of AI platforms with out-of-the-box availability. It also provides various AI frameworks to facilitate AI development.
    • openEuler supports TensorFlow, PyTorch, and MindSpore frameworks and software development kits (SDKs) of major computing architectures, such as Compute Architecture for Neural Networks (CANN) and Compute Unified Architecture (CUDA), to make it easy to develop and run AI applications.
    • Environment setup is further simplified by containerizing software stacks. openEuler provides three types of container images:
      1. SDK images: Use openEuler as the base image and install the SDK of a computing architecture, for example, Ascend CANN and NVIDIA CUDA.
      2. AI framework images: Use an SDK image as the base and install AI framework software, such as PyTorch and TensorFlow. You can use an AI framework image to quickly build a distributed AI framework, such as Ray.
      3. Model application images: Provide a complete set of toolchains and model applications. For details, see the openEuler AI Container Image User Guide.
    • The sysHax LLM heterogeneous acceleration runtime enhances model inference performance in single-server, multi-device setups by optimizing Kunpeng + xPU (GPU/NPU) resource synergy.
      1. Dynamic resource allocation: Intelligently balances Kunpeng CPU and xPU workloads to maximize compute efficiency.
      2. CPU inference acceleration: Improves throughput via NUMA-aware scheduling, parallelized matrix operations, and SVE-optimized inference operators.
  • AI for OS: AI makes openEuler more intelligent. The openEuler Copilot System, an intelligent Q&A platform, is developed using foundation models and openEuler data. It assists in code generation, problem analysis, and system O&M.
    • AI application development framework: Foundation model applications are key to enterprise AI adoption. Combining retrieval-augmented generation (RAG) with foundation models effectively addresses gaps in domain-specific data. To support this, the openEuler community has developed an AI application development framework that provides intelligent tools for individual and enterprise developers, enabling rapid AI application development through streamlined workflows. It lowers technical barriers, improves efficiency and data quality, and meets diverse needs for data processing, model training, and content management. Its core features include:
      1. Multi-path enhanced RAG for improved Q&A accuracy: Overcomes limitations of traditional native RAG (low accuracy, weak guidance) using techniques like corpus governance, prompt rewriting, and multi-path retrieval comparison. A conceptual sketch of this flow follows the AI feature list below.
      2. Document processing and optimization: Supports incremental corpus updates, deduplication, sensitive data masking, and standardization (such as summarization, code annotation, and case organization).
      3. Embedding model fine-tuning: Enables rapid tuning and evaluation of embedding models (such as BGE models) for domain-specific performance gains.
    • Intelligent Q&A: The openEuler Copilot System is accessible via web or shell.
      1. Workflow scheduling:
        • Atomic agent operations: Multiple agent operations can be combined into a multi-step workflow that is internally ordered and associated, and is executed as an inseparable atomic operation.
        • Real-time data processing: Data generated in each step of the workflow can be processed immediately and then transferred to the next step.
        • Intelligent interaction: When the openEuler Copilot System receives a vague or complex user instruction, it proactively asks the user to clarify and provide more details.
      2. Task recommendation:
        • Intelligent response: The openEuler Copilot System analyzes the semantics of the information entered and responds accordingly.
        • Intelligent guidance: The openEuler Copilot System comprehensively analyzes the execution status, function requirements, and associated tasks of the current workflow to provide next-step operation suggestions.
      3. RAG: The RAG technology used by openEuler Copilot System supports more document formats and content scenarios, and enhances Q&A quality while adding little system load.
      4. Corpus governance: Corpus governance is a core RAG capability. It imports corpuses into the knowledge base in a supported format using fragment relationship extraction, fragment derivative construction, and optical character recognition (OCR). This increases the retrieval hit rate. For details, see the openEuler Copilot System Intelligent Q&A Service User Guide.
    • Intelligent tuning: The openEuler Copilot System provides an intelligent shell entry point. Through this entry point, you can interact with the openEuler Copilot System in natural language and perform heuristic tuning operations such as performance data collection, system performance analysis, and system performance tuning.
    • Intelligent diagnosis:
      1. Inspection: The Inspection Agent checks for abnormalities of designated IP addresses and provides an abnormality list that contains associated container IDs and abnormal metrics (such as CPU and memory).
      2. Demarcation: The Demarcation Agent analyzes and demarcates a specified abnormality contained in the inspection result and outputs the top 3 metrics of the root cause.
      3. Location: The Detection Agent performs profiling location analysis on the root cause, and provides useful hotspot information such as the stack, system time, and performance metrics related to the root cause.
    • Intelligent vulnerability patching: The openEuler intelligent vulnerability patching tool provides automated vulnerability management and repair capabilities for the openEuler kernel repository. This feature analyzes the impact of vulnerabilities on openEuler versions using the /analyze command and enables minute-level automated patch pull request (PR) creation via the /create_pr command.
    • Intelligent container images: The openEuler Copilot System can invoke environment resources through a natural language, assist in pulling container images for local physical resources, and establish a development environment suitable for debugging on existing compute devices. This system supports three types of containers, and container images have been released on Docker Hub. You can manually pull and run these container images.
      1. SDK layer: encapsulates only the component libraries that enable AI hardware resources, such as CUDA and CANN.
      2. SDKs + training/inference frameworks: accommodates TensorFlow, PyTorch, and other frameworks (for example, tensorflow2.15.0-cuda12.2.0 and pytorch2.1.0.a1-cann7.0.RC1) in addition to the SDK layer.
      3. SDKs + training/inference frameworks + LLMs: encapsulates several models (for example, llama2-7b and chatglm2-13b) based on the second type of containers.
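
The multi-path enhanced RAG flow mentioned above can be pictured with a minimal, self-contained sketch. All names below (rewrite_query, keyword_retrieve, multi_path_rag) and the keyword-overlap scoring are illustrative stand-ins for the framework's real corpus governance, embedding retrieval, and reranking; this is a conceptual sketch, not the openEuler implementation.

```python
"""Conceptual sketch of multi-path retrieval with result comparison/merging.

All function names and the keyword scoring are illustrative; the real
framework relies on corpus governance, embedding models, and rerankers.
"""

CORPUS = [
    "openEuler supports the CANN and CUDA SDKs for AI development.",
    "sysHax accelerates LLM inference on Kunpeng plus xPU setups.",
    "The openEuler Copilot System assists with code generation and O&M.",
]

def rewrite_query(query: str) -> list[str]:
    # Prompt rewriting: expand the user query into several retrieval queries.
    return [query, query.lower(), f"openEuler {query}"]

def keyword_retrieve(query: str, top_k: int = 2) -> list[tuple[float, str]]:
    # Simplified retriever: score documents by keyword overlap with the query.
    terms = set(query.lower().split())
    scored = []
    for doc in CORPUS:
        overlap = len(terms & set(doc.lower().split()))
        if overlap:
            scored.append((overlap / len(terms), doc))
    return sorted(scored, reverse=True)[:top_k]

def multi_path_rag(query: str) -> str:
    # Multi-path retrieval: run every rewritten query, then merge and compare
    # candidate passages by their best score before building the prompt.
    best: dict[str, float] = {}
    for q in rewrite_query(query):
        for score, doc in keyword_retrieve(q):
            best[doc] = max(best.get(doc, 0.0), score)
    ranked = sorted(best.items(), key=lambda kv: kv[1], reverse=True)
    context = "\n".join(doc for doc, _ in ranked)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    print(multi_path_rag("How does sysHax accelerate inference?"))
```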

openEuler Embedded

openEuler 25.03 Embedded is designed for embedded applications and delivers significant progress in its southbound and northbound ecosystems and infrastructure. openEuler Embedded provides a closed-loop framework for operational technology (OT) applications such as manufacturing and robotics, in which innovations continuously optimize its embedded system software stack and ecosystem. For southbound compatibility, the "openEuler Embedded Ecosystem Expansion Initiative" has strengthened hardware support, including Kunpeng 920 and TaishanPi, and, in collaboration with MYIR, adapted STMicroelectronics' STM32MP257 high-performance microprocessor for industry applications. On the northbound front, capabilities have been enriched with industrial middleware and graphical middleware solutions, enabling practical implementations in manufacturing automation and robotics. Looking ahead, openEuler Embedded will work with community partners, users, and developers to expand support for new processor architectures such as LoongArch, enhance southbound hardware compatibility, and advance key capabilities including industrial middleware, embedded AI, edge computing, and simulation systems, establishing a comprehensive embedded software platform.

  • Southbound ecosystem: openEuler Embedded Linux supports mainstream processor architectures like AArch64, x86_64, AArch32, and RISC-V, and will extend support to LoongArch in the future. openEuler 24.03 and later versions have a rich southbound ecosystem and support chips from Raspberry Pi, HiSilicon, Rockchip, Renesas, TI, Phytium, StarFive, and Allwinner.
  • Embedded virtualization base: openEuler Embedded uses an elastic virtualization base that enables multiple OSs to run on a system-on-a-chip (SoC). The base incorporates a series of technologies including bare metal, embedded virtualization, lightweight containers, LibOS, trusted execution environment (TEE), and heterogeneous deployment.
    1. The bare metal hybrid deployment solution runs on OpenAMP to manage peripherals by partition at a high performance level; however, it delivers poor isolation and flexibility. This solution supports the hybrid deployment of UniProton/Zephyr/RT-Thread and openEuler Embedded Linux.
    2. Partitioning-based virtualization is an industrial-grade hardware partition virtualization solution that runs on Jailhouse. It offers superior performance and isolation but inferior flexibility. This solution supports the hybrid deployment of UniProton/Zephyr/FreeRTOS and openEuler Embedded Linux or of OpenHarmony and openEuler Embedded Linux.
    3. Real-time virtualization is available through two community hypervisors: ZVM, a real-time virtual machine monitor, and Rust-Shyper, a Type-1 embedded virtual machine monitor.
  • MICA deployment framework: The MICA deployment framework is a unified environment that masks the differences between the technologies that make up the embedded elastic virtualization base. Leveraging the multi-core capability of the hardware, it combines the general-purpose Linux OS with a dedicated real-time operating system (RTOS) so that the strengths of each OS are fully used. The MICA deployment framework covers lifecycle management, cross-OS communication, a service-oriented framework, and multi-OS infrastructure.
    • Lifecycle management provides operations to load, start, suspend, and stop the client OS.
    • Cross-OS communication provides a set of shared-memory-based communication mechanisms between different OSs (a conceptual sketch follows this list).
    • Service-oriented framework enables different OSs to provide their own services. For example, Linux provides common file system and network services, and the RTOS provides real-time control and computing.
    • Multi-OS infrastructure integrates OSs through a series of mechanisms, covering resource expression and allocation and unified build.
    The MICA deployment framework provides the following functions:
    • Lifecycle management and cross-OS communication for openEuler Embedded Linux and the RTOS (Zephyr or UniProton) in bare metal mode
    • Lifecycle management and cross-OS communication for openEuler Embedded Linux and the RTOS (FreeRTOS or Zephyr) in partitioning-based virtualization mode
  • Northbound ecosystem: Over 600 common embedded software packages can be built using openEuler. The soft real-time kernel responds to soft real-time interrupts within microseconds. The distributed soft bus system (DSoftBus) of openEuler Embedded integrates the DSoftBus and point-to-point authentication module of OpenHarmony, implementing interconnection between openEuler-based embedded devices and OpenHarmony-based devices, as well as between openEuler-based embedded devices. With iSula containers, openEuler and other OS containers can be deployed on embedded devices to simplify application porting and deployment. Embedded container images can be compressed to 5 MB and easily deployed in containers on other OSs.
  • UniProton: UniProton is an RTOS that features ultra-low latency and flexible MICA deployments. It is suited for industrial control because it supports both microcontroller units and multi-core CPUs. UniProton provides the following capabilities:
    • Compatible with processor architectures like Cortex-M, AArch64, x86_64, and riscv64, and supports M4, RK3568, RK3588, x86_64, Hi3093, Raspberry Pi 4B, Kunpeng 920, Ascend 310, and Allwinner D1s.
    • Connects with openEuler Embedded Linux on Raspberry Pi 4B, Hi3093, RK3588, and x86_64 devices in bare metal mode.
    • Can be debugged using GDB on openEuler Embedded Linux.
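
The shared-memory messaging idea behind MICA cross-OS communication can be illustrated with a small conceptual sketch. This is not MICA or OpenAMP code: it only shows a length-prefixed mailbox in a shared memory region, with two Python processes standing in for the Linux and RTOS sides, and the region name is hypothetical.

```python
"""Conceptual mailbox over shared memory (stand-in for cross-OS messaging).

Does not use the MICA/OpenAMP APIs; it only illustrates one side writing a
length-prefixed message into shared memory and the other side polling it.
"""
import struct
import time
from multiprocessing import Process, shared_memory

MAILBOX = "mica_demo_mailbox"   # hypothetical name for the shared region
SIZE = 4096                     # 4-byte length header followed by payload

def sender():
    shm = shared_memory.SharedMemory(name=MAILBOX)
    payload = b"start real-time control task"
    shm.buf[4:4 + len(payload)] = payload
    shm.buf[0:4] = struct.pack("<I", len(payload))   # publish by writing length
    shm.close()

def receiver():
    shm = shared_memory.SharedMemory(name=MAILBOX)
    while True:                                       # poll the mailbox
        (length,) = struct.unpack("<I", bytes(shm.buf[0:4]))
        if length:
            print("received:", bytes(shm.buf[4:4 + length]).decode())
            break
        time.sleep(0.01)
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(name=MAILBOX, create=True, size=SIZE)
    shm.buf[0:4] = struct.pack("<I", 0)               # mark mailbox empty
    r = Process(target=receiver)
    r.start()
    sender()
    r.join()
    shm.close()
    shm.unlink()
```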

epkg

epkg is a new software package manager that supports the installation and use of non-service software packages. It solves version compatibility issues so that users can install and run software of different versions on the same OS by using simple commands to create, enable, and switch between environments.

  • Multi-version compatibility: Enables installation of multiple software versions, resolving version conflicts.
  • Flexible installation modes: Supports both privileged (system-wide) and unprivileged (user-specific) installations, enabling minimal-footprint deployments and self-contained installations.
  • Environment management: Facilitates environment lifecycle operations (create, delete, activate, register, and view), supporting multiple environments with distinct software repositories. Enables multi-environment version control, with runtime registration for multiple environments and exclusive environment activation for development and debugging (a conceptual sketch of environment activation follows this list).
  • Environment rollback: Maintains operational history tracking and provides state restoration capabilities, allowing recovery from misoperations or faulty package installations.
  • Package management: Implements core package operations (install, remove, and query) with RPM/DNF-level functionality parity, meeting daily usage requirements for typical users and scenarios.
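
Environment activation of the kind epkg provides typically comes down to making one environment's binaries and libraries take precedence over another's. The sketch below is only a conceptual illustration of that mechanism via PATH manipulation; it does not invoke epkg itself, and the directory layout and environment names are hypothetical.

```python
"""Conceptual illustration of per-environment activation (not the epkg CLI).

Each environment owns its own prefix; "activating" it means putting that
prefix first on PATH so its package versions win over the system ones.
"""
import os
import subprocess

# Hypothetical layout: one prefix per environment, each with its own packages.
ENVS = {
    "py39-tools": "/opt/envs/py39-tools",
    "py311-tools": "/opt/envs/py311-tools",
}

def activated_env(name: str) -> dict[str, str]:
    """Return a process environment with the chosen prefix activated."""
    prefix = ENVS[name]
    env = os.environ.copy()
    env["PATH"] = f"{prefix}/bin:" + env.get("PATH", "")
    env["LD_LIBRARY_PATH"] = f"{prefix}/lib:" + env.get("LD_LIBRARY_PATH", "")
    return env

def run_in_env(name: str, command: list[str]) -> None:
    # Exclusive activation: only the selected environment leads PATH here.
    subprocess.run(command, env=activated_env(name), check=False)

if __name__ == "__main__":
    # The same command resolves to a different software version per environment.
    run_in_env("py39-tools", ["python3", "--version"])
    run_in_env("py311-tools", ["python3", "--version"])
```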

GCC Compilation and Linking Acceleration

To improve the compilation efficiency of openEuler software packages and enhance CI pipeline and developer productivity, optimization techniques for C/C++ components are implemented through compiler and linker enhancements. The combination of GCC 12.3 with profile-guided optimization (PGO) and link-time optimization (LTO), alongside the modern mold linker, reduced the total compilation time for the top 90+ software packages by approximately 9.5%. The following key capabilities are supported (a minimal build sketch follows the list):

  1. GCC 12.3 is configured to generate binaries with PGO and LTO, accelerating the compilation process.
  2. Applications specified in the allowlist automatically switch to the mold linker to optimize linking efficiency.
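
As a rough illustration of how PGO, LTO, and the mold linker combine, the script below drives a two-phase GCC build of a tiny generated C program: compile with instrumentation, run it to collect profiles, then rebuild with -fprofile-use, -flto, and -fuse-ld=mold. This is a minimal sketch, not the openEuler packaging pipeline, and it assumes gcc and mold are installed and on PATH.

```python
"""Minimal two-phase PGO + LTO build sketch (assumes gcc and mold on PATH).

Not the openEuler build pipeline; it only demonstrates the compiler and
linker options the feature relies on.
"""
import pathlib
import subprocess

SRC = pathlib.Path("hot_loop.c")
SRC.write_text("""
#include <stdio.h>
int main(void) {
    long sum = 0;
    for (long i = 0; i < 50000000; i++) sum += i % 7;
    printf("%ld\\n", sum);
    return 0;
}
""")

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Phase 1: build an instrumented binary and run it to collect .gcda profiles.
run(["gcc", "-O2", "-flto", "-fprofile-generate", str(SRC), "-o", "hot_loop_instr"])
run(["./hot_loop_instr"])

# Phase 2: rebuild using the collected profiles, with LTO and the mold linker.
run(["gcc", "-O2", "-flto", "-fprofile-use", "-fuse-ld=mold",
     str(SRC), "-o", "hot_loop_opt"])
run(["./hot_loop_opt"])
```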

Kernel Innovations

openEuler 25.03 runs on Linux kernel 6.6 and inherits the competitive advantages of community versions and innovative features released in the openEuler community.

  • Kernel replication: This feature optimizes Linux kernel performance bottlenecks in non-uniform memory access (NUMA) architectures. Research shows critical data center applications like Apache, MySQL, and Redis experience significant performance impacts from kernel operations: kernel execution accounts for 61% of application CPU cycles, 57% of total instructions executed, 61% of I-cache misses, and 46% of I-TLB misses. Traditional Linux kernels restrict code segments, read-only data segments, and kernel page tables (swapper_pg_dir) to primary NUMA nodes without migration capability. This forces frequent cross-NUMA operations during system calls when multi-threaded applications are deployed across multiple NUMA nodes, increasing memory access latency and degrading system performance. The kernel replication feature extends the pgd global page directory table in mm_struct by automatically creating NUMA-local replicas of kernel code, data segments, and page tables during kernel initialization. This mechanism maps identical kernel virtual addresses to physical addresses within their respective NUMA nodes, enhancing memory locality and reducing cross-NUMA overhead. The implementation supports vmalloc, dynamic module loading, dynamic instruction injection mechanisms (Kprobe, KGDB, and BPF), security features (KPTI, KASLR, and KASAN), and 64 KB huge pages. A new boot-time cmdline configuration option (disabled by default) enables dynamic control for compatibility management. This feature benefits high-concurrency, multi-threaded server workloads.
  • HAOC 3.0 security feature: Hardware-assisted OS compartmentalization (HAOC) leverages x86 and Arm processor capabilities to implement a dual-architecture kernel design. It creates isolated execution environments (IEE) within the kernel to prevent attackers from performing lateral movement and privilege escalation. The current version establishes IEE as a protected domain where sensitive resources can be incrementally isolated. These resources become accessible exclusively through controlled IEE interfaces, preventing unauthorized access by standard kernel code.

NestOS

NestOS is a community cloud OS that uses nestos-assembler for quick integration and build. It runs rpm-ostree and Ignition tools over a dual rootfs and atomic update design, and enables easy cluster setup in large-scale containerized environments. Compatible with Kubernetes and OpenStack, NestOS also reduces container overheads.

  • Out-of-the-box availability: integrates popular container engines such as iSulad, Docker, and Podman to provide lightweight and tailored OSs for the cloud.
  • Easy configuration: uses the Ignition utility to install and configure a large number of cluster nodes with a single configuration file (a minimal example follows this list).
  • Secure management: runs rpm-ostree to manage software packages and works with the openEuler software package source to ensure secure and stable atomic updates.
  • Hitless node updating: uses Zincati to provide automatic node updates and reboot without interrupting services.
  • Dual rootfs: uses a dual rootfs design for active/standby switchovers to ensure integrity and security during system operation.
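
To give a feel for Ignition-style single-file provisioning, the sketch below emits a minimal Ignition-format JSON config that adds an SSH key for the core user and writes a hostname file. The spec version and field names should be checked against the Ignition documentation for the NestOS release in use; the key value and hostname are placeholders.

```python
"""Emit a minimal Ignition-style config.

Verify the spec version and fields against the Ignition documentation for
your NestOS release; the SSH key and hostname below are placeholders.
"""
import json

config = {
    "ignition": {"version": "3.3.0"},          # assumed spec version
    "passwd": {
        "users": [
            {
                "name": "core",
                "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@example"],
            }
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/etc/hostname",
                "mode": 420,                    # octal 0644
                "contents": {"source": "data:,nestos-node-01"},
            }
        ]
    },
}

with open("node.ign", "w") as f:
    json.dump(config, f, indent=2)
print("wrote node.ign; pass it to every node at first boot")
```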

oeAware Enhancements

oeAware is a framework that provides low-load collection, sensing, and tuning services that are triggered when defined system behaviors are detected on openEuler. The framework divides the tuning process into three layers: collection, sensing, and tuning. The layers are associated through subscription and developed as plugins, overcoming the limitations of traditional tuning techniques that run independently and are statically enabled or disabled. Every oeAware plugin is a dynamic library that uses the oeAware interfaces. Each plugin comprises multiple instances, each of which contains several topics and delivers collection or sensing results to other plugins or external applications for tuning and analysis. openEuler 25.03 introduces the transparent_hugepage_tune and preload_tune plugins.

  • The SDK enables subscription to plugin topics, with a callback function handling data from oeAware. This allows external applications to implement tailored functionalities, such as cross-cluster information collection or local node analysis (an illustrative sketch of this pattern follows this list).
  • The performance monitoring unit (PMU) information collection plugin gathers performance records from the system PMU.
  • The Docker information collection plugin retrieves specific parameter details about the Docker environment.
  • The system information collection plugin captures kernel parameters, thread details, and resource information (CPU, memory, I/O, network) from the current environment.
  • The thread sensing plugin monitors key information about threads.
  • The evaluation plugin examines system NUMA and network information during service operations, suggesting optimal tuning methods.
  • The system tuning plugins include stealtask, which enhances CPU tuning; smc_tune, which leverages shared memory communication in the kernel space to boost network throughput and reduce latency; xcall_tune, which bypasses non-essential code paths to minimize system call processing overhead; transparent_hugepage_tune, which enables transparent huge pages to boost the TLB hit ratio; and preload_tune, which seamlessly loads dynamic libraries.
  • The Docker tuning plugin addresses CPU performance issues during sudden load spikes by utilizing the CPU burst feature.
  Note the following constraints:
  • smc_tune: SMC acceleration must be enabled before the server-client connection is established. This feature is most effective in scenarios with numerous persistent connections.
  • Docker tuning is not compatible with Kubernetes containers.
  • xcall_tune: The FAST_SYSCALL kernel configuration option must be activated.
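
The subscription-and-callback pattern the oeAware SDK exposes can be sketched generically. The class, method, and topic names below are illustrative only and are not the real oeAware SDK API; the point is simply that a consumer subscribes to a plugin topic and receives collected data through a callback.

```python
"""Illustrative topic/callback pattern (NOT the real oeAware SDK API).

Shows how an external application might subscribe to a collection topic and
process the data it receives, in the spirit of the SDK described above.
"""
from dataclasses import dataclass
from typing import Callable

@dataclass
class TopicData:
    topic: str
    payload: dict

class FakeAwareClient:
    """Hypothetical stand-in for an SDK client object."""
    def __init__(self):
        self._subs: dict[str, list[Callable[[TopicData], None]]] = {}

    def subscribe(self, topic: str, callback: Callable[[TopicData], None]):
        self._subs.setdefault(topic, []).append(callback)

    def _publish(self, data: TopicData):
        for cb in self._subs.get(data.topic, []):
            cb(data)

def on_pmu_sample(data: TopicData):
    # Callback handling collected data, e.g. for local node analysis.
    print(f"[{data.topic}] cycles={data.payload['cycles']}")

if __name__ == "__main__":
    client = FakeAwareClient()
    client.subscribe("pmu_sample_topic", on_pmu_sample)   # illustrative topic name
    # Simulate the framework delivering a collection result.
    client._publish(TopicData("pmu_sample_topic", {"cycles": 123456}))
```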

A-Ops with CVE Fixes and Configuration Source Tracing

A-Ops empowers intelligent O&M through interactive dialogs and wizard-based operations. The intelligent interactive dialogs, featuring CVE prompts and fixes, configuration source tracing, configuration exception tracing, and configuration baseline synchronization, enable the O&M assistant to streamline routine O&M operations. A-Ops integrates the intelligent O&M assistant based on the openEuler Intelligent Interaction Platform for intelligent CVE fixing and configuration source tracing.

  • CVE fixing: A-Ops displays cluster CVE status, prompts high-score and high-severity CVEs, and offers corresponding fixes. You can apply these fixes and check results using the assistant or WebUI.
  • Configuration source tracing: You can use the assistant to find the machines with abnormal baseline configurations. The assistant shows these machines and incorrect configuration items. It then intelligently gives you summaries and suggests fixes. You can correct the configurations using the assistant or WebUI.

k8s-install

k8s-install is an online utility designed to provision cloud-native infrastructure on a wide range of Linux distributions and architectures. It also serves as a tool for creating offline installation packages. It supports installation, deployment, and secure updates of cloud-native infrastructure suites across multiple dimensions with just a few clicks, greatly reducing deployment and adaptation time while ensuring a standardized and traceable workflow. It addresses the following issues in current practice:

  • openEuler suffers from outdated cloud-native toolchain versions and lacks maintenance for multiple version baselines (such as Kubernetes 1.20, 1.25, and 1.29) within the same release. Consequently, released branches cannot be updated to major versions, requiring users to independently adapt and maintain later versions to meet business requirements.
  • Service parties commonly use tools like Ansible to deploy cloud infrastructure, often relying on non-standard packages, static binaries, and tarballs instead of distribution-managed packages. This practice inherently lacks support for secure CVE fixes, thereby posing security risks.
  • Version synchronization between offline and online installations is challenging. Furthermore, upgrading or modifying offline packages is difficult.
  • The lack of standardized installation and deployment processes results in inconsistent component versions, leading to incompatibilities and configuration differences that make issue resolution time-consuming and root cause analysis difficult.
To address these issues, k8s-install comprises the following components:

  • The installer detects, installs, and updates the runC, containerd, Docker, and Kubernetes components and their dependent system libraries.
  • The configuration library stores configuration file templates for Docker and Kubernetes software.
  • The package library stores RPM packages for various versions and architectures of runC, containerd, Docker, Kubernetes, and their dependent system libraries.
  • The image library stores images required for Kubernetes software startup, such as various versions of kube-apiserver, kube-scheduler, etcd, and coredns. It also includes images for basic network plugins like Flannel.
  • The publisher encapsulates the latest code scripts, RPM packages, images, and configurations to create online and offline installation packages. Written in Bash, the main k8s-install program does not need to be compiled or linked. Its online installation package is encapsulated into an RPM package, built using spec files.

k8s-install Installers

k8s-install is a tool used to install and securely update cloud-native infrastructure. It delivers the following benefits:

  • Version adaptation: openEuler suffers from outdated cloud-native toolchain versions from the upstream, and released branches cannot be updated to major versions, requiring users to independently adapt and maintain later versions to meet business requirements. k8s-install supports multiple baseline versions to meet service requirements, preventing deployment failures or function exceptions caused by version incompatibilities.
  • Improved deployment efficiency and standardization: The lack of standardized installation and deployment processes across departments or projects leads to inconsistent component versions, resulting in frequent adaptation issues and time-consuming resolutions. k8s-install enables standard deployment, ensuring component version compatibility, reducing fault locating time, and improving overall deployment efficiency.
  • Enhanced security and maintainability: Service parties often deploy static binaries and tarballs, which lack support for secure CVE fixes. k8s-install can fix CVEs in a timely manner, ensuring system security and stability. In addition, the code for all components has been committed to both the company's internal repository and the openEuler repository, which facilitates version tracing and fault locating and enhances system maintainability.
  • Promoting open source and collaboration: By establishing and actively maintaining a repository within the openEuler community, k8s-install promotes technology sharing, fosters the growth of the community ecosystem, attracts more developers, enhances project influence, and promotes the continuous progress of cloud-native technologies.

The installer provides the following core functions:

  • Multi-version support: It supports multiple baseline Kubernetes versions, including 1.20, 1.25, and 1.29, to meet the version requirements of various business scenarios and enable on-demand deployment.
  • Multi-architecture support: With compatibility for various architectures including x86_64, AArch64, and LoongArch64, it is suitable for diverse hardware environments, thereby expanding its application scope.
  • Multi-component management: It integrates installation and configuration of Go, runC, containerd, Docker, Kubernetes, and related components, streamlining the deployment of complex components and improving efficiency.
  • Online and offline deployment: An online installer k8s-install and an offline installer k8s-install-offline are available. Combined with the publish.sh publisher, these installers ensure flexible and stable deployment across various network conditions.

k8s-install Publisher

publish.sh is the publisher in the k8s-install tool chain. It has the following advantages:

  • Ensuring offline deployment: In network-restricted or offline environments, such as certain data centers or specialized production setups, direct access to online repositories is not possible. publish.sh can generate complete offline installation packages, ensuring successful cloud-native infrastructure deployment in these scenarios and broadening the application scope of the tools.
  • Efficient version iteration and release management: With the continuous updates of the k8s-install tool and its components, publish.sh enables automated build, test, and release processes. This enhances the efficiency of version iteration, ensures timely and accurate delivery of new versions to users, and facilitates the ongoing evolution of the system.
  • Improving the stability and reliability of resource acquisition: Online repositories can face issues with package or image availability due to network fluctuations or delayed updates. publish.sh fetches resources from official or trusted online repositories and ensures their stability and reliability through integration and testing, preventing deployment failures caused by resource issues.
  • Facilitating multi-team collaboration and resource synchronization: In large projects, different teams may manage various components or modules. publish.sh can integrate and publish the updates from each team, ensuring resource consistency across teams. It facilitates collaboration and improves overall project progress and quality.

Functions

  • Offline package generation and release: It pulls the latest software packages and images from online Yum and image repositories, combines them with the latest configuration files and installer, and packages them into an offline .tgz installation package to meet the deployment needs of offline environments.
  • Online code update and release: It uploads the updated code to the Git repository, selects the configuration library and installer for source code packaging, uploads it to the OBS server for official compilation after local build testing, and publishes it to the Yum repository to achieve online resource update and synchronization.

Trace IO

Trace IO (TrIO) is designed to optimize the on-demand loading of container images using EROFS over fscache. It achieves this by accurately tracking I/Os during container startup and efficiently orchestrating those I/Os into container images, improving the cold startup process of containers. Compared with existing container image loading solutions, TrIO can significantly reduce the cold startup latency of container jobs and improve bandwidth utilization.

TrIO comprises kernel-space and user-space modules. The kernel-space module includes adaptations within the EROFS file system. The user-space module provides tools for capturing I/O traces during container runtime and offers adaptation guidance based on the Nydus snapshotter. This allows container users to leverage TrIO without modifying containerd or runC, ensuring compatibility with existing container management tools.

The core advantage of TrIO lies in its ability to aggregate I/O operations during on-demand container loading. By orchestrating the runtime I/O traces of container jobs, TrIO accurately fetches the necessary I/O data during container execution, greatly improving the efficiency of pulling image data during container startup and thereby achieving low latency.

TrIO's functionality comprises two main aspects: capturing container runtime I/Os and utilizing those I/Os during container startup. Container runtime I/Os are captured by using eBPF to trace I/O operations in the file system, which makes it possible to obtain the I/O read requests issued during container job startup and to orchestrate the corresponding data into a minimal runtime image. During container startup, a custom snapshotter plugin pulls the minimal runtime image using large I/O operations and imports it into the kernel. Subsequently, all I/O operations during container job execution are preferentially read from this minimal runtime image. A simplified trace-capture sketch follows the list below. Compared with existing on-demand container loading solutions, TrIO has the following advantages:

  • No I/O amplification: TrIO accurately captures runtime I/Os and uses them for job startup, ensuring that I/Os are not amplified during container job startup.
  • I/O aggregation: During container job startup, TrIO uses large I/O operations to pull all the necessary data for the startup process to the container node at once. This improves the efficiency of loading image data while reducing startup latency.
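
To give a flavor of capturing file-level I/O during container startup with eBPF, the sketch below uses the BCC Python bindings to record which files are opened while a container starts. It is a simplified stand-in for TrIO's trace capture (TrIO traces read requests at the file system/EROFS level and records data extents, not just openat calls) and assumes a reasonably recent kernel, an installed BCC, and root privileges.

```python
"""Simplified I/O-trace capture with BCC (not the actual TrIO tooling).

Records which files are opened system-wide while a container starts; TrIO
itself traces read requests in the file system and orchestrates the data
into a minimal runtime image.
"""
from bcc import BPF  # requires the bcc package and root privileges

PROG = r"""
#include <uapi/linux/ptrace.h>

struct data_t {
    u32 pid;
    char comm[16];
    char fname[256];
};
BPF_PERF_OUTPUT(events);

TRACEPOINT_PROBE(syscalls, sys_enter_openat) {
    struct data_t data = {};
    data.pid = bpf_get_current_pid_tgid() >> 32;
    bpf_get_current_comm(&data.comm, sizeof(data.comm));
    bpf_probe_read_user_str(&data.fname, sizeof(data.fname), args->filename);
    events.perf_submit(args, &data, sizeof(data));
    return 0;
}
"""

b = BPF(text=PROG)
seen = set()

def handle(cpu, raw, size):
    ev = b["events"].event(raw)
    path = ev.fname.decode(errors="replace")
    if path not in seen:
        seen.add(path)
        print(f"pid={ev.pid} comm={ev.comm.decode(errors='replace')} {path}")

b["events"].open_perf_buffer(handle)
print("Tracing openat() during container startup... Ctrl-C to stop.")
try:
    while True:
        b.perf_buffer_poll()
except KeyboardInterrupt:
    print(f"{len(seen)} distinct files touched")
```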

Kuasar Integration with virtCCA

The Kuasar confidential container leverages the virtCCA capability of Kunpeng 920 processors. It connects northbound to the iSulad container engine and southbound to Kunpeng virtCCA hardware, enabling seamless integration of Kunpeng confidential computing with the cloud-native technology stack. Kuasar fully utilizes the advantages of the Sandboxer architecture to deliver a high-performance, low-overhead confidential container runtime. Kuasar-sandboxer integrates the virtCCA capability of openEuler QEMU to manage the lifecycle of confidential sandboxes, allowing users to create confidential sandboxes on confidential hardware and ensuring containers run within a trusted execution environment (TEE). Kuasar-task offers a Task API for iSulad to manage lifecycles of containers within secure sandboxes. Container images are securely pulled into encrypted sandbox memory through Kuasar-task's image retrieval capability.

Technical Constraints

  1. Remote attestation support of Kuasar is planned for integration via secGear in the SP versions of openEuler 24.03 LTS.
  2. Image encryption/decryption capabilities will be added after secGear integration.

Feature Description

Kuasar has expanded its capabilities to include confidential container support while maintaining existing secure container functionality. You can enable this feature through iSulad runtime configuration.

  • Native integration with the iSulad container engine preserves Kubernetes ecosystem compatibility.
  • Hardware-level protection via Kunpeng virtCCA technology ensures confidential workloads are deployed in trusted environments.

vKernel for Advanced Container Isolation

The virtual kernel (vKernel) architecture represents a breakthrough in container isolation, addressing the inherent limitations of shared-kernel architectures while preserving container performance efficiency. vKernel creates independent system call tables and file permission tables to enhance foundational security. It implements isolated kernel parameters, enabling containers to customize both macro-level resource policies and micro-level resource configurations. By partitioning kernel data ownership, leveraging hardware features to protect kernel privilege data, and building isolated kernel page tables for user data protection, vKernel further reinforces security. Future iterations will explore kernel data related to performance interference to strengthen container performance isolation capabilities.

secGear with Secure Key Hosting for Confidential Container Images

The remote attestation service of secGear provides secure key hosting capabilities for confidential container images, establishing a management system that encompasses secure key storage, dynamic fine-grained authorization, and cross-environment collaborative distribution. By integrating zero-trust policies and automated auditing capabilities, secGear ensures data confidentiality and operational traceability while optimizing the balance between key governance and operational costs. This delivers a unified "encrypt by default, decrypt on demand" security framework for cloud-native environments. secGear combines remote attestation technologies to build a layered key hosting architecture.

Attestation service

A centralized key hosting server leverages the remote attestation mechanism of TEEs to securely store and manage image encryption keys throughout their lifecycle. It offers authorized users granular policy configuration interfaces for tailored access control.

Attestation agent

Lightweight attestation agent components deployed within confidential compute nodes expose local RESTful APIs. The confidential container runtime invokes these APIs to validate the integrity of the confidential execution environment and establish secure dynamic sessions with the server, enabling encrypted key transmission.

RA-TLS

RA-TLS integrates remote attestation of confidential computing into TLS negotiation procedures, ensuring secure transmission of sensitive data into TEEs while simplifying secure channel establishment for confidential computing workloads, thereby lowering adoption barriers.

One-way authentication

In deployments where TLS servers operate within confidential environments and clients reside in regular environments, RA-TLS validates the legitimacy of the server confidential environment and applications through remote attestation before TLS key negotiation.

Two-way authentication

For scenarios where both TLS servers and clients operate within confidential environments, RA-TLS enforces mutual verification of peer environments and applications via remote attestation before TLS key negotiation.

Technical Constraints

Confidential computing environments must maintain network accessibility (such as virtCCA-enabled configurations).

openAMDC for High-Performance In-Memory Data Caching and KV Storage

openAMDC stands for open advanced in-memory data cache. It stores and caches data in memory to accelerate access, enhance application concurrency, and minimize latency, and it can also serve as a message broker and an in-memory database.

Feature Description

  • Core capabilities: openAMDC, compatible with the Redis Serialization Protocol (RESP), delivers comprehensive caching for strings, lists, hashes, and sets while supporting active-standby, cluster, and sentinel deployment options.
  • Architectural features: openAMDC employs a multi-threaded architecture to significantly enhance in-memory caching performance, while integrating a hot-cold data tiering mechanism to enable hybrid memory-drive storage.
    1. Multi-thread architecture: During initialization, openAMDC spawns multiple worker threads, each running an event loop for network monitoring. By enabling SO_REUSEPORT for socket listeners, kernel-level load balancing is implemented across threads sharing the same port. This approach eliminates resource contention from shared listening sockets through dedicated per-thread socket queues, substantially improving concurrency throughput (a minimal illustration of this pattern follows this list).
    2. Data exchange architecture: Built upon the multi-threaded foundation, openAMDC implements data exchange capabilities supporting hybrid memory-drive storage, effectively optimizing total cost of ownership while maintaining performance efficiency.
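
The SO_REUSEPORT pattern described above can be demonstrated with a short sketch: several worker processes each create their own listening socket bound to the same port, and the kernel spreads incoming connections across them. This is a generic Linux illustration of the mechanism, not openAMDC code, and the port number is arbitrary.

```python
"""Minimal SO_REUSEPORT demo (Linux): per-worker listening sockets on one port.

Each worker owns its own socket and accept queue, so there is no contention
on a shared listener; the kernel load-balances incoming connections across
workers. A generic illustration of the pattern, not openAMDC itself.
"""
import os
import socket
from multiprocessing import Process

PORT = 16379  # arbitrary demo port

def worker():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)  # key option
    srv.bind(("0.0.0.0", PORT))
    srv.listen(128)
    while True:
        conn, _addr = srv.accept()         # each worker has its own queue
        conn.sendall(f"+handled by pid {os.getpid()}\r\n".encode())
        conn.close()

if __name__ == "__main__":
    procs = [Process(target=worker, daemon=True) for _ in range(4)]
    for p in procs:
        p.start()
    print(f"4 workers listening on port {PORT}; connect repeatedly to see "
          "different pids answer")
    for p in procs:
        p.join()
```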

OpenStack Antelope

OpenStack is an open source project that provides a cloud computing management platform. It aims to deliver scalable and flexible cloud computing services to support private and public cloud environments.

Feature Description

OpenStack offers a series of services and tools to help build and manage public, private, and hybrid clouds. The service types include:

  • Compute service: creates, manages, and monitors VMs. It empowers users to quickly create, deploy, and destroy VMs and container instances, enabling flexible management and optimal utilization of computing resources (an SDK usage sketch follows this list).
  • Storage service: provides object storage, block storage, file storage, and other storage. Block storage services, such as Cinder, allow users to dynamically allocate and manage persistent block storage devices, such as VM drives. Object storage services, such as Swift, provide a scalable and distributed object storage solution, facilitating storage of large amounts of unstructured data.
  • Network service: empowers users to create, manage, and monitor virtual networks, and provides capabilities for topology planning, subnet management, and security group configuration. These features enable building of complex network structures while ensuring security and reliability.
  • Identity authentication service: provides comprehensive identity management and access control capabilities, including user, role, and permissions management. It ensures secure access and management of cloud resources while safeguarding data confidentiality and integrity.
  • Image service: enables image creation, management, and sharing through image uploading, downloading, and deletion. Users can perform management operations on images with ease and quickly deploy VM instances.
  • Orchestration service: automates application deployment and management, and facilitates service collaboration and integration. Orchestration services like Heat help streamline application deployment and management by automatically performing related tasks based on user-defined templates.
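
As an example of how the compute, image, and network services are consumed together, the sketch below uses the openstacksdk Python library to boot a VM. The cloud name, image, flavor, and network values are placeholders and must exist in your OpenStack (Antelope) deployment; credentials are assumed to come from clouds.yaml or OS_* environment variables.

```python
"""Boot a VM through openstacksdk (placeholder names; adapt to your cloud).

Assumes credentials are configured via clouds.yaml or OS_* environment
variables for the cloud referenced below.
"""
import openstack

# "mycloud" and the image/flavor/network names below are placeholders.
conn = openstack.connect(cloud="mycloud")

image = conn.image.find_image("openEuler-25.03")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)   # block until ACTIVE
print("VM active:", server.name, server.status)
```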

openEuler DevStation

openEuler DevStation is a Linux desktop OS built for developers, streamlining workflows while ensuring ecosystem compatibility. The latest release delivers major upgrades across three dimensions: supercharged toolchain, smarter GUI, and extended hardware support. These improvements create a more powerful, secure, and versatile development platform.

Feature Description

  • Developer-centric community toolchain

    1. Comprehensive development suite: Pre-configured with VSCodium (an open source, telemetry-free IDE) and development environments for major languages including Python, Java, Go, Rust, and C/C++.
    2. Enhanced tool ecosystem: Features innovative tools like oeDeploy for seamless deployment, epkg for extended package management, DevKit utilities, and an AI-powered coding assistant, delivering complete workflow support from environment configuration to production-ready code.
    3. oeDevPlugin extension: A specialized VSCodium plugin for openEuler developers, providing visual issue/PR dashboards, quick repository cloning and PR creation, automated code quality checks (such as license headers and formatting), and real-time community task tracking.
    4. Intelligent assistant: Generates code from natural language prompts, creates API documentation with a few clicks, and explains Linux commands, with a privacy-focused offline operation mode.
  • Enhanced GUI and productivity suite

    1. Smart navigation and workspace: Features an adaptive navigation bar that intelligently organizes shortcuts for development tools, system utilities, and common applications—all with customizable workspace layouts.
    2. Built-in productivity applications: Comes with the Thunderbird email client pre-installed for seamless office workflows.
  • Hardware compatibility upgrades

    1. Notebook-ready support: Comprehensive compatibility with modern laptop components, including precision touchpads, Wi-Fi 6/Bluetooth stacks, and multi-architectural drivers, delivering 20% faster AI and rendering workloads.
    2. Raspberry Pi DevStation image: Provides an Arm-optimized development environment out of the box, featuring a lightweight desktop environment with pre-installed IoT development tools (VSCodium and oeDevPlugin) and accelerated performance for Python scientific computing libraries like NumPy and pandas.

oeDeploy for Simplified Software Deployment

oeDeploy is a lightweight yet powerful deployment tool that accelerates software environment setup across single-node and distributed systems.

Feature Description

  • Universal deployment: Seamlessly handles both standalone and clustered deployments through automation, eliminating manual processes and slashing setup times.
  • Pre-built software solutions: Comes with optimized deployment solutions for industry-standard software, with continuous expansion through a growing plugin ecosystem.
  • Customizable architecture: Features an open plugin framework that empowers developers to build tailored deployment solutions aligned with their unique technical requirements.
  • Developer-centric design: Combines robust CLI capabilities with upcoming GUI tools and a plugin marketplace, letting developers concentrate on innovation rather than infrastructure.