Feature Description
Container Technology
NestOS provides computing resources for applications through a containerized computing environment. Applications share the system kernel and resources but are isolated from one another. This means that applications are no longer installed directly in the OS; instead, they run in containers through Docker. This greatly reduces coupling among the OS, applications, and the running environment. Compared with the traditional application deployment mode, the NestOS cluster offers more flexible and convenient application deployment, less interference between application running environments, and easier OS maintenance.
rpm-ostree
System Upgrade
rpm-ostree is a hybrid image/package system that combines RPM and OSTree. It provides RPM-based software package installation and management, and OSTree-based OS update and upgrade. rpm-ostree treats both operations as updates to the OS, and each update is applied like a transaction: this ensures that an update either succeeds completely or fails completely, and it allows the system to be rolled back to the state before the update.
When updating the OS, rpm-ostree keeps two bootable deployments: one before the update and one after it. The update takes effect only after the OS is restarted. If an error occurs during software installation or upgrade, the rpm-ostree rollback command reverts NestOS to the previous deployment. The /ostree/ and /boot/ directories of NestOS hold the OSTree repository environment and indicate which OSTree deployment is booted.
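On a NestOS host, these transactional operations are driven by the rpm-ostree command line. A typical sequence looks like the following sketch (it assumes a configured NestOS host with an update available):

```shell
# Show the current and pending deployments (the booted one is marked).
rpm-ostree status

# Stage an update as a new deployment; the running system is not modified.
rpm-ostree upgrade

# Boot into the new deployment.
systemctl reboot

# If the new deployment misbehaves, revert to the previous one and reboot.
rpm-ostree rollback --reboot
```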
File System
In the rpm-ostree file system layout, only the /etc and /var directories are writable. Data in the /var directory is never touched by upgrades and is shared across them. During a system upgrade, rpm-ostree takes the new default /etc and applies your local changes on top. This means that upgrades still receive new default files in /etc, which is a critical feature.
OSTree is designed to install multiple versions of multiple independent operating systems in parallel. OSTree relies on a new top-level ostree directory; it can in fact install in parallel inside an existing OS or distribution occupying the physical / root. On each client machine, there is an OSTree repository stored in /ostree/repo and a set of deployments stored in /ostree/deploy/$STATEROOT/$CHECKSUM. Each deployment is primarily composed of a set of hard links into the repository. This means each version is deduplicated: an upgrade costs disk space proportional only to the new files, plus some constant overhead.
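The hard-link deduplication described above can be sketched in a few lines of Python (the directory and file names are made up for illustration; OSTree itself manages a content-addressed object store):

```python
import os
import tempfile

# Sketch of OSTree-style deduplication: two "deployments" hard-link the
# same file content from a shared repository, so it is stored only once.
repo = tempfile.mkdtemp()

# One content object in the "repository".
blob = os.path.join(repo, "object-usr-bin-tool")
with open(blob, "w") as f:
    f.write("binary content")

# Two "deployments", each a set of hard links into the repository.
deploy_a = os.path.join(repo, "deploy-a")
deploy_b = os.path.join(repo, "deploy-b")
os.mkdir(deploy_a)
os.mkdir(deploy_b)
os.link(blob, os.path.join(deploy_a, "tool"))
os.link(blob, os.path.join(deploy_b, "tool"))

# All three names share one inode: the content exists once on disk.
st = os.stat(blob)
print(st.st_nlink)  # 3
print(os.stat(os.path.join(deploy_a, "tool")).st_ino == st.st_ino)  # True
```

Because deployments are just link farms, creating a new deployment from mostly-unchanged content is cheap, which is what makes keeping two bootable deployments affordable.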
The model OSTree emphasizes is that read-only OS content is kept in /usr; OSTree comes with code to create a read-only Linux bind mount to prevent inadvertent corruption. For a given OS, there is exactly one writable /var directory shared between all of its deployments. The OSTree core code does not touch content in this directory; it is up to each operating system to decide how to manage and upgrade the state there.
OS Extensions
NestOS keeps the base image as simple and small as possible for security and maintainability reasons. However, in some cases it is necessary to add software to the base OS itself. For example, drivers or VPN software are potential candidates because they are harder to containerize. These software packages extend the functionality of the base OS rather than providing runtimes for user applications. For this reason, rpm-ostree treats these packages as extensions. That said, there are no restrictions on which packages you can actually install. By default, packages are downloaded from the openEuler repositories.
To layer a software package, write a systemd unit that executes the rpm-ostree command to install the desired package. The changes are added to a new deployment, which takes effect after a reboot.
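Such a unit might look like the following sketch (the package name strace is purely illustrative; any package from the configured repositories could be layered the same way):

```ini
# layer-strace.service -- a sketch of a one-shot layering unit.
[Unit]
Description=Layer strace with rpm-ostree
Wants=network-online.target
After=network-online.target
# Skip the unit if the package has already been layered.
ConditionPathExists=!/usr/bin/strace

[Service]
Type=oneshot
RemainAfterExit=yes
# Stage the layered package in a new deployment.
ExecStart=/usr/bin/rpm-ostree install --allow-inactive strace

[Install]
WantedBy=multi-user.target
```

The unit only stages the change; the layered package becomes active in the new deployment after the next reboot.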
nestos-installer
nestos-installer helps with NestOS installation. It provides the following functions:
(1) Installing the OS to a target disk, optionally customizing it with an Ignition configuration or first-boot kernel parameters (nestos-installer install)
(2) Downloading and verifying an OS image for various cloud, virtualization, or bare metal platforms (nestos-installer download)
(3) Listing NestOS images available for download (nestos-installer list-stream)
(4) Embedding an Ignition configuration in a live ISO image to customize the running system that boots from it (nestos-installer iso ignition)
(5) Wrapping an Ignition configuration in an initrd image that can be appended to the live PXE initramfs to customize the running system that boots from it (nestos-installer pxe ignition)
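As a sketch, typical invocations of these subcommands look like the following; the device path, file names, and flags shown are illustrative assumptions (check nestos-installer --help for the exact options in your release):

```shell
# Install NestOS to a target disk, applying an Ignition configuration
# on first boot (/dev/sda and config.ign are placeholders).
nestos-installer install /dev/sda --ignition-file config.ign

# List the NestOS images available for download.
nestos-installer list-stream

# Embed an Ignition configuration into a live ISO image.
nestos-installer iso ignition embed -i config.ign nestos-live.iso
```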
Zincati
Zincati is an auto-update agent for NestOS hosts. It works as a client for Cincinnati and rpm-ostree, taking care of automatically updating/rebooting machines. Zincati has the following features:
(1) Agent for continuous automatic updates, with support for phased rollouts
(2) Runtime customization via TOML drop-ins, allowing users to override the default configuration
(3) Multiple update strategies
(4) Local maintenance windows on a weekly schedule for planned upgrades
(5) Tracks and exposes Zincati internal metrics to Prometheus to ease monitoring tasks across a large fleet of nodes
(6) Logging with configurable priority levels
(7) Support for complex update-graphs via Cincinnati protocol
(8) Support for cluster-wide reboot orchestration, via an external lock-manager
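For example, a weekly maintenance window can be configured through a TOML drop-in. The following sketch follows Zincati's configuration format; the drop-in path and the window values are illustrative:

```toml
# /etc/zincati/config.d/55-updates-strategy.toml
[updates]
strategy = "periodic"

# Only finalize updates (and reboot) during this weekly window.
[[updates.periodic.window]]
days = [ "Sat", "Sun" ]
start_time = "23:00"
length_minutes = 120
```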
System Initialization (Ignition)
Ignition is a distribution-agnostic provisioning utility that reads a configuration file (in JSON format) and uses it to initialize NestOS. Configurable components include storage and file systems, systemd units, and users.
Ignition runs only once, during the first boot of the system (while in the initramfs). Because Ignition runs so early in the boot process, it can repartition disks, format file systems, create users, and write files before user space begins to boot. As a result, systemd services are already written to disk when systemd starts, which speeds up boot.
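A minimal Ignition configuration might look like the following sketch; the spec version, user name, SSH key, and hostname are placeholders:

```json
{
  "ignition": { "version": "3.3.0" },
  "passwd": {
    "users": [
      {
        "name": "nest",
        "sshAuthorizedKeys": [ "ssh-ed25519 AAAA... user@host" ]
      }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "mode": 420,
        "contents": { "source": "data:,nestos-node-1" }
      }
    ]
  }
}
```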
(1) Ignition runs only on the first boot
Ignition is designed to be used as a provisioning tool, not as a configuration management tool. Ignition encourages immutable infrastructure, in which machine modification requires that users discard the old node and re-provision the machine.
(2) Ignition produces the machine specified or no machine at all
Ignition does what it needs to make the system match the state described in the Ignition configuration. If for any reason Ignition cannot deliver the exact machine that the configuration asked for, Ignition prevents the machine from booting successfully. For example, if the user wanted to fetch the document hosted at https://example.com/foo.conf and write it to disk, Ignition would prevent the machine from booting if it were unable to resolve the given URL.
(3) Ignition configurations are declarative
Ignition configurations describe the state of a system. Ignition configurations do not list a series of steps that Ignition should take.
Ignition configurations do not allow users to provide arbitrary logic (including scripts for Ignition to run). Users describe which file systems must exist, which files must be created, which users must exist, and more. Any further customization must use systemd services, created by Ignition.
(4) Ignition configurations should not be written by hand
Ignition configurations were designed to be human readable but difficult to write, to discourage users from attempting to write them by hand. Use Butane, or a similar tool, to generate Ignition configurations.
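As a sketch, a Butane file is friendlier YAML that a tool translates into the JSON above; the variant and version shown here are assumptions that depend on the Butane release in use:

```yaml
# example.bu -- Butane source (the user name and key are placeholders).
variant: fcos
version: 1.4.0
passwd:
  users:
    - name: nest
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... user@host
```

Running a command such as `butane --pretty --strict example.bu > example.ign` then produces the Ignition JSON to pass to the installer.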
Afterburn
Afterburn is a one-shot agent for cloud-like platforms which interacts with provider-specific metadata endpoints. It is typically used in conjunction with Ignition.
Afterburn comprises several modules which may run at different times during the lifecycle of an instance. Depending on the specific platform, the following services may run in the initramfs on first boot:
(1) Setting the local hostname
(2) Injecting network command-line arguments
The following features are conditionally available on some platforms as systemd service units:
(1) Installing public SSH keys for local system users
(2) Retrieving attributes from instance metadata
(3) Checking in to the provider in order to report a successful boot or instance provisioning
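Retrieved metadata attributes are exposed as environment variables that other units can consume. The following sketch follows Afterburn's conventions, but the attributes file path and the exact variable name depend on the platform and are assumptions here:

```ini
# show-public-ip.service -- consume metadata gathered by Afterburn.
[Unit]
Description=Print this instance's public IPv4 address
Wants=afterburn.service
After=afterburn.service

[Service]
Type=oneshot
# Afterburn writes AFTERBURN_* variables to this file.
EnvironmentFile=/run/metadata/afterburn
ExecStart=/usr/bin/echo "Public IPv4: ${AFTERBURN_OPENSTACK_IPV4_PUBLIC}"

[Install]
WantedBy=multi-user.target
```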