Container-based Kubernetes Deployment Using NestOS
Solution Overview
Kubernetes (K8s) is a portable container orchestration and management tool for containerized services. This guide provides a solution for quickly deploying a Kubernetes cluster using NestOS. In this solution, multiple NestOS nodes are created on a virtualization platform as the verification environment for the Kubernetes cluster deployment. The environment required by Kubernetes is described in advance in a YAML-formatted Butane configuration file, which is transpiled into the Ignition file used when NestOS is installed, so that the resources required by Kubernetes are deployed and the nodes are created during installation. In a bare metal environment, you can also deploy Kubernetes clusters by referring to this document and the NestOS bare metal installation document.
Software versions
NestOS image: 22.09
Kubernetes: v1.23.10
isulad: 2.0.16
Installation requirements
- Each machine has 2 GB or more RAM and 2 or more CPU cores.
- All machines in the cluster can communicate with each other.
- Each node has a unique host name.
- The external network can be accessed for pulling images.
- The swap partition is disabled.
- SELinux is disabled.
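You can quickly verify most of these requirements on each node. A minimal sketch:
nproc             # number of CPU cores, expect 2 or more
free -h           # total RAM, expect 2 GB or more
hostname          # host name, must be unique within the cluster
swapon --show     # no output means the swap partition is disabled
getenforce        # expect Disabled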
Deployment content
- NestOS image that integrates isulad, kubeadm, kubelet, kubectl, and other binary files
- Kubernetes master node
- Container network plugins
- Kubernetes nodes to be added to the Kubernetes cluster
Kubernetes Node Configuration
NestOS uses Ignition to implement batch node configuration. This section describes how to generate an Ignition file and provides an Ignition configuration example for deploying Kubernetes containers. The system configurations of a NestOS node are as follows:
| Item | Description |
| --- | --- |
| passwd | Configures the login user and access authentication information for the node |
| hostname | Configures the host name of the node |
| Time zone | Configures the default time zone of the node |
| Kernel parameters | Enables the kernel parameters required for Kubernetes deployment |
| SELinux | Disables SELinux, as required for Kubernetes deployment |
| Time synchronization | Configures the chronyd service to synchronize the cluster time in the Kubernetes environment |
Generating a Login Password
To access a NestOS instance using a password, run the following command to generate ${PASSWORD_HASH} for Ignition configuration:
openssl passwd -1 -salt yoursalt
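The command prompts for the password interactively. Alternatively, the password can be supplied on standard input; the resulting MD5-crypt string is the ${PASSWORD_HASH} value (a sketch):
# Read the password from stdin; the output (of the form $1$yoursalt$...) is ${PASSWORD_HASH}.
echo 'your-password' | openssl passwd -1 -salt yoursalt -stdin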
Generating an SSH Key Pair
To access a NestOS instance using an SSH public key, run the following command to generate an SSH key pair:
ssh-keygen -N '' -f /root/.ssh/id_rsa
View the public key file id_rsa.pub and obtain the SSH public key information for Ignition configuration:
cat /root/.ssh/id_rsa.pub
Compiling the Butane Configuration File
Configure the following fields in the configuration file example below based on the actual deployment. See the sections above for how to generate values of some fields.
- ${PASSWORD_HASH}: password for logging in to the node
- ${SSH-RSA}: public key of the node
- ${MASTER_NAME}: host name of the master node
- ${MASTER_IP}: IP address of the master node
- ${MASTER_SEGMENT}: subnet where the master node is located
- ${NODE_NAME}: host name of the node
- ${NODE_IP}: IP address of the node
- ${GATEWAY}: gateway of the node
- ${service-cidr}: IP address range allocated to the services
- ${pod-network-cidr}: IP address range allocated to the pods
- ${image-repository}: image registry address, for example, registry.cn-hangzhou.aliyuncs.com
- ${token}: token information for joining the cluster, which is obtained from the master node
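The placeholders can be filled in manually or replaced in one pass before transpiling. A minimal sketch, assuming the Butane file is named master.bu and using example values:
# Substitute template variables in the Butane file (values are examples).
sed -i \
    -e 's|${MASTER_NAME}|k8s-master01|g' \
    -e 's|${MASTER_IP}|192.168.122.10|g' \
    -e 's|${GATEWAY}|192.168.122.1|g' \
    master.bu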
Example Butane configuration file for the master node:
variant: fcos
version: 1.1.0
## Password-related configurations
passwd:
users:
- name: root
## Password
password_hash: "${PASSWORD_HASH}"
"groups": [
"adm",
"sudo",
"systemd-journal",
"wheel"
]
## SSH public key
ssh_authorized_keys:
- "${SSH-RSA}"
storage:
directories:
- path: /etc/systemd/system/kubelet.service.d
overwrite: true
files:
- path: /etc/hostname
mode: 0644
contents:
inline: ${MASTER_NAME}
- path: /etc/hosts
mode: 0644
overwrite: true
contents:
inline: |
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
${MASTER_IP} ${MASTER_NAME}
${NODE_IP} ${NODE_NAME}
- path: /etc/NetworkManager/system-connections/ens2.nmconnection
mode: 0600
overwrite: true
contents:
inline: |
[connection]
id=ens2
type=ethernet
interface-name=ens2
[ipv4]
address1=${MASTER_IP}/24,${GATEWAY}
dns=8.8.8.8;
dns-search=
method=manual
- path: /etc/sysctl.d/kubernetes.conf
mode: 0644
overwrite: true
contents:
inline: |
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
- path: /etc/isulad/daemon.json
mode: 0644
overwrite: true
contents:
inline: |
{
"exec-opts": ["native.cgroupdriver=systemd"],
"group": "isula",
"default-runtime": "lcr",
"graph": "/var/lib/isulad",
"state": "/var/run/isulad",
"engine": "lcr",
"log-level": "ERROR",
"pidfile": "/var/run/isulad.pid",
"log-opts": {
"log-file-mode": "0600",
"log-path": "/var/lib/isulad",
"max-file": "1",
"max-size": "30KB"
},
"log-driver": "stdout",
"container-log": {
"driver": "json-file"
},
"hook-spec": "/etc/default/isulad/hooks/default.json",
"start-timeout": "2m",
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
],
"registry-mirrors": [
"docker.io"
],
"insecure-registries": [
"${image-repository}"
],
"pod-sandbox-image": "k8s.gcr.io/pause:3.6",
"native.umask": "secure",
"network-plugin": "cni",
"cni-bin-dir": "/opt/cni/bin",
"cni-conf-dir": "/etc/cni/net.d",
"image-layer-check": false,
"use-decrypted-key": true,
"insecure-skip-verify-enforce": false,
"cri-runtimes": {
"kata": "io.containerd.kata.v2"
}
}
- path: /root/pull_images.sh
mode: 0644
overwrite: true
contents:
inline: |
#!/bin/bash
KUBE_VERSION=v1.23.10
KUBE_PAUSE_VERSION=3.6
ETCD_VERSION=3.5.1-0
DNS_VERSION=v1.8.6
CALICO_VERSION=v3.19.4
username=${image-repository}
images=(
kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
)
for image in ${images[@]}
do
isula pull ${username}/${image}
isula tag ${username}/${image} k8s.gcr.io/${image}
isula rmi ${username}/${image}
done
isula pull ${username}/coredns:${DNS_VERSION}
isula tag ${username}/coredns:${DNS_VERSION} k8s.gcr.io/coredns/coredns:${DNS_VERSION}
isula rmi ${username}/coredns:${DNS_VERSION}
isula pull calico/node:${CALICO_VERSION}
isula pull calico/cni:${CALICO_VERSION}
isula pull calico/kube-controllers:${CALICO_VERSION}
isula pull calico/pod2daemon-flexvol:${CALICO_VERSION}
touch /var/log/pull-images.stamp
- path: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
mode: 0644
contents:
inline: |
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
- path: /root/init-config.yaml
mode: 0644
contents:
inline: |
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
criSocket: /var/run/isulad.sock
name: ${MASTER_NAME}
kubeletExtraArgs:
volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controllerManager:
extraArgs:
flex-volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
kubernetesVersion: v1.23.10
imageRepository: k8s.gcr.io
controlPlaneEndpoint: "${MASTER_IP}:6443"
networking:
serviceSubnet: "${service-cidr}"
podSubnet: "${pod-network-cidr}"
dnsDomain: "cluster.local"
dns:
type: CoreDNS
imageRepository: k8s.gcr.io/coredns
imageTag: v1.8.6
links:
- path: /etc/localtime
target: ../usr/share/zoneinfo/Asia/Shanghai
systemd:
units:
- name: kubelet.service
enabled: true
contents: |
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
- name: set-kernel-para.service
enabled: true
contents: |
[Unit]
Description=set kernel para for Kubernetes
ConditionPathExists=!/var/log/set-kernel-para.stamp
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=modprobe br_netfilter
ExecStart=sysctl -p /etc/sysctl.d/kubernetes.conf
ExecStart=/bin/touch /var/log/set-kernel-para.stamp
[Install]
WantedBy=multi-user.target
- name: pull-images.service
enabled: true
contents: |
[Unit]
Description=pull images for kubernetes
ConditionPathExists=!/var/log/pull-images.stamp
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=systemctl start isulad
ExecStart=systemctl enable isulad
ExecStart=bash /root/pull_images.sh
[Install]
WantedBy=multi-user.target
- name: disable-selinux.service
enabled: true
contents: |
[Unit]
Description=disable selinux for kubernetes
ConditionPathExists=!/var/log/disable-selinux.stamp
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=bash -c "sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config"
ExecStart=setenforce 0
ExecStart=/bin/touch /var/log/disable-selinux.stamp
[Install]
WantedBy=multi-user.target
- name: set-time-sync.service
enabled: true
contents: |
[Unit]
Description=set time sync for kubernetes
ConditionPathExists=!/var/log/set-time-sync.stamp
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=bash -c "sed -i '3aserver ntp1.aliyun.com iburst' /etc/chrony.conf"
ExecStart=bash -c "sed -i '24aallow ${MASTER_SEGMENT}' /etc/chrony.conf"
ExecStart=bash -c "sed -i '26alocal stratum 10' /etc/chrony.conf"
ExecStart=systemctl restart chronyd.service
ExecStart=/bin/touch /var/log/set-time-sync.stamp
[Install]
WantedBy=multi-user.target
- name: init-cluster.service
enabled: true
contents: |
[Unit]
Description=init kubernetes cluster
Requires=set-kernel-para.service pull-images.service disable-selinux.service set-time-sync.service
After=set-kernel-para.service pull-images.service disable-selinux.service set-time-sync.service
ConditionPathExists=/var/log/set-kernel-para.stamp
ConditionPathExists=/var/log/set-time-sync.stamp
ConditionPathExists=/var/log/disable-selinux.stamp
ConditionPathExists=/var/log/pull-images.stamp
ConditionPathExists=!/var/log/init-k8s-cluster.stamp
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=kubeadm init --config=/root/init-config.yaml --upload-certs
ExecStart=/bin/touch /var/log/init-k8s-cluster.stamp
[Install]
WantedBy=multi-user.target
- name: install-cni-plugin.service
enabled: true
contents: |
[Unit]
Description=install cni network plugin for kubernetes
Requires=init-cluster.service
After=init-cluster.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=bash -c "curl https://docs.projectcalico.org/v3.19/manifests/calico.yaml -o /root/calico.yaml"
ExecStart=/bin/sleep 6
ExecStart=bash -c "sed -i 's#usr/libexec/#opt/libexec/#g' /root/calico.yaml"
ExecStart=kubectl apply -f /root/calico.yaml --kubeconfig=/etc/kubernetes/admin.conf
[Install]
WantedBy=multi-user.target
Example Butane configuration file for a node:
variant: fcos
version: 1.1.0
passwd:
users:
- name: root
password_hash: "${PASSWORD_HASH}"
"groups": [
"adm",
"sudo",
"systemd-journal",
"wheel"
]
ssh_authorized_keys:
- "${SSH-RSA}"
storage:
directories:
- path: /etc/systemd/system/kubelet.service.d
overwrite: true
files:
- path: /etc/hostname
mode: 0644
contents:
inline: ${NODE_NAME}
- path: /etc/hosts
mode: 0644
overwrite: true
contents:
inline: |
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
${MASTER_IP} ${MASTER_NAME}
${NODE_IP} ${NODE_NAME}
- path: /etc/NetworkManager/system-connections/ens2.nmconnection
mode: 0600
overwrite: true
contents:
inline: |
[connection]
id=ens2
type=ethernet
interface-name=ens2
[ipv4]
address1=${NODE_IP}/24,${GATEWAY}
dns=8.8.8.8;
dns-search=
method=manual
- path: /etc/sysctl.d/kubernetes.conf
mode: 0644
overwrite: true
contents:
inline: |
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
- path: /etc/isulad/daemon.json
mode: 0644
overwrite: true
contents:
inline: |
{
"exec-opts": ["native.cgroupdriver=systemd"],
"group": "isula",
"default-runtime": "lcr",
"graph": "/var/lib/isulad",
"state": "/var/run/isulad",
"engine": "lcr",
"log-level": "ERROR",
"pidfile": "/var/run/isulad.pid",
"log-opts": {
"log-file-mode": "0600",
"log-path": "/var/lib/isulad",
"max-file": "1",
"max-size": "30KB"
},
"log-driver": "stdout",
"container-log": {
"driver": "json-file"
},
"hook-spec": "/etc/default/isulad/hooks/default.json",
"start-timeout": "2m",
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
],
"registry-mirrors": [
"docker.io"
],
"insecure-registries": [
"${image-repository}"
],
"pod-sandbox-image": "k8s.gcr.io/pause:3.6",
"native.umask": "secure",
"network-plugin": "cni",
"cni-bin-dir": "/opt/cni/bin",
"cni-conf-dir": "/etc/cni/net.d",
"image-layer-check": false,
"use-decrypted-key": true,
"insecure-skip-verify-enforce": false,
"cri-runtimes": {
"kata": "io.containerd.kata.v2"
}
}
- path: /root/pull_images.sh
mode: 0644
overwrite: true
contents:
inline: |
#!/bin/bash
KUBE_VERSION=v1.23.10
KUBE_PAUSE_VERSION=3.6
ETCD_VERSION=3.5.1-0
DNS_VERSION=v1.8.6
CALICO_VERSION=v3.19.4
username=${image-repository}
images=(
kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
)
for image in ${images[@]}
do
isula pull ${username}/${image}
isula tag ${username}/${image} k8s.gcr.io/${image}
isula rmi ${username}/${image}
done
isula pull ${username}/coredns:${DNS_VERSION}
isula tag ${username}/coredns:${DNS_VERSION} k8s.gcr.io/coredns/coredns:${DNS_VERSION}
isula rmi ${username}/coredns:${DNS_VERSION}
touch /var/log/pull-images.stamp
- path: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
mode: 0644
contents:
inline: |
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
- path: /root/join-config.yaml
mode: 0644
contents:
inline: |
apiVersion: kubeadm.k8s.io/v1beta3
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
bootstrapToken:
apiServerEndpoint: ${MASTER_IP}:6443
token: ${token}
unsafeSkipCAVerification: true
timeout: 5m0s
tlsBootstrapToken: ${token}
kind: JoinConfiguration
nodeRegistration:
criSocket: /var/run/isulad.sock
imagePullPolicy: IfNotPresent
name: ${NODE_NAME}
taints: null
links:
- path: /etc/localtime
target: ../usr/share/zoneinfo/Asia/Shanghai
systemd:
units:
- name: kubelet.service
enabled: true
contents: |
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
- name: set-kernel-para.service
enabled: true
contents: |
[Unit]
Description=set kernel para for kubernetes
ConditionPathExists=!/var/log/set-kernel-para.stamp
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=modprobe br_netfilter
ExecStart=sysctl -p /etc/sysctl.d/kubernetes.conf
ExecStart=/bin/touch /var/log/set-kernel-para.stamp
[Install]
WantedBy=multi-user.target
- name: pull-images.service
enabled: true
contents: |
[Unit]
Description=pull images for kubernetes
ConditionPathExists=!/var/log/pull-images.stamp
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=systemctl start isulad
ExecStart=systemctl enable isulad
ExecStart=bash /root/pull_images.sh
[Install]
WantedBy=multi-user.target
- name: disable-selinux.service
enabled: true
contents: |
[Unit]
Description=disable selinux for kubernetes
ConditionPathExists=!/var/log/disable-selinux.stamp
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=bash -c "sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config"
ExecStart=setenforce 0
ExecStart=/bin/touch /var/log/disable-selinux.stamp
[Install]
WantedBy=multi-user.target
- name: set-time-sync.service
enabled: true
contents: |
[Unit]
Description=set time sync for kubernetes
ConditionPathExists=!/var/log/set-time-sync.stamp
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=bash -c "sed -i '3aserver ${MASTER_IP}' /etc/chrony.conf"
ExecStart=systemctl restart chronyd.service
ExecStart=/bin/touch /var/log/set-time-sync.stamp
[Install]
WantedBy=multi-user.target
- name: join-cluster.service
enabled: true
contents: |
[Unit]
Description=node join kubernetes cluster
Requires=set-kernel-para.service pull-images.service disable-selinux.service set-time-sync.service
After=set-kernel-para.service pull-images.service disable-selinux.service set-time-sync.service
ConditionPathExists=/var/log/set-kernel-para.stamp
ConditionPathExists=/var/log/set-time-sync.stamp
ConditionPathExists=/var/log/disable-selinux.stamp
ConditionPathExists=/var/log/pull-images.stamp
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=kubeadm join --config=/root/join-config.yaml
[Install]
WantedBy=multi-user.target
Generating an Ignition File
To make Ignition configurations easy for humans to read and write, Butane converts a YAML-formatted Butane configuration file into the JSON-formatted Ignition file that is used to boot the NestOS image. Run the following command to convert a Butane configuration file into an Ignition configuration file:
podman run --interactive --rm quay.io/coreos/butane:release --pretty --strict < your_config.bu > transpiled_config.ign
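If the butane tool is installed locally, you can run it directly instead of through the container image (same input and output file names assumed):
butane --pretty --strict your_config.bu > transpiled_config.ign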
Kubernetes Cluster Setup
Run the following command to create the master node of the Kubernetes cluster based on the Ignition file generated in the previous section. You can adjust the `vcpus`, `ram`, and `disk` parameters. For details, see the virt-install manual.
virt-install --name=${NAME} --vcpus=4 --ram=8192 --import \
    --network=bridge=virbr0 --graphics=none \
    --qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=${IGNITION_FILE_PATH}" \
    --disk=size=40,backing_store=${NESTOS_RELEASE_QCOW2_PATH} \
    --network=bridge=virbr1 --disk=size=40
After NestOS is successfully installed on the master node, a series of environment configuration services are started in the background: set-kernel-para.service configures kernel parameters, pull-images.service pulls images required by the cluster, disable-selinux.service disables SELinux, set-time-sync.service sets time synchronization, init-cluster.service initializes the cluster, and then install-cni-plugin.service installs CNI network plugins. Wait a few minutes for the cluster to pull images.
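You can watch the progress of these services from the console, for example:
# Check whether each one-shot configuration service has completed.
systemctl status set-kernel-para.service pull-images.service \
    disable-selinux.service set-time-sync.service init-cluster.service
# Follow the cluster initialization logs.
journalctl -u init-cluster.service -f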
Run the `kubectl get pods -A` command to check whether all pods are in the Running state.
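Illustrative output (the generated name suffixes and ages will differ):
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-xxxxxxxxxx-xxxxx   1/1     Running   0          2m
kube-system   calico-node-xxxxx                          1/1     Running   0          2m
kube-system   coredns-xxxxxxxxxx-xxxxx                   1/1     Running   0          3m
kube-system   kube-apiserver-${MASTER_NAME}              1/1     Running   0          3m
...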
Run the following command on the master node to view the token:
kubeadm token list
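By default, bootstrap tokens are valid for 24 hours. If the list is empty because the token has expired, generate a new one on the master node:
# Creates a new bootstrap token and prints the full kubeadm join command.
kubeadm token create --print-join-command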
Add the queried token information to the Ignition file of the node and use the Ignition file to create the node. After the node is created, run the `kubectl get nodes` command on the master node to check whether the node has been added to the cluster. If it has, Kubernetes is successfully deployed.
Using rpm-ostree
Installing Software Packages Using rpm-ostree
Install wget.
rpm-ostree install wget
Restart the system. During startup, use the up and down arrow keys to choose between the system deployments from before and after the RPM package installation; `ostree:0` indicates the deployment with the package installed.
systemctl reboot
Check whether wget is successfully installed.
rpm -qa | grep wget
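Because rpm-ostree installs the package as a layered package on top of the OSTree deployment, it also appears in the LayeredPackages field of the status output:
rpm-ostree status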
Manually Upgrading NestOS Using rpm-ostree
Run the following command in NestOS to view the current rpm-ostree status and version:
rpm-ostree status
Run the following command to check whether a new version is available.
rpm-ostree upgrade --check
Preview the differences between the versions.
rpm-ostree upgrade --preview
The preview shows that the latest version introduces the nano package. Run the following command to download the latest OSTree and RPM data without performing the deployment.
rpm-ostree upgrade --download-only
Run the following command to apply the upgrade and restart NestOS. After the restart, both the old and new versions of the system are available; the system boots into the latest version.
rpm-ostree upgrade --reboot
Comparing NestOS Versions
Check the status. Ensure that two versions of OSTree exist: LTS.20210927.dev.0 and LTS.20210928.dev.0.
rpm-ostree status
Compare the OSTree versions based on commit IDs.
rpm-ostree db diff 55eed9bfc5ec fe2408e34148
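The commit IDs are taken from the Commit field of the rpm-ostree status output. When run without arguments, db diff compares the booted deployment with the pending one:
rpm-ostree db diff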
Rolling Back the System
When a system upgrade is complete, the previous NestOS deployment is still stored on the disk. If the upgrade causes system problems, you can roll back to the previous deployment.
Temporary Rollback
To temporarily roll back to the previous OS deployment, hold down Shift during system startup. When the boot loader menu is displayed, select the corresponding branch from the menu.
Permanent Rollback
To permanently roll back to the previous OS deployment, log in to the target node and run the `rpm-ostree rollback` command. This operation sets the previous OS deployment as the default deployment to boot into.
Run the following command to roll back to the system before the upgrade:
rpm-ostree rollback
Switching Versions
After the rollback, NestOS runs the older version. You can run the following command to switch the OSTree version used by NestOS to a newer one.
rpm-ostree deploy -r 22.03.20220325.dev.0
After the restart, check whether NestOS uses the latest OSTree version.
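For example:
rpm-ostree status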
Using Zincati for Automatic Update
Zincati provides automatic updates for NestOS. It queries the Cincinnati backend to check whether a new version is available and, if so, downloads the new version using rpm-ostree.
Currently, the Zincati automatic update service is disabled by default. You can modify its configuration file so that the service starts automatically at boot.
vi /etc/zincati/config.d/95-disable-on-dev.toml
Set updates.enabled to true, so that the file contains the following (a sketch of the relevant section; leave other content in the file unchanged):
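[updates]
enabled = true
Next, create a configuration file to specify the address of the Cincinnati backend: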
vi /etc/zincati/config.d/update-cincinnati.toml
Add the following content:
[cincinnati]
base_url="http://nestos.org.cn:8080"
Restart the Zincati service.
systemctl restart zincati.service
When a new version is available, Zincati detects it automatically. Check the rpm-ostree status; if the status is busy, the system is being upgraded.
After a period of time, NestOS restarts automatically. Log in to NestOS again and check the rpm-ostree status. If the status changes to idle and the current version is 20220325, rpm-ostree has been upgraded.
View the Zincati service logs to check the upgrade process and the system restart records. The message "auto-updates logic enabled" in the logs indicates that automatic updates are enabled.
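For example, follow the logs of the service:
journalctl -u zincati.service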
Customizing NestOS
You can use the nestos-installer tool to customize the original NestOS ISO file by embedding an Ignition file into it, generating a customized NestOS ISO file. When the system boots from the customized ISO, NestOS is installed automatically, which simplifies the installation.
Before customizing NestOS, make the following preparations:
- Downloading the NestOS ISO.
- Preparing a config.ign File.
Generating a Customized NestOS ISO File
Setting Parameter Variables
$ export COREOS_ISO_ORIGIN_FILE=nestos-22.03.20220324.x86_64.iso
$ export COREOS_ISO_CUSTOMIZED_FILE=my-nestos.iso
$ export IGN_FILE=config.ign
Checking the ISO File
Ensure that the original NestOS ISO file does not contain the Ignition configuration.
$ nestos-installer iso ignition show $COREOS_ISO_ORIGIN_FILE
Error: No embedded Ignition config.
Generating a Customized NestOS ISO File
Package the Ignition file into the original NestOS ISO file to generate a customized NestOS ISO file.
$ nestos-installer iso ignition embed --ignition-file $IGN_FILE --output $COREOS_ISO_CUSTOMIZED_FILE $COREOS_ISO_ORIGIN_FILE
Checking the ISO File
Ensure that the customized NestOS ISO file contains the Ignition configuration.
$ nestos-installer iso ignition show $COREOS_ISO_CUSTOMIZED_FILE
The previous command displays the Ignition configuration.
Installing the Customized NestOS ISO File
The customized NestOS ISO file can be used to boot the installation directly. NestOS is then installed automatically based on the embedded Ignition configuration. After the installation is complete, you can log in to NestOS on the VM console using the username and password configured in config.ign (nest/password in this example).