
    Containerized Deployment of Kubernetes on NestOS

    Overall Solution

    Kubernetes (k8s) is a portable container orchestration and management tool built for containerized services. This guide provides a solution for quickly deploying k8s on NestOS in containers. In this solution, several NestOS nodes are created on a virtualization platform as the verification environment for the k8s deployment, and the environment required by k8s is prepared in advance in a YAML file by writing an Ignition configuration. The resources required by k8s are deployed and the nodes are created at the same time the NestOS operating system is installed. A bare-metal environment can also follow this guide, combined with the NestOS bare-metal installation documentation, to complete the k8s deployment.

    • Version information:

      • NestOS image version: 22.09

      • k8s version: v1.23.10

      • isulad version: 2.0.16

    • Installation requirements

      • 2 GB or more of RAM per machine
      • 2 or more CPU cores
      • Network connectivity between all machines in the cluster
      • No duplicate hostnames among the nodes
      • Access to the external network, needed to pull images
      • Swap disabled
      • SELinux disabled
    • Deployment content

      • NestOS image with isulad and the kubeadm, kubelet, and kubectl binaries integrated
      • Deploy the k8s master node
      • Deploy the container network plugin
      • Deploy the k8s worker nodes and join them to the k8s cluster

    K8s Node Configuration

    NestOS uses the Ignition mechanism to configure nodes in batches. This section briefly describes how to generate the Ignition file and provides an Ignition configuration example for the containerized k8s deployment. The NestOS node system configuration covers the following items:

    Configuration item      Purpose
    passwd                  Configures the node login user, access authentication, and related information
    hostname                Configures the node hostname
    Time zone               Configures the default time zone of the node
    Kernel parameters       Enables the kernel parameters required by the k8s deployment environment
    Disable SELinux         Disables SELinux as required by the k8s deployment environment
    Time synchronization    Synchronizes cluster time through the chronyd service

    Generating a login password

    To access a NestOS instance with password-based login, generate ${PASSWORD_HASH} for the Ignition file with the following command:

    openssl passwd -1 -salt yoursalt
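
    The command prompts for a password and prints a hash of the form $1$yoursalt$...; that string is used as ${PASSWORD_HASH}. The password can also be supplied directly on the command line (a sketch; "yourpassword" is a placeholder to be replaced):

    openssl passwd -1 -salt yoursalt "yourpassword"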
    

    Generating an SSH key pair

    To access a NestOS instance with an SSH public key, generate an SSH key pair with the following command:

    ssh-keygen -N '' -f /root/.ssh/id_rsa
    

    View the public key file id_rsa.pub to obtain the SSH public key for the Ignition configuration:

    cat /root/.ssh/id_rsa.pub
    

    Writing the Butane configuration file

    In the following configuration examples, each of the fields below must be set according to the actual deployment. Generation methods for some fields are given above; a substitution sketch follows this list:

    • ${PASSWORD_HASH}: login password of the node
    • ${SSH-RSA}: SSH public key of the node
    • ${MASTER_NAME}: hostname of the master node
    • ${MASTER_IP}: IP address of the master node
    • ${MASTER_SEGMENT}: network segment of the master node
    • ${NODE_NAME}: hostname of the worker node
    • ${NODE_IP}: IP address of the worker node
    • ${GATEWAY}: gateway of the node
    • ${service-cidr}: IP range assigned to services
    • ${pod-network-cidr}: IP range assigned to pods
    • ${image-repository}: image repository address, for example https://registry.cn-hangzhou.aliyuncs.com
    • ${token}: token for joining the cluster, obtained from the master node
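
    One way to fill in these fields is to substitute the placeholders in the Butane file with sed before transpiling it. The following is only a sketch: the file name master.bu and the values 192.168.122.10 and k8s-master01 are illustrative and must be replaced with the actual deployment values.

    sed -i 's#${MASTER_IP}#192.168.122.10#g' master.bu
    sed -i 's#${MASTER_NAME}#k8s-master01#g' master.bu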

    Example Butane configuration file for the master node:

    variant: fcos
    version: 1.1.0
    ##passwd-related configuration
    passwd:
      users:
        - name: root
          ##login password
          password_hash: "${PASSWORD_HASH}"
          "groups": [
              "adm",
              "sudo",
              "systemd-journal",
              "wheel"
            ]
          ##ssh public key
          ssh_authorized_keys:
            - "${SSH-RSA}"
    storage:
      directories:
      - path: /etc/systemd/system/kubelet.service.d
        overwrite: true
      files:
        - path: /etc/hostname
          mode: 0644
          contents:
            inline: ${MASTER_NAME}
        - path: /etc/hosts
          mode: 0644
          overwrite: true
          contents:
            inline: |
              127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
              ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
              ${MASTER_IP} ${MASTER_NAME}
              ${NODE_IP} ${NODE_NAME}          
        - path: /etc/NetworkManager/system-connections/ens2.nmconnection
          mode: 0600
          overwrite: true
          contents:
            inline: |
              [connection]
              id=ens2
              type=ethernet
              interface-name=ens2
              [ipv4]
              address1=${MASTER_IP}/24,${GATEWAY}
              dns=8.8.8.8
              dns-search=
              method=manual          
        - path: /etc/sysctl.d/kubernetes.conf
          mode: 0644
          overwrite: true
          contents:
            inline: |
              net.bridge.bridge-nf-call-iptables=1
              net.bridge.bridge-nf-call-ip6tables=1
              net.ipv4.ip_forward=1          
        - path: /etc/isulad/daemon.json
          mode: 0644
          overwrite: true
          contents:
            inline: |
              {
                  "exec-opts": ["native.cgroupdriver=systemd"],
                  "group": "isula",
                  "default-runtime": "lcr",
                  "graph": "/var/lib/isulad",
                  "state": "/var/run/isulad",
                  "engine": "lcr",
                  "log-level": "ERROR",
                  "pidfile": "/var/run/isulad.pid",
                  "log-opts": {
                      "log-file-mode": "0600",
                      "log-path": "/var/lib/isulad",
                      "max-file": "1",
                      "max-size": "30KB"
                  },
                  "log-driver": "stdout",
                  "container-log": {
                      "driver": "json-file"
                  },
                  "hook-spec": "/etc/default/isulad/hooks/default.json",
                  "start-timeout": "2m",
                  "storage-driver": "overlay2",
                  "storage-opts": [
                      "overlay2.override_kernel_check=true"
                  ],
                  "registry-mirrors": [
                      "docker.io"
                  ],
                  "insecure-registries": [
                      "${image-repository}"
                  ],
                  "pod-sandbox-image": "k8s.gcr.io/pause:3.6",
                  "native.umask": "secure",
                  "network-plugin": "cni",
                  "cni-bin-dir": "/opt/cni/bin",
                  "cni-conf-dir": "/etc/cni/net.d",
                  "image-layer-check": false,
                  "use-decrypted-key": true,
                  "insecure-skip-verify-enforce": false,
                  "cri-runtimes": {
                      "kata": "io.containerd.kata.v2"
                  }
              }          
        - path: /root/pull_images.sh
          mode: 0644
          overwrite: true
          contents:
            inline: |
              #!/bin/bash
              KUBE_VERSION=v1.23.10
              KUBE_PAUSE_VERSION=3.6
              ETCD_VERSION=3.5.1-0
              DNS_VERSION=v1.8.6
              CALICO_VERSION=v3.19.4
              username=${image-repository}
              images=(
                      kube-proxy:${KUBE_VERSION}
                      kube-scheduler:${KUBE_VERSION}
                      kube-controller-manager:${KUBE_VERSION}
                      kube-apiserver:${KUBE_VERSION}
                      pause:${KUBE_PAUSE_VERSION}
                      etcd:${ETCD_VERSION}
              )
              for image in ${images[@]}
              do
                  isula pull ${username}/${image}
                  isula tag ${username}/${image} k8s.gcr.io/${image}
                  isula rmi ${username}/${image}
              done
              isula pull ${username}/coredns:${DNS_VERSION}
              isula tag ${username}/coredns:${DNS_VERSION} k8s.gcr.io/coredns/coredns:${DNS_VERSION}
              isula rmi ${username}/coredns:${DNS_VERSION}
              isula pull calico/node:${CALICO_VERSION}
              isula pull calico/cni:${CALICO_VERSION}
              isula pull calico/kube-controllers:${CALICO_VERSION}
              isula pull calico/pod2daemon-flexvol:${CALICO_VERSION}
              touch /var/log/pull-images.stamp          
        - path: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
          mode: 0644
          contents:
            inline: |
              # Note: This dropin only works with kubeadm and kubelet v1.11+
              [Service]
              Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
              Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
              # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
              EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
              # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
              # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
              EnvironmentFile=-/etc/sysconfig/kubelet
              ExecStart=
              ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS          
        - path: /root/init-config.yaml
          mode: 0644
          contents:
            inline: |
              apiVersion: kubeadm.k8s.io/v1beta2
              kind: InitConfiguration
              nodeRegistration:
                criSocket: /var/run/isulad.sock
                name: ${MASTER_NAME}
                kubeletExtraArgs:
                  volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
              ---
              apiVersion: kubeadm.k8s.io/v1beta2
              kind: ClusterConfiguration
              controllerManager:
                extraArgs:
                  flex-volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
              kubernetesVersion: v1.23.10
              imageRepository: k8s.gcr.io
              controlPlaneEndpoint: "${MASTER_IP}:6443"
              networking:
                serviceSubnet: "${service-cidr}"
                podSubnet: "${pod-network-cidr}"
                dnsDomain: "cluster.local"
              dns:
                type: CoreDNS
                imageRepository: k8s.gcr.io/coredns
                imageTag: v1.8.6          
      links:
        - path: /etc/localtime
          target: ../usr/share/zoneinfo/Asia/Shanghai
    
    systemd:
      units:
        - name: kubelet.service
          enabled: true
          contents: |
            [Unit]
            Description=kubelet: The Kubernetes Node Agent
            Documentation=https://kubernetes.io/docs/
            Wants=network-online.target
            After=network-online.target
    
            [Service]
            ExecStart=/usr/bin/kubelet
            Restart=always
            StartLimitInterval=0
            RestartSec=10
    
            [Install]
            WantedBy=multi-user.target        
    
        - name: set-kernel-para.service
          enabled: true
          contents: |
            [Unit]
            Description=set kernel para for Kubernetes
            ConditionPathExists=!/var/log/set-kernel-para.stamp
    
            [Service]
            Type=oneshot
            RemainAfterExit=yes
            ExecStart=modprobe br_netfilter
            ExecStart=sysctl -p /etc/sysctl.d/kubernetes.conf
            ExecStart=/bin/touch /var/log/set-kernel-para.stamp
    
            [Install]
            WantedBy=multi-user.target        
    
        - name: pull-images.service
          enabled: true
          contents: |
            [Unit]
            Description=pull images for kubernetes
            ConditionPathExists=!/var/log/pull-images.stamp
    
            [Service]
            Type=oneshot
            RemainAfterExit=yes
            ExecStart=systemctl start isulad
            ExecStart=systemctl enable isulad
            ExecStart=bash /root/pull_images.sh
    
            [Install]
            WantedBy=multi-user.target        
    
        - name: disable-selinux.service
          enabled: true
          contents: |
            [Unit]
            Description=disable selinux for kubernetes
            ConditionPathExists=!/var/log/disable-selinux.stamp
    
            [Service]
            Type=oneshot
            RemainAfterExit=yes
            ExecStart=bash -c "sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config"
            ExecStart=setenforce 0
            ExecStart=/bin/touch /var/log/disable-selinux.stamp
    
            [Install]
            WantedBy=multi-user.target        
    
        - name: set-time-sync.service
          enabled: true
          contents: |
            [Unit]
            Description=set time sync for kubernetes
            ConditionPathExists=!/var/log/set-time-sync.stamp
    
            [Service]
            Type=oneshot
            RemainAfterExit=yes
            ExecStart=bash -c "sed -i '3aserver ntp1.aliyun.com iburst' /etc/chrony.conf"
            ExecStart=bash -c "sed -i '24aallow ${MASTER_SEGMENT}' /etc/chrony.conf"
            ExecStart=bash -c "sed -i '26alocal stratum 10' /etc/chrony.conf"
            ExecStart=systemctl restart chronyd.service
            ExecStart=/bin/touch /var/log/set-time-sync.stamp
    
            [Install]
            WantedBy=multi-user.target        
    
        - name: init-cluster.service
          enabled: true
          contents: |
            [Unit]
            Description=init kubernetes cluster
            Requires=set-kernel-para.service pull-images.service disable-selinux.service set-time-sync.service
            After=set-kernel-para.service pull-images.service disable-selinux.service set-time-sync.service
            ConditionPathExists=/var/log/set-kernel-para.stamp
            ConditionPathExists=/var/log/set-time-sync.stamp
            ConditionPathExists=/var/log/disable-selinux.stamp
            ConditionPathExists=/var/log/pull-images.stamp
            ConditionPathExists=!/var/log/init-k8s-cluster.stamp
    
            [Service]
            Type=oneshot
            RemainAfterExit=yes
            ExecStart=kubeadm init --config=/root/init-config.yaml --upload-certs
            ExecStart=/bin/touch /var/log/init-k8s-cluster.stamp
    
            [Install]
            WantedBy=multi-user.target        
    
    
        - name: install-cni-plugin.service
          enabled: true
          contents: |
            [Unit]
            Description=install cni network plugin for kubernetes
            Requires=init-cluster.service
            After=init-cluster.service
    
            [Service]
            Type=oneshot
            RemainAfterExit=yes
            ExecStart=bash -c "curl https://docs.projectcalico.org/v3.19/manifests/calico.yaml -o /root/calico.yaml"
            ExecStart=/bin/sleep 6
            ExecStart=bash -c "sed -i 's#usr/libexec/#opt/libexec/#g' /root/calico.yaml"
            ExecStart=kubectl apply -f /root/calico.yaml --kubeconfig=/etc/kubernetes/admin.conf
    
            [Install]
            WantedBy=multi-user.target        
    

    Example Butane configuration file for the worker node:

    variant: fcos
    version: 1.1.0
    passwd:
      users:
        - name: root
          password_hash: "${PASSWORD_HASH}"
          "groups": [
              "adm",
              "sudo",
              "systemd-journal",
              "wheel"
            ]
          ssh_authorized_keys:
            - "${SSH-RSA}"
    storage:
      directories:
      - path: /etc/systemd/system/kubelet.service.d
        overwrite: true
      files:
        - path: /etc/hostname
          mode: 0644
          contents:
            inline: ${NODE_NAME}
        - path: /etc/hosts
          mode: 0644
          overwrite: true
          contents:
            inline: |
              127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
              ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
              ${MASTER_IP} ${MASTER_NAME}	
              ${NODE_IP} ${NODE_NAME}          
        - path: /etc/NetworkManager/system-connections/ens2.nmconnection
          mode: 0600
          overwrite: true
          contents:
            inline: |
              [connection]
              id=ens2
              type=ethernet
              interface-name=ens2
              [ipv4]
              address1=${NODE_IP}/24,${GATEWAY}
              dns=8.8.8.8;
              dns-search=
              method=manual          
        - path: /etc/sysctl.d/kubernetes.conf
          mode: 0644
          overwrite: true
          contents:
            inline: |
              net.bridge.bridge-nf-call-iptables=1
              net.bridge.bridge-nf-call-ip6tables=1
              net.ipv4.ip_forward=1          
        - path: /etc/isulad/daemon.json
          mode: 0644
          overwrite: true
          contents:
            inline: |
              {
                  "exec-opts": ["native.cgroupdriver=systemd"],
                  "group": "isula",
                  "default-runtime": "lcr",
                  "graph": "/var/lib/isulad",
                  "state": "/var/run/isulad",
                  "engine": "lcr",
                  "log-level": "ERROR",
                  "pidfile": "/var/run/isulad.pid",
                  "log-opts": {
                      "log-file-mode": "0600",
                      "log-path": "/var/lib/isulad",
                      "max-file": "1",
                      "max-size": "30KB"
                  },
                  "log-driver": "stdout",
                  "container-log": {
                      "driver": "json-file"
                  },
                  "hook-spec": "/etc/default/isulad/hooks/default.json",
                  "start-timeout": "2m",
                  "storage-driver": "overlay2",
                  "storage-opts": [
                      "overlay2.override_kernel_check=true"
                  ],
                  "registry-mirrors": [
                      "docker.io"
                  ],
                  "insecure-registries": [
                      "${image-repository}"
                  ],
                  "pod-sandbox-image": "k8s.gcr.io/pause:3.6",
                  "native.umask": "secure",
                  "network-plugin": "cni",
                  "cni-bin-dir": "/opt/cni/bin",
                  "cni-conf-dir": "/etc/cni/net.d",
                  "image-layer-check": false,
                  "use-decrypted-key": true,
                  "insecure-skip-verify-enforce": false,
                  "cri-runtimes": {
                      "kata": "io.containerd.kata.v2"
                  }
              }          
        - path: /root/pull_images.sh
          mode: 0644
          overwrite: true
          contents:
            inline: |
              #!/bin/bash
              KUBE_VERSION=v1.23.10
              KUBE_PAUSE_VERSION=3.6
              ETCD_VERSION=3.5.1-0
              DNS_VERSION=v1.8.6
              CALICO_VERSION=v3.19.4
              username=${image-repository}
              images=(
                      kube-proxy:${KUBE_VERSION}
                      kube-scheduler:${KUBE_VERSION}
                      kube-controller-manager:${KUBE_VERSION}
                      kube-apiserver:${KUBE_VERSION}
                      pause:${KUBE_PAUSE_VERSION}
                      etcd:${ETCD_VERSION}
              )
              for image in ${images[@]}
              do
                  isula pull ${username}/${image}
                  isula tag ${username}/${image} k8s.gcr.io/${image}
                  isula rmi ${username}/${image}
              done
              isula pull ${username}/coredns:${DNS_VERSION}
              isula tag ${username}/coredns:${DNS_VERSION} k8s.gcr.io/coredns/coredns:${DNS_VERSION}
              isula rmi ${username}/coredns:${DNS_VERSION}
              touch /var/log/pull-images.stamp          
        - path: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
          mode: 0644
          contents:
            inline: |
              # Note: This dropin only works with kubeadm and kubelet v1.11+
              [Service]
              Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
              Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
              # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
              EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
              # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
              # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
              EnvironmentFile=-/etc/sysconfig/kubelet
              ExecStart=
              ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS          
        - path: /root/join-config.yaml
          mode: 0644
          contents:
            inline: |
              apiVersion: kubeadm.k8s.io/v1beta3
              caCertPath: /etc/kubernetes/pki/ca.crt
              discovery:
                bootstrapToken:
                  apiServerEndpoint: ${MASTER_IP}:6443
                  token: ${token}
                  unsafeSkipCAVerification: true
                timeout: 5m0s
                tlsBootstrapToken: ${token}
              kind: JoinConfiguration
              nodeRegistration:
                criSocket: /var/run/isulad.sock
                imagePullPolicy: IfNotPresent
                name: ${NODE_NAME}
                taints: null          
      links:
        - path: /etc/localtime
          target: ../usr/share/zoneinfo/Asia/Shanghai
    
    systemd:
      units:
        - name: kubelet.service
          enabled: true
          contents: |
            [Unit]
            Description=kubelet: The Kubernetes Node Agent
            Documentation=https://kubernetes.io/docs/
            Wants=network-online.target
            After=network-online.target
    
            [Service]
            ExecStart=/usr/bin/kubelet
            Restart=always
            StartLimitInterval=0
            RestartSec=10
    
            [Install]
            WantedBy=multi-user.target        
    
        - name: set-kernel-para.service
          enabled: true
          contents: |
            [Unit]
            Description=set kernel para for kubernetes
            ConditionPathExists=!/var/log/set-kernel-para.stamp
    
            [Service]
            Type=oneshot
            RemainAfterExit=yes
            ExecStart=modprobe br_netfilter
            ExecStart=sysctl -p /etc/sysctl.d/kubernetes.conf
            ExecStart=/bin/touch /var/log/set-kernel-para.stamp
    
            [Install]
            WantedBy=multi-user.target        
    
        - name: pull-images.service
          enabled: true
          contents: |
            [Unit]
            Description=pull images for kubernetes
            ConditionPathExists=!/var/log/pull-images.stamp
    
            [Service]
            Type=oneshot
            RemainAfterExit=yes
            ExecStart=systemctl start isulad
            ExecStart=systemctl enable isulad
            ExecStart=bash /root/pull_images.sh
    
            [Install]
            WantedBy=multi-user.target        
    
        - name: disable-selinux.service
          enabled: true
          contents: |
            [Unit]
            Description=disable selinux for kubernetes
            ConditionPathExists=!/var/log/disable-selinux.stamp
    
            [Service]
            Type=oneshot
            RemainAfterExit=yes
            ExecStart=bash -c "sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config"
            ExecStart=setenforce 0
            ExecStart=/bin/touch /var/log/disable-selinux.stamp
    
            [Install]
            WantedBy=multi-user.target        
    
        - name: set-time-sync.service
          enabled: true
          contents: |
            [Unit]
            Description=set time sync for kubernetes
            ConditionPathExists=!/var/log/set-time-sync.stamp
    
            [Service]
            Type=oneshot
            RemainAfterExit=yes
            ExecStart=bash -c "sed -i '3aserver ${MASTER_IP}' /etc/chrony.conf"
            ExecStart=systemctl restart chronyd.service
            ExecStart=/bin/touch /var/log/set-time-sync.stamp
    
            [Install]
            WantedBy=multi-user.target        
    
        - name: join-cluster.service
          enabled: true
          contents: |
            [Unit]
            Description=node join kubernetes cluster
            Requires=set-kernel-para.service pull-images.service disable-selinux.service set-time-sync.service
            After=set-kernel-para.service pull-images.service disable-selinux.service set-time-sync.service
            ConditionPathExists=/var/log/set-kernel-para.stamp
            ConditionPathExists=/var/log/set-time-sync.stamp
            ConditionPathExists=/var/log/disable-selinux.stamp
            ConditionPathExists=/var/log/pull-images.stamp
    
            [Service]
            Type=oneshot
            RemainAfterExit=yes
            ExecStart=kubeadm join --config=/root/join-config.yaml
    
            [Install]
            WantedBy=multi-user.target        
    

    Generating the Ignition file

    To make the configuration easier to read and write, an extra conversion step is used: the Butane configuration file (YAML) is transpiled into an Ignition file (JSON), and the generated Ignition file is then used to provision the new NestOS image. Convert the Butane configuration into an Ignition configuration with the following command:

    podman run --interactive --rm quay.io/coreos/butane:release --pretty --strict < your_config.bu > transpiled_config.ign
    

    Setting up the k8s cluster

    Using the Ignition file prepared in the previous section, run the following command to create the master node of the k8s cluster. The vcpus, ram, and disk parameters can be adjusted as needed; see the virt-install manual for details.

    virt-install --name=${NAME} --vcpus=4 --ram=8192 --import --network=bridge=virbr0 --graphics=none --qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=${IGNITION_FILE_PATH}" --disk=size=40,backing_store=${NESTOS_RELEASE_QCOW2_PATH} --network=bridge=virbr1 --disk=size=40
    

    After the master node system is installed, a series of environment configuration services run in the background: set-kernel-para.service configures kernel parameters, pull-images.service pulls the images required by the cluster, disable-selinux.service disables SELinux, set-time-sync.service sets up time synchronization, init-cluster.service initializes the cluster, and finally install-cni-plugin.service installs the CNI network plugin. Because images have to be pulled, the whole cluster deployment takes a few minutes.
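
    One way to follow the progress is to watch the logs of the cluster initialization unit (a sketch; the unit name comes from the Butane example above):

    journalctl -u init-cluster.service -f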

    Run the kubectl get pods -A command to check whether all pods are in the Running state.
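
    For example, on the master node (the admin kubeconfig path is the one referenced by install-cni-plugin.service above):

    export KUBECONFIG=/etc/kubernetes/admin.conf
    kubectl get pods -A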

    On the master node, view the token with the following command:

    kubeadm token list
    

    Add the queried token to the Ignition file of the worker node and use that Ignition file to create the worker node. After the worker node has been created, run kubectl get nodes on the master node to check whether the worker node has joined the cluster.
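
    For example, on the master node:

    kubectl get nodes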

    At this point, k8s has been deployed successfully.

    Using rpm-ostree

    Installing packages with rpm-ostree

    Install wget:

    rpm-ostree install wget
    

    Reboot the system. During startup, the system state from before or after the package installation can be selected with the up and down arrow keys; the "ostree:0" entry is the version after the installation.

    systemctl reboot
    

    Check whether wget was installed successfully:

    rpm -qa | grep wget
    

    Manually updating and upgrading NestOS with rpm-ostree

    Run the following command in NestOS to view the current rpm-ostree status, which shows the current version number:

    rpm-ostree status
    

    Run the check command to see whether an upgrade is available; a new version is found:

    rpm-ostree upgrade --check
    

    Preview the differences between the versions:

    rpm-ostree upgrade --preview
    

    The nano package has been introduced in the latest version. The following command downloads the latest ostree and RPM data without deploying them:

    rpm-ostree upgrade --download-only
    

    Reboot NestOS. After the reboot, both the old and the new system versions are available; select the branch of the latest version:

    rpm-ostree upgrade --reboot
    

    Comparing NestOS versions

    Check the status and confirm that there are now two ostree versions: LTS.20210927.dev.0 and LTS.20210928.dev.0.

    rpm-ostree status
    

    Compare the two ostree versions by their commit IDs:

    rpm-ostree db diff 55eed9bfc5ec fe2408e34148
    

    System rollback

    After a system update completes, the previous NestOS deployment remains on disk. If the update causes problems, the previous deployment can be used to roll the system back.

    Temporary rollback

    To temporarily roll back to the previous OS deployment, hold down the Shift key during system boot and select the corresponding branch from the boot loader menu when it appears.

    Permanent rollback

    To permanently roll back to the previous OS deployment, log in to the target node and run rpm-ostree rollback. This makes the previous deployment the default one and reboots into it. Run the following command to roll back to the system as it was before the update:

    rpm-ostree rollback
    

    After the reboot, the newer deployment is no longer in effect.

    Switching versions

    The previous step rolled NestOS back to the old version. The rpm-ostree version used by NestOS can be switched from the old version back to the new one with the following command:

    rpm-ostree deploy -r 22.09.20220325.dev.0
    

    After the reboot, confirm that NestOS is now using the new ostree version.

    Using zincati for automatic updates

    zincati handles automatic updates for NestOS. It checks the backend provided by cincinnati for an available update and, if a new version is detected, downloads it through rpm-ostree.

    The zincati automatic update service is currently disabled by default. It can be set to start automatically at boot by modifying the configuration file:

    vi /etc/zincati/config.d/95-disable-on-dev.toml
    

    Set updates.enabled to true, and add a configuration file to change the cincinnati backend address.
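
    After the change, 95-disable-on-dev.toml would contain something like the following (a sketch, assuming zincati's standard [updates] TOML section):

    [updates]
    enabled = true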

    vi /etc/zincati/config.d/update-cincinnati.toml
    

    Add the following content:

    [cincinnati]
    base_url="http://nestos.org.cn:8080"
    

    Restart the zincati service:

    systemctl restart zincati.service
    

    When a new version is available, zincati detects it automatically. Checking the rpm-ostree status at this point shows "busy", which means the system is being upgraded.

    After a while, NestOS reboots automatically. Log in to NestOS again and check the rpm-ostree status: the state has returned to "idle" and the current version is "20220325", which shows that the rpm-ostree version has been upgraded.

    Check the zincati service logs to confirm the upgrade process and the system reboot. The "auto-updates logic enabled" message in the logs also shows that the update was automatic.
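
    One way to view these logs:

    journalctl -u zincati.service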

    Customizing NestOS

    The nestos-installer tool can process the original NestOS ISO file and embed an Ignition file into it to generate a customized NestOS ISO file. With a customized NestOS ISO file, NestOS is installed automatically after the system boots, which makes the installation much simpler.

    Before customizing NestOS, complete the following preparations:

    • Download the NestOS ISO
    • Prepare the config.ign file

    Generating a customized NestOS ISO file

    Setting parameter variables

    $ export COREOS_ISO_ORIGIN_FILE=nestos-22.09.20220324.x86_64.iso
    $ export COREOS_ISO_CUSTOMIZED_FILE=my-nestos.iso
    $ export IGN_FILE=config.ign
    

    Checking the ISO file

    Confirm that the original NestOS ISO file does not contain an Ignition configuration:

    $ nestos-installer iso ignition show $COREOS_ISO_ORIGIN_FILE 
    
    Error: No embedded Ignition config.
    

    Generating the customized NestOS ISO file

    Package the Ignition file with the original NestOS ISO file to generate the customized NestOS ISO file:

    $ nestos-installer iso ignition embed $COREOS_ISO_ORIGIN_FILE --ignition-file $IGN_FILE --output $COREOS_ISO_CUSTOMIZED_FILE
    

    Checking the ISO file

    Confirm that the customized NestOS ISO file now contains the Ignition configuration:

    $ nestos-installer iso ignition show $COREOS_ISO_CUSTOMIZED_FILE
    

    Running the command displays the content of the embedded Ignition configuration.

    Installing the customized NestOS ISO file

    The customized NestOS ISO file can be used to boot the installation directly, and NestOS is installed automatically according to the Ignition configuration. After the installation completes, log in to NestOS on the virtual machine console with nest/password.
