      Kubernetes+iSulad Environment Deployment

      Preparing Cluster Servers

      Prepare at least 3 machines running openEuler 22.03 LTS or later versions. The following table lists information about the machines.

      Host Name | IP Address      | OS                      | Role          | Component
      lab1      | 197.xxx.xxx.xxx | openEuler 22.03 LTS SP2 | Control node  | iSulad/Kubernetes
      lab2      | 197.xxx.xxx.xxx | openEuler 22.03 LTS SP2 | Worker node 1 | iSulad/Kubernetes
      lab3      | 197.xxx.xxx.xxx | openEuler 22.03 LTS SP2 | Worker node 2 | iSulad/Kubernetes

      Preparing Images and Software Packages

      The following table lists software packages and images used in the example. The versions are for reference only.

      Software           | Version
      iSulad             | 2.0.17-2
      kubernetes-client  | 1.20.2-9
      kubernetes-kubeadm | 1.20.2-9
      kubernetes-kubelet | 1.20.2-9

      Image                              | Version
      k8s.gcr.io/kube-proxy              | v1.20.2
      k8s.gcr.io/kube-apiserver          | v1.20.2
      k8s.gcr.io/kube-controller-manager | v1.20.2
      k8s.gcr.io/kube-scheduler          | v1.20.2
      k8s.gcr.io/etcd                    | 3.4.13-0
      k8s.gcr.io/coredns                 | 1.7.0
      k8s.gcr.io/pause                   | 3.2
      calico/node                        | v3.14.2
      calico/pod2daemon-flexvol          | v3.14.2
      calico/cni                         | v3.14.2
      calico/kube-controllers            | v3.14.2

      If you perform the deployment in an environment without an Internet connection, download the software packages, dependencies, and images in advance.

      Modifying the hosts File

      1. Change the host name of the machine, for example, lab1.

        hostnamectl set-hostname lab1
        sudo -i
        
      2. Configure host name resolution by modifying the /etc/hosts file on each machine.

        vim /etc/hosts
        
      3. Add the following content (IP address and host name) to the hosts file:

        197.xxx.xxx.xxx lab1
        197.xxx.xxx.xxx lab2
        197.xxx.xxx.xxx lab3
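
        • Optionally, verify that name resolution works from each machine (an extra check beyond the original steps; the host names are the examples above):

          ping -c 1 lab2
          ping -c 1 lab3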
        

      Preparing the Environment

      1. Disable the firewall.

        systemctl stop firewalld
        systemctl disable firewalld
        
      2. Disable SELinux.

        setenforce 0
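
        • Note: setenforce 0 only switches SELinux to permissive mode for the current boot. To keep it from enforcing after a reboot, you can also edit /etc/selinux/config (an optional extra step, assuming the default SELINUX=enforcing entry is present):

          sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config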
        
      3. Disable memory swapping.

        swapoff -a
        sed -ri 's/.*swap.*/#&/' /etc/fstab
        
      4. Configure the network and enable forwarding.

        $ cat > /etc/sysctl.d/kubernetes.conf <<EOF
        net.bridge.bridge-nf-call-iptables = 1
        net.ipv4.ip_forward = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        vm.swappiness=0
        EOF
        
      5. Enable the rules.

        modprobe overlay
        modprobe br_netfilter
        sysctl -p /etc/sysctl.d/kubernetes.conf
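
        • Optionally, confirm that the settings are active (expected values: 1, 1, and 0):

          sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables vm.swappiness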
        
      6. Configure the startup script.

        vim /etc/init.d/k8s.sh
        
        • Add the following content to the k8s.sh script:

          #!/bin/sh
          modprobe br_netfilter
          sysctl -w net.bridge.bridge-nf-call-ip6tables=1
          sysctl -w net.bridge.bridge-nf-call-iptables=1
          
        • Change the permissions on the script.

          chmod +x /etc/init.d/k8s.sh
          
      7. Create the configuration file.

        $ vim /etc/systemd/system/br_netfilter.service
        
        [Unit]
        Description=To enable the core module br_netfilter when reboot
        After=default.target
        [Service]
        ExecStart=/etc/init.d/k8s.sh
        # The path can be customized.
        [Install]
        WantedBy=default.target
        
        • Start the service.

          systemctl daemon-reload
          systemctl enable br_netfilter.service --now
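
        • Optionally, confirm that the unit is enabled and the module is loaded:

          systemctl status br_netfilter.service
          lsmod | grep br_netfilter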
          
      8. Configure sysctl.

        sed -i "s/net.ipv4.ip_forward=0/net.ipv4.ip_forward=1/g" /etc/sysctl.conf
        sed -i '12a vm.swappiness=0' /etc/sysctl.conf
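
        • To apply the /etc/sysctl.conf changes immediately instead of waiting for a reboot, you can additionally run:

          sysctl -p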
        

      Installing kubeadm, kubectl, kubelet, and iSulad

      1. Install the software packages using Yum.
      yum install -y kubernetes-kubeadm
      yum install -y kubernetes-client
      yum install -y kubernetes-kubelet
      yum install -y iSulad
      
      2. Enable kubelet to start upon system startup.
      systemctl enable kubelet
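
      Optionally, confirm the installed versions before proceeding (a quick check, not part of the original procedure):

        kubeadm version
        kubectl version --client
        kubelet --version
        isula version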
      

      Modifying iSulad Configurations

      1. Open the /etc/isulad/daemon.json file.
      vi /etc/isulad/daemon.json
      
      2. Modify the file as follows:
      {
          "group": "isula",
          "default-runtime": "lcr",
          "graph": "/var/lib/isulad",
          "state": "/var/run/isulad",
          "engine": "lcr",
          "log-level": "ERROR",
          "pidfile": "/var/run/isulad.pid",
          "log-opts": {
              "log-file-mode": "0600",
              "log-path": "/var/lib/isulad",
              "max-file": "1",
              "max-size": "30KB"
          },
          "log-driver": "stdout",
          "container-log": {
              "driver": "json-file"
          },
          "hook-spec": "/etc/default/isulad/hooks/default.json",
          "start-timeout": "2m",
          "storage-driver": "overlay2",
          "storage-opts": [
              "overlay2.override_kernel_check=true"
          ],
          "registry-mirrors": [
                      "docker.io"
          ],
          "insecure-registries": [
                       "k8s.gcr.io",
                       "quay.io",
                       "oci.inhuawei.com",
                       "rnd-dockerhub.huawei.com",
                       "registry.aliyuncs.com",
                       "<IP address of the local image repository>"
          ],
          "pod-sandbox-image": "k8s.gcr.io/pause:3.2",
          "native.umask": "normal",
          "network-plugin": "cni",
          "cni-bin-dir": "/opt/cni/bin",
          "cni-conf-dir": "/etc/cni/net.d",
          "image-layer-check": false,
          "use-decrypted-key": true,
          "insecure-skip-verify-enforce": false,
          "cri-runtimes": {
              "kata": "io.containerd.kata.v2"
          }
      }
      
      3. Restart the isulad service.

        systemctl restart isulad
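
        • Optionally, check that the daemon restarted cleanly and is reachable:

          systemctl status isulad
          isula info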
        

      Loading the iSulad Images

      1. Check the required system images.

        kubeadm config images list
        

        Pay attention to the versions in the output.

      2. Pull the images using the isula command.

        Note: The versions in the following commands are for reference only. Use the versions in the preceding output.

        isula pull k8simage/kube-apiserver:v1.20.15
        isula pull k8smx/kube-controller-manager:v1.20.15
        isula pull k8smx/kube-scheduler:v1.20.15
        isula pull k8smx/kube-proxy:v1.20.15
        isula pull k8smx/pause:3.2
        isula pull k8smx/coredns:1.7.0
        isula pull k8smx/etcd:3.4.13-0
        
      3. Modify the tags of the pulled images.

        isula tag k8simage/kube-apiserver:v1.20.15 k8s.gcr.io/kube-apiserver:v1.20.15
        isula tag k8smx/kube-controller-manager:v1.20.15 k8s.gcr.io/kube-controller-manager:v1.20.15
        isula tag k8smx/kube-scheduler:v1.20.15 k8s.gcr.io/kube-scheduler:v1.20.15
        isula tag k8smx/kube-proxy:v1.20.15 k8s.gcr.io/kube-proxy:v1.20.15
        isula tag k8smx/pause:3.2 k8s.gcr.io/pause:3.2
        isula tag k8smx/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
        isula tag k8smx/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
        
      4. Remove the old images.

        isula rmi k8simage/kube-apiserver:v1.20.15
        isula rmi k8smx/kube-controller-manager:v1.20.15
        isula rmi k8smx/kube-scheduler:v1.20.15
        isula rmi k8smx/kube-proxy:v1.20.15
        isula rmi k8smx/pause:3.2
        isula rmi k8smx/coredns:1.7.0
        isula rmi k8smx/etcd:3.4.13-0
        
      5. View pulled images.

        isula images
        

      Installing crictl

      yum install -y cri-tools
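
      crictl communicates with the container runtime over a CRI socket. A minimal /etc/crictl.yaml pointing it at the iSulad socket might look as follows (an illustrative sketch; the socket path matches the one passed to kubeadm below):

        cat > /etc/crictl.yaml <<EOF
        runtime-endpoint: unix:///var/run/isulad.sock
        image-endpoint: unix:///var/run/isulad.sock
        timeout: 10
        EOF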
      

      Initializing the Master Node

      Initialize the master node.

      kubeadm init --kubernetes-version v1.20.2 --cri-socket=/var/run/isulad.sock --pod-network-cidr=<IP address range of the pods>
      
      • --kubernetes-version indicates the current Kubernetes version.
      • --cri-socket specifies the engine, that is, isulad.
      • --pod-network-cidr specifies the IP address range of the pods.
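
      For example, using 10.244.0.0/16 as an illustrative pod CIDR (choose a range that does not overlap with your host or service networks):

        kubeadm init --kubernetes-version v1.20.2 --cri-socket=/var/run/isulad.sock --pod-network-cidr=10.244.0.0/16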

      Enter the following commands as prompted:

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
      

      After the initialization, copy the last two lines of the output (the kubeadm join command) and run them on each node to add the node to the cluster. The command can also be generated using the following command:

      kubeadm token create --print-join-command
      

      Adding Nodes

      Paste the kubeadm join command generated on Master, add --cri-socket=/var/run/isulad.sock before --discovery-token-ca-cert-hash, and then run the command.

      kubeadm join <IP address> --token bgyis4.euwkjqb7jwuenwvs --cri-socket=/var/run/isulad.sock --discovery-token-ca-cert-hash sha256:3792f02e136042e2091b245ac71c1b9cdcb97990311f9300e91e1c339e1dfcf6
      

      Installing Calico Network Plugins

      1. Pull Calico images.

        Configure the Calico network plugins on the Master node and pull the required images on each node.

        isula pull calico/node:v3.14.2
        isula pull calico/cni:v3.14.2
        isula pull calico/kube-controllers:v3.14.2
        isula pull calico/pod2daemon-flexvol:v3.14.2
        
      2. Download the configuration file on Master.

        wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
        
      3. Modify calico.yaml.

        # vim calico.yaml
        
        # Modify the following parameters.
        
        - name: IP_AUTODETECTION_METHOD
          value: "can-reach=197.3.10.254"

        - name: CALICO_IPV4POOL_IPIP
          value: "CrossSubnet"
        

        • If the default CNI of the pod is Flannel, add the following content to flannel.yaml:

          --iface=enp4s0
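
          In the standard kube-flannel manifest, this flag goes into the args of the kube-flannel container in the DaemonSet; a rough sketch of the surrounding YAML (enp4s0 is the example NIC name above):

            containers:
            - name: kube-flannel
              args:
              - --ip-masq
              - --kube-subnet-mgr
              - --iface=enp4s0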
          

      4. Apply the configuration file to create the Calico pods.

        kubectl apply -f calico.yaml
        
        • To delete the resources created from the configuration file, run the following command:

          kubectl delete -f calico.yaml
          
      5. View pod information.

        kubectl get pod -A -o wide
        

      Checking the Master Node Information

      kubectl get nodes -o wide
      

      To reset a node, run the following command:

      kubeadm reset
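
      kubeadm reset does not remove CNI configuration or kubeconfig files; as its own output notes, you may also want to clean them up manually, for example:

        rm -rf /etc/cni/net.d
        rm -f $HOME/.kube/config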
      
