# iSulad+Kubernetes Environment Deployment
## Preparing Cluster Servers
Prepare at least 3 machines running openEuler 20.03 LTS or later versions. The following table lists information about the machines.
| Host Name | IP Address | OS | Role | Component | 
|---|---|---|---|---|
| lab1 | 197.xxx.xxx.xxx | openEuler 20.03 LTS SP3 | Control node | iSulad/Kubernetes | 
| lab2 | 197.xxx.xxx.xxx | openEuler 20.03 LTS SP3 | Worker node 1 | iSulad/Kubernetes | 
| lab3 | 197.xxx.xxx.xxx | openEuler 20.03 LTS SP3 | Worker node 2 | iSulad/Kubernetes | 
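Before proceeding, it may help to confirm the OS release and current host name on each machine. This is a quick check, not part of the original procedure:

```shell
# Run on each machine to confirm the OS release and current host name.
grep PRETTY_NAME /etc/os-release
hostname
```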
## Preparing Images and Software Packages
The following tables list the software packages and images used in the example. The versions are for reference only.
| Software | Version | 
|---|---|
| iSulad | 2.0.17-2 | 
| kubernetes-client | 1.20.2-9 | 
| kubernetes-kubeadm | 1.20.2-9 | 
| kubernetes-kubelet | 1.20.2-9 | 

| Image | Version |
|---|---|
| k8s.gcr.io/kube-proxy | v1.20.2 | 
| k8s.gcr.io/kube-apiserver | v1.20.2 | 
| k8s.gcr.io/kube-controller-manager | v1.20.2 | 
| k8s.gcr.io/kube-scheduler | v1.20.2 | 
| k8s.gcr.io/etcd | 3.4.13-0 | 
| k8s.gcr.io/coredns | 1.7.0 | 
| k8s.gcr.io/pause | 3.2 | 
| calico/node | v3.14.2 | 
| calico/pod2daemon-flexvol | v3.14.2 | 
| calico/cni | v3.14.2 | 
| calico/kube-controllers | v3.14.2 | 
If you perform the deployment in an environment without an Internet connection, download the software packages, dependencies, and images in advance.
- Download software packages: https://dl-cdn.openeuler.openatom.cn/
- Download images from Docker Hub: https://hub.docker.com
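For an offline deployment, one way to move images onto the cluster machines is to export them on an Internet-connected host and import them on the target nodes. The sketch below assumes Docker is available on the connected host and that your iSulad build provides `isula load`; adjust the image name and version as needed:

```shell
# On an Internet-connected host: pull an image and save it to a tarball.
docker pull calico/node:v3.14.2
docker save -o calico-node-v3.14.2.tar calico/node:v3.14.2

# Copy the tarball to the offline node, then import it into iSulad.
isula load -i calico-node-v3.14.2.tar
```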
## Modifying the hosts File
- Change the host name of the machine, for example, lab1.

  ```shell
  hostnamectl set-hostname lab1
  sudo -i
  ```
- Configure host name resolution by modifying the /etc/hosts file on each machine.

  ```shell
  vim /etc/hosts
  ```
- Add the following content (IP address and host name) to the hosts file:

  ```text
  197.xxx.xxx.xxx lab1
  197.xxx.xxx.xxx lab2
  197.xxx.xxx.xxx lab3
  ```
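A quick way to confirm that the entries work (run from any of the three machines):

```shell
# Each host name should resolve to the address configured in /etc/hosts.
ping -c 1 lab2
ping -c 1 lab3
```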
## Preparing the Environment
- Disable the firewall.

  ```shell
  systemctl stop firewalld
  systemctl disable firewalld
  ```
- Disable SELinux.

  ```shell
  setenforce 0
  ```
- Disable memory swapping.

  ```shell
  swapoff -a
  sed -ri 's/.*swap.*/#&/' /etc/fstab
  ```
- Configure the network and enable forwarding.

  ```shell
  cat > /etc/sysctl.d/kubernetes.conf <<EOF
  net.bridge.bridge-nf-call-iptables = 1
  net.ipv4.ip_forward = 1
  net.bridge.bridge-nf-call-ip6tables = 1
  vm.swappiness=0
  EOF
  ```
- Enable the rules.

  ```shell
  modprobe overlay
  modprobe br_netfilter
  sysctl -p /etc/sysctl.d/kubernetes.conf
  ```
- Configure the startup script.

  ```shell
  vim /etc/init.d/k8s.sh
  ```

  Add the following content to the k8s.sh script:

  ```shell
  #!/bin/sh
  modprobe br_netfilter
  sysctl -w net.bridge.bridge-nf-call-ip6tables=1
  sysctl -w net.bridge.bridge-nf-call-iptables=1
  ```

  Change the permissions on the script:

  ```shell
  chmod +x /etc/init.d/k8s.sh
  ```
- Create the br_netfilter.service configuration file. The file should be placed in the systemd service directory, typically /usr/lib/systemd/system.

  ```shell
  vim /usr/lib/systemd/system/br_netfilter.service
  ```

  ```text
  [Unit]
  Description=Load the br_netfilter kernel module at boot
  After=default.target

  [Service]
  # The script path can be customized.
  ExecStart=/etc/init.d/k8s.sh

  [Install]
  WantedBy=default.target
  ```

  Enable the service:

  ```shell
  systemctl daemon-reload
  systemctl enable br_netfilter.service
  ```
- Configure sysctl.

  ```shell
  sed -i "s/net.ipv4.ip_forward=0/net.ipv4.ip_forward=1/g" /etc/sysctl.conf
  sed -i '12a vm.swappiness=0' /etc/sysctl.conf
  ```
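To confirm that the kernel modules and sysctl settings above took effect, a verification sketch (not part of the original steps):

```shell
# br_netfilter and overlay should appear as loaded modules.
lsmod | grep -E 'br_netfilter|overlay'
# Both values should be 1 after the configuration above.
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
# The custom service should be enabled so the module is reloaded after reboot.
systemctl is-enabled br_netfilter.service
```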
## Installing kubeadm, kubectl, kubelet, and iSulad
- Install the software packages using yum.

  ```shell
  yum install -y kubernetes-kubeadm
  yum install -y kubernetes-client
  yum install -y kubernetes-kubelet
  yum install -y iSulad
  ```
- Enable kubelet to start upon system startup.

  ```shell
  systemctl enable kubelet
  ```
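Optionally, verify that the packages were installed and that kubelet is set to start on boot:

```shell
# Query the installed packages and the kubelet enablement state.
rpm -q kubernetes-kubeadm kubernetes-client kubernetes-kubelet iSulad
systemctl is-enabled kubelet
```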
## Modifying iSulad Configurations
- Open the /etc/isulad/daemon.json file.

  ```shell
  vi /etc/isulad/daemon.json
  ```
- Modify the file as follows:

  ```json
  {
      "group": "isula",
      "default-runtime": "runc",
      "graph": "/var/lib/isulad",
      "state": "/var/run/isulad",
      "engine": "lcr",
      "log-level": "ERROR",
      "pidfile": "/var/run/isulad.pid",
      "log-opts": {
          "log-file-mode": "0600",
          "log-path": "/var/lib/isulad",
          "max-file": "1",
          "max-size": "30KB"
      },
      "log-driver": "stdout",
      "container-log": {
          "driver": "json-file"
      },
      "hook-spec": "/etc/default/isulad/hooks/default.json",
      "start-timeout": "2m",
      "storage-driver": "overlay2",
      "storage-opts": [
          "overlay2.override_kernel_check=true"
      ],
      "registry-mirrors": [
          "docker.io"
      ],
      "insecure-registries": [
          "k8s.gcr.io",
          "quay.io",
          "oci.inhuawei.com",
          "rnd-dockerhub.huawei.com",
          "registry.aliyuncs.com",
          "<IP address of the local image repository>"
      ],
      "pod-sandbox-image": "k8s.gcr.io/pause:3.2",
      "native.umask": "normal",
      "network-plugin": "cni",
      "cni-bin-dir": "/opt/cni/bin",
      "cni-conf-dir": "/etc/cni/net.d",
      "image-layer-check": false,
      "use-decrypted-key": true,
      "insecure-skip-verify-enforce": false,
      "cri-runtimes": {
          "kata": "io.containerd.kata.v2"
      }
  }
  ```
- Restart the isulad service.

  ```shell
  systemctl restart isulad
  ```
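If isulad fails to restart, a common cause is a syntax error in daemon.json. A quick check (assuming python3 is available) and a status query:

```shell
# Validate the JSON syntax of the configuration file.
python3 -m json.tool /etc/isulad/daemon.json > /dev/null && echo "daemon.json is valid JSON"
# Confirm that the service is active.
systemctl status isulad --no-pager
```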
## Loading the iSulad Images
- Check the required system images.

  ```shell
  kubeadm config images list
  ```

  Pay attention to the versions in the output.
- Pull the images using the `isula` command.

  Note: The versions in the following commands are for reference only. Use the versions in the preceding output.

  ```shell
  isula pull k8smx/kube-apiserver:v1.20.15
  isula pull k8smx/kube-controller-manager:v1.20.15
  isula pull k8smx/kube-scheduler:v1.20.15
  isula pull k8smx/kube-proxy:v1.20.15
  isula pull k8smx/pause:3.2
  isula pull k8smx/coredns:1.7.0
  isula pull k8smx/etcd:3.4.13-0
  ```
- Modify the tags of the pulled images.

  ```shell
  isula tag k8smx/kube-apiserver:v1.20.15 k8s.gcr.io/kube-apiserver:v1.20.15
  isula tag k8smx/kube-controller-manager:v1.20.15 k8s.gcr.io/kube-controller-manager:v1.20.15
  isula tag k8smx/kube-scheduler:v1.20.15 k8s.gcr.io/kube-scheduler:v1.20.15
  isula tag k8smx/kube-proxy:v1.20.15 k8s.gcr.io/kube-proxy:v1.20.15
  isula tag k8smx/pause:3.2 k8s.gcr.io/pause:3.2
  isula tag k8smx/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
  isula tag k8smx/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
  ```
- Remove the old images.

  ```shell
  isula rmi k8smx/kube-apiserver:v1.20.15
  isula rmi k8smx/kube-controller-manager:v1.20.15
  isula rmi k8smx/kube-scheduler:v1.20.15
  isula rmi k8smx/kube-proxy:v1.20.15
  isula rmi k8smx/pause:3.2
  isula rmi k8smx/coredns:1.7.0
  isula rmi k8smx/etcd:3.4.13-0
  ```
- View the pulled images.

  ```shell
  isula images
  ```
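If you prefer, the pull, tag, and cleanup steps above can be combined into a small loop. This is a sketch that assumes the same k8smx mirror and the versions reported by `kubeadm config images list`:

```shell
# Pull each image from the mirror, re-tag it under k8s.gcr.io, and drop the mirror tag.
VER=v1.20.15
for img in kube-apiserver:$VER kube-controller-manager:$VER kube-scheduler:$VER \
           kube-proxy:$VER pause:3.2 coredns:1.7.0 etcd:3.4.13-0; do
    isula pull "k8smx/$img"
    isula tag "k8smx/$img" "k8s.gcr.io/$img"
    isula rmi "k8smx/$img"
done
isula images
```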
## Installing crictl

```shell
yum install -y cri-tools
```

## Initializing the Master Node
Initialize the master node.
```shell
kubeadm init --kubernetes-version v1.20.2 --cri-socket=/var/run/isulad.sock --pod-network-cidr=<IP address range of the pods>
```

- `--kubernetes-version` indicates the current Kubernetes version.
- `--cri-socket` specifies the container engine, that is, isulad.
- `--pod-network-cidr` specifies the IP address range of the pods.
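For example, with a placeholder pod CIDR of 192.168.0.0/16 (an assumption; choose a range that does not overlap your host network and matches your Calico configuration), the command looks like this:

```shell
kubeadm init --kubernetes-version v1.20.2 \
    --cri-socket=/var/run/isulad.sock \
    --pod-network-cidr=192.168.0.0/16
```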
Enter the following commands as prompted:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

After the initialization, copy the last two lines of the output and run them on the worker nodes to join them to the cluster. The commands can also be generated with:
```shell
kubeadm token create --print-join-command
```

## Adding Nodes
Paste the kubeadm join command generated on the master node, and add --cri-socket=/var/run/isulad.sock before the --discovery-token-ca-cert-hash option.
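For example (the IP address, token, and hash are placeholders; use the values printed on your master node):

```shell
kubeadm join 197.xxx.xxx.xxx:6443 --token <token> \
    --cri-socket=/var/run/isulad.sock \
    --discovery-token-ca-cert-hash sha256:<hash>
```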
## Installing Calico Network Plugins
- Pull the Calico images.

  Configure the Calico network plugins on the master node and pull the required images on each node.

  ```shell
  isula pull calico/node:v3.14.2
  isula pull calico/cni:v3.14.2
  isula pull calico/kube-controllers:v3.14.2
  isula pull calico/pod2daemon-flexvol:v3.14.2
  ```
- Download the configuration file on the master node.

  ```shell
  wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
  ```
- Create the pods.

  ```shell
  kubectl apply -f calico.yaml
  ```

  If you want to delete the resources created from the configuration file, run the following command:

  ```shell
  kubectl delete -f calico.yaml
  ```

- View pod information.

  ```shell
  kubectl get pod -A -o wide
  ```
## Checking the Master Node Information
```shell
kubectl get nodes -o wide
```

To reset a node, run the following command:

```shell
kubeadm reset
```