Deploying the Node Components
This chapter uses the k8snode1 node as an example.
Preparing the Environment
# A proxy must be configured on internal networks
$ dnf install -y docker iSulad conntrack-tools socat containernetworking-plugins
$ swapoff -a
$ mkdir -p /etc/kubernetes/pki/
$ mkdir -p /etc/cni/net.d
$ mkdir -p /opt/cni
# Remove the default kubeconfig
$ rm /etc/kubernetes/kubelet.kubeconfig
## Using iSulad as the runtime ########
# Configure iSulad. Note: daemon.json must be strict JSON, so comments and
# trailing commas are not allowed. "pod-sandbox-image" sets the pause image;
# leaving "network-plugin" empty disables CNI, in which case the two cni-*
# paths below are ignored. After installing a network plugin, restart iSulad.
$ cat /etc/isulad/daemon.json
{
    "registry-mirrors": [
        "docker.io"
    ],
    "insecure-registries": [
        "k8s.gcr.io",
        "quay.io"
    ],
    "pod-sandbox-image": "k8s.gcr.io/pause:3.2",
    "network-plugin": "cni",
    "cni-bin-dir": "/usr/libexec/cni/",
    "cni-conf-dir": "/etc/cni/net.d"
}
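Because daemon.json must be strict JSON, it is worth validating before restarting iSulad. A minimal sketch using a temp copy so it can run anywhere; on a real node point python3 at /etc/isulad/daemon.json instead.

```shell
# Validate daemon.json syntax before restarting iSulad
# (temp copy for portability; use /etc/isulad/daemon.json on the node)
cat > /tmp/daemon.json <<'EOF'
{
    "registry-mirrors": [
        "docker.io"
    ],
    "insecure-registries": [
        "k8s.gcr.io",
        "quay.io"
    ],
    "pod-sandbox-image": "k8s.gcr.io/pause:3.2",
    "network-plugin": "cni",
    "cni-bin-dir": "/usr/libexec/cni/",
    "cni-conf-dir": "/etc/cni/net.d"
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid JSON"
```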
# Add proxy settings to the iSulad environment so that images can be pulled
$ cat /usr/lib/systemd/system/isulad.service
[Service]
Type=notify
Environment="HTTP_PROXY=http://name:password@proxy:8080"
Environment="HTTPS_PROXY=http://name:password@proxy:8080"
# Restart iSulad and enable it at boot
$ systemctl daemon-reload
$ systemctl restart isulad
$ systemctl enable isulad
## If using Docker as the runtime ########
$ dnf install -y docker
# In environments that need a proxy, configure one for Docker: create the drop-in file http-proxy.conf with the content below, replacing name, password, and proxy-addr with the actual values.
$ cat /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://name:password@proxy-addr:8080"
$ systemctl daemon-reload
$ systemctl restart docker
Creating the kubeconfig Files
Create the configuration file for each node in turn as follows:
$ kubectl config set-cluster openeuler-k8s \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.122.154:6443 \
--kubeconfig=k8snode1.kubeconfig
$ kubectl config set-credentials system:node:k8snode1 \
--client-certificate=/etc/kubernetes/pki/k8snode1.pem \
--client-key=/etc/kubernetes/pki/k8snode1-key.pem \
--embed-certs=true \
--kubeconfig=k8snode1.kubeconfig
$ kubectl config set-context default \
--cluster=openeuler-k8s \
--user=system:node:k8snode1 \
--kubeconfig=k8snode1.kubeconfig
$ kubectl config use-context default --kubeconfig=k8snode1.kubeconfig
Note: replace k8snode1 with the name of each node.
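The same four-command sequence has to be repeated for every node. A dry-run sketch that only prints the commands (node names and API server address are the ones used in this guide); pipe the generated script to sh once kubectl and the per-node certificates are in place.

```shell
# Dry run: generate the kubeconfig commands for every node without executing them
API_SERVER=https://192.168.122.154:6443
PKI=/etc/kubernetes/pki
for node in k8snode1 k8snode2 k8snode3; do
  cat <<EOF
kubectl config set-cluster openeuler-k8s --certificate-authority=$PKI/ca.pem --embed-certs=true --server=$API_SERVER --kubeconfig=$node.kubeconfig
kubectl config set-credentials system:node:$node --client-certificate=$PKI/$node.pem --client-key=$PKI/$node-key.pem --embed-certs=true --kubeconfig=$node.kubeconfig
kubectl config set-context default --cluster=openeuler-k8s --user=system:node:$node --kubeconfig=$node.kubeconfig
kubectl config use-context default --kubeconfig=$node.kubeconfig
EOF
done > /tmp/kubeconfig-cmds.sh
echo "generated $(wc -l < /tmp/kubeconfig-cmds.sh) commands in /tmp/kubeconfig-cmds.sh"
```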
Copying Certificates
As on the control plane, all certificates, keys, and related configuration files go in the /etc/kubernetes/pki/ directory.
$ ls /etc/kubernetes/pki/
ca.pem k8snode1.kubeconfig kubelet_config.yaml kube-proxy-key.pem kube-proxy.pem
k8snode1-key.pem k8snode1.pem kube_proxy_config.yaml kube-proxy.kubeconfig
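kubelet and kube-proxy fail to start if any of these files is missing, so a presence check on each node saves debugging time. A sketch that defaults PKI to a demo directory (and creates demo files) so it is self-contained; on a real node run it with PKI=/etc/kubernetes/pki and without the touch line.

```shell
# Check that every file from the listing above exists under $PKI
PKI=${PKI:-/tmp/pki-demo}
FILES="ca.pem k8snode1.pem k8snode1-key.pem k8snode1.kubeconfig kubelet_config.yaml \
kube-proxy.pem kube-proxy-key.pem kube_proxy_config.yaml kube-proxy.kubeconfig"
mkdir -p "$PKI"
for f in $FILES; do touch "$PKI/$f"; done   # demo files only; drop this line on a real node
missing=0
for f in $FILES; do
  [ -f "$PKI/$f" ] || { echo "missing: $PKI/$f"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all required files present in $PKI"
```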
CNI Network Configuration
For now, containernetworking-plugins supplies the CNI plugins used by kubelet; plugins such as Calico or Flannel can be introduced later to enhance the cluster's networking capabilities.
# Bridge network configuration
$ cat /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "gateway": "10.244.0.1",
        "routes": [
            { "dst": "0.0.0.0/0", "gw": "10.244.0.1" }
        ]
    },
    "dns": {
        "nameservers": [
            "10.244.0.1"
        ]
    }
}
# Loopback network configuration
$ cat /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.3.1",
    "name": "lo",
    "type": "loopback"
}
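Before starting kubelet it is worth sanity-checking the bridge config: it must be valid JSON and the IPAM gateway must sit inside the subnet. A sketch using a temp copy so it can run anywhere; on a node, point it at /etc/cni/net.d/10-bridge.conf.

```shell
# Sanity-check the bridge CNI config (valid JSON, gateway inside subnet)
cat > /tmp/10-bridge.conf <<'EOF'
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "gateway": "10.244.0.1",
        "routes": [
            { "dst": "0.0.0.0/0", "gw": "10.244.0.1" }
        ]
    }
}
EOF
python3 - /tmp/10-bridge.conf <<'EOF'
import ipaddress, json, sys
cfg = json.load(open(sys.argv[1]))
ipam = cfg["ipam"]
net = ipaddress.ip_network(ipam["subnet"])
assert ipaddress.ip_address(ipam["gateway"]) in net, "gateway outside subnet"
print("bridge config OK:", ipam["subnet"], "gateway", ipam["gateway"])
EOF
```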
Deploying the kubelet Service
Configuration file required by kubelet
$ cat /etc/kubernetes/pki/kubelet_config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
clusterDNS:
- 10.32.0.10
clusterDomain: cluster.local
runtimeRequestTimeout: "15m"
tlsCertFile: "/etc/kubernetes/pki/k8snode1.pem"
tlsPrivateKeyFile: "/etc/kubernetes/pki/k8snode1-key.pem"
Note: the clusterDNS address is 10.32.0.10, which must be consistent with the service-cluster-ip-range set earlier.
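That consistency requirement can be checked mechanically. A sketch, assuming the service-cluster-ip-range from the control-plane setup is 10.32.0.0/16 (this value is an assumption; substitute whatever was actually passed to kube-apiserver):

```shell
# SERVICE_CIDR must match kube-apiserver's --service-cluster-ip-range;
# 10.32.0.0/16 here is an assumed example value
SERVICE_CIDR=10.32.0.0/16
CLUSTER_DNS=10.32.0.10
python3 -c "
import ipaddress, sys
ok = ipaddress.ip_address('$CLUSTER_DNS') in ipaddress.ip_network('$SERVICE_CIDR')
print('clusterDNS $CLUSTER_DNS is inside $SERVICE_CIDR' if ok else 'MISMATCH: fix clusterDNS')
sys.exit(0 if ok else 1)
"
```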
Writing the systemd service file
$ cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/bin/kubelet \
    --config=/etc/kubernetes/pki/kubelet_config.yaml \
    --network-plugin=cni \
    --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
    --kubeconfig=/etc/kubernetes/pki/k8snode1.kubeconfig \
    --register-node=true \
    --hostname-override=k8snode1 \
    --cni-bin-dir="/usr/libexec/cni/" \
    --v=2
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
Note: if iSulad is used as the runtime, add the following flags to ExecStart:
--container-runtime=remote \
--container-runtime-endpoint=unix:///var/run/isulad.sock \
Deploying kube-proxy
Configuration file required by kube-proxy
$ cat /etc/kubernetes/pki/kube_proxy_config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: /etc/kubernetes/pki/kube-proxy.kubeconfig
clusterCIDR: 10.244.0.0/16
mode: "iptables"
Writing the systemd service file
$ cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://kubernetes.io/docs/reference/generated/kube-proxy/
After=network.target
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    --config=/etc/kubernetes/pki/kube_proxy_config.yaml \
    --hostname-override=k8snode1 \
    $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Starting the Component Services
$ systemctl enable kubelet kube-proxy
$ systemctl start kubelet kube-proxy
Deploy the remaining nodes in the same way.
Verifying the Cluster Status
Wait a few minutes, then check the node status with the following command:
$ kubectl get nodes --kubeconfig /etc/kubernetes/pki/admin.kubeconfig
NAME STATUS ROLES AGE VERSION
k8snode1 Ready <none> 17h v1.20.2
k8snode2 Ready <none> 19m v1.20.2
k8snode3 Ready <none> 12m v1.20.2
Deploying CoreDNS
CoreDNS can be deployed on a worker node or on the master node; here it is deployed on node k8snode1.
Writing the CoreDNS configuration file
$ cat /etc/kubernetes/pki/dns/Corefile
.:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        endpoint https://192.168.122.154:6443
        tls /etc/kubernetes/pki/admin.pem /etc/kubernetes/pki/admin-key.pem /etc/kubernetes/pki/ca.pem
        kubeconfig /etc/kubernetes/pki/admin.kubeconfig default
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf {
        max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}
Notes:
- CoreDNS listens on port 53.
- The kubernetes plugin is configured with the client certificates (cert, key, CA cert, in that order) and the kube-apiserver URL.
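A Corefile typo usually shows up only when coredns fails to start, so a quick smoke test helps: braces balanced, expected plugins present. A sketch over a trimmed temp copy; on the node, point it at /etc/kubernetes/pki/dns/Corefile.

```shell
# Minimal structural check of a Corefile (trimmed demo copy)
cat > /tmp/Corefile <<'EOF'
.:53 {
    errors
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}
EOF
python3 - /tmp/Corefile <<'EOF'
import sys
text = open(sys.argv[1]).read()
assert text.count('{') == text.count('}'), "unbalanced braces"
for plugin in ("kubernetes", "forward", "cache"):
    assert plugin in text, "missing plugin: " + plugin
print("Corefile passes basic checks")
EOF
```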
Preparing the systemd service file
$ cat /usr/lib/systemd/system/coredns.service
[Unit]
Description=Kubernetes Core DNS server
Documentation=https://github.com/coredns/coredns
After=network.target
[Service]
ExecStart=/usr/bin/bash -c "KUBE_DNS_SERVICE_HOST=10.32.0.10 coredns -conf /etc/kubernetes/pki/dns/Corefile"
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Starting the service
$ systemctl enable coredns
$ systemctl start coredns
Creating the Service Object for CoreDNS
$ cat coredns_server.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  clusterIP: 10.32.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
$ kubectl apply -f coredns_server.yaml --kubeconfig /etc/kubernetes/pki/admin.kubeconfig
Creating the Endpoints Object for CoreDNS
$ cat coredns_ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: kube-dns
  namespace: kube-system
subsets:
- addresses:
  - ip: 192.168.122.157
  ports:
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: dns
    port: 53
    protocol: UDP
  - name: metrics
    port: 9153
    protocol: TCP
$ kubectl apply -f coredns_ep.yaml --kubeconfig /etc/kubernetes/pki/admin.kubeconfig
Verifying the CoreDNS Service
# Check the Service object
$ kubectl get service -n kube-system kube-dns
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.32.0.10 <none> 53/UDP,53/TCP,9153/TCP 51m
# Check the Endpoints object
$ kubectl get endpoints -n kube-system kube-dns
NAME ENDPOINTS AGE
kube-dns 192.168.122.157:53,192.168.122.157:53,192.168.122.157:9153 52m