
    Deploying Node Components

    This section uses the k8snode1 node as an example.

    Preparing the Environment

    # Configure a proxy first if the host reaches the network through an intranet proxy
    $ dnf install -y docker iSulad conntrack-tools socat containernetworking-plugins
    $ swapoff -a
    $ mkdir -p /etc/kubernetes/pki/
    $ mkdir -p /etc/cni/net.d
    $ mkdir -p /opt/cni
    # Delete the default kubeconfig
    $ rm /etc/kubernetes/kubelet.kubeconfig
    
    ## Using iSulad as the container runtime ########
    # Configure iSulad.
    # pod-sandbox-image specifies the pause image used for pod sandboxes.
    # network-plugin set to "cni" enables the CNI network plugin; leaving it empty disables CNI,
    # in which case the two path settings below have no effect. Restart iSulad after the CNI
    # plugins are installed.
    $ cat /etc/isulad/daemon.json
    {
            "registry-mirrors": [
                    "docker.io"
            ],
            "insecure-registries": [
                    "k8s.gcr.io",
                    "quay.io"
            ],
            "pod-sandbox-image": "k8s.gcr.io/pause:3.2",
            "network-plugin": "cni",
            "cni-bin-dir": "/usr/libexec/cni/",
            "cni-conf-dir": "/etc/cni/net.d"
    }
    
    # Add proxy settings to the iSulad environment variables so that images can be pulled.
    # Replace name, password, and proxy with the actual proxy configuration.
    $ cat /usr/lib/systemd/system/isulad.service
    [Service]
    Type=notify
    Environment="HTTP_PROXY=http://name:password@proxy:8080"
    Environment="HTTPS_PROXY=http://name:password@proxy:8080"
    
    # Reload systemd, restart iSulad, and enable it at boot
    $ systemctl daemon-reload
    $ systemctl restart isulad
    $ systemctl enable isulad
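    
    # Optional check (a sketch, assuming network access through the configured proxy):
    # pull the pause image with the isula client to confirm that the registry and proxy
    # settings take effect.
    $ isula pull k8s.gcr.io/pause:3.2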
    
    
    
    
    ## Using Docker as the container runtime ########
    $ dnf install -y docker
    # If the environment requires a proxy, configure one for Docker: create the configuration file
    # http-proxy.conf with the following content, replacing name, password, and proxy-addr with
    # the actual values.
    $ cat /etc/systemd/system/docker.service.d/http-proxy.conf
    [Service]
    Environment="HTTP_PROXY=http://name:password@proxy-addr:8080"
    $ systemctl daemon-reload
    $ systemctl restart docker
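    
    # Optional check (a sketch): confirm that Docker picked up the proxy and can pull the pause image.
    $ systemctl show --property=Environment docker
    $ docker pull k8s.gcr.io/pause:3.2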
    

    Creating the kubeconfig Files

    Create the configuration file for each node in turn as follows:

    $ kubectl config set-cluster openeuler-k8s \
        --certificate-authority=/etc/kubernetes/pki/ca.pem \
        --embed-certs=true \
        --server=https://192.168.122.154:6443 \
        --kubeconfig=k8snode1.kubeconfig
    
    $ kubectl config set-credentials system:node:k8snode1 \
        --client-certificate=/etc/kubernetes/pki/k8snode1.pem \
        --client-key=/etc/kubernetes/pki/k8snode1-key.pem \
        --embed-certs=true \
        --kubeconfig=k8snode1.kubeconfig
    
    $ kubectl config set-context default \
        --cluster=openeuler-k8s \
        --user=system:node:k8snode1 \
        --kubeconfig=k8snode1.kubeconfig
    
    $ kubectl config use-context default --kubeconfig=k8snode1.kubeconfig
    

    Note: Replace k8snode1 with the name of the corresponding node.
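
    If kubeconfig files are needed for several worker nodes, the commands above can be wrapped in a loop. A minimal sketch, assuming the nodes are named k8snode1, k8snode2, and k8snode3 and that each node's certificate pair follows the <node>.pem / <node>-key.pem naming used above:

    $ for node in k8snode1 k8snode2 k8snode3; do
        kubectl config set-cluster openeuler-k8s \
            --certificate-authority=/etc/kubernetes/pki/ca.pem \
            --embed-certs=true \
            --server=https://192.168.122.154:6443 \
            --kubeconfig=${node}.kubeconfig
        kubectl config set-credentials system:node:${node} \
            --client-certificate=/etc/kubernetes/pki/${node}.pem \
            --client-key=/etc/kubernetes/pki/${node}-key.pem \
            --embed-certs=true \
            --kubeconfig=${node}.kubeconfig
        kubectl config set-context default \
            --cluster=openeuler-k8s \
            --user=system:node:${node} \
            --kubeconfig=${node}.kubeconfig
        kubectl config use-context default --kubeconfig=${node}.kubeconfig
      done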

    Copying Certificates

    As with the control plane, place all certificates, keys, and related configuration files in the /etc/kubernetes/pki/ directory.

    $ ls /etc/kubernetes/pki/
    ca.pem            k8snode1.kubeconfig  kubelet_config.yaml     kube-proxy-key.pem  kube-proxy.pem
    k8snode1-key.pem  k8snode1.pem         kube_proxy_config.yaml  kube-proxy.kubeconfig
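
    If these files were generated on the master node (192.168.122.154 in this example), they can be copied over with scp. A sketch, assuming they sit in the current working directory on the master and root SSH access to the node is available:

    $ scp ca.pem k8snode1.pem k8snode1-key.pem k8snode1.kubeconfig \
          kube-proxy.pem kube-proxy-key.pem kube-proxy.kubeconfig \
          kubelet_config.yaml kube_proxy_config.yaml \
          root@k8snode1:/etc/kubernetes/pki/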
    

    CNI Network Configuration

    First use containernetworking-plugins as the CNI plugin for kubelet; plugins such as Calico or Flannel can be introduced later to enhance the cluster's networking capabilities.

    # Bridge network configuration
    $ cat /etc/cni/net.d/10-bridge.conf
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "type": "bridge",
      "bridge": "cnio0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "gateway": "10.244.0.1"
      },
      "dns": {
        "nameservers": [
          "10.244.0.1"
        ]
      }
    }
    
    # Loopback network configuration
    $ cat /etc/cni/net.d/99-loopback.conf
    {
        "cniVersion": "0.3.1",
        "name": "lo",
        "type": "loopback"
    }
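
    Both configurations rely on the standard bridge, host-local, and loopback plugins shipped by containernetworking-plugins. A quick check (a sketch, matching the cni-bin-dir used above) that the binaries are in place:

    $ ls /usr/libexec/cni/ | grep -E 'bridge|host-local|loopback'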
    

    Deploying the kubelet Service

    Configuration File Required by kubelet

    $ cat /etc/kubernetes/pki/kubelet_config.yaml
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.pem
    authorization:
      mode: Webhook
    clusterDNS:
    - 10.32.0.10
    clusterDomain: cluster.local
    runtimeRequestTimeout: "15m"
    tlsCertFile: "/etc/kubernetes/pki/k8snode1.pem"
    tlsPrivateKeyFile: "/etc/kubernetes/pki/k8snode1-key.pem"
    

    Note: The clusterDNS address, 10.32.0.10, must be consistent with the service-cluster-ip-range set earlier.

    Writing the systemd Configuration File

    $ cat /usr/lib/systemd/system/kubelet.service
    [Unit]
    Description=kubelet: The Kubernetes Node Agent
    Documentation=https://kubernetes.io/docs/
    Wants=network-online.target
    After=network-online.target
    
    [Service]
    ExecStart=/usr/bin/kubelet \
    	--config=/etc/kubernetes/pki/kubelet_config.yaml \
    	--network-plugin=cni \
    	--pod-infra-container-image=k8s.gcr.io/pause:3.2 \
    	--kubeconfig=/etc/kubernetes/pki/k8snode1.kubeconfig \
    	--register-node=true \
    	--hostname-override=k8snode1 \
    	--cni-bin-dir="/usr/libexec/cni/" \
    	--v=2
    
    Restart=always
    StartLimitInterval=0
    RestartSec=10
    
    [Install]
    WantedBy=multi-user.target
    

    Note: If iSulad is used as the runtime, the following options must be added to the kubelet ExecStart command line:

    --container-runtime=remote \
    --container-runtime-endpoint=unix:///var/run/isulad.sock \
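
    For reference, the resulting ExecStart section with iSulad would look like this (a sketch based on the unit file above):

    ExecStart=/usr/bin/kubelet \
    	--config=/etc/kubernetes/pki/kubelet_config.yaml \
    	--network-plugin=cni \
    	--pod-infra-container-image=k8s.gcr.io/pause:3.2 \
    	--kubeconfig=/etc/kubernetes/pki/k8snode1.kubeconfig \
    	--register-node=true \
    	--hostname-override=k8snode1 \
    	--cni-bin-dir="/usr/libexec/cni/" \
    	--container-runtime=remote \
    	--container-runtime-endpoint=unix:///var/run/isulad.sock \
    	--v=2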
    

    Deploying kube-proxy

    Configuration File Required by kube-proxy

    $ cat /etc/kubernetes/pki/kube_proxy_config.yaml
    kind: KubeProxyConfiguration
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    clientConnection:
      kubeconfig: /etc/kubernetes/pki/kube-proxy.kubeconfig
    clusterCIDR: 10.244.0.0/16
    mode: "iptables"
    

    Writing the systemd Configuration File

    $ cat /usr/lib/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://kubernetes.io/docs/reference/generated/kube-proxy/
    After=network.target
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/proxy
    ExecStart=/usr/bin/kube-proxy \
    	    $KUBE_LOGTOSTDERR \
    	    $KUBE_LOG_LEVEL \
    	    --config=/etc/kubernetes/pki/kube_proxy_config.yaml \
    	    --hostname-override=k8snode1 \
    	    $KUBE_PROXY_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    

    Starting the Component Services

    $ systemctl enable kubelet kube-proxy
    $ systemctl start kubelet kube-proxy
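
    Optionally confirm that both services are running before checking the node status (a minimal sketch):

    $ systemctl status kubelet kube-proxy --no-pager
    # If a service fails to start, inspect its logs:
    $ journalctl -u kubelet -e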
    

    Deploy the remaining nodes in the same way.

    Verifying the Cluster Status

    Wait a few minutes, then check the node status with the following command:

    $ kubectl get nodes --kubeconfig /etc/kubernetes/pki/admin.kubeconfig
    NAME       STATUS   ROLES    AGE   VERSION
    k8snode1   Ready    <none>   17h   v1.20.2
    k8snode2   Ready    <none>   19m   v1.20.2
    k8snode3   Ready    <none>   12m   v1.20.2
    

    Deploying CoreDNS

    CoreDNS can be deployed on a worker node or on the master node. Here it is deployed on node k8snode1.

    Writing the CoreDNS Configuration File

    $ cat /etc/kubernetes/pki/dns/Corefile
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          endpoint https://192.168.122.154:6443
          tls /etc/kubernetes/pki/ca.pem /etc/kubernetes/pki/admin-key.pem /etc/kubernetes/pki/admin.pem
          kubeconfig /etc/kubernetes/pki/admin.kubeconfig default
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
    

    Notes:

    • CoreDNS listens on port 53.
    • The kubernetes plugin is configured with the certificates and the kube-apiserver URL.

    Preparing the systemd Service File

    $ cat /usr/lib/systemd/system/coredns.service
    [Unit]
    Description=Kubernetes Core DNS server
    Documentation=https://github.com/coredns/coredns
    After=network.target
    
    [Service]
    ExecStart=bash -c "KUBE_DNS_SERVICE_HOST=10.32.0.10 coredns -conf /etc/kubernetes/pki/dns/Corefile"
    
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    

    Starting the Service

    $ systemctl enable coredns
    $ systemctl start coredns
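
    Optionally verify that CoreDNS is up and listening on port 53 (a sketch, assuming the iproute package, which provides ss, is installed):

    $ systemctl status coredns --no-pager
    $ ss -lntup | grep ':53 '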
    

    Creating the Service Object for CoreDNS

    $ cat coredns_server.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns
      namespace: kube-system
      annotations:
        prometheus.io/port: "9153"
        prometheus.io/scrape: "true"
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        kubernetes.io/name: "CoreDNS"
    spec:
      clusterIP: 10.32.0.10
      ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
        protocol: TCP
      - name: metrics
        port: 9153
        protocol: TCP
    

    Creating the Endpoints Object for CoreDNS

    $ cat coredns_ep.yaml
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: kube-dns
      namespace: kube-system
    subsets:
      - addresses:
          - ip: 192.168.122.157
        ports:
          - name: dns-tcp
            port: 53
            protocol: TCP
          - name: dns
            port: 53
            protocol: UDP
          - name: metrics
            port: 9153
            protocol: TCP
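
    Apply both manifests so that the kube-dns Service and its Endpoints exist in the cluster (a sketch, using the admin.kubeconfig referenced earlier):

    $ kubectl apply -f coredns_server.yaml --kubeconfig /etc/kubernetes/pki/admin.kubeconfig
    $ kubectl apply -f coredns_ep.yaml --kubeconfig /etc/kubernetes/pki/admin.kubeconfig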
    

    Verifying the CoreDNS Service

    # Check the Service object
    $ kubectl get service -n kube-system kube-dns
    NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
    kube-dns   ClusterIP   10.32.0.10   <none>        53/UDP,53/TCP,9153/TCP   51m
    # Check the Endpoints object
    $ kubectl get endpoints -n kube-system kube-dns
    NAME       ENDPOINTS                                                    AGE
    kube-dns   192.168.122.157:53,192.168.122.157:53,192.168.122.157:9153   52m
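
    As a final check (a sketch, assuming bind-utils is installed so that dig is available), query CoreDNS directly on the node where it runs; the query should return the cluster IP of the kubernetes service:

    $ dig @192.168.122.157 kubernetes.default.svc.cluster.local +short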
    
