[Kubernetes] HA Cluster Installation from Binary Packages (v1.20.2): Practice Notes

1. Introduction

Previous posts installed Kubernetes clusters with kubeadm, but many companies also build their clusters from binary packages. This article explains how to build a complete highly available cluster from binaries. Compared with kubeadm, the binary approach is considerably more involved: you must generate the signing certificates yourself and configure and install every component step by step. The article uses v1.20.2, the latest release as of the official update on January 14, 2021.

2. Environment Preparation

2.1 Machine Plan

IP Address    Hostname   Specs   OS          Role      Installed Software
172.10.1.11   master1    2C4G    CentOS7.6   master    kube-apiserver, kube-controller-manager, kube-scheduler, etcd
172.10.1.12   master2    2C4G    CentOS7.6   master    kube-apiserver, kube-controller-manager, kube-scheduler, etcd
172.10.1.13   master3    2C4G    CentOS7.6   master    kube-apiserver, kube-controller-manager, kube-scheduler, etcd
172.10.1.14   node1      2C4G    CentOS7.6   worker    kubelet, kube-proxy
172.10.1.15   node2      2C4G    CentOS7.6   worker    kubelet, kube-proxy
172.10.1.16   node3      2C4G    CentOS7.6   worker    kubelet, kube-proxy
172.10.0.20   /          /       /           Load balancer VIP   /

Note: the VIP here is a cloud vendor's SLB; you can also implement it with haproxy + keepalived, as sketched below.
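If you take the haproxy + keepalived route, a minimal haproxy TCP front end for the three apiservers could look like the sketch below (illustrative only, not part of the original setup; keepalived would float the VIP 172.10.0.20 across the haproxy machines):

# /etc/haproxy/haproxy.cfg (sketch)
frontend k8s-apiserver
    bind *:6443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server master1 172.10.1.11:6443 check
    server master2 172.10.1.12:6443 check
    server master3 172.10.1.13:6443 check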

2.2 Software Versions

Software                                                                          Version
kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy     v1.20.2
etcd v3.4.13
calico v3.14
coredns 1.7.0

3. Building the Cluster

3.1 Basic Machine Configuration

Run the following configuration on all six machines.

3.1.1 Set the hostname

Set each machine's hostname: master1, master2, master3, node1, node2, node3.
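For example, with hostnamectl on the first master (repeat on each machine with its own name):

hostnamectl set-hostname master1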

3.1.2 Configure the hosts file

Append the cluster's entries to /etc/hosts on every machine:

cat >> /etc/hosts << EOF
172.10.1.11 master1
172.10.1.12 master2
172.10.1.13 master3
172.10.1.14 node1
172.10.1.15 node2
172.10.1.16 node3
EOF

3.1.3 Disable the firewall and SELinux

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

3.1.4 Disable swap

swapoff -a
To make this permanent, edit /etc/fstab and comment out the swap line.
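A one-liner for the fstab edit (assuming the swap entry is the only line with a whitespace-delimited swap field):

sed -i '/\sswap\s/s/^/#/' /etc/fstab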

3.1.5 Time synchronization

yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
chronyc sources

3.1.6 Kernel parameters

modprobe br_netfilter                                   # the net.bridge.* keys below exist only once this module is loaded
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

3.1.7 Load the IPVS modules

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
lsmod | grep ip_vs
lsmod | grep nf_conntrack_ipv4
yum install -y ipvsadm
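These modprobe commands do not survive a reboot. On CentOS 7, one common way to persist them is a script under /etc/sysconfig/modules/ (a sketch following that convention):

cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules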

3.2 Set Up a Working Directory

Every machine needs certificate files, component configuration files, and component systemd unit files. We generate all of them on master1 and then distribute them to the other machines. The following operations are performed on master1.

[root@master1 ~]# mkdir -p /data/work
All configuration and certificate files are generated under this directory; the remaining file-generation steps happen here.
[root@master1 ~]# ssh-keygen -t rsa -b 2048
Distribute the key to the other five machines so that master1 can log in to them without a password.
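For example, with ssh-copy-id (you will be prompted once for each machine's password):

[root@master1 ~]# for i in master2 master3 node1 node2 node3; do ssh-copy-id -i ~/.ssh/id_rsa.pub $i; done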

3.3 Deploy the etcd Cluster

3.3.1 Configure the etcd working directories

[root@master1 ~]# mkdir -p /etc/etcd                     # configuration files
[root@master1 ~]# mkdir -p /etc/etcd/ssl                 # certificate files

3.3.2 Create the etcd certificates

Download the cfssl tools

[root@master1 work]# cd /data/work/
[root@master1 work]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@master1 work]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@master1 work]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

Install the tools

[root@master1 work]# chmod +x cfssl*
[root@master1 work]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@master1 work]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@master1 work]# mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

Create the CA CSR file

[root@master1 work]# vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
      "algo": "rsa",
      "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
          "expiry": "87600h"
  }
}

Notes:
- CN (Common Name): kube-apiserver extracts this field from a client certificate and uses it as the request's user name; browsers use it to verify whether a site is legitimate.
- O (Organization): kube-apiserver extracts this field and uses it as the group the requesting user belongs to.

Generate the CA certificate

[root@master1 work]# cfssl gencert -initca ca-csr.json  | cfssljson -bare ca

Configure the CA signing policy

[root@master1 work]# vim ca-config.json
{
  "signing": {
      "default": {
          "expiry": "87600h"
        },
      "profiles": {
          "kubernetes": {
              "usages": [
                  "signing",
                  "key encipherment",
                  "server auth",
                  "client auth"
              ],
              "expiry": "87600h"
          }
      }
  }
}

Create the etcd CSR file

[root@master1 work]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.10.1.11",
    "172.10.1.12",
    "172.10.1.13"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Hubei",
    "L": "Wuhan",
    "O": "k8s",
    "OU": "system"
  }]
}

Generate the certificates

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson  -bare etcd
[root@master1 work]# ls etcd*.pem
etcd-key.pem  etcd.pem

3.3.3 Deploy the etcd cluster

Download the etcd package

[root@master1 work]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
[root@master1 work]# tar -xf etcd-v3.4.13-linux-amd64.tar.gz
[root@master1 work]# cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
[root@master1 work]# rsync -vaz etcd-v3.4.13-linux-amd64/etcd* master2:/usr/local/bin/
[root@master1 work]# rsync -vaz etcd-v3.4.13-linux-amd64/etcd* master3:/usr/local/bin/

Create the configuration file

[root@master1 work]# vim etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.10.1.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.10.1.11:2379,http://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.10.1.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.10.1.11:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://172.10.1.11:2380,etcd2=https://172.10.1.12:2380,etcd3=https://172.10.1.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Notes:
- ETCD_NAME: node name, unique within the cluster
- ETCD_DATA_DIR: data directory
- ETCD_LISTEN_PEER_URLS: peer listen address
- ETCD_LISTEN_CLIENT_URLS: client listen address
- ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
- ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
- ETCD_INITIAL_CLUSTER: cluster member addresses
- ETCD_INITIAL_CLUSTER_TOKEN: cluster token
- ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a new cluster, existing to join an existing one

Create the systemd unit file. Option 1: start with a configuration file

[root@master1 work]# vim etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Option 2: start without a configuration file

[root@master1 work]# vim etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --name=etcd1 \
  --data-dir=/var/lib/etcd/default.etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://172.10.1.11:2380 \
  --listen-client-urls=https://172.10.1.11:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://172.10.1.11:2379 \
  --initial-advertise-peer-urls=https://172.10.1.11:2380 \
  --initial-cluster=etcd1=https://172.10.1.11:2380,etcd2=https://172.10.1.12:2380,etcd3=https://172.10.1.13:2380 \
  --initial-cluster-token=etcd-cluster \
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Note: this article uses option 1.

Distribute the files to the other nodes

[root@master1 work]# cp ca*.pem /etc/etcd/ssl/
[root@master1 work]# cp etcd*.pem /etc/etcd/ssl/
[root@master1 work]# cp etcd.conf /etc/etcd/
[root@master1 work]# cp etcd.service /usr/lib/systemd/system/
[root@master1 work]# for i in master2 master3;do rsync -vaz etcd.conf $i:/etc/etcd/;done
[root@master1 work]# for i in master2 master3;do rsync -vaz etcd*.pem ca*.pem $i:/etc/etcd/ssl/;done
[root@master1 work]# for i in master2 master3;do rsync -vaz etcd.service $i:/usr/lib/systemd/system/;done

Note: on master2 and master3, change the etcd name and IP addresses in the configuration file to match each host, and create the directory /var/lib/etcd/default.etcd.
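A sketch of scripting those edits from master1 (the guard on the ETCD_INITIAL_CLUSTER line keeps the member list intact; repeat for master3 with etcd3 and 172.10.1.13):

[root@master1 work]# ssh master2 'sed -i -e "s/^ETCD_NAME=.*/ETCD_NAME=\"etcd2\"/" -e "/^ETCD_INITIAL_CLUSTER=/!s/172.10.1.11/172.10.1.12/g" /etc/etcd/etcd.conf && mkdir -p /var/lib/etcd/default.etcd'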

Start the etcd cluster

[root@master1 work]# mkdir -p /var/lib/etcd/default.etcd
[root@master1 work]# systemctl daemon-reload
[root@master1 work]# systemctl enable etcd.service
[root@master1 work]# systemctl start etcd.service
[root@master1 work]# systemctl status etcd

Note: the first start may block for a while, because each node waits for the other members to come up.

Check the cluster status

[root@master1 work]# ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://172.10.1.11:2379,https://172.10.1.12:2379,https://172.10.1.13:2379 endpoint health


3.4 Deploy the Kubernetes Components

3.4.1 Download the binaries

[root@master1 work]# wget https://dl.k8s.io/v1.20.2/kubernetes-server-linux-amd64.tar.gz
[root@master1 work]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@master1 work]# cd kubernetes/server/bin/
[root@master1 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
[root@master1 bin]# rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl master2:/usr/local/bin/
[root@master1 bin]# rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl master3:/usr/local/bin/
[root@master1 bin]# for i in node1 node2 node3;do rsync -vaz kubelet kube-proxy $i:/usr/local/bin/;done
[root@master1 bin]# cd /data/work/

3.4.2 Create the working directories

[root@master1 work]# mkdir -p /etc/kubernetes/           # component configuration files
[root@master1 work]# mkdir -p /etc/kubernetes/ssl        # component certificate files
[root@master1 work]# mkdir /var/log/kubernetes           # component log files

3.4.3 Deploy kube-apiserver

Create the CSR file

[root@master1 work]# vim kube-apiserver-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.10.1.11",
    "172.10.1.12",
    "172.10.1.13",
    "172.10.1.14",
    "172.10.1.15",
    "172.10.1.16",
    "172.10.0.20",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

Note: if the hosts field is non-empty, it must list every IP or domain name authorized to use this certificate. Because the certificate is used by the whole Kubernetes master cluster, list all master IPs and the VIP, plus the first IP of the service network (the first address of the --service-cluster-ip-range passed to kube-apiserver, here 10.255.0.1).

Generate the certificate and the token file

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
[root@master1 work]# cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

Create the configuration file

[root@master1 work]# vim kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=172.10.1.11 \
  --secure-port=6443 \
  --advertise-address=172.10.1.11 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://172.10.1.11:2379,https://172.10.1.12:2379,https://172.10.1.13:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"

Notes:
- --logtostderr: log to standard error
- --v: log verbosity
- --log-dir: log directory
- --etcd-servers: etcd cluster addresses
- --bind-address: listen address
- --secure-port: HTTPS port
- --advertise-address: address advertised to the cluster
- --allow-privileged: allow privileged containers
- --service-cluster-ip-range: Service virtual IP range
- --enable-admission-plugins: admission control plugins
- --authorization-mode: authorization modes; enables RBAC and Node authorization
- --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
- --token-auth-file: bootstrap token file
- --service-node-port-range: NodePort allocation range
- --kubelet-client-xxx: client certificate the apiserver uses to reach kubelet
- --tls-xxx-file: apiserver HTTPS certificates
- --etcd-xxxfile: certificates for connecting to etcd
- --audit-log-xxx: audit logging
- --service-account-signing-key-file and --service-account-issuer: mandatory on v1.20+; keep these lines free of inline comments, since a # after a line-continuation backslash breaks the EnvironmentFile

Create the systemd unit file

[root@master1 work]# vim kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Distribute the files to the other nodes

[root@master1 work]# cp ca*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp kube-apiserver*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp token.csv /etc/kubernetes/
[root@master1 work]# cp kube-apiserver.conf /etc/kubernetes/
[root@master1 work]# cp kube-apiserver.service /usr/lib/systemd/system/
[root@master1 work]# rsync -vaz token.csv master2:/etc/kubernetes/
[root@master1 work]# rsync -vaz token.csv master3:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-apiserver*.pem master2:/etc/kubernetes/ssl/     # note: rsync creates only the last directory level; ssl/ is created if missing, but the parent /etc/kubernetes must already exist
[root@master1 work]# rsync -vaz kube-apiserver*.pem master3:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz ca*.pem master2:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz ca*.pem master3:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-apiserver.conf master2:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-apiserver.conf master3:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-apiserver.service master2:/usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-apiserver.service master3:/usr/lib/systemd/system/

Note: on master2 and master3, change the IP addresses in the configuration file to the actual local IP.
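The same idea as the etcd edit works here; only the bind-address and advertise-address lines are touched, so the etcd-servers list stays intact (a sketch):

[root@master1 work]# ssh master2 "sed -i '/--bind-address\|--advertise-address/s/172.10.1.11/172.10.1.12/' /etc/kubernetes/kube-apiserver.conf"
[root@master1 work]# ssh master3 "sed -i '/--bind-address\|--advertise-address/s/172.10.1.11/172.10.1.13/' /etc/kubernetes/kube-apiserver.conf"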

Start the service

[root@master1 work]# systemctl daemon-reload
[root@master1 work]# systemctl enable kube-apiserver
[root@master1 work]# systemctl start kube-apiserver
[root@master1 work]# systemctl status kube-apiserver
Test:
[root@master1 work]# curl --insecure https://172.10.1.11:6443/
Any response means the apiserver is up (with anonymous auth disabled, an unauthenticated curl is expected to return a 401 Unauthorized JSON body).

3.4.4 Deploy kubectl

Create the CSR file

[root@master1 work]# vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}

Notes:
- kube-apiserver later uses RBAC to authorize client requests (kubelet, kube-proxy, Pods).
- kube-apiserver predefines certain RoleBindings for RBAC; for example, cluster-admin binds the group system:masters to the role cluster-admin, which grants permission to call every kube-apiserver API.
- O sets the certificate's group to system:masters. A client presenting this certificate passes authentication because the certificate is signed by our CA, and because system:masters is a pre-authorized group it is granted access to all APIs.
- This admin certificate is used later to generate the administrator's kubeconfig. We generally recommend RBAC for role-based access control; Kubernetes takes the certificate's CN field as the User and the O field as the Group.
- "O": "system:masters" must be exactly system:masters; otherwise the kubectl create clusterrolebinding step below fails.

Generate the certificate

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@master1 work]# cp admin*.pem /etc/kubernetes/ssl/

Create the kubeconfig file. A kubeconfig is kubectl's configuration file; it contains everything needed to reach the apiserver, such as the apiserver address, the CA certificate, and the client's own certificate.

Set the cluster parameters
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube.config
Set the client authentication parameters
[root@master1 work]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
Set the context
[root@master1 work]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Switch to the default context
[root@master1 work]# kubectl config use-context kubernetes --kubeconfig=kube.config
[root@master1 work]# mkdir ~/.kube
[root@master1 work]# cp kube.config ~/.kube/config
Grant the kubernetes certificate access to the kubelet API
[root@master1 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

Check the cluster component status. After the steps above, kubectl can communicate with kube-apiserver.

[root@master1 work]# kubectl cluster-info
[root@master1 work]# kubectl get componentstatuses
[root@master1 work]# kubectl get all --all-namespaces


Distribute the kubectl config to the other master nodes

[root@master1 work]# rsync -vaz /root/.kube/config master2:/root/.kube/
[root@master1 work]# rsync -vaz /root/.kube/config master3:/root/.kube/

Configure kubectl command completion

[root@master1 work]# yum install -y bash-completion
[root@master1 work]# source /usr/share/bash-completion/bash_completion
[root@master1 work]# source <(kubectl completion bash)
[root@master1 work]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@master1 work]# source '/root/.kube/completion.bash.inc'
[root@master1 work]# source $HOME/.bash_profile
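Note that source only affects the current shell; to keep completion across logins, append the include line to the profile first (a sketch, which also explains the re-sourcing of $HOME/.bash_profile above):

[root@master1 work]# echo "source '/root/.kube/completion.bash.inc'" >> $HOME/.bash_profile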

3.4.5 Deploy kube-controller-manager

Create the CSR file

[root@master1 work]# vim kube-controller-manager-csr.json
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "172.10.1.11",
      "172.10.1.12",
      "172.10.1.13"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "Hubei",
        "L": "Wuhan",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
}

Notes: the hosts list contains all kube-controller-manager node IPs; CN and O are both system:kube-controller-manager, so Kubernetes' built-in ClusterRoleBinding system:kube-controller-manager grants the permissions the controller manager needs.

Generate the certificate

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
[root@master1 work]# ls kube-controller-manager*.pem

Create the kube-controller-manager kubeconfig

Set the cluster parameters
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube-controller-manager.kubeconfig
Set the client authentication parameters
[root@master1 work]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
Set the context
[root@master1 work]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Switch to the default context
[root@master1 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

Create the configuration file

[root@master1 work]# vim kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16 \
  --experimental-cluster-signing-duration=87600h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"

Create the systemd unit file

[root@master1 work]# vim kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Distribute the files to the other nodes

[root@master1 work]# cp kube-controller-manager*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp kube-controller-manager.kubeconfig /etc/kubernetes/
[root@master1 work]# cp kube-controller-manager.conf /etc/kubernetes/
[root@master1 work]# cp kube-controller-manager.service /usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-controller-manager*.pem master2:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-controller-manager*.pem master3:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master2:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master3:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-controller-manager.service master2:/usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-controller-manager.service master3:/usr/lib/systemd/system/

Start the service

[root@master1 work]# systemctl daemon-reload
[root@master1 work]# systemctl enable kube-controller-manager
[root@master1 work]# systemctl start kube-controller-manager
[root@master1 work]# systemctl status kube-controller-manager

3.4.6 Deploy kube-scheduler

Create the CSR file

[root@master1 work]# vim kube-scheduler-csr.json
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "172.10.1.11",
      "172.10.1.12",
      "172.10.1.13"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Hubei",
        "L": "Wuhan",
        "O": "system:kube-scheduler",
        "OU": "system"
      }
    ]
}

Notes: the hosts list contains all kube-scheduler node IPs; CN and O are both system:kube-scheduler, so Kubernetes' built-in ClusterRoleBinding system:kube-scheduler grants the permissions kube-scheduler needs.

Generate the certificate

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
[root@master1 work]# ls kube-scheduler*.pem

Create the kube-scheduler kubeconfig

Set the cluster parameters
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube-scheduler.kubeconfig
Set the client authentication parameters
[root@master1 work]# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
Set the context
[root@master1 work]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Switch to the default context
[root@master1 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

Create the configuration file

[root@master1 work]# vim kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"

Create the systemd unit file

[root@master1 work]# vim kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Distribute the files to the other nodes

[root@master1 work]# cp kube-scheduler*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp kube-scheduler.kubeconfig /etc/kubernetes/
[root@master1 work]# cp kube-scheduler.conf /etc/kubernetes/
[root@master1 work]# cp kube-scheduler.service /usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-scheduler*.pem master2:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-scheduler*.pem master3:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master2:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master3:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-scheduler.service master2:/usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-scheduler.service master3:/usr/lib/systemd/system/

Start the service

[root@master1 work]# systemctl daemon-reload
[root@master1 work]# systemctl enable kube-scheduler
[root@master1 work]# systemctl start kube-scheduler
[root@master1 work]# systemctl status kube-scheduler

3.4.7 Deploy Docker

Install Docker on the three worker nodes.

[root@node1 ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@node1 ~]# yum install -y docker-ce
[root@node1 ~]# systemctl enable docker
[root@node1 ~]# systemctl start docker
[root@node1 ~]# docker --version

Configure the Docker registry mirrors and cgroup driver

[root@node1 ~]# cat > /etc/docker/daemon.json << EOF
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": [
        "https://1nj0zren.mirror.aliyuncs.com",
        "https://kfwkfulq.mirror.aliyuncs.com",
        "https://2lqq34jg.mirror.aliyuncs.com",
        "https://pee6w651.mirror.aliyuncs.com",
        "http://hub-mirror.c.163.com",
        "https://docker.mirrors.ustc.edu.cn",
        "http://f1361db2.m.daocloud.io",
        "https://registry.docker-cn.com"
    ]
}
EOF
[root@node1 ~]# systemctl restart docker
[root@node1 ~]# docker info | grep "Cgroup Driver"

Pull the dependency images

[root@node1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
[root@node1 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
[root@node1 ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2

[root@node1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
[root@node1 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
[root@node1 ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0

3.4.8 Deploy kubelet

The following is done on master1. Create kubelet-bootstrap.kubeconfig:

[root@master1 work]# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
Set the cluster parameters
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
Set the client authentication parameters
[root@master1 work]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
Set the context
[root@master1 work]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
Switch to the default context
[root@master1 work]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
Create the role binding
[root@master1 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Create the configuration file

[root@master1 work]#  vim kubelet.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "172.10.1.14",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "cgroupfs",                     # 如果docker的驱动为systemd处修改为systemd此处设置很重要否则后面node节点无法加入到集群
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}

Note: cgroupDriver must match Docker's cgroup driver. Since daemon.json above sets native.cgroupdriver=systemd, use systemd here; this setting matters, because a mismatch prevents the node from joining the cluster. (The comment was moved out of the file above, as JSON does not allow comments.)

Create the systemd unit file

[root@master1 work]# vim kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --network-plugin=cni \
  --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Notes:
- --hostname-override: display name, unique within the cluster
- --network-plugin: enable CNI
- --kubeconfig: empty path; generated automatically on first start and used afterwards to connect to the apiserver
- --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
- --config: configuration file
- --cert-dir: directory for generated kubelet certificates
- --pod-infra-container-image: image for the Pod infrastructure (pause) container

Distribute the files to the nodes

[root@master1 work]# cp kubelet-bootstrap.kubeconfig /etc/kubernetes/
[root@master1 work]# cp kubelet.json /etc/kubernetes/
[root@master1 work]# cp kubelet.service /usr/lib/systemd/system/
The cp commands above can be skipped if the masters will not run kubelet
[root@master1 work]# for i in node1 node2 node3;do rsync -vaz kubelet-bootstrap.kubeconfig kubelet.json $i:/etc/kubernetes/;done
[root@master1 work]# for i in node1 node2 node3;do rsync -vaz ca.pem $i:/etc/kubernetes/ssl/;done
[root@master1 work]# for i in node1 node2 node3;do rsync -vaz kubelet.service $i:/usr/lib/systemd/system/;done

Note: in kubelet.json, change address to each node's own IP; a sketch for applying this from master1 follows.
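The rsync above pushed master1's copy, which already carries node1's address (172.10.1.14), so only node2 and node3 need the edit:

[root@master1 work]# ssh node2 "sed -i 's/172.10.1.14/172.10.1.15/' /etc/kubernetes/kubelet.json"
[root@master1 work]# ssh node3 "sed -i 's/172.10.1.14/172.10.1.16/' /etc/kubernetes/kubelet.json"

Start the service. Run on each worker node: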

[root@node1 ~]# mkdir /var/lib/kubelet
[root@node1 ~]# mkdir /var/log/kubernetes
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable kubelet
[root@node1 ~]# systemctl start kubelet
[root@node1 ~]# systemctl status kubelet

Once the kubelet service is confirmed running on the workers, approve the bootstrap requests on a master. The following command shows the three CSRs sent by the three worker nodes:

[root@master1 work]# kubectl get csr


[root@master1 work]# kubectl certificate approve node-csr-HlX3cExsZohWsu8Dd6Rp_ztFejmMdpzvti_qgxo4SAQ
[root@master1 work]# kubectl certificate approve node-csr-oykYfnH_coRF2PLJH4fOHlGznOZUBPDg5BPZXDo2wgk
[root@master1 work]# kubectl certificate approve node-csr-ytRB2fikhL6dykcekGg4BdD87o-zw9WPU44SZ1nFT50
[root@master1 work]# kubectl get csr
[root@master1 work]# kubectl get nodes
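The CSR names are random on every install, so rather than copying them by hand, all pending requests can be approved in one pass (a sketch):

[root@master1 work]# kubectl get csr -o name | xargs kubectl certificate approve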


3.4.9 Deploy kube-proxy

Create the CSR file

[root@master1 work]# vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

Generate the certificate

[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@master1 work]# ls kube-proxy*.pem

Create the kubeconfig file

[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube-proxy.kubeconfig
[root@master1 work]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
[root@master1 work]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
[root@master1 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Create the kube-proxy configuration file

[root@master1 work]# vim kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 172.10.1.14
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 192.168.0.0/16                           # must match the network plugin's pod CIDR, otherwise the network component will fail when deployed
healthzBindAddress: 172.10.1.14:10256
kind: KubeProxyConfiguration
metricsBindAddress: 172.10.1.14:10249
mode: "ipvs"

Create the systemd unit file

[root@master1 work]# vim kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Distribute the files to the nodes

[root@master1 work]# cp kube-proxy*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp kube-proxy.kubeconfig kube-proxy.yaml /etc/kubernetes/
[root@master1 work]# cp kube-proxy.service /usr/lib/systemd/system/
The cp commands above can be skipped if the masters will not run kube-proxy
[root@master1 work]# for i in node1 node2 node3;do rsync -vaz kube-proxy.kubeconfig kube-proxy.yaml $i:/etc/kubernetes/;done
[root@master1 work]# for i in node1 node2 node3;do rsync -vaz kube-proxy.service $i:/usr/lib/systemd/system/;done

Note: in kube-proxy.yaml, change the addresses to each node's own IP (the same per-node sed approach as for kubelet.json works here).

Start the service. Run on each worker node:

[root@node1 ~]# mkdir -p /var/lib/kube-proxy
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable kube-proxy
[root@node1 ~]# systemctl restart kube-proxy
[root@node1 ~]# systemctl status kube-proxy

3.4.10 Configure the network component

[root@master1 work]# wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
[root@master1 work]# kubectl apply -f calico.yaml

Check the nodes again; they should all now be Ready.

[root@master1 work]# kubectl get pods -A
[root@master1 work]# kubectl get nodes

3.4.11 Deploy CoreDNS

Download the CoreDNS manifest template from https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed and adjust it as follows:
- kubernetes zone: cluster.local in-addr.arpa ip6.arpa
- upstream: forward . /etc/resolv.conf
- clusterIP: 10.255.0.2 (the clusterDNS value from the kubelet configuration)

[root@master1 work]# cat coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local  in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: k8s.gcr.io/coredns:1.7.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.255.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
[root@master1 work]# kubectl apply -f coredns.yaml


3.5 Verification

3.5.1 Deploy nginx

[root@master1 ~]# vim nginx.yaml
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19.6
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30001
      protocol: TCP
  type: NodePort
  selector:
    name: nginx
[root@master1 ~]# kubectl apply -f nginx.yaml
[root@master1 ~]# kubectl get svc
[root@master1 ~]# kubectl get pods

3.5.2 Verify

Ping-verify the nginx service

Access nginx
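The checks could look like the sketch below (the test pod name and busybox image/tag are illustrative; with kube-proxy in ipvs mode the service ClusterIP is bound locally on each worker, so it answers ping from a worker node; 30001 is the nodePort defined above):

[root@master1 ~]# kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- nslookup nginx-service-nodeport
[root@node1 ~]# ping -c 3 10.255.x.x        # substitute the CLUSTER-IP shown by kubectl get svc
[root@master1 ~]# curl http://172.10.1.14:30001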