Deploying a Single-Master Kubernetes Cluster (Hands-On Lab)
Table of Contents
- Deploying a single-master Kubernetes cluster (hands-on lab)
- 1. Environment for the single-master deployment
- 2. Deploying the etcd cluster
- 2.1 Install the certificate tool cfssl
- 2.2 Create the CA certificate
- 2.3 Build the etcd cluster with the certificates and the etcd.sh script
- 2.4 Join the node servers to the etcd cluster
- 3. Deploying Docker
- 4. Deploying the flannel network component
- 5. Deploying the master components
- 6. Summary
1. Environment for the single-master deployment
- Servers used for the k8s cluster (three nodes):
Server | Software to install
---|---
master (192.168.73.11) | kube-apiserver, kube-controller-manager, kube-scheduler, etcd
node01 (192.168.73.12) | kubelet, kube-proxy, docker, flannel, etcd
node02 (192.168.73.13) | kubelet, kube-proxy, docker, flannel, etcd
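- The rest of the walkthrough identifies machines by IP address only. If you also want distinct hostnames and name-based ssh/scp between the three servers, a minimal sketch is shown below (the hostnames master/node01/node02 are my own assumption and are not required by any later step):

```bash
# Run on each server, picking the matching hostname
hostnamectl set-hostname master        # use node01 / node02 on the other two machines

# Optional: name-based resolution between the three machines
cat >> /etc/hosts <<EOF
192.168.73.11 master
192.168.73.12 node01
192.168.73.13 node02
EOF
```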
- Configure the NIC with a static IP address
```bash
vim /etc/sysconfig/network-scripts/ifcfg-ens33
# Set the NIC to a static configuration
BOOTPROTO=static
# Bring the NIC up at boot
ONBOOT=yes
# Configure IP address, netmask, gateway and DNS
IPADDR=192.168.73.11          # the other two hosts use 192.168.73.12 and 192.168.73.13
NETMASK=255.255.255.0
GATEWAY=192.168.73.2
DNS1=8.8.8.8
DNS2=114.114.114.114
```
- Keep the IP address from changing when the VM reboots
```bash
systemctl stop NetworkManager          # stop NetworkManager
systemctl disable NetworkManager       # disable NetworkManager at boot
systemctl restart network              # restart the network service
ping www.baidu.com                     # ping test to confirm Internet access
```
- Keep the firewall running (do not disable it)
```bash
systemctl start firewalld      # start the firewall
iptables -F                    # flush the firewall rule chains
setenforce 0                   # put SELinux into permissive mode (this affects SELinux, not firewalld)
```
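- Note that setenforce 0 only lasts until the next reboot. If you want SELinux to stay permissive across reboots, a small optional addition (not part of the original steps) is:

```bash
# Make the permissive mode persistent
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
grep ^SELINUX= /etc/selinux/config     # verify the change
```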
2. Deploying the etcd cluster
- Communication between etcd members is TLS-encrypted, so a CA certificate must be created first and used to issue the TLS certificates.
2.1 Install the certificate tool cfssl
- On the master node:
```bash
[root@localhost ~]# mkdir k8s
[root@localhost ~]# cd k8s/
# Write cfssl.sh: download the cfssl tools from the official site straight into /usr/local/bin
# so they are on PATH, then make them executable
[root@localhost k8s]# vi cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo

[root@localhost k8s]# bash cfssl.sh          # run the script and wait for the downloads
[root@localhost k8s]# ls /usr/local/bin/     # the three certificate tools are now installed
cfssl  cfssl-certinfo  cfssljson
# cfssl:          generates certificates
# cfssl-certinfo: shows certificate information
# cfssljson:      turns the JSON output into certificate files
```
2.2 Create the CA certificate
```bash
[root@localhost k8s]# mkdir etcd-cert
[root@localhost k8s]# cd etcd-cert/
```
- Create the CA configuration file
```bash
[root@localhost etcd-cert]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
```
- Create the CA certificate signing request (CSR)
```bash
[root@localhost etcd-cert]# cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
```
- Generate the self-signed CA from the CSR, producing ca-key.pem and ca.pem
```bash
[root@localhost etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```
- Create server-csr.json, the server CSR used to verify communication between the three etcd nodes; the hosts list must contain your own node IP addresses
```bash
[root@localhost etcd-cert]# cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.73.11",
    "192.168.73.12",
    "192.168.73.13"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
```
- Use ca.pem, ca-key.pem and the server CSR to issue the etcd server certificate, producing server-key.pem and server.pem
```bash
[root@localhost etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
```
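- Optionally, the freshly issued certificate can be inspected with the cfssl-certinfo tool installed earlier; the three node IPs should appear as SANs and the validity should match the 87600h (10-year) profile. This is an extra check, not part of the original steps:

```bash
# Show the contents of the issued etcd server certificate
cfssl-certinfo -cert server.pem
```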
2.3 Build the etcd cluster with the certificates and the etcd.sh script
- Upload etcd.sh, a script that generates the etcd configuration file and systemd unit, to /root/k8s. Its contents:
```bash
[root@localhost k8s]# vi /root/k8s/etcd.sh
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.73.11 etcd02=https://192.168.73.12:2380,etcd03=https://192.168.73.13:2380

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

WORK_DIR=/opt/etcd

# Generate the node configuration file
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# Generate the systemd unit. The \${...} escapes keep those variables unexpanded here,
# so systemd resolves them from the EnvironmentFile at runtime.
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Reload systemd, enable etcd at boot and (re)start it
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
```
- Upload the three downloaded packages (etcd, flannel and the kubernetes server tarball) to the k8s directory
- Extract the etcd package into the current directory, then create the etcd working directory
```bash
[root@localhost k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[root@localhost k8s]# ls etcd-v3.3.10-linux-amd64
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
# the etcd and etcdctl binaries from this package are used below
[root@localhost k8s]# mkdir -p /opt/etcd/{cfg,bin,ssl}
[root@localhost k8s]# ls /opt/etcd/
bin  cfg  ssl
```
- Move the etcd and etcdctl binaries into /opt/etcd/bin/
```bash
[root@localhost k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
```
- Copy the certificates into /opt/etcd/ssl/
```bash
[root@localhost k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/
[root@localhost k8s]# ls /opt/etcd/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
```
- Run etcd.sh to generate the etcd configuration file and systemd unit. The command appears to hang: it is waiting for the other cluster members to join
```bash
# Note: substitute your own IP addresses
[root@localhost k8s]# bash etcd.sh etcd01 192.168.73.11 etcd02=https://192.168.73.12:2380,etcd03=https://192.168.73.13:2380

# In another terminal session you can see that the etcd process is already running
[root@localhost k8s]# ps -ef | grep etcd
```
2.4 Join the node servers to the etcd cluster
- On the master node, copy the whole /opt/etcd tree (including the certificates) to the node servers
```bash
[root@localhost k8s]# scp -r /opt/etcd/ root@192.168.73.12:/opt/    # copy from the master to node01
[root@localhost k8s]# scp -r /opt/etcd/ root@192.168.73.13:/opt/    # copy from the master to node02
```
- Copy the etcd systemd unit from the master to the node servers
```bash
[root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.73.12:/usr/lib/systemd/system/
root@192.168.73.12's password:
etcd.service                                100%  923   105.2KB/s   00:00
[root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.73.13:/usr/lib/systemd/system/
root@192.168.73.13's password:
etcd.service                                100%  923   830.1KB/s   00:00
```
- Edit the etcd configuration file copied to node01
```bash
vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"                         # this node's name in the etcd cluster
# all of the URLs below must point at this node's own IP address
ETCD_LISTEN_PEER_URLS="https://192.168.73.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.73.12:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.73.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.73.12:2379"
```
- Edit the etcd configuration file copied to node02
```bash
vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"                         # this node's name in the etcd cluster
# all of the URLs below must point at this node's own IP address
ETCD_LISTEN_PEER_URLS="https://192.168.73.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.73.13:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.73.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.73.13:2379"
```
- On the master node, run the script again and wait for the node servers to join
```bash
[root@localhost k8s]# bash etcd.sh etcd01 192.168.73.11 etcd02=https://192.168.73.12:2380,etcd03=https://192.168.73.13:2380
```
- Quickly start etcd on node01 and node02
```bash
[root@localhost ~]# systemctl start etcd
[root@localhost ~]# systemctl status etcd
```
- Check the cluster health
```bash
# On the master node
[root@localhost k8s]# cd /opt/etcd/ssl/
[root@localhost ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.73.11:2379,https://192.168.73.12:2379,https://192.168.73.13:2379" cluster-health
```
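- Besides cluster-health, the member list subcommand (same TLS flags) is a quick way to confirm that all three peer URLs registered. This is an optional extra check, not in the original steps:

```bash
# List the etcd members; all three names and peer URLs should appear
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.73.11:2379,https://192.168.73.12:2379,https://192.168.73.13:2379" \
member list
```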
3. Deploying Docker
```bash
# On the node servers
[root@localhost ~]# yum -y install yum-utils device-mapper-persistent-data lvm2
[root@localhost ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@localhost ~]# yum -y install docker-ce

# Start Docker and enable it at boot
[root@localhost ~]# systemctl restart docker
[root@localhost ~]# systemctl enable docker

# Configure a registry mirror
[root@localhost ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://v8z6yng7.mirror.aliyuncs.com"]
}
EOF

# Restart Docker
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker

# Network tuning: enable IP forwarding
[root@localhost ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward=1
[root@localhost ~]# sysctl -p
[root@localhost ~]# service network restart
[root@localhost ~]# systemctl restart docker
[root@localhost ~]# docker images
[root@localhost ~]# docker ps -a
```
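- To confirm that the daemon.json settings actually took effect after the restart, an optional check (not part of the original steps) is:

```bash
# The mirror configured above should be listed under "Registry Mirrors"
docker info | grep -A1 "Registry Mirrors"

# IP forwarding should report 1
sysctl net.ipv4.ip_forward
```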
4. Deploying the flannel network component
- Make the etcd cluster provide the overlay network configuration to flannel
- On the master node, write the allocated subnet range into etcd so that flannel can use it
```bash
# Note: this must be run from the directory holding the certificates, /root/k8s/etcd-cert
[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.73.11:2379,https://192.168.73.12:2379,https://192.168.73.13:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
```
- Check what was written
```bash
[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.73.11:2379,https://192.168.73.12:2379,https://192.168.73.13:2379" \
get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
```
- On both node servers, create the k8s working directory and install the flannel binaries (extract the flannel package first so that flanneld and mk-docker-opts.sh are present in the current directory)
```bash
[root@localhost ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz   # extract the flannel package uploaded to the node
[root@localhost ~]# mkdir -p /opt/kubernetes/{cfg,bin,ssl}
[root@localhost ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
[root@localhost ~]# ls /opt/kubernetes/bin/
```
- Upload flannel.sh, a script that generates the flannel configuration file and systemd unit
```bash
# Script contents:
[root@localhost ~]# vi flannel.sh
#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

# The \$FLANNEL_OPTIONS escape keeps the variable unexpanded here, so systemd reads it
# from the EnvironmentFile at runtime.
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
```
- Enable the flannel network on both node servers
```bash
[root@localhost ~]# bash flannel.sh https://192.168.73.11:2379,https://192.168.73.12:2379,https://192.168.73.13:2379
```
- Check that flanneld is running
```bash
[root@localhost ~]# systemctl status flanneld
```
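- If flanneld started correctly it also creates a flannel.1 VXLAN interface holding a /24 carved out of 172.17.0.0/16; an optional check is:

```bash
# The interface should exist and carry an address from the 172.17.0.0/16 overlay range
ip addr show flannel.1
```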
- Connect Docker to the flannel network
- On both node servers, edit the Docker systemd unit
```bash
[root@localhost ~]# vi /usr/lib/systemd/system/docker.service
# two changes: add the EnvironmentFile line and pass $DOCKER_NETWORK_OPTIONS to dockerd
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
```
- Check the subnet segment that flannel allocated to this node
```bash
[root@localhost ~]# cat /run/flannel/subnet.env
```
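- For orientation, the file written by mk-docker-opts.sh typically looks like the sketch below; the exact /24 differs per node, so treat these values as an illustration rather than output copied from this lab:

```bash
DOCKER_OPT_BIP="--bip=172.17.42.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.42.1/24 --ip-masq=false --mtu=1450"
```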
- Restart the Docker service
```bash
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker
```
- On each node server, create a centos:7 container and drop into it
```bash
[root@localhost ~]# docker run -it centos:7 /bin/bash
Unable to find image 'centos:7' locally
7: Pulling from library/centos
ab5ef0e58194: Pull complete
Digest: sha256:4a701376d03f6b39b8c2a8f4a8e499441b0d567f9ab9d58e4991de4472fb813c
Status: Downloaded newer image for centos:7
[root@690ec8bdaa81 /]# yum install -y net-tools     # provides the ifconfig command
```
- Inside each container, check the IP address and ping the container on the other node
```bash
ifconfig                                    # show this container's IP address
ping <the other container's IP address>     # the two containers should reach each other across nodes
```
5. Deploying the master components
- On the master node, generate the certificates for kube-apiserver
```bash
[root@master k8s]# mkdir -p /opt/kubernetes/{cfg,bin,ssl}      # create the k8s working directory
[root@master k8s]# mkdir k8s-cert                               # create the k8s certificate directory
[root@master k8s]# unzip master.zip -d /opt/kubernetes/         # unzip master.zip
[root@master k8s]# ls /opt/kubernetes/
apiserver.sh  bin  cfg  controller-manager.sh  scheduler.sh  ssl
# controller-manager.sh was extracted without execute permission
[root@master k8s]# chmod +x /opt/kubernetes/controller-manager.sh   # make it executable
[root@master k8s]# cd k8s-cert/
[root@master k8s-cert]# vim k8s-cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------
# The hosts list must contain every address the apiserver will be reached at.
# For this single-master lab that is the master IP 192.168.73.11; in an HA setup the
# second master, the VIP and the nginx load-balancer addresses would be added here as well.
cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.73.11",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Why are no node IP addresses listed in these CSRs? Because hard-coding node IPs here
# would make it very painful to add or remove node servers later.
```
- Generate the certificates
```bash
[root@master k8s-cert]# bash k8s-cert.sh      # generate the certificates
[root@master k8s-cert]# ls
admin.csr       admin.pem       ca-csr.json  k8s-cert.sh          kube-proxy-key.pem  server-csr.json
admin-csr.json  ca-config.json  ca-key.pem   kube-proxy.csr       kube-proxy.pem      server-key.pem
admin-key.pem   ca.csr          ca.pem       kube-proxy-csr.json  server.csr          server.pem
[root@master k8s-cert]# ls *.pem
admin-key.pem  ca-key.pem  kube-proxy-key.pem  server-key.pem
admin.pem      ca.pem      kube-proxy.pem      server.pem
[root@master k8s-cert]# cp ca*.pem server*.pem /opt/kubernetes/ssl/    # copy the certificates into the working directory
[root@master k8s-cert]# ls /opt/kubernetes/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
```
- Extract the Kubernetes server-side tarball
```bash
[root@master k8s-cert]# cd ..
[root@master k8s]# ls
cfssl.sh   etcd-v3.3.10-linux-amd64            k8s-cert
etcd-cert  etcd-v3.3.10-linux-amd64.tar.gz     kubernetes-server-linux-amd64.tar.gz
etcd.sh    flannel-v0.10.0-linux-amd64.tar.gz  master.zip
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
```
- Copy the key server-side binaries into the k8s working directory
```bash
[root@master k8s]# cd kubernetes/server/bin/
[root@master bin]# cp kube-controller-manager kube-scheduler kubectl kube-apiserver /opt/kubernetes/bin/
[root@master bin]# ls /opt/kubernetes/bin/
kube-apiserver  kube-controller-manager  kubectl  kube-scheduler
```
- Create the bootstrap token file used by the kubelet-bootstrap user
```bash
[root@master bin]# cd /root/k8s/
[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '    # generate a random token
7ea8f86b157225fd4b9273765e88a3ca
[root@master k8s]# vim /opt/kubernetes/cfg/token.csv
7ea8f86b157225fd4b9273765e88a3ca,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
# format: token,user name,uid,group — this is the user the master uses to bootstrap node kubelets
```
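- For reference, the token generation and the token.csv file can also be produced in one go; this is an equivalent convenience sketch that writes the same path and format as the manual steps above:

```bash
# Generate a random 32-hex-character bootstrap token and write token.csv in one step
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
cat /opt/kubernetes/cfg/token.csv
```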
- Start the apiserver (which stores its data in the etcd cluster) and check its status
```bash
[root@master kubernetes]# bash apiserver.sh 192.168.73.11 https://192.168.73.11:2379,https://192.168.73.12:2379,https://192.168.73.13:2379
[root@master kubernetes]# ls /opt/kubernetes/cfg/
kube-apiserver  token.csv
[root@master kubernetes]# netstat -ntap | grep kube
[root@master kubernetes]# ps aux | grep kube
[root@master kubernetes]# vim /opt/kubernetes/cfg/kube-apiserver
...(output omitted)
--secure-port=6443        # the HTTPS port the apiserver listens on
...(output omitted)
[root@master kubernetes]# netstat -ntap | grep 6443
tcp   0   0 192.168.73.11:6443    0.0.0.0:*              LISTEN       12636/kube-apiserve
tcp   0   0 192.168.73.11:40686   192.168.73.11:6443     ESTABLISHED  12636/kube-apiserve
tcp   0   0 192.168.73.11:6443    192.168.73.11:40686    ESTABLISHED  12636/kube-apiserve
```
- Start the scheduler service
```bash
[root@master kubernetes]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master kubernetes]# systemctl status kube-scheduler
```
- Start the controller-manager
```bash
[root@master kubernetes]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master kubernetes]# systemctl status kube-controller-manager
```
- Check the status of the master components
```bash
[root@master kubernetes]# /opt/kubernetes/bin/kubectl get cs     # every component reports Healthy
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
```
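- The following master-side steps invoke kubectl without its full path, so it is convenient to put /opt/kubernetes/bin on PATH now (the kubeconfig script below also exports it); one way, as a small optional sketch:

```bash
# Make kubectl available without the full /opt/kubernetes/bin prefix
echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
source /etc/profile
kubectl get cs    # should print the same Healthy output as above
```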
- Deploying node01
- On the master, copy kubelet and kube-proxy to the node servers. Note that on the node side this walkthrough uses /opt/k8s as the Kubernetes working directory, so create it there first with mkdir -p /opt/k8s/{cfg,bin,ssl}
```bash
[root@master kubernetes]# cd /root/k8s/kubernetes/server/bin/
[root@master bin]# ls
apiextensions-apiserver              kube-apiserver.docker_tag           kube-proxy
cloud-controller-manager             kube-apiserver.tar                  kube-proxy.docker_tag
cloud-controller-manager.docker_tag  kube-controller-manager             kube-proxy.tar
cloud-controller-manager.tar         kube-controller-manager.docker_tag  kube-scheduler
hyperkube                            kube-controller-manager.tar         kube-scheduler.docker_tag
kubeadm                              kubectl                             kube-scheduler.tar
kube-apiserver                       kubelet
[root@master bin]# scp kubelet kube-proxy root@192.168.73.12:/opt/k8s/bin
[root@master bin]# scp kubelet kube-proxy root@192.168.73.13:/opt/k8s/bin
```
- On the node servers, extract node.zip
```bash
[root@node01 ~]# rz -E
rz waiting to receive.
[root@node01 ~]# ls
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz  node.zip
[root@node01 ~]# unzip node.zip
[root@node01 ~]# ls
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz  kubelet.sh  node.zip  proxy.sh
```
- On the master, create the kubeconfig directory and the kubeconfig generation script
```bash
[root@master bin]# cd /root/k8s/
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/
[root@master kubeconfig]# vim kubeconfig
APISERVER=$1
SSL_DIR=$2

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set the client credentials (the token is the one written to /opt/kubernetes/cfg/token.csv earlier)
kubectl config set-credentials kubelet-bootstrap \
  --token=7ea8f86b157225fd4b9273765e88a3ca \
  --kubeconfig=bootstrap.kubeconfig

# Set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

[root@master kubeconfig]# export PATH=$PATH:/opt/kubernetes/bin   # put kubectl on PATH (can also be added to /etc/profile)
```
- Generate the kubeconfig files and copy them to the node servers
```bash
[root@master kubeconfig]# bash kubeconfig 192.168.73.11 /root/k8s/k8s-cert/
[root@master kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.73.12:/opt/k8s/cfg
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.73.13:/opt/k8s/cfg
```
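- Before copying the files, it can be worth a quick sanity check that the apiserver URL and the embedded CA actually ended up in bootstrap.kubeconfig; this is an optional step, not part of the original walkthrough:

```bash
# The output should show server: https://192.168.73.11:6443 and certificate-authority-data: DATA+OMITTED
kubectl config view --kubeconfig=bootstrap.kubeconfig
```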
- Create the bootstrap role binding that allows the token to request certificate signing from the apiserver
```bash
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
```
- On node01, generate the kubelet and kubelet.config configuration files
```bash
[root@node01 ~]# vim kubelet.sh          # change every /opt/kubernetes path in the script to /opt/k8s
[root@node01 ~]# bash kubelet.sh 192.168.73.12
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node01 ~]# ls /opt/k8s/cfg/
bootstrap.kubeconfig  flanneld  kubelet  kubelet.config  kube-proxy.kubeconfig
[root@node01 ~]# systemctl status kubelet
```
- On the master, check for node01's certificate signing request and its status
```bash
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-xmi9gQiUIFuyZ9KAIKFIyf4JiQOuPN1tACjVzu_SH6s   71s   kubelet-bootstrap   Pending
# Pending: waiting for the cluster to issue this node a certificate
```
- Approve the certificate, then check the CSR status again
```bash
[root@master kubeconfig]# kubectl certificate approve node-csr-xmi9gQiUIFuyZ9KAIKFIyf4JiQOuPN1tACjVzu_SH6s
certificatesigningrequest.certificates.k8s.io/node-csr-xmi9gQiUIFuyZ9KAIKFIyf4JiQOuPN1tACjVzu_SH6s approved
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-xmi9gQiUIFuyZ9KAIKFIyf4JiQOuPN1tACjVzu_SH6s   3m9s   kubelet-bootstrap   Approved,Issued
# Approved,Issued: the node has been allowed into the cluster
```
- Check the cluster nodes and start kube-proxy
```bash
# If a single node is NotReady, check its kubelet; if many nodes are NotReady, check the apiserver
# (and, in an HA setup, the VIP and keepalived)
[root@master kubeconfig]# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.73.12   Ready    <none>   92s   v1.12.3

[root@node01 ~]# vim proxy.sh            # change every /opt/kubernetes path in the script to /opt/k8s
[root@node01 ~]# bash proxy.sh 192.168.73.12
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node01 ~]# systemctl status kube-proxy.service     # the service is in the running state
```
- Deploying node02
- Copy the configuration files generated on node01 straight to node02
```bash
[root@node01 ~]# scp -r /opt/k8s/cfg/ root@192.168.73.13:/opt/k8s/cfg/
[root@node01 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.73.13:/usr/lib/systemd/system    # copy the systemd units as well
```
- On node02, change the IP address in the three configuration files
```bash
[root@node02 ~]# cd /opt/k8s/cfg/
[root@node02 cfg]# vim kubelet
--hostname-override=192.168.73.13        # change to node02's own IP address
[root@node02 cfg]# vim kubelet.config
address: 192.168.73.13
[root@node02 cfg]# vim kube-proxy
--hostname-override=192.168.73.13
```
- Start the services and check their status
```bash
[root@node02 cfg]# systemctl start kubelet
[root@node02 cfg]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node02 cfg]# systemctl status kubelet
[root@node02 cfg]# systemctl start kube-proxy
[root@node02 cfg]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node02 cfg]# systemctl status kube-proxy
```
- On the master, check for and approve node02's certificate signing request
```bash
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-A8BX2W67HKODPGvn0Q0dZ8Lr5Q8_2fXFt1O0STzZdis   74s   kubelet-bootstrap   Pending
node-csr-xmi9gQiUIFuyZ9KAIKFIyf4JiQOuPN1tACjVzu_SH6s   21m   kubelet-bootstrap   Approved,Issued
[root@master kubeconfig]# kubectl certificate approve node-csr-A8BX2W67HKODPGvn0Q0dZ8Lr5Q8_2fXFt1O0STzZdis    # approve the certificate
certificatesigningrequest.certificates.k8s.io/node-csr-A8BX2W67HKODPGvn0Q0dZ8Lr5Q8_2fXFt1O0STzZdis approved
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-A8BX2W67HKODPGvn0Q0dZ8Lr5Q8_2fXFt1O0STzZdis   99s   kubelet-bootstrap   Approved,Issued
node-csr-xmi9gQiUIFuyZ9KAIKFIyf4JiQOuPN1tACjVzu_SH6s   21m   kubelet-bootstrap   Approved,Issued
[root@master kubeconfig]# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.73.12   Ready    <none>   19m   v1.12.3
192.168.73.13   Ready    <none>   44s   v1.12.3
```
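- As an optional smoke test (not part of the original walkthrough), schedule a small workload and confirm it lands on the new nodes; on k8s v1.12, kubectl run with --replicas still creates a Deployment:

```bash
# Create a two-replica nginx deployment (pulls the public nginx image from Docker Hub)
kubectl run nginx --image=nginx --replicas=2
# Both pods should be scheduled onto 192.168.73.12 / 192.168.73.13 and reach the Running state
kubectl get pods -o wide
```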
6. Summary
- A quick recap of the single-master deployment flow:
1. Self-sign the etcd certificates
2. Deploy etcd
3. Install Docker on the node servers
4. Deploy flannel (write the subnet configuration into etcd first)
5. Self-sign the apiserver certificates
6. Deploy the kube-apiserver component (token.csv)
7. Deploy controller-manager (pointing at the apiserver certificates) and scheduler
8. Generate the kubeconfig files (bootstrap.kubeconfig and kube-proxy.kubeconfig)
9. Deploy the kubelet component
10. Deploy the kube-proxy component
11. kubectl get csr && kubectl certificate approve to issue the node certificates
12. The node is added to the cluster