1. Cluster Environment Preparation

1.1. Host plan

IP | Hostname | Role | OS | Installed components |
---|---|---|---|---|
192.168.11.71 | k8s-master1 | master, worker | CentOS 7.9 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubectl, kubelet, kube-proxy, containerd, runc |
1.2. Software versions

Software | Version | Notes |
---|---|---|
kubernetes | v1.21.14 | |
etcd | v3.5.4 | |
calico | v3.19.4 | |
coredns | v1.8.4 | |
containerd | v1.6.10 | |
runc | v1.1.0 | |
crictl | v1.24.2 | |
cni | v1.1.1 | |
cfssl | v1.6.3 | |
1.3. Network allocation

Network | CIDR | Notes |
---|---|---|
Node network | 192.168.11.0/24 | |
Service network | 10.96.0.0/12 | |
Pod network | 172.16.0.0/12 | |
2. Cluster Deployment

2.1. Host preparation

Set a unique IP address and hostname on each host. In my case every host was cloned from the same VMware template, so they all start out with the same IP. The machine UUID and MAC address must also differ between hosts; this needs special attention when the VMs were created by cloning, because clones very likely end up with identical UUIDs and MAC addresses.
```bash
# Time synchronization
yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
chronyc sources

# Disable the firewall
systemctl disable --now firewalld

# Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sestatus

# Disable swap
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0
cat /etc/fstab

# Network configuration
# Option 1
# systemctl disable --now NetworkManager
# systemctl start network && systemctl enable network
# Option 2: tell NetworkManager to leave calico interfaces alone
cat > /etc/NetworkManager/conf.d/calico.conf << EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF
systemctl restart NetworkManager

# Host system tuning
ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

yum install ipvsadm ipset sysstat conntrack libseccomp -y

tee /etc/modules-load.d/ipvs.conf << 'EOF'
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl restart systemd-modules-load.service

# Check that the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack

# Tune kernel parameters
tee /etc/sysctl.d/k8s.conf << 'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF
sysctl --system

# Load the kernel modules required by containerd
modprobe overlay
modprobe br_netfilter
tee /etc/modules-load.d/containerd.conf << 'EOF'
overlay
br_netfilter
EOF
systemctl enable --now systemd-modules-load.service

# Install commonly used tools
yum install -y wget jq psmisc vim net-tools tree telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl

reboot
```
After the reboot, check that the ipvs and containerd related kernel modules were loaded:
```bash
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
lsmod | egrep 'br_netfilter|overlay'
```
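Optionally, it is also worth confirming after the reboot that swap stayed off and that the sysctl settings survived; a minimal check could look like this:

```bash
# Swap should show 0 used and 0 total
free -h

# Both values should print 1 once /etc/sysctl.d/k8s.conf has been applied
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables

# SELinux should be disabled (or at least permissive) after the reboot
getenforce
```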
2.2. Download the etcd, Kubernetes and cfssl packages

- Available Kubernetes versions can be found in the official release directory
- See the GitHub download URLs referenced in the script below
```bash
# Create the working directory
mkdir -p /data/work
cd /data/work

# Download the cfssl tools
CFSSL_VERSION=1.6.3
#wget -O cfssl_linux-amd64 https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v${CFSSL_VERSION}/cfssl_${CFSSL_VERSION}_linux_amd64
#wget -O cfssljson_linux-amd64 https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v${CFSSL_VERSION}/cfssljson_${CFSSL_VERSION}_linux_amd64
#wget -O cfssl-certinfo_linux-amd64 https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v${CFSSL_VERSION}/cfssl-certinfo_${CFSSL_VERSION}_linux_amd64
chmod +x cfssl*
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

# Download etcd
ETCD_VERSION=v3.5.4
#wget https://ghproxy.com/https://github.com/etcd-io/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz
tar -zxvf etcd-${ETCD_VERSION}-linux-amd64.tar.gz
cp -ar etcd-${ETCD_VERSION}-linux-amd64/etcd* /usr/local/bin
chmod +x /usr/local/bin/etcd*
ls -ll /usr/local/bin/etcd*
etcdctl version

# Download the Kubernetes server binaries
K8S_VERSION=v1.21.14
#wget https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/kubernetes-server-linux-amd64.tar.gz
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy /usr/local/bin/

# Return to the /data/work working directory; all of the following steps are run from there
cd /data/work/
```
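As an optional sanity check, the copied binaries can be asked for their versions to make sure the right builds ended up in /usr/local/bin:

```bash
cfssl version
etcdctl version
kube-apiserver --version
kubectl version --client
kubelet --version
```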
2.3. Generate the CA certificate
```bash
tee ca-csr.json << 'EOF'
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

tee ca-config.json << 'EOF'
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF

mkdir -p /etc/kubernetes/pki
cp ca*.pem /etc/kubernetes/pki/
```
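If you want to double-check what was just issued, the CA certificate can be inspected with cfssl-certinfo or openssl (optional):

```bash
# Print the subject and validity of the freshly generated CA
cfssl-certinfo -cert ca.pem
openssl x509 -in ca.pem -noout -subject -dates
```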
2.4. Deploy etcd

2.4.1. Start etcd
```bash
# Create the etcd CSR file
tee etcd-csr.json << 'EOF'
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

# Generate the etcd certificates
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -hostname=127.0.0.1,k8s-master1,192.168.11.71 -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
ls -ll etcd*.pem

# Create the etcd configuration file
tee etcd-config.yml << 'EOF'
name: 'etcd1'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.11.71:2380'
listen-client-urls: 'https://192.168.11.71:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.11.71:2380'
advertise-client-urls: 'https://192.168.11.71:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd1=https://192.168.11.71:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

# Create the etcd systemd unit
tee etcd.service << 'EOF'
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd-config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

mkdir -p /etc/etcd/ssl
cp etcd*.pem /etc/etcd/ssl/
cp etcd-config.yml /etc/etcd/
cp etcd.service /usr/lib/systemd/system/
mkdir -p /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
mkdir -p /var/lib/etcd

# Start etcd
systemctl daemon-reload && systemctl enable etcd.service && systemctl start etcd.service
```
2.4.2. Verify etcd status
```bash
# Check the health of each etcd endpoint
ETCDCTL_API=3 && /usr/local/bin/etcdctl --write-out=table --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.11.71:2379 endpoint health

# List the members
ETCDCTL_API=3 && /usr/local/bin/etcdctl --write-out=table --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.11.71:2379 member list

# Show the DB size and which node is the leader
ETCDCTL_API=3 && /usr/local/bin/etcdctl --write-out=table --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.11.71:2379 endpoint status

# Check etcd performance
ETCDCTL_API=3 && /usr/local/bin/etcdctl --write-out=table --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.11.71:2379 check perf
```
Output:
```
[root@k8s-master1 work]# # Check the health of each etcd endpoint
[root@k8s-master1 work]# ETCDCTL_API=3 && /usr/local/bin/etcdctl --write-out=table --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.11.71:2379 endpoint health
+----------------------------+--------+-------------+-------+
|          ENDPOINT          | HEALTH |    TOOK     | ERROR |
+----------------------------+--------+-------------+-------+
| https://192.168.11.71:2379 |   true | 14.286573ms |       |
+----------------------------+--------+-------------+-------+
[root@k8s-master1 work]#
[root@k8s-master1 work]# # List the members
[root@k8s-master1 work]# ETCDCTL_API=3 && /usr/local/bin/etcdctl --write-out=table --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.11.71:2379 member list
+------------------+---------+-------+----------------------------+----------------------------+------------+
|        ID        | STATUS  | NAME  |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
+------------------+---------+-------+----------------------------+----------------------------+------------+
| 5ce02800cf5f635d | started | etcd1 | https://192.168.11.71:2380 | https://192.168.11.71:2379 |      false |
+------------------+---------+-------+----------------------------+----------------------------+------------+
[root@k8s-master1 work]#
[root@k8s-master1 work]# # Show the DB size and which node is the leader
[root@k8s-master1 work]# ETCDCTL_API=3 && /usr/local/bin/etcdctl --write-out=table --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.11.71:2379 endpoint status
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.11.71:2379 | 5ce02800cf5f635d |   3.5.4 |   20 kB |      true |      false |         2 |          5 |                  5 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s-master1 work]#
[root@k8s-master1 work]# # Check etcd performance
[root@k8s-master1 work]# ETCDCTL_API=3 && /usr/local/bin/etcdctl --write-out=table --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.11.71:2379 check perf
59 / 60 Booooooooooooooooooooooooooooooooooooooooooooooooooooooooooom ! 98.33%
PASS: Throughput is 150 writes/s
PASS: Slowest request took 0.073586s
PASS: Stddev is 0.002288s
PASS
[root@k8s-master1 work]#
```
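Beyond the built-in checks, a simple write/read/delete round trip against a throwaway key is a quick way to confirm the data path works (optional sketch; the key name here is arbitrary):

```bash
ETCDCTL="/usr/local/bin/etcdctl \
  --cacert=/etc/kubernetes/pki/ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  --endpoints=https://192.168.11.71:2379"

$ETCDCTL put /sanity-check "ok"   # write a test key
$ETCDCTL get /sanity-check        # should print the key and the value "ok"
$ETCDCTL del /sanity-check        # clean up
```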
2.5. Deploy kube-apiserver

2.5.1. Start kube-apiserver
```bash
tee kube-apiserver-csr.json << 'EOF'
{
  "CN": "kube-apiserver",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

# Generate the kube-apiserver certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -hostname=10.96.0.1,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.11.71 -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

# The heredoc delimiter must NOT be quoted here, otherwise
# $(head -c 16 /dev/urandom | od -An -t x | tr -d ' ') would not be expanded
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

# Create the kube-apiserver configuration file
tee kube-apiserver.conf << 'EOF'
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds \
  --anonymous-auth=false \
  --bind-address=192.168.11.71 \
  --secure-port=6443 \
  --advertise-address=192.168.11.71 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth=true \
  --service-cluster-ip-range=10.96.0.0/12 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=0-50000 \
  --tls-cert-file=/etc/kubernetes/pki/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/pki/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/pki/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/pki/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/pki/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/pki/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/kubernetes/pki/ca.pem \
  --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem \
  --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem \
  --etcd-servers=https://192.168.11.71:2379 \
  --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
EOF

# Create the kube-apiserver systemd unit
tee kube-apiserver.service << 'EOF'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Copy the kube-apiserver files into place
cp kube-apiserver*.pem /etc/kubernetes/pki/
cp token.csv /etc/kubernetes/
cp kube-apiserver.conf /etc/kubernetes/
cp kube-apiserver.service /usr/lib/systemd/system/
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

# Start the service
systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver
```
2.5.2. Verify that the kube-apiserver service started correctly
```bash
systemctl status kube-apiserver
curl --insecure https://192.168.11.71:6443/
```
Output:
```
[root@k8s-master1 ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-11-17 22:16:20 CST; 1min 25s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 52956 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─52956 /usr/local/bin/kube-apiserver --enable-admission-plugi...

Nov 17 22:17:22 k8s-master1 kube-apiserver[52956]: I1117 22:17:22.478545 ...
Nov 17 22:17:22 k8s-master1 kube-apiserver[52956]: I1117 22:17:22.484930 ...
Nov 17 22:17:32 k8s-master1 kube-apiserver[52956]: I1117 22:17:32.464028 ...
Nov 17 22:17:32 k8s-master1 kube-apiserver[52956]: I1117 22:17:32.469795 ...
Nov 17 22:17:32 k8s-master1 kube-apiserver[52956]: I1117 22:17:32.481656 ...
Nov 17 22:17:32 k8s-master1 kube-apiserver[52956]: I1117 22:17:32.485503 ...
Nov 17 22:17:42 k8s-master1 kube-apiserver[52956]: I1117 22:17:42.463590 ...
Nov 17 22:17:42 k8s-master1 kube-apiserver[52956]: I1117 22:17:42.467854 ...
Nov 17 22:17:42 k8s-master1 kube-apiserver[52956]: I1117 22:17:42.480078 ...
Nov 17 22:17:42 k8s-master1 kube-apiserver[52956]: I1117 22:17:42.486241 ...
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-master1 ~]#
[root@k8s-master1 ~]# curl --insecure https://192.168.11.71:6443/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
[root@k8s-master1 ~]#
```
2.6. Deploy kubectl

2.6.1. Create the CSR file and admin kubeconfig
```bash
tee admin-csr.json << 'EOF'
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

# Generate the admin certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.11.71:6443 --kubeconfig=kube.config

# Set the client credentials
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config

# Set the context
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config

# Use the context
kubectl config use-context kubernetes --kubeconfig=kube.config

mkdir -p ~/.kube
cp kube.config ~/.kube/config
cp kube.config /etc/kubernetes/admin.conf

# Grant the "kubernetes" user access to the kubelet API
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes --kubeconfig=/root/.kube/config

# kubectl shell completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash > ~/.kube/completion.bash.inc
source '/root/.kube/completion.bash.inc'
source $HOME/.bash_profile
```
2.6.2. Check the cluster status
```bash
kubectl cluster-info
kubectl get componentstatuses
kubectl get all --all-namespaces
```
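Optionally, the apiserver health endpoints can also be queried directly through kubectl once the kubeconfig is in place:

```bash
kubectl get --raw='/healthz'
kubectl get --raw='/readyz?verbose'
```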
2.7. Deploy kube-controller-manager

2.7.1. Start kube-controller-manager
```bash
tee kube-controller-manager-csr.json << 'EOF'
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

# Generate the kube-controller-manager certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -hostname=127.0.0.1,192.168.11.71 -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

# Create the kube-controller-manager kubeconfig
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.11.71:6443 --kubeconfig=kube-controller-manager.kubeconfig

# Set the client credentials
kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig

# Set the context
kubectl config set-context system:kube-controller-manager@kubernetes --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

# Use the context
kubectl config use-context system:kube-controller-manager@kubernetes --kubeconfig=kube-controller-manager.kubeconfig

# Create the kube-controller-manager configuration file
tee kube-controller-manager.conf << 'EOF'
KUBE_CONTROLLER_MANAGER_OPTS="--secure-port=10257 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.96.0.0/12 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=172.16.0.0/12 \
  --root-ca-file=/etc/kubernetes/pki/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/pki/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/pki/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"
EOF

# Create the systemd unit
tee kube-controller-manager.service << 'EOF'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

# Copy the files into place
cp kube-controller-manager*.pem /etc/kubernetes/pki/
cp kube-controller-manager.kubeconfig /etc/kubernetes/
cp kube-controller-manager.conf /etc/kubernetes/
cp kube-controller-manager.service /usr/lib/systemd/system/

systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager && systemctl status kube-controller-manager
```
2.7.2. Verify that kube-controller-manager started successfully
```bash
kubectl get componentstatuses
```
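If componentstatuses does not report the controller-manager as Healthy, the service status and journal are usually the quickest place to look:

```bash
systemctl status kube-controller-manager --no-pager
journalctl -u kube-controller-manager -n 50 --no-pager
```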
2.8. Deploy kube-scheduler

2.8.1. Start kube-scheduler
```bash
tee kube-scheduler-csr.json << 'EOF'
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -hostname=127.0.0.1,192.168.11.71,192.168.11.72,192.168.11.73 -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

# Create the kube-scheduler kubeconfig
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.11.71:6443 --kubeconfig=kube-scheduler.kubeconfig

# Set the client credentials
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig

# Set the context
kubectl config set-context system:kube-scheduler@kubernetes --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

# Use the context
kubectl config use-context system:kube-scheduler@kubernetes --kubeconfig=kube-scheduler.kubeconfig

# Create the kube-scheduler configuration file
tee kube-scheduler.conf << 'EOF'
KUBE_SCHEDULER_OPTS="--bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"
EOF

# Create the kube-scheduler systemd unit
tee kube-scheduler.service << 'EOF'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

# Copy the kube-scheduler files into place
cp kube-scheduler*.pem /etc/kubernetes/pki/
cp kube-scheduler.kubeconfig /etc/kubernetes/
cp kube-scheduler.conf /etc/kubernetes/
cp kube-scheduler.service /usr/lib/systemd/system/

systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler && systemctl status kube-scheduler
```
2.8.2. Verify that kube-scheduler started successfully
```bash
kubectl get componentstatuses
```
2.9. Install containerd

Run the following on every node:
```bash
# Download containerd
#wget https://ghproxy.com/https://github.com/containerd/containerd/releases/download/v1.6.10/cri-containerd-cni-1.6.10-linux-amd64.tar.gz
# Extract all of the containerd binaries directly into the root filesystem
tar -xvzf cri-containerd-cni-*-linux-amd64.tar.gz -C /

# Replace the runc bundled with containerd
#wget https://ghproxy.com/https://github.com/opencontainers/runc/releases/download/v1.1.2/runc.amd64
mv runc.amd64 runc
chmod +x runc
mv -f runc /usr/local/sbin/

# Create the containerd systemd unit
tee /etc/systemd/system/containerd.service << 'EOF'
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

# Configure the kernel modules containerd needs
cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

# Load the overlay and br_netfilter modules
systemctl restart systemd-modules-load.service

# Configure the kernel parameters containerd needs
cat <<EOF | tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the kernel parameters
sysctl --system

# Create the configuration directory
mkdir -p /etc/containerd

# Option 1: generate the default configuration and patch it
#containerd config default | tee /etc/containerd/config.toml
#sed -i "s#SystemdCgroup = false#SystemdCgroup = true#" /etc/containerd/config.toml
#sed -i "s#registry.k8s.io#registry.aliyuncs.com/google_containers#" /etc/containerd/config.toml

# Option 2: write the configuration directly. Mind the version: this is a containerd 1.6.10
# configuration; for other versions, generate the default config and modify it instead.
tee /etc/containerd/config.toml << 'EOF'
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_ca = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          cni_conf_dir = ""
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_path = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://docker.mirrors.ustc.edu.cn", "http://hub-mirror.c.163.com"]

        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
          endpoint = ["https://gcr.mirrors.ustc.edu.cn"]

        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
          endpoint = ["https://gcr.mirrors.ustc.edu.cn/google-containers/"]

        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
          endpoint = ["https://quay.mirrors.ustc.edu.cn"]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.internal.v1.tracing"]
    sampling_ratio = 1.0
    service_name = "containerd"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = false

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.service.v1.tasks-service"]
    rdt_config_file = ""

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    discard_blocks = false
    fs_options = ""
    fs_type = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""
    upperdir_label = false

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

  [plugins."io.containerd.tracing.processor.v1.otlp"]
    endpoint = ""
    insecure = false
    protocol = ""

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.bolt.open" = "0s"
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0
EOF

systemctl daemon-reload
systemctl enable --now containerd
systemctl restart containerd

# Verify that containerd is working
crictl info

# Verify that images can be pulled
ctr images pull docker.io/library/redis:alpine
```
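One optional extra step, not part of the script above: the bundled crictl probes several default endpoints unless it is told where the CRI socket lives, so pinning it to containerd's socket keeps crictl commands quiet and consistent (a small sketch):

```bash
tee /etc/crictl.yaml << 'EOF'
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

crictl info | head -n 20
```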
2.10. Deploy kubelet

2.10.1. Start kubelet
```bash
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.11.71:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig

kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubelet-bootstrap
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
kubectl describe clusterrolebinding cluster-system-anonymous
kubectl describe clusterrolebinding kubelet-bootstrap

# Create the kubelet configuration file
tee kubelet.json << 'EOF'
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/pki/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.11.71",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.96.0.10"]
}
EOF

# Create the kubelet systemd unit
tee kubelet.service << 'EOF'
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/pki \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --cni-bin-dir=/etc/cni/bin \
  --cni-conf-dir=/etc/cni/net.d \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
  --network-plugin=cni \
  --rotate-certificates \
  --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2 \
  --root-dir=/etc/cni/net.d \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
cp kubelet-bootstrap.kubeconfig /etc/kubernetes/
cp kubelet.json /etc/kubernetes/
cp kubelet.service /usr/lib/systemd/system/

systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet
```
2.10.2. Verify that kubelet started successfully
```bash
kubectl get nodes -owide
kubectl get csr
```
Output below. Note that the kubelet takes a little while to register; if the result still does not look like this after a minute or two, something in the configuration is most likely wrong.
```
[root@k8s-master1 work]# kubectl get nodes -owide
NAME          STATUS   ROLES    AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
k8s-master1   Ready    <none>   53s   v1.21.14   192.168.11.71   <none>        CentOS Linux 7 (Core)   6.0.7-1.el7.elrepo.x86_64   containerd://1.6.10
[root@k8s-master1 work]# kubectl get csr
NAME        AGE     SIGNERNAME                                    REQUESTOR           CONDITION
csr-z56vc   2m12s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
[root@k8s-master1 work]#
```
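In this walkthrough the bootstrap CSR ends up Approved,Issued on its own; should a CSR ever stay Pending, it can be approved by hand (substitute whatever name `kubectl get csr` prints):

```bash
kubectl get csr
# Approve a specific pending CSR (replace csr-z56vc with the real name)
kubectl certificate approve csr-z56vc
```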
2.11. Deploy kube-proxy
```bash
tee kube-proxy-csr.json << 'EOF'
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-proxy",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.11.71:6443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

tee kube-proxy.yaml << 'EOF'
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.11.71
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 172.16.0.0/12
healthzBindAddress: 192.168.11.71:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.11.71:10249
mode: "ipvs"
EOF

tee kube-proxy.service << 'EOF'
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

mkdir -p /var/lib/kube-proxy
cp kube-proxy*.pem /etc/kubernetes/pki/
cp kube-proxy.kubeconfig kube-proxy.yaml /etc/kubernetes/
cp kube-proxy.service /usr/lib/systemd/system/

systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy && systemctl status kube-proxy
```
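A quick, optional way to confirm kube-proxy really came up in IPVS mode is to query the proxy-mode endpoint on the metrics address configured above and to look at the IPVS rule table (assuming the ipvsadm package installed earlier):

```bash
# Should print "ipvs"
curl 192.168.11.71:10249/proxyMode

# The kubernetes service (10.96.0.1:443) should appear as an IPVS virtual server
ipvsadm -Ln
```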
2.12. Deploy the calico network component

Some readers have reported that on CentOS 7 libseccomp must be upgraded first, otherwise calico fails to install.
```bash
wget https://docs.projectcalico.org/v3.19/manifests/calico.yaml

sed -i 's/# - name: CALICO_IPV4POOL_CIDR/- name: CALICO_IPV4POOL_CIDR/g' calico.yaml
sed -i 's/# value: "192.168.0.0\/16"/ value: "172.16.0.0\/12"/g' calico.yaml
sed -i 's/"type": "calico-ipam"/"type": "calico-ipam",\n "assign_ipv4": "true"/g' calico.yaml

# Create the calico components
kubectl apply -f calico.yaml

# Check whether the calico pods were created successfully
kubectl get pods -n kube-system -owide
```
Output:
```
[root@k8s-master1 work]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
[root@k8s-master1 work]#
[root@k8s-master1 work]# kubectl get pods -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
calico-kube-controllers-7cc8dd57d9-dpcqw   1/1     Running   0          3m16s   10.88.0.2       k8s-node1   <none>           <none>
calico-node-hr9f5                          1/1     Running   0          3m15s   192.168.11.74   k8s-node1   <none>           <none>
[root@k8s-master1 work]#
```
2.13. Deploy CoreDNS
```bash
tee coredns.yaml << 'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF

kubectl apply -f coredns.yaml
```
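To confirm DNS is actually up (optional), check the CoreDNS pod and the kube-dns service that the manifest created:

```bash
kubectl get pods -n kube-system -l k8s-app=kube-dns -owide
kubectl get svc kube-dns -n kube-system
```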
2.14. Install metrics-server
```bash
tee metrics-server.yaml << 'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF

kubectl apply -f metrics-server.yaml
```
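Once the metrics-server pod is Running it can take a minute before the aggregated API answers; after that, resource metrics should be queryable (optional check):

```bash
kubectl get pods -n kube-system -l k8s-app=metrics-server
kubectl top nodes
kubectl top pods -A
```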
2.15. Cluster verification
```bash
tee pod.yaml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

kubectl apply -f pod.yaml

# Check
kubectl get pod
#NAME      READY   STATUS    RESTARTS   AGE
#busybox   1/1     Running   0          17s
```
2.16. Use a pod to resolve the kubernetes service in the default namespace
```bash
kubectl get svc
#NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
#kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h

kubectl exec busybox -n default -- nslookup kubernetes
#Server:    10.96.0.10
#Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
#
#Name:      kubernetes
#Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
```
2.16.1. Test cross-namespace DNS resolution
```bash
kubectl exec busybox -n default -- nslookup kube-dns.kube-system
#Server:    10.96.0.10
#Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
#
#Name:      kube-dns.kube-system
#Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
```
2.16.2. Every node must be able to reach the kubernetes service on port 443 and the kube-dns service on port 53
```bash
telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

curl 10.96.0.10:53
curl: (52) Empty reply from server
```
2.16.3. Pods must be able to reach each other
```bash
kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>

kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-5dffd5886b-4blh6   1/1     Running   0             77m   172.25.244.193   k8s-master01   <none>           <none>
calico-node-fvbdq                          1/1     Running   1 (75m ago)   77m   192.168.1.61     k8s-master01   <none>           <none>
calico-node-g8nqd                          1/1     Running   0             77m   192.168.1.64     k8s-node01     <none>           <none>
calico-node-mdps8                          1/1     Running   0             77m   192.168.1.65     k8s-node02     <none>           <none>
calico-node-nf4nt                          1/1     Running   0             77m   192.168.1.63     k8s-master03   <none>           <none>
calico-node-sq2ml                          1/1     Running   0             77m   192.168.1.62     k8s-master02   <none>           <none>
calico-typha-8445487f56-mg6p8              1/1     Running   0             77m   192.168.1.65     k8s-node02     <none>           <none>
calico-typha-8445487f56-pxbpj              1/1     Running   0             77m   192.168.1.61     k8s-master01   <none>           <none>
calico-typha-8445487f56-tnssl              1/1     Running   0             77m   192.168.1.64     k8s-node01     <none>           <none>
coredns-5db5696c7-67h79                    1/1     Running   0             63m   172.25.92.65     k8s-master02   <none>           <none>
metrics-server-6bf7dcd649-5fhrw            1/1     Running   0             61m   172.18.195.1     k8s-master03   <none>           <none>

# Exec into busybox and ping a pod on another node
kubectl exec -ti busybox -- sh
/ # ping 192.168.1.64
PING 192.168.1.64 (192.168.1.64): 56 data bytes
64 bytes from 192.168.1.64: seq=0 ttl=63 time=0.358 ms
64 bytes from 192.168.1.64: seq=1 ttl=63 time=0.668 ms
64 bytes from 192.168.1.64: seq=2 ttl=63 time=0.637 ms
64 bytes from 192.168.1.64: seq=3 ttl=63 time=0.624 ms
64 bytes from 192.168.1.64: seq=4 ttl=63 time=0.907 ms

# Connectivity here shows the pod can communicate across namespaces and across hosts
```
2.16.4. Create three replicas and check that they are spread across the nodes
```bash
cat > deployments.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14.2
        ports:
        - containerPort: 80
EOF

kubectl apply -f deployments.yaml
deployment.apps/nginx-deployment created

kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   0          6m25s
nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s

# Delete nginx
kubectl delete -f deployments.yaml
```
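As one final, optional end-to-end check, the deployment can be exposed as a ClusterIP Service and the resulting IPVS entry inspected before cleaning everything up (re-apply the deployment first if it was already deleted above):

```bash
kubectl apply -f deployments.yaml

# Expose the deployment and look up the assigned ClusterIP
kubectl expose deployment nginx-deployment --port=80 --target-port=80
kubectl get svc nginx-deployment

# The new service should also show up as an IPVS virtual server on the node
ipvsadm -Ln

# Clean up
kubectl delete svc nginx-deployment
kubectl delete -f deployments.yaml
```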