
Deploying a Kubernetes Cluster on Ubuntu 20.04 with kubeadm

Kubernetes is a tool for orchestrating and managing containerized applications at scale, on-premises or across hybrid cloud environments.

Kubeadm is a tool provided by Kubernetes that helps users install a production-ready Kubernetes cluster following best practices.

A Kubernetes deployment uses two types of servers:

  • Master node

    The node that controls and manages a set of worker nodes (where the workloads run).

    The master node has the following components to help manage the worker nodes (a quick check that lists them on a running cluster follows this overview):

    • Kube-APIServer

      The main implementation is kube-apiserver. kube-apiserver is designed to scale horizontally, that is, it scales by deploying more instances. You can run several kube-apiserver instances and balance traffic between them.

    • Kube-Controller-Manager

      Runs a set of controllers for the running cluster.

      Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.

      Some of these controllers:

      • Node controller: responsible for noticing and responding when a node goes down.
      • Job controller: watches Job objects, which represent one-off tasks, and creates Pods to run those tasks to completion.
      • Endpoints controller: populates Endpoints objects (that is, joins Services and Pods).
      • Service Account & Token controllers: create default accounts and API access tokens for new namespaces.
    • Etcd

      A highly available key-value store that serves as the backing database for Kubernetes.

      It stores and replicates the entire state of the Kubernetes cluster. It is written in Go and uses the Raft consensus protocol.

    • Kube Scheduler

      Watches for newly created Pods that have no node assigned and selects a node for them to run on.

      Scheduling decisions take into account individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.

  • Worker node

    Maintains the running Pods and provides the Kubernetes runtime environment.
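Once the cluster built later in this guide is up, these control-plane components run as pods in the kube-system namespace; a quick way to list them (assuming kubectl has already been configured, as shown further below) is:

kubectl get pods -n kube-system -o wide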

Minimum requirements for a viable setup

Memory: 2 GB or more of RAM per machine (any less leaves little room for your applications).

CPU: 2 CPUs or more.

Network: full network connectivity between all machines in the cluster (a public or private network is fine).

Other: a unique hostname, MAC address and product_uuid for every node.

Swap: must be disabled for the kubelet to work properly.
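These requirements can be checked up front on each machine, for example:

free -h                                   # total memory
nproc                                     # number of CPUs
hostname                                  # must be unique per node
ip link show                              # MAC addresses must be unique per node
sudo cat /sys/class/dmi/id/product_uuid   # must be unique per node
swapon --show                             # should print nothing once swap is disabled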

Installing the Kubernetes cluster on Ubuntu 20.04

The example setup consists of three servers (one master node and two worker nodes). More nodes can be added to suit the intended use case and load, for example three master nodes for high availability (HA).

Type     Hostname                  Hardware            IP              Swap       SELinux
Master   k8s-master-01.test.com    4 GB RAM, 2 vCPUs   192.168.1.140   disabled   disabled
Worker   k8s-worker-01.test.com    4 GB RAM, 2 vCPUs   192.168.1.141   disabled   disabled
Worker   k8s-worker-02.test.com    4 GB RAM, 2 vCPUs   192.168.1.142   disabled   disabled

Firewall configuration

Master node

# enable ufw
sudo ufw enable

# shell
sudo ufw allow 22/tcp

# Kubernetes API server
sudo ufw allow 6443/tcp

# etcd server client API
sudo ufw allow 2379:2380/tcp

# Kubelet API
sudo ufw allow 10250/tcp

# kube-scheduler
sudo ufw allow 10259/tcp

# kube-controller-manager
sudo ufw allow 10257/tcp

Worker nodes

# shell
sudo ufw allow 22/tcp

# Kubelet API
sudo ufw allow 10250/tcp

# NodePort Services
sudo ufw allow 30000:32767/tcp
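The active rules can be verified on any node with:

sudo ufw status verbose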

Hosts file configuration

All nodes.

sudo tee -a /etc/hosts<<EOF
192.168.1.140 k8s-cluster.test.com
192.168.1.140 k8s-master-01 k8s-master-01.test.com
192.168.1.141 k8s-worker-01 k8s-worker-01.test.com
192.168.1.142 k8s-worker-02 k8s-worker-02.test.com
EOF
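A quick sanity check that the names resolve from any node:

getent hosts k8s-cluster.test.com
ping -c 1 k8s-master-01.test.com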

Disable SELinux

All nodes. (Ubuntu ships with AppArmor rather than SELinux by default, so this step is only relevant if SELinux has been installed on the machines.)

sudo setenforce 0
sudo sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
sudo sed -i 's#SELINUX=permissive#SELINUX=disabled#g' /etc/selinux/config

Disable swap

All nodes.

sudo sed -i 's/^\(.*swap.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
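Confirm that no swap is active (both commands should show zero swap in use):

swapon --show
free -h | grep -i swap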

Install the Kubernetes servers

All nodes.

Provision the servers that will be used for the Kubernetes deployment on Ubuntu 20.04. The provisioning process varies depending on the virtualization or cloud environment in use.

Once the servers are ready, update them.

sudo apt update
sudo apt -y upgrade && sudo systemctl reboot

Install kubelet, kubeadm and kubectl

sudo apt update
sudo apt -y install curl apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt -y install vim git curl wget kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
  • Mainland China: use the Aliyun mirrors instead:
sudo apt update
sudo apt -y install curl apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt -y install vim git wget
sudo apt -y install kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Confirm the installation by checking the versions of kubectl and kubeadm.

$ kubectl version --client && kubeadm version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.2", GitCommit:"9d142434e3af351a628bffee3939e64c681afa4d", GitTreeState:"clean", BuildDate:"2022-01-19T17:35:46Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.2", GitCommit:"9d142434e3af351a628bffee3939e64c681afa4d", GitTreeState:"clean", BuildDate:"2022-01-19T17:34:34Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}

Enable kernel modules and configure sysctl

# Enable kernel modules
sudo modprobe overlay
sudo modprobe br_netfilter

# Add some settings to sysctl
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Reload sysctl
sudo sysctl --system
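Confirm that the modules are loaded and the settings are active:

lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward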

Install a container runtime

All nodes.

To run containers in Pods, Kubernetes uses a container runtime. Supported container runtimes include:

  • Docker (not recommended)
  • CRI-O
  • Containerd
  • Others

Note: you must choose exactly one runtime.

Install Docker (not recommended)

# Add repo and Install packages
sudo apt update
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y containerd.io docker-ce docker-ce-cli

# Create required directories
sudo mkdir -p /etc/systemd/system/docker.service.d

# Create daemon json config file
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

# Start and enable Services
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
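A quick check that Docker came up with the systemd cgroup driver configured above:

sudo docker info | grep -i 'cgroup driver'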

Install CRI-O

# Ensure you load modules
sudo modprobe overlay
sudo modprobe br_netfilter

# Set up required sysctl params
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Reload sysctl
sudo sysctl --system

# Add CRI-O repo
sudo su -
OS="xUbuntu_20.04"
VERSION=1.22
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -

# Install CRI-O
sudo apt update
sudo apt install cri-o cri-o-runc

# Update CRI-O CIDR subnet (the bridge config is created by the cri-o package, so do this after installing)
sudo sed -i 's/10.85.0.0/10.244.0.0/g' /etc/cni/net.d/100-crio-bridge.conf

# Start and enable Service
sudo systemctl daemon-reload
sudo systemctl restart crio
sudo systemctl enable crio
sudo systemctl status crio
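With CRI-O running, you can talk to it with crictl (installed as a dependency of the kubeadm packages earlier) by pointing it at the CRI-O socket:

sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version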

Install Containerd

# Configure persistent loading of modules
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

# Load at runtime
sudo modprobe overlay
sudo modprobe br_netfilter

# Ensure sysctl params are set
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Reload configs
sudo sysctl --system

# Install required packages
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates

# Add Docker repo
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# Install containerd
sudo apt update
sudo apt install -y containerd.io

# Configure containerd and start service
sudo su -
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

# Change image repository (note the '#' delimiter, since the replacement contains '/')
sed -i 's#k8s.gcr.io#registry.aliyuncs.com/google_containers#g' /etc/containerd/config.toml

# Restart containerd
systemctl restart containerd
systemctl enable containerd
systemctl status containerd

To use the systemd cgroup driver, set the following in /etc/containerd/config.toml:

...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

Point the sandbox_image at the mirror as well:

sudo sed -i 's#k8s.gcr.io#registry.aliyuncs.com/google_containers#g' /etc/containerd/config.toml

# The resulting setting in /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri"]
  ...
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.2"
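After editing config.toml, restart containerd and confirm that the new settings were picked up:

sudo systemctl restart containerd
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml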

Initialize the master node

On one master node only.

Log in to the server that will be used as the master and make sure the br_netfilter module is loaded:

$ lsmod | grep br_netfilter
br_netfilter           28672  0
bridge                176128  1 br_netfilter

Enable the kubelet service.

sudo systemctl enable kubelet

These are the basic kubeadm init options used to bootstrap the cluster.

--control-plane-endpoint : a DNS name (recommended) or IP address; sets a shared endpoint for all control-plane nodes
--pod-network-cidr : sets the Pod network CIDR
--cri-socket : sets the runtime socket path when more than one container runtime is installed
--apiserver-advertise-address : sets the advertise address for this particular control-plane node's API server

To pull the control-plane images from a mirror, set --image-repository to:

registry.aliyuncs.com/google_containers
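The images can also be pulled ahead of time so that kubeadm init itself is faster (the init output below mentions this option too):

sudo kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers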

Using an IP address

To bootstrap the cluster without a DNS endpoint, run:

sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --image-repository=registry.aliyuncs.com/google_containers

Using DNS

Set a DNS name for the cluster endpoint, or add a record to the /etc/hosts file.

$ sudo cat /etc/hosts | grep k8s-cluster
192.168.1.140 k8s-cluster.test.com

Create the cluster:

sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --upload-certs \
  --control-plane-endpoint=k8s-cluster.test.com \
  --image-repository=registry.aliyuncs.com/google_containers \
  --v 5

Note: if 10.244.0.0/16 is already in use in your network, you must choose a different pod network CIDR and replace 10.244.0.0/16 in the commands above.

Container runtime sockets:

Runtime      Path to the Unix domain socket
Docker       /var/run/docker.sock
containerd   unix:///run/containerd/containerd.sock
CRI-O        /var/run/crio/crio.sock

To use a different runtime, pass the corresponding socket file via --cri-socket:

# CRI-O
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket /var/run/crio/crio.sock \
  --upload-certs \
  --control-plane-endpoint=k8s-cluster.test.com \
  --image-repository=registry.aliyuncs.com/google_containers \
  --v 5

# Containerd
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket unix:///run/containerd/containerd.sock \
  --upload-certs \
  --control-plane-endpoint=k8s-cluster.test.com \
  --image-repository=registry.aliyuncs.com/google_containers \
  --v 5

# Docker
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket /var/run/docker.sock \
  --upload-certs \
  --control-plane-endpoint=k8s-cluster.test.com \
  --image-repository=registry.aliyuncs.com/google_containers \
  --v 5

This is the output of the init command:

I0122 01:50:55.159413 10000 interface.go:432] Looking for default routes with IPv4 addresses
I0122 01:50:55.159442 10000 interface.go:437] Default route transits interface "ens160"
I0122 01:50:55.159526 10000 interface.go:209] Interface ens160 is up
I0122 01:50:55.159569 10000 interface.go:257] Interface "ens160" has 2 addresses :[192.168.1.140/24 fe80::20c:29ff:fec7:93d7/64].
I0122 01:50:55.159583 10000 interface.go:224] Checking addr 192.168.1.140/24.
I0122 01:50:55.159590 10000 interface.go:231] IP found 192.168.1.140
I0122 01:50:55.159597 10000 interface.go:263] Found valid IPv4 address 192.168.1.140 for interface "ens160".
I0122 01:50:55.159605 10000 interface.go:443] Found active IP 192.168.1.140
I0122 01:50:55.159627 10000 kubelet.go:217] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I0122 01:50:55.163037 10000 version.go:186] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
[init] Using Kubernetes version: v1.23.2
[preflight] Running pre-flight checks
I0122 01:50:55.866075 10000 checks.go:578] validating Kubernetes and kubeadm version
I0122 01:50:55.866101 10000 checks.go:171] validating if the firewall is enabled and active
I0122 01:50:55.872657 10000 checks.go:206] validating availability of port 6443
I0122 01:50:55.872859 10000 checks.go:206] validating availability of port 10259
I0122 01:50:55.872894 10000 checks.go:206] validating availability of port 10257
I0122 01:50:55.872914 10000 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0122 01:50:55.873551 10000 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0122 01:50:55.873569 10000 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0122 01:50:55.873576 10000 checks.go:283] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0122 01:50:55.873591 10000 checks.go:433] validating if the connectivity type is via proxy or direct
I0122 01:50:55.873602 10000 checks.go:472] validating http connectivity to first IP address in the CIDR
I0122 01:50:55.873615 10000 checks.go:472] validating http connectivity to first IP address in the CIDR
I0122 01:50:55.873626 10000 checks.go:107] validating the container runtime
I0122 01:50:55.884878 10000 checks.go:373] validating the presence of executable crictl
I0122 01:50:55.884910 10000 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0122 01:50:55.884958 10000 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0122 01:50:55.884998 10000 checks.go:654] validating whether swap is enabled or not
I0122 01:50:55.885042 10000 checks.go:373] validating the presence of executable conntrack
I0122 01:50:55.885083 10000 checks.go:373] validating the presence of executable ip
I0122 01:50:55.885101 10000 checks.go:373] validating the presence of executable iptables
I0122 01:50:55.885125 10000 checks.go:373] validating the presence of executable mount
I0122 01:50:55.885147 10000 checks.go:373] validating the presence of executable nsenter
I0122 01:50:55.885190 10000 checks.go:373] validating the presence of executable ebtables
I0122 01:50:55.885217 10000 checks.go:373] validating the presence of executable ethtool
I0122 01:50:55.885244 10000 checks.go:373] validating the presence of executable socat
I0122 01:50:55.885266 10000 checks.go:373] validating the presence of executable tc
I0122 01:50:55.885291 10000 checks.go:373] validating the presence of executable touch
I0122 01:50:55.885308 10000 checks.go:521] running all checks
I0122 01:50:55.920330 10000 checks.go:404] checking whether the given node name is valid and reachable using net.LookupHost
I0122 01:50:55.920352 10000 checks.go:620] validating kubelet version
I0122 01:50:55.968410 10000 checks.go:133] validating if the "kubelet" service is enabled and active
I0122 01:50:56.238364 10000 checks.go:206] validating availability of port 10250
I0122 01:50:56.238418 10000 checks.go:206] validating availability of port 2379
I0122 01:50:56.238458 10000 checks.go:206] validating availability of port 2380
I0122 01:50:56.238488 10000 checks.go:246] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0122 01:50:56.238579 10000 checks.go:842] using image pull policy: IfNotPresent
I0122 01:50:56.245826 10000 checks.go:859] pulling: registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.2
I0122 01:51:04.176003 10000 checks.go:859] pulling: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.2
I0122 01:51:10.314667 10000 checks.go:859] pulling: registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.2
I0122 01:51:15.237932 10000 checks.go:859] pulling: registry.aliyuncs.com/google_containers/kube-proxy:v1.23.2
I0122 01:51:36.079396 10000 checks.go:859] pulling: registry.aliyuncs.com/google_containers/pause:3.6
I0122 01:51:39.028988 10000 checks.go:859] pulling: registry.aliyuncs.com/google_containers/etcd:3.5.1-0
I0122 01:51:54.522733 10000 checks.go:859] pulling: registry.aliyuncs.com/google_containers/coredns:v1.8.6
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0122 01:52:02.668843 10000 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0122 01:52:02.820631 10000 certs.go:522] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-cluster.test.com k8s-master-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.140]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0122 01:52:03.014451 10000 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0122 01:52:03.396360 10000 certs.go:522] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0122 01:52:03.680220 10000 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0122 01:52:03.733405 10000 certs.go:522] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [192.168.1.140 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [192.168.1.140 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0122 01:52:04.327233 10000 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0122 01:52:04.504514 10000 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0122 01:52:04.790366 10000 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0122 01:52:04.986690 10000 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0122 01:52:05.144057 10000 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0122 01:52:05.275919 10000 kubelet.go:65] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0122 01:52:05.608326 10000 manifests.go:99] [control-plane] getting StaticPodSpecs
I0122 01:52:05.608571 10000 certs.go:522] validating certificate period for CA certificate
I0122 01:52:05.608640 10000 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0122 01:52:05.608648 10000 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0122 01:52:05.608653 10000 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0122 01:52:05.608658 10000 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0122 01:52:05.608663 10000 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0122 01:52:05.608669 10000 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0122 01:52:05.611113 10000 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0122 01:52:05.611126 10000 manifests.go:99] [control-plane] getting StaticPodSpecs
I0122 01:52:05.611305 10000 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0122 01:52:05.611314 10000 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0122 01:52:05.611321 10000 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0122 01:52:05.611326 10000 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0122 01:52:05.611331 10000 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0122 01:52:05.611336 10000 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0122 01:52:05.611341 10000 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0122 01:52:05.611346 10000 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0122 01:52:05.612064 10000 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0122 01:52:05.612076 10000 manifests.go:99] [control-plane] getting StaticPodSpecs
I0122 01:52:05.612259 10000 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0122 01:52:05.612747 10000 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0122 01:52:05.613477 10000 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0122 01:52:05.613514 10000 waitcontrolplane.go:91] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 41.527549 seconds
I0122 01:52:47.142223 10000 uploadconfig.go:110] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0122 01:52:47.191888 10000 uploadconfig.go:124] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
I0122 01:52:47.216728 10000 uploadconfig.go:129] [upload-config] Preserving the CRISocket information for the control-plane node
I0122 01:52:47.216747 10000 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///run/containerd/containerd.sock" to the Node API object "k8s-master-01" as an annotation
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
8e568ec9b181aace22496d3a1961d965179b65fb58567867fc498cfefbd0c4f0
[mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: bdqsdw.2uf50yfvo3uwy93w
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0122 01:52:48.826596 10000 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
I0122 01:52:48.827103 10000 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0122 01:52:48.827341 10000 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0122 01:52:48.836154 10000 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0122 01:52:48.886003 10000 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0122 01:52:48.886887 10000 kubeletfinalize.go:134] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s-cluster.test.com:6443 --token bdqsdw.2uf50yfvo3uwy93w \
        --discovery-token-ca-cert-hash sha256:2a6f431cc99860ff6e15519e08e62f01b9b0cb051380031582bd5cc22efbc084 \
        --control-plane --certificate-key 8e568ec9b181aace22496d3a1961d965179b65fb58567867fc498cfefbd0c4f0

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-cluster.test.com:6443 --token bdqsdw.2uf50yfvo3uwy93w \
        --discovery-token-ca-cert-hash sha256:2a6f431cc99860ff6e15519e08e62f01b9b0cb051380031582bd5cc22efbc084

Configure kubectl using the commands from the output:

mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster status:

$ kubectl cluster-info

Add additional master nodes

Join new control-plane nodes using the command printed at the end of the init run on the first master.

kubeadm join k8s-cluster.test.com:6443 --token bdqsdw.2uf50yfvo3uwy93w \
  --discovery-token-ca-cert-hash sha256:2a6f431cc99860ff6e15519e08e62f01b9b0cb051380031582bd5cc22efbc084 \
  --control-plane --certificate-key 8e568ec9b181aace22496d3a1961d965179b65fb58567867fc498cfefbd0c4f0

mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
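The bootstrap token and the uploaded certificates both expire (the token after 24 hours by default, the certificate key after two hours, as noted in the init output). If they have expired, regenerate them on the first master before joining another control-plane node:

# Print a fresh join command with a new token
sudo kubeadm token create --print-join-command

# Re-upload the control-plane certificates and print a new certificate key
sudo kubeadm init phase upload-certs --upload-certs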

Install the network plugin

On one of the master nodes only.

This guide uses Calico; any other supported network plugin can be used instead.

sudo kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml

wget https://docs.projectcalico.org/manifests/custom-resources.yaml
sed -i 's/192.168.0.0/10.244.0.0/g' custom-resources.yaml
sudo kubectl create -f custom-resources.yaml
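The Calico pods take a minute or two to start; you can watch them until every pod reports Running (the calico-system namespace is created by the Tigera operator):

watch kubectl get pods -n calico-system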

Firewall

All nodes.

sudo ufw allow 179/tcp
sudo ufw allow 5473/tcp
sudo ufw allow 4789/udp

Add the worker nodes

All worker nodes.

kubeadm join k8s-cluster.test.com:6443 --token bdqsdw.2uf50yfvo3uwy93w \
  --discovery-token-ca-cert-hash sha256:2a6f431cc99860ff6e15519e08e62f01b9b0cb051380031582bd5cc22efbc084

Check the status of all nodes

On any master node.

$ kubectl get nodes -A
NAME            STATUS   ROLES                  AGE   VERSION
k8s-master-01   Ready    control-plane,master   74m   v1.23.2
k8s-worker-01   Ready    <none>                 22m   v1.23.2
k8s-worker-02   Ready    <none>                 14m   v1.23.2
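The ROLES column shows <none> for the worker nodes; if you prefer, you can add the worker role label yourself (purely cosmetic):

kubectl label node k8s-worker-01 node-role.kubernetes.io/worker=
kubectl label node k8s-worker-02 node-role.kubernetes.io/worker=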

Deploy an application on the cluster

Verify that the cluster is working by deploying an application.

kubectl apply -f https://k8s.io/examples/pods/commands.yaml

Check whether the pod has started:

$ kubectl get pods
NAME           READY   STATUS      RESTARTS   AGE
command-demo   0/1     Completed   0          64s
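The example pod runs a one-off command and exits, which is why its STATUS is Completed. You can inspect its output (this example simply prints a couple of environment variables) and then clean it up:

kubectl logs command-demo
kubectl delete -f https://k8s.io/examples/pods/commands.yaml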
