
Overview

Introduction to Kubernetes (k8s)

Kubernetes is an open-source container cluster management platform that automates the deployment, scaling, and maintenance of container clusters.

Installation Environment

OS: debian-10.7
Platform: amd64

References

Kubernetes Chinese community | Chinese documentation
etcd cluster deployment
Introduction to k8s and detailed installation steps under Linux
Building a complete K8s cluster from binaries

Key Terminology

Etcd: a distributed key-value store developed by CoreOS on top of Raft, used for service discovery, shared configuration, and consistency guarantees (such as database leader election and distributed locks).
kube-apiserver: the Kubernetes API server validates and configures data for API objects, which include pods, services, replicationcontrollers, and others. The API server services REST operations and provides the frontend to the cluster's shared state through which all other components interact.
kube-controller-manager: the Controller Manager is the cluster's internal management control center. It manages Nodes, Pod replicas, service Endpoints, Namespaces, ServiceAccounts, and ResourceQuotas. When a Node goes down unexpectedly, the Controller Manager detects it promptly and runs automated repair flows, keeping the cluster in its desired working state.
kubelet: the kubelet is the primary "node agent" that runs on each Node. It can register the node with the apiserver using one of: the hostname; a flag that overrides the hostname; or logic specific to a cloud provider.
kube-proxy: the Kubernetes network proxy runs on each node. It reflects the services defined in the Kubernetes API on each node and can do simple TCP, UDP, and SCTP stream forwarding, or round-robin TCP, UDP, and SCTP forwarding across a set of backends. Service cluster IPs and ports are currently found through Docker-links-compatible environment variables that specify the ports opened by the service proxy. An optional addon provides cluster DNS for these cluster IPs. The user must create a service with the apiserver API to configure the proxy.

One-Click Installation Tools

Common one-click installation tools include:
kind: quickly bring up a k8s environment with kind
kubeadm: official documentation
Rancher: official documentation

Environment Preparation

Role        | OS                             | HostName  | IP address    | Components
k8s-master1 | Debian GNU/Linux 10.7 (buster) | master-01 | 192.168.24.21 | kube-scheduler, kube-controller-manager, kube-apiserver, etcd
k8s-master2 | Debian GNU/Linux 10.7 (buster) | master-02 | 192.168.24.22 | kube-scheduler, kube-controller-manager, kube-apiserver, etcd
k8s-master3 | Debian GNU/Linux 10.7 (buster) | master-03 | 192.168.24.23 | kube-scheduler, kube-controller-manager, kube-apiserver, etcd
node-04     | Debian GNU/Linux 10.7 (buster) | node-04   | 192.168.24.24 | kube-proxy, kubelet, docker
node-05     | Debian GNU/Linux 10.7 (buster) | node-05   | 192.168.24.25 | kube-proxy, kubelet, docker
node-06     | Debian GNU/Linux 10.7 (buster) | node-06   | 192.168.24.26 | kube-proxy, kubelet, docker

Common Commands

1. Set the hostname
hostnamectl set-hostname node-05
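If every machine should also resolve its peers by name, the hostnames and /etc/hosts entries can be prepared in one pass. A minimal sketch based on the node table above (optional; adjust the names and IPs to your environment):

# Run on each machine with its own name from the table above
hostnamectl set-hostname master-01

# Optional: append name resolution for all six nodes
cat >> /etc/hosts << EOF
192.168.24.21 master-01
192.168.24.22 master-02
192.168.24.23 master-03
192.168.24.24 node-04
192.168.24.25 node-05
192.168.24.26 node-06
EOF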

Software Versions

Software   | Version
Etcd       | 3.4.14
Docker     | 20.10.2
Kubernetes | 1.18.15
cni        | 0.9.0

Dependency Installation

Etcd Cluster Installation

See the "etcd cluster deployment" reference.

Docker Installation

Install docker from the binary distribution.

Create the Installation Directory Structure

mkdir /opt/kubernetes/{bin,cfg,ssl,logs} -p

Generate Certificates

1. Create the self-signed CA used to sign the kube-apiserver certificates
cd /opt/kubernetes/ssl
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
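A quick sanity check that the CA files were produced (this assumes cfssl is already installed, as used above):

ls ca*pem                    # expect ca.pem and ca-key.pem
cfssl certinfo -cert ca.pem  # inspect the CA subject and validity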
2. Sign the kube-apiserver HTTPS certificate with the self-signed CA
cd /opt/kubernetes/ssl
cat > server-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.24.21",
    "192.168.24.22",
    "192.168.24.23",
    "192.168.24.24",
    "192.168.24.25",
    "192.168.24.26",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Note: the hosts field above must include the IPs of every Master/LB/VIP; not one may be missing! To ease later expansion, a few spare IPs can be added in advance.
Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
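To confirm that every Master/LB/VIP address actually made it into the certificate, the SAN list can be inspected with openssl:

openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"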
3. Generate the kube-controller-manager certificate and key (note: this step is not required)
(1) kube-controller-manager uses this certificate to connect to the apiserver; its own port 10257 also serves with it
(2) kube-controller-manager and kube-apiserver communicate over mutual TLS
cd /opt/kubernetes/ssl
cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.24.21",
    "192.168.24.22",
    "192.168.24.23",
    "localhost"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
EOF

(3) the hosts list contains the IPs of all kube-controller-manager nodes;
(4) CN is system:kube-controller-manager and O is system:kube-controller-manager; kube-apiserver's predefined RBAC ClusterRoleBinding system:kube-controller-manager binds the user system:kube-controller-manager to the ClusterRole system:kube-controller-manager.
Generate the kube-controller-manager certificate and key:

cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
-ca-key=/opt/kubernetes/ssl/ca-key.pem \
-config=/opt/kubernetes/ssl/ca-config.json \
-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Deploy the Master Node

Download the Binaries from GitHub

Download link
Note: the release page offers many packages; downloading a single server package is enough, as it contains the binaries for both the Master and the Worker Node.

Extract the Binary Package

tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin
ln -sf /opt/kubernetes/bin/kubectl /usr/bin/kubectl
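A quick check that the copied binaries run and match the planned release (1.18.15 in the version table above):

/opt/kubernetes/bin/kube-apiserver --version
kubectl version --client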

Deploy kube-apiserver

1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.24.21:2379,https://192.168.24.22:2379,https://192.168.24.23:2379 \
--bind-address=192.168.24.21 \
--secure-port=6443 \
--advertise-address=192.168.24.21 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-issuer=https://192.168.24.21:6443 \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

Note: each line above ends with a backslash; inside an unquoted heredoc, backslash-newline acts as a line continuation, so EOF writes the whole option string as one line that systemd's EnvironmentFile can read.
--logtostderr: log to standard error (false sends logs to files)
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: https secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC authorization and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
--tls-xxx-file: apiserver https certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings

2. Enable the TLS Bootstrapping mechanism
TLS Bootstrapping: once the Master apiserver enables TLS authentication, the kubelet and kube-proxy on each Node must present valid CA-signed certificates to talk to kube-apiserver. With many Nodes, issuing these client certificates by hand is a lot of work and complicates scaling the cluster. To simplify this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet, running as a low-privilege user, requests a certificate from the apiserver, which signs the kubelet's certificate dynamically. This approach is strongly recommended on Nodes; today it is used mainly for the kubelet, while kube-proxy still receives a certificate that we issue centrally.
TLS bootstrapping workflow (diagram omitted):

Create the token file referenced in the configuration above:

cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

Format: token,user name,UID,user group
The token can also be generated by hand and substituted in:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
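Putting the two steps together, a small sketch that generates a fresh token and rewrites token.csv with it; the user name, UID, and group must stay as below because the RBAC binding created later refers to kubelet-bootstrap:

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
echo "bootstrap token: ${BOOTSTRAP_TOKEN}"  # reuse this value as TOKEN when generating bootstrap.kubeconfig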
3. Manage the apiserver with systemd
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
4. Start and enable at boot
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
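Before continuing, it is worth confirming the apiserver is up; with default RBAC, /version is readable anonymously, so a check like this should work:

systemctl status kube-apiserver --no-pager
curl -k https://192.168.24.21:6443/version   # should print the build version JSON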
5. Authorize the kubelet-bootstrap user to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

Deploy kube-controller-manager

1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
EOF

--master: connect to the apiserver over the local insecure port 8080.
--leader-elect: automatic leader election when several replicas of this component run (HA)
--cluster-signing-cert-file / --cluster-signing-key-file: the CA that automatically signs kubelet certificates; must match the apiserver's

2. Manage controller-manager with systemd
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
3. Start and enable at boot
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

Deploy kube-scheduler

1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1"
EOF

--master: connect to the apiserver over the local insecure port 8080.
--leader-elect: automatic leader election when several replicas of this component run (HA)
2. Manage the scheduler with systemd

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
3. Start and enable at boot
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

Check Cluster Status

All components are now running; check their status with kubectl:

~# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

Output like the above means the Master node components are running normally.

Deploy the Other Master Nodes

1. Copy the deployed files
scp -r /opt/kubernetes root@192.168.24.22:/opt
scp -r /usr/lib/systemd/system/kube-* root@192.168.24.22:/usr/lib/systemd/system
scp -r /opt/kubernetes root@192.168.24.23:/opt
scp -r /usr/lib/systemd/system/kube-* root@192.168.24.23:/usr/lib/systemd/system
2. Edit /opt/kubernetes/cfg/kube-apiserver.conf to use the local IP (see the sed sketch after this list):
vim /opt/kubernetes/cfg/kube-apiserver.conf
3. Symlink kubectl into /usr/bin
ln -sf /opt/kubernetes/bin/kubectl /usr/bin/kubectl
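Only --bind-address and --advertise-address need to change on each additional master. A targeted sed (shown for master-02; an illustration, not from the original text) avoids accidentally rewriting the 192.168.24.21 entry inside --etcd-servers:

# On 192.168.24.22 (use .23 on master-03)
sed -i \
  -e 's#--bind-address=192.168.24.21#--bind-address=192.168.24.22#' \
  -e 's#--advertise-address=192.168.24.21#--advertise-address=192.168.24.22#' \
  /opt/kubernetes/cfg/kube-apiserver.conf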

Deploy the Worker Nodes

Create Working Directories and Copy Binaries

1. Create the working directory on every worker node:
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
2. Copy from the master node:
cd kubernetes/server/bin
cp kubelet kube-proxy kubectl /opt/kubernetes/bin
# local copy
ln -sf /opt/kubernetes/bin/kubectl /usr/bin/kubectl
3. Copy the ssl certificates from the master node
scp -r /opt/kubernetes/ssl/* root@192.168.24.24:/opt/kubernetes/ssl/

Deploy kubelet

1. Create the configuration file
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--address=192.168.24.24 \
--hostname-override=node-04 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF

--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameter file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image of the infra container that manages the Pod network

2. Configuration parameter file
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
3. Generate the bootstrap.kubeconfig file
cd /opt/kubernetes/cfg/
KUBE_APISERVER="https://192.168.24.21:6443" # apiserver IP:PORT
TOKEN="c47ffb939f5ca36231d9e3121a252940" # must match token.csv
# generate the kubelet bootstrap kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
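The generated file can be inspected before use (kubectl prints it with the token redacted):

kubectl config view --kubeconfig=bootstrap.kubeconfig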
4. Manage kubelet with systemd
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5. Start and enable at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

6. Approve the kubelet certificate request and join the cluster

# Check kubelet certificate requests
kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A   6m3s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
# Approve the request
kubectl certificate approve node-csr-uCEGPOIiDdlLODKts8J658HrFq9CZ--K6M4G7bjhk8A
# Check nodes
kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   <none>   7s    v1.18.3

Note: the node shows NotReady because the network plugin has not been deployed yet.

Deploy kube-proxy

1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \
--v=4 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
2. Configuration parameter file
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: node-04
clusterCIDR: 10.0.0.0/24
EOF

Note: clusterCIDR tells kube-proxy which traffic is cluster-internal; strictly speaking it should be the Pod network CIDR (10.244.0.0/16, matching --cluster-cidr on kube-controller-manager) rather than the Service CIDR used here.
3. Generate the kube-proxy.kubeconfig file
Generate the kube-proxy certificate:
# Switch to the working directory
cd /opt/kubernetes/ssl
# Create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*pem
kube-proxy-key.pem  kube-proxy.pem

Generate the kubeconfig file:

cd /opt/kubernetes/cfg
KUBE_APISERVER="https://192.168.24.21:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
--client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
4. Manage kube-proxy with systemd
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
User=root
Group=root
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5. Start and enable at boot
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
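Two quick checks that the proxy started and is listening on its default ports:

systemctl status kube-proxy --no-pager
ss -lntp | grep kube-proxy   # expect 10249 (metrics, configured above) and 10256 (healthz)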

Deploy the CNI Network

First prepare the CNI binaries:
Download link
Extract the package into the default working directory:

mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.9.0.tgz -C /opt/cni/bin

Note: this step runs on the master node.
Deploy the CNI network:

wget https://raw.staticdn.net/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml

The default image address is unreachable, so it is replaced with a docker hub repository.

kubectl apply -f kube-flannel.yml
kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-2pc95   1/1     Running   0          72s
kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    <none>   41m   v1.18.3

With the network plugin deployed, the Node is Ready.

Authorize the apiserver to Access the kubelet

Note: this step runs on the master node.

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml
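Once applied, commands that proxy through the kubelet API (kubectl logs, kubectl exec) should work. For example, against the flannel pod seen earlier (the pod name will differ in your cluster):

kubectl get clusterrolebinding system:kube-apiserver
kubectl logs -n kube-system kube-flannel-ds-amd64-2pc95 --tail=5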

flannel Deployment (note: not verified)

See the references "flannel network" and "creating kubeconfig files for node nodes".
Run all of the following once on every node.
Write the flannel configuration file:

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.24.21:2379,https://192.168.24.22:2379,https://192.168.24.23:2379 \
-etcd-cafile=/opt/kubernetes/ssl/ca.pem \
-etcd-certfile=/opt/kubernetes/ssl/server.pem \
-etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

Write the flanneld service unit:

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

Rewrite the docker service unit:

cat <<EOF >/usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF

Write the allocated subnet into etcd for flanneld to use:

/opt/kubernetes/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.24.21:2379,https://192.168.24.22:2379,https://192.168.24.23:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
/opt/kubernetes/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.24.21:2379,https://192.168.24.22:2379,https://192.168.24.23:2379" get /coreos.com/network/config

Running this on one etcd node is enough; the value replicates to the other members. It assigns flannel the 172.17.0.0/16 network in vxlan mode. Note that set/get and the --ca-file flags are etcd v2 API syntax, which flannel reads; on etcd 3.4 the v2 API must be enabled with --enable-v2=true.

Start flannel:

systemctl daemon-reload
systemctl start flanneld.service
systemctl enable flanneld.service

After startup, a flannel virtual NIC appears.
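To confirm (with the vxlan backend the interface is named flannel.1):

ip addr show flannel.1       # the flannel VXLAN interface
cat /run/flannel/subnet.env  # the subnet handed to docker via DOCKER_NETWORK_OPTIONS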

Restart docker:
systemctl restart docker

Add a New Worker Node

1. Copy the deployed Node files to the new node
On the master node, copy the Worker Node files to the new nodes 192.168.24.25/26:
scp -r /opt/kubernetes root@192.168.24.25:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.24.25:/usr/lib/systemd/system
scp -r /opt/cni/ root@192.168.24.25:/opt/
2. Delete the kubelet certificate and kubeconfig file
rm /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*

Note: these files are generated automatically once the certificate request is approved and differ per Node, so they must be deleted and regenerated.

3. Change the hostname settings (shown here for node-05)
vi /opt/kubernetes/cfg/kubelet.conf
--address=192.168.24.25
--hostname-override=node-05
vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: node-05
4. Start and enable at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy
5. On the Master, approve the new Node's kubelet certificate request
kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro   89s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
kubectl certificate approve node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro
6. Check the Node status
kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    <none>   65m   v1.18.3
k8s-node1    Ready    <none>   12m   v1.18.3
k8s-node2    Ready    <none>   81s   v1.18.3

Repeat the same steps for Node2 (192.168.24.26). Remember to change the hostname!

Deploy the Dashboard and CoreDNS

Deploy the Dashboard

git clone https://github.com/kubernetes/dashboard.git
cd dashboard
git checkout v2.0.3
cd aio/deploy/

By default the Dashboard is reachable only from inside the cluster; change its Service to the NodePort type to expose it externally:

vi recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f recommended.yaml
kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-694557449d-z8gfb   1/1     Running   0          2m18s
pod/kubernetes-dashboard-9774cc786-q2gsx         1/1     Running   0          2m19s

NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.0.0.141   <none>        8000/TCP        2m19s
service/kubernetes-dashboard        NodePort    10.0.0.239   <none>        443:30001/TCP   2m19s

Access URL: https://NodeIP:30001
Create a service account and bind it to the default cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard with the token printed in the output.

Deploy CoreDNS

CoreDNS provides Service name resolution inside the cluster.
See the "deploy coredns" reference.

Common Installation Problems

1. Error message

failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope

Cause: the kubelet-bootstrap user has no permission to create certificate signing requests; the permission must be granted by binding the user to the role. See the reference "k8s cluster deployment part four: deploying the node components".
The fix is to run the following on the master:

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
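If the binding already exists but points at the wrong user, recreating it is the simplest fix (a sketch; run on the master, then restart the kubelet on the affected node so it retries the certificate request):

kubectl delete clusterrolebinding kubelet-bootstrap
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
systemctl restart kubelet   # on the affected node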
