
K8S-Demo Cluster Practice 08: Deploy a Highly Available kube-controller-manager Cluster

  • I. Create and Distribute kubeconfig Files
  • II. Create and Deploy the kube-controller-manager systemd Unit
    • 1. Write the kube-controller-manager systemd unit template
    • 2. Generate a deployment file for each Master node
    • 3. Distribute to the Master nodes
  • III. Start the kube-controller-manager Cluster Service
  • IV. View Leader and Metrics Information
  • Appendix: K8s-Demo Cluster Version Information
  • Appendix: Series Links

  • Deploy kube-controller-manager on all 3 Master nodes. Once started, the instances run a leader election: one becomes the leader and the others block in standby, ready to take over if the leader fails.
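
The loops in the sections below reuse shell variables defined earlier in this series (MASTER_NAMES, MASTER_IPS, K8S_DIR, SERVICE_CIDR). A minimal sketch of the assumed values, run on master1 in the same shell as the loops; the names and IPs match the output shown later in this article, while K8S_DIR and SERVICE_CIDR are placeholders to replace with your own values from parts 02 and 06:

# Assumed variables for the loops below (values are illustrative)
MASTER_NAMES=(master1 master2 master3)
MASTER_IPS=(192.168.66.10 192.168.66.11 192.168.66.12)
K8S_DIR=/opt/k8s/work            # assumption: runtime working directory root
SERVICE_CIDR=10.254.0.0/16       # assumption: Service CIDR chosen earlier in the series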

I. Create and Distribute kubeconfig Files

[root@master1 ~]# cd /opt/install/kubeconfig
[root@master1 kubeconfig]# kubectl config set-cluster k8s-demo \
    --certificate-authority=/opt/install/cert/ca.pem \
    --embed-certs=true \
    --server="https://##NODE_IP##:6443" \
    --kubeconfig=controller-manager.kubeconfig
[root@master1 kubeconfig]# kubectl config set-credentials k8s-demo-ctrl-mgr \
    --client-certificate=/opt/install/cert/controller-manager.pem \
    --client-key=/opt/install/cert/controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=controller-manager.kubeconfig
[root@master1 kubeconfig]# kubectl config set-context system:kube-controller-manager \
    --cluster=k8s-demo \
    --user=k8s-demo-ctrl-mgr \
    --kubeconfig=controller-manager.kubeconfig
[root@master1 kubeconfig]# kubectl config use-context system:kube-controller-manager \
    --kubeconfig=controller-manager.kubeconfig
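
As an optional sanity check, view the merged entries in the generated kubeconfig; since the certificates and key are embedded, kubectl masks them as DATA+OMITTED/REDACTED unless you pass --raw:

[root@master1 kubeconfig]# kubectl config view --kubeconfig=controller-manager.kubeconfig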
Render a per-node copy by substituting each Master's IP for ##NODE_IP##, then distribute it:

[root@master1 ~]# cd /opt/install/kubeconfig
[root@master1 kubeconfig]# for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  sed -e "s/##NODE_IP##/${node_ip}/" controller-manager.kubeconfig > controller-manager-${node_ip}.kubeconfig
  # destination must match the --kubeconfig path in the systemd unit below
  scp controller-manager-${node_ip}.kubeconfig root@${node_ip}:/etc/kubernetes/controller-manager.kubeconfig
done

II. Create and Deploy the kube-controller-manager systemd Unit

1. Write the kube-controller-manager systemd unit template

[root@master1 ~]# cd /opt/install/service
[root@master1 service]# cat > controller-manager.service.template <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \\
  --profiling \\
  --cluster-name=k8s-demo \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --kube-api-qps=1000 \\
  --kube-api-burst=2000 \\
  --leader-elect \\
  --use-service-account-credentials \\
  --concurrent-service-syncs=2 \\
  --bind-address=##NODE_IP## \\
  --secure-port=10252 \\
  --port=0 \\
  --tls-cert-file=/etc/kubernetes/cert/controller-manager.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/controller-manager-key.pem \\
  --authentication-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
  --authorization-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-allowed-names="k8s-demo-aggregator" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --experimental-cluster-signing-duration=87600h \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --concurrent-deployment-syncs=10 \\
  --concurrent-gc-syncs=30 \\
  --node-cidr-mask-size=24 \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --pod-eviction-timeout=6m \\
  --terminated-pod-gc-threshold=10000 \\
  --root-ca-file=/etc/kubernetes/cert/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Note the doubled backslashes: inside an unquoted heredoc, \\ becomes a literal \ in the generated unit file, which keeps ExecStart readable across multiple lines.

2. Generate a deployment file for each Master node

[root@master1 ~]# cd /opt/install/service
[root@master1 service]# for (( i=0; i < 3; i++ ))
do
  sed -e "s/##NODE_NAME##/${MASTER_NAMES[i]}/" -e "s/##NODE_IP##/${MASTER_IPS[i]}/" controller-manager.service.template > controller-manager-${MASTER_IPS[i]}.service
done
[root@master1 service]# ls -l controller-manager*.service
-rw-r--r-- 1 root root 1924 Dec 22 11:17 controller-manager-192.168.66.10.service
-rw-r--r-- 1 root root 1924 Dec 22 11:17 controller-manager-192.168.66.11.service
-rw-r--r-- 1 root root 1924 Dec 22 11:17 controller-manager-192.168.66.12.service
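
Optionally, confirm that no placeholders were left unrendered; grep exits non-zero when nothing matches, so the OK message means every ##NODE_IP## was substituted:

[root@master1 service]# grep -n '##' controller-manager-*.service || echo "OK: all placeholders rendered"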

3. Distribute to the Master nodes

[root@master1 ~]# cd /opt/install/service
[root@master1 service]# for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp controller-manager-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-controller-manager.service
done
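
Optionally, lint each installed unit before starting it. A sketch using systemd-analyze verify, which flags syntax problems such as a broken ExecStart continuation:

[root@master1 service]# for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemd-analyze verify /etc/systemd/system/kube-controller-manager.service"
done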

III. Start the kube-controller-manager Cluster Service

  • Start the kube-controller-manager service; it listens on secure port 10252
[root@master1 ~]# for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-controller-manager"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
done
[root@master1 ~]# ss -lnpt | grep kube-cont
LISTEN 0  128  192.168.66.10:10252  *:*  users:(("kube-controller",pid=11364,fd=5))
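
The ss check above only covers master1. A quick loop confirms the secure port is listening on every master:

[root@master1 ~]# for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "ss -lnpt | grep 10252"
done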
  • Check the service status on each node
[root@master1 ~]# for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl status kube-controller-manager | grep Active"
done
  • Health check
[root@master1 ~]# curl -s --cacert /opt/install/cert/ca.pem \
    --cert /opt/install/cert/kubectl-admin.pem \
    --key /opt/install/cert/kubectl-admin-key.pem \
    https://192.168.66.10:10252/healthz
# or
[root@master1 ~]# wget https://192.168.66.10:10252/healthz --no-check-certificate
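
To check all three masters in one pass, wrap the same curl in a loop; a healthy instance responds with ok:

[root@master1 ~]# for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  curl -s --cacert /opt/install/cert/ca.pem \
      --cert /opt/install/cert/kubectl-admin.pem \
      --key /opt/install/cert/kubectl-admin-key.pem \
      https://${node_ip}:10252/healthz
  echo ""
done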
  • If anything looks wrong, check the logs
[root@master1 ~]# journalctl -u kube-controller-manager

IV. View Leader and Metrics Information

  • The output below shows that the current leader is master1
[root@master1 ~]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master1_b81b351d-3590-4686-aac7-e8f60257e1c5","leaseDurationSeconds":15,"acquireTime":"2020-07-17T07:49:04Z","renewTime":"2020-07-17T07:50:03Z","leaderTransitions":0}'
  creationTimestamp: "2020-07-17T07:49:04Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:control-plane.alpha.kubernetes.io/leader: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-07-17T07:50:03Z"
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "659"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 7005f5af-2afd-499b-85b8-a8cdd40fdada
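
To see just the holder without reading the full YAML, grep for holderIdentity. You can also exercise failover: stop the leader, wait past leaseDurationSeconds (15s), and the lease should move to another master. A sketch, assuming master1 is the current leader:

[root@master1 ~]# kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity

# Optional failover drill: stop the leader, re-check, then restore it
[root@master1 ~]# systemctl stop kube-controller-manager
[root@master1 ~]# sleep 20
[root@master1 ~]# kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
[root@master1 ~]# systemctl start kube-controller-manager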
  • View the metrics
[root@master1 kubernetes]# curl -s --cacert /opt/install/cert/ca.pem \
    --cert /opt/install/cert/kubectl-admin.pem \
    --key /opt/install/cert/kubectl-admin-key.pem \
    https://192.168.66.12:10252/metrics | head
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
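
For this HA setup, the leader_election_master_status gauge is the most interesting metric: it should read 1 on the leader and 0 on the standby instances (metric name as observed in Kubernetes 1.18; treat it as an assumption for other versions):

[root@master1 ~]# curl -s --cacert /opt/install/cert/ca.pem \
    --cert /opt/install/cert/kubectl-admin.pem \
    --key /opt/install/cert/kubectl-admin-key.pem \
    https://192.168.66.12:10252/metrics | grep leader_election_master_status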

Appendix: K8s-Demo Cluster Version Information

Component    Version    Command
kubernetes   1.18.5     kubectl version
docker-ce    19.03.11   docker version, or rpm -qa | grep docker
etcd         3.4.3      etcdctl version
calico       3.13.3     calico -v
coredns      1.7.0      coredns -version

Appendix: Series Links

K8S-Demo Cluster Practice 00: Set Up the Harbor Image Registry + Security Scanning
K8S-Demo Cluster Practice 01: Prepare the VMware Virtual Machine Template
K8S-Demo Cluster Practice 02: Prepare the VMware Virtual Machines (3 Masters + 3 Nodes)
K8S-Demo Cluster Practice 03: Prepare the x509 Certificates for HTTPS Communication Between Cluster Components
K8S-Demo Cluster Practice 04: Deploy a Three-Node Highly Available etcd Cluster
K8S-Demo Cluster Practice 05: Install kubectl and Configure the Cluster Administrator Account
K8S-Demo Cluster Practice 06: Deploy kube-apiserver to the Master Nodes (3 Stateless Instances)
K8S-Demo Cluster Practice 07: kube-apiserver High-Availability Options
K8S-Demo Cluster Practice 08: Deploy a Highly Available kube-controller-manager Cluster
K8S-Demo Cluster Practice 09: Deploy a Highly Available kube-scheduler Cluster
K8S-Demo Cluster Practice 10: Deploy the kube-proxy Component in ipvs Mode
K8S-Demo Cluster Practice 11: Deploy the kubelet Component
K8S-Demo Cluster Practice 12: Deploy the Calico Network
K8S-Demo Cluster Practice 13: Deploy Cluster CoreDNS
K8S-Demo Cluster Practice 14: Deploy the Cluster Monitoring Service Metrics Server
K8S-Demo Cluster Practice 15: Deploy the Kubernetes Dashboard
K8S-Demo Cluster Practice 16: Deploy Kube-Prometheus
K8S-Demo Cluster Practice 17: Deploy the Private Cloud Drive ownCloud (Version 10.6)
K8S-Demo Cluster Practice 18: Build the First Base Container Image in the Universe


  • Start by using it: hands-on practice is how you get to know k8s, and understanding follows naturally as experience accumulates
  • Share what you have come to understand; the effort is its own reward
  • Aim for simplicity so things are easy to grasp; context such as versions and dates is part of the knowledge too
  • Comments and questions are welcome; I generally reply and update the documents on weekends
  • Jason@vip.qq.com 2021-1-20
