I'm 顺心万宝路, a blogger at 靠谱客. This article, put together from recent development work, covers deleting and re-adding a Kubernetes node: removing the node, clearing the cluster state, clearing the network state, and rejoining. I'm sharing it here in the hope it serves as a useful reference.

Overview

Contents

  • Delete the node
  • Clear cluster state
  • Clear network state
  • Rejoin the cluster

Delete the node

Run these commands on the master node; the goal is to remove manager.node.

[root@worker ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
manager.node   NotReady   <none>   6h36m   v1.17.0
master.node    Ready      <none>   6h46m   v1.17.0
worker.node    Ready      master   21h     v1.17.0

Note the resource type here is nodes, with an s on node (kubectl actually accepts both the singular kubectl delete node and the plural kubectl delete nodes).

[root@worker ~]# kubectl delete nodes manager.node
node "manager.node" deleted
[root@worker ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
master.node   Ready    <none>   6h46m   v1.17.0
worker.node   Ready    master   21h     v1.17.0
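If the node being removed is still running workloads, it is safer to drain it first so pods are evicted and rescheduled gracefully. A minimal sketch in preview mode (with DRY_RUN=echo the commands are only printed; clear it to run them against a real cluster):

```shell
# Preview mode: commands are echoed instead of executed.
# Set DRY_RUN="" to run them for real.
DRY_RUN="echo"
NODE="manager.node"
# Evict pods before deletion; on kubectl 1.17 the flag is
# --delete-local-data (renamed --delete-emptydir-data in >= 1.20)
$DRY_RUN kubectl drain "$NODE" --ignore-daemonsets --delete-local-data
$DRY_RUN kubectl delete node "$NODE"
```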

Clear cluster state

On the node that was removed (manager), run kubeadm reset to wipe the local kubeadm state:

[root@manager network-scripts]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1231 16:23:44.293553   27107 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get node registration: failed to get corresponding node: nodes "manager.node" not found
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1231 16:23:55.799045   27107 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
W1231 16:24:20.825890   27107 cleanupnode.go:65] [reset] The kubelet service could not be stopped by kubeadm: [exit status 1]
W1231 16:24:20.825934   27107 cleanupnode.go:66] [reset] Please ensure kubelet is stopped manually
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W1231 16:24:20.936289   27107 cleanupnode.go:81] [reset] Failed to remove containers: exit status 1
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
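As the reset output says, iptables and IPVS rules are left untouched. A hedged sketch of the manual cleanup it asks for, again in preview mode (clear DRY_RUN and run as root to apply; note that flushing iptables also removes any rules unrelated to Kubernetes):

```shell
# Preview mode: commands are echoed, not executed.
DRY_RUN="echo"
# Flush the filter, nat, and mangle tables and delete user-defined chains
$DRY_RUN iptables -F
$DRY_RUN iptables -t nat -F
$DRY_RUN iptables -t mangle -F
$DRY_RUN iptables -X
# Only needed if kube-proxy ran in IPVS mode
$DRY_RUN ipvsadm --clear
```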

Clear network state

Still on the removed node, tear down the leftover CNI interfaces and configuration:

# Bring down and delete the CNI bridge and the flannel VXLAN interface
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
# Bring down the docker bridge (Docker recreates it on restart)
ifconfig docker0 down
# Remove leftover CNI state and configuration
rm -rf /var/lib/cni/
rm -rf /etc/cni/net.d
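The interface commands above fail (harmlessly) if an interface has already been removed. A re-runnable variant, as a sketch, skips interfaces that do not exist:

```shell
# Delete an interface only if it is present, so the cleanup
# can be re-run safely on a partially cleaned node.
cleanup_iface() {
  if ip link show "$1" >/dev/null 2>&1; then
    ip link set "$1" down 2>/dev/null || true
    ip link delete "$1" 2>/dev/null || true
  else
    echo "interface $1 not present, skipping"
  fi
}
cleanup_iface cni0
cleanup_iface flannel.1
```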

Rejoin the cluster

Back on the removed node, rejoin using the token and CA cert hash from the original kubeadm init:

[root@manager ~]# kubeadm join XX.XX.XX.52:6443 --token 43umr8.df94e49pkj7fyv90 --discovery-token-ca-cert-hash sha256:9858fb015dd519696df382e675f3614630b2d3e7f2e6a83086bef1884bb0a0e2
W1231 17:19:10.669867    2933 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.0-ce. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

View the master node's token

[root@manager ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
43umr8.df94e49pkj7fyv90   56m   2019-12-31T18:32:36+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

Regenerate the token

Bootstrap tokens expire (the default TTL is 24 hours), so if the original token is no longer valid, create a new one:

[root@manager ~]# kubeadm token create
W1231 17:37:34.108407    7422 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1231 17:37:34.108440    7422 validation.go:28] Cannot validate kubelet config - no validator is available
pzq7je.9osdlv2t5t42mg5a
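With the new token, the join command can be reassembled; the CA hash comes from the openssl command in the next section (on kubeadm 1.9 and later, kubeadm token create --print-join-command prints the whole line in one step). A sketch using the values from this article:

```shell
TOKEN="pzq7je.9osdlv2t5t42mg5a"   # from 'kubeadm token create'
CA_HASH="9858fb015dd519696df382e675f3614630b2d3e7f2e6a83086bef1884bb0a0e2"  # from openssl
APISERVER="XX.XX.XX.52:6443"      # masked in the original article
JOIN_CMD="kubeadm join ${APISERVER} --token ${TOKEN} --discovery-token-ca-cert-hash sha256:${CA_HASH}"
echo "$JOIN_CMD"
```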

If you cannot find the value for --discovery-token-ca-cert-hash, it can be regenerated from the cluster CA certificate with the following command:

[root@manager ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
9858fb015dd519696df382e675f3614630b2d3e7f2e6a83086bef1884bb0a0e2
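The hash is simply the SHA-256 digest of the CA's DER-encoded public key. Assuming openssl is available, the pipeline can be verified against a throwaway self-signed certificate standing in for /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway CA certificate just to exercise the pipeline
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=test-ca" \
  -keyout /tmp/test-ca.key -out /tmp/test-ca.crt 2>/dev/null
# Same pipeline as above: extract pubkey, DER-encode, SHA-256, strip prefix
HASH=$(openssl x509 -pubkey -in /tmp/test-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "$HASH"
```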

Check the node list again

[root@worker coredns]# kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
manager.node   Ready    <none>   103s    v1.17.0
master.node    Ready    <none>   7h44m   v1.17.0
worker.node    Ready    master   22h     v1.17.0
