Overview
While working on my local cluster I found that one node had gone bad, so I decided to delete it and swap in a replacement node. That turned out to be a pitfall; luckily I had several spare environments locally.
Run the following commands on the master node:
[root@k8s-master ~]# kubectl drain k8s-node01 --delete-local-data --force --ignore-daemonsets
node/k8s-node01 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-jz65p, kube-system/kube-proxy-92x28
evicting pod "coredns-58cc8c89f4-88ns2"
evicting pod "nginx-document1-854875f78d-8cbxw"
pod/coredns-58cc8c89f4-88ns2 evicted
pod/nginx-document1-854875f78d-8cbxw evicted
node/k8s-node01 evicted
[root@k8s-master ~]# kubectl delete node k8s-node01
node "k8s-node01" deleted
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   3d4h   v1.16.1
k8s-node02   Ready    <none>   3d3h   v1.16.1
[root@k8s-master ~]#
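Before running kubectl delete node, it can be worth confirming that everything except the DaemonSet-managed pods was actually evicted by the drain. A minimal check, reusing the node name k8s-node01 from the output above:

# List any pods still scheduled on the drained node; only DaemonSet pods should remain
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=k8s-node01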
Run the following on the node being removed. (The post I found online just said to run the command without specifying on which machine; I ran it directly on the master and broke one of my clusters.)
[root@k8s-node01 ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1219 20:47:15.318403   89075 reset.go:96] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get node registration: failed to get corresponding node: nodes "k8s-node01" not found
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1219 20:47:19.276564   89075 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
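As the output warns, kubeadm reset does not touch iptables rules, IPVS tables, or kubeconfig files. A cleanup sketch for the host that was just reset (note that flushing iptables removes every rule on the host, not only the Kubernetes ones, so adjust to your setup):

# Flush the iptables rules left behind by kube-proxy and the CNI plugin
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# Only needed if kube-proxy ran in IPVS mode
ipvsadm --clear
# Remove the stale kubeconfig so old credentials are not picked up later
rm -f $HOME/.kube/config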
Note: generate a fresh join token on the master:
[root@k8s-master pki]# kubeadm token create --print-join-command
kubeadm join 192.168.180.121:6443 --token p08kmc.nci5h0xfmlcw92vg --discovery-token-ca-cert-hash sha256:44315d59e08f4d94bc75d20730b861818dfeda6517c1b228399f061f4256329b
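If you only have a token and need the CA cert hash separately, it can be recomputed on the master from the cluster CA; this is the standard openssl pipeline from the kubeadm documentation:

# Recompute the --discovery-token-ca-cert-hash value from the cluster CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'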
Rejoin the worker node:
[root@k8s-node01 lib]# kubeadm join 192.168.180.121:6443 --token p08kmc.nci5h0xfmlcw92vg --discovery-token-ca-cert-hash sha256:44315d59e08f4d94bc75d20730b861818dfeda6517c1b228399f061f4256329b
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
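The IsDockerSystemdCheck warning above is worth fixing: the kubelet and Docker should agree on a cgroup driver, and "systemd" is the recommended one. A sketch of the change on the node, assuming /etc/docker/daemon.json does not exist yet (otherwise merge the key into the existing file):

# Tell Docker to use the systemd cgroup driver, then restart it
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker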
[root@k8s-master pki]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   3d4h    v1.16.1
k8s-node01   Ready    <none>   2m35s   v1.16.1
k8s-node02   Ready    <none>   3d3h    v1.16.1
[root@k8s-master pki]#
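Ready status only tells you the kubelet came up; to confirm the node is fully back, you can also check that the DaemonSet pods (flannel and kube-proxy) were recreated on it, for example:

# The flannel and kube-proxy pods for k8s-node01 should be Running again
kubectl get pods -n kube-system -o wide | grep k8s-node01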
Conclusion
That covers deleting a node from a Kubernetes cluster and adding it back with kubeadm, including the pitfall I hit along the way. Hopefully it helps if you run into the same problem.