Overview

[K8S Cluster Installation II] K8S Cluster Installation Steps

  • K8S Cluster Initialization
    • Troubleshooting
    • Change the Hostname
    • Verify the K8S Cluster Works
  • K8s Deployment Summary
  • Installing the Container Runtime (1.24 and later)
    • 1. Download containerd-1.6.6-linux-amd64.tar.gz
    • Install the systemd Service Unit
    • Install runc
  • Add the K8S yum Repository
    • Downgrade Packages (Optional)
    • Customize containerd (run on every node)
      • Check the Required Images
  • Common Commands
  • Reinitialize the Cluster
  • Remove a Node from the Cluster
  • Error Messages
  • Video Link
  • Tips

[K8S Cluster Installation I] K8S Cluster Installation Steps

K8S Cluster Initialization

# Specify the Kubernetes version explicitly
kubeadm init \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.24.3 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.10.100
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.96.0.1 192.168.10.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [192.168.10.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [192.168.10.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.016692 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: bck71j.fog6srhqn4admzag
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.100:6443 --token bck71j.fog6srhqn4admzag \
        --discovery-token-ca-cert-hash sha256:54dcaff96319c07f8de243c920751f4e54962b1587c8d9d5358990ef00c5b77f



Troubleshooting

# If initialization fails with the following error
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR CRI]: container runtime is not running: output: time="2022-07-19T15:55:47+08:00" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/containerd/containerd.sock: connect: no such file or directory\""
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher


# Cause: from v1.24 on, the CRI tooling is no longer bundled and must be downloaded separately

# Fix
# See https://github.com/kubernetes-sigs/cri-tools
VERSION="v1.24.2"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm -f crictl-$VERSION-linux-amd64.tar.gz

VERSION="v1.24.2"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/critest-$VERSION-linux-amd64.tar.gz
sudo tar zxvf critest-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm -f critest-$VERSION-linux-amd64.tar.gz
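
Note that the error above ultimately complains that /var/run/containerd/containerd.sock does not exist, so installing and starting containerd (covered below) is the other half of the fix. Once crictl is installed, pointing it at the containerd socket avoids runtime autodetection. A suggested configuration, assuming containerd is the runtime:

cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF
# verify crictl can talk to the runtime
crictl version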


Change the Hostname

hostnamectl set-hostname  master1

# Copy the root kubeconfig to a worker so the cluster can be managed from there
scp -r /root/.kube root@worker1:/root

Verify the K8S Cluster Works

# List the cluster nodes
kubectl get nodes
# Check cluster health
kubectl get cs
kubectl cluster-info
kubectl get pods --namespace kube-system

K8s Deployment Summary

(Flannel installation) GitHub - flannel-io/flannel: flannel is a network fabric for containers, designed for Kubernetes (https://github.com/flannel-io/flannel)

Installing the Container Runtime (1.24 and later)

1. Download containerd-1.6.6-linux-amd64.tar.gz

Reference: containerd/getting-started.md at main · containerd/containerd
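
Per the getting-started guide, download the release tarball and unpack it under /usr/local (the URL below assumes the v1.6.6 release on GitHub; adjust for the version you want):

wget https://github.com/containerd/containerd/releases/download/v1.6.6/containerd-1.6.6-linux-amd64.tar.gz
# unpack bin/ into /usr/local/bin
tar Cxzvf /usr/local containerd-1.6.6-linux-amd64.tar.gz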

# Manage the containerd service
systemctl restart containerd
systemctl enable containerd
systemctl start containerd
systemctl status containerd

Install the systemd Service Unit

# Create the unit directory
mkdir -p  /usr/local/lib/systemd/system

# https://github.com/containerd/containerd/blob/main/containerd.service


# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
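
After saving the unit above (for example to /usr/local/lib/systemd/system/containerd.service), reload systemd so it picks up the new unit and start containerd:

systemctl daemon-reload
systemctl enable --now containerd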

Install runc

# Download from: https://github.com/opencontainers/runc/releases
install -m 755 runc.amd64 /usr/local/sbin/runc
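
For example, fetching the prebuilt static binary from the releases page (v1.1.3 is illustrative; pick a current release):

wget https://github.com/opencontainers/runc/releases/download/v1.1.3/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc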

Any remaining CNI setup is handled when the cluster is initialized with kubeadm init.

Add the K8S yum Repository

Source: https://developer.aliyun.com/mirror/?serviceType=&tag=
https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.59aa1b11I1JMTz

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
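
Note that setenforce 0 only switches SELinux to permissive mode until the next reboot; to make the change persistent across reboots (as the upstream kubeadm docs recommend):

sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config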

Downgrade Packages (Optional)

yum downgrade -y kubelet kubeadm kubectl
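
If you need a particular version rather than just the previous one, pinning the packages explicitly also works (the version below is illustrative; match it to your cluster):

yum install -y kubelet-1.24.2 kubeadm-1.24.2 kubectl-1.24.2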

Customize containerd (run on every node)

containerd reads daemon-level options from the config file /etc/containerd/config.toml; an example configuration file ships with the containerd docs.
A default configuration can be generated with containerd config default > /etc/containerd/config.toml.

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

# Modify /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"

# registry.aliyuncs.com/google_containers/pause:3.7 is the pause image for this install (mind the version number)
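
A non-interactive way to apply both changes (a sketch; inspect the resulting file before restarting containerd):

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' /etc/containerd/config.toml
systemctl restart containerd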

Check the Required Images

# List the required images, then pre-pull them
kubeadm config images list
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.24.2

Common Commands



# Inspect cluster and node state
kubectl get node -o wide
kubectl get pods --all-namespaces -o wide
journalctl -u kubelet
kubectl cluster-info
kubectl get pod -n kube-system | grep "flannel"
kubectl delete pod -n kube-system coredns-74586cf9b6-xjqp7
kubectl apply -f kube-flannel.yml

Reinitialize the Cluster

# A cleaned-up version of the shell history from a full reinitialization:

# Tear down the old cluster and initialize a new one
kubeadm reset
kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.224.0.0/16 --apiserver-advertise-address=192.168.10.100
# Note: flannel's default Pod CIDR is 10.244.0.0/16; with 10.224.0.0/16 you must
# adjust kube-flannel.yml (or /run/flannel/subnet.env) to match

# Recreate the kubeconfig from scratch
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Verify the control plane and install the pod network
kubectl get nodes
kubectl get pod -A
kubectl apply -f kube-flannel.yml
kubectl get pod -n kube-system
kubectl version

# Check the containerd config and node state
vi /etc/containerd/config.toml
kubectl get node -o wide
kubectl get pod -A -o wide

# Copy the kubeconfig to the workers
scp -r /root/.kube root@worker1:/root
scp -r /root/.kube root@worker2:/root

# Debug flannel when its pods stay unhealthy
cat /run/flannel/subnet.env
vi /run/flannel/subnet.env
kubectl apply -f kube-flannel.yml
journalctl -u kubelet
kubectl get pods --all-namespaces -o wide
kubectl describe pod kube-flannel-ds-5v4rt -n kube-flannel
kubectl describe pod kube-flannel-ds-fdtb7 -n kube-flannel
kubectl logs -n kube-flannel kube-flannel-ds-5v4rt -c kube-flannel

# Pull the flannel image manually through containerd if needed
ctr images pull docker.io/rancher/mirrored-flannelcni-flannel:v0.19.0
cat /etc/containerd/config.toml
cat kube-flannel.yml

# Inspect containers directly with crictl
crictl images
crictl ps -a
crictl version
crictl logs a75ec7b48983e

# Fix the controller-manager manifest and restart the kubelet
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
systemctl restart kubelet
kubectl get nodes
kubectl get pods --all-namespaces -o wide
ifconfig -a

# Delete stuck pods so they get recreated
kubectl delete pod -n kube-flannel kube-flannel-ds-5v4rt
kubectl delete pod -n kube-flannel kube-flannel-ds-fdtb7
kubectl delete pod -n kube-system kube-controller-manager-master1

# Remove the workers (they can rejoin later)
kubectl delete node worker1
kubectl delete node worker2
kubeadm config images list

Remove a Node from the Cluster


# Mark the node unschedulable
kubectl cordon <NODE_NAME>

# Then evict the pods running on that node
# (--ignore-daemonsets skips DaemonSet-managed pods)
kubectl drain --ignore-daemonsets <NODE_NAME>

# List all pods on the node
kubectl get pod -A -o wide | grep <NODE_NAME>
# Delete pods so they are rescheduled onto other nodes
kubectl delete pod -n <NAMESPACE> <POD_NAME>

# Remove the node from the cluster
kubectl delete node worker2
# Verify the node resource is gone
kubectl get node | grep <NODE_NAME>
# Verify no pods remain on that node
kubectl get pod -A -o wide | grep <NODE_NAME>
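
On the removed node itself it is usually worth wiping the local kubeadm state too, so it can later rejoin cleanly (a suggested cleanup, run on the worker):

kubeadm reset
# kubeadm reset does not clean up CNI config; remove it by hand
rm -rf /etc/cni/net.d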

Error Messages

Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
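
This usually means kubectl is still reading a kubeconfig generated for a previous cluster, whose CA no longer matches the current one. A common fix is to replace the stale config with the freshly generated admin.conf:

rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config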

Video Link

https://www.bilibili.com/video/BV1Qt4y1H7fV/?spm_id_from=333.788.recommend_more_video.2

Tips

Uninstall k8s
(even on the master node I ran this several times, including a final drain of the node)

kubectl drain mynodename --delete-emptydir-data --force --ignore-daemonsets
kubectl delete node mynodename
kubeadm reset
systemctl stop kubelet
yum remove kubeadm kubectl kubelet kubernetes-cni kube*
yum autoremove
rm -rf ~/.kube
rm -rf /var/lib/kubelet/*

Uninstall Docker:

# Stop and remove all containers, then remove all images
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker rmi -f $(docker images -q)
# Verify everything is gone: docker ps -a ; docker images
systemctl stop docker
yum remove yum-utils device-mapper-persistent-data lvm2
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine
yum remove docker-ce
rm -rf /var/lib/docker
rm -rf /etc/docker

Uninstall Flannel

rm -rf /var/lib/cni/
rm -rf /run/flannel
rm -rf /etc/cni/

# Remove the docker- and flannel-related network interfaces
ip link
# For each docker or flannel interface shown, run:
ifconfig <name of interface from ip link> down
ip link delete <name of interface from ip link>
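
A sketch that automates the same cleanup, assuming interface names containing docker, flannel, or cni are the ones to remove:

for iface in $(ip -o link show | awk -F': ' '{print $2}' | cut -d@ -f1 | grep -E 'flannel|cni|docker'); do
  ip link set "$iface" down
  ip link delete "$iface"
done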
