Building a Highly Available Multi-Master Kubernetes Cluster (keepalived + haproxy)
Overview
Contents
- Environment
- Host configuration
- Configure name resolution
- Install dependency packages on all nodes (tip: Xshell can broadcast input to all sessions)
- Disable the firewall and swap, reset iptables
- Kernel parameter settings
- Configure IPVS
- Time synchronization
- Install Docker (all nodes)
- Install the required tools (all nodes)
- Prepare the cluster images (all nodes)
- keepalived + haproxy high availability (three masters)
- Edit the keepalived configuration
- Edit the health-check script
- Edit the haproxy configuration
- Start the services
- Initialize the cluster (master1)
- Install the network plugin (all nodes)
- Check node status (master1)
- Extras
- Enable kubectl tab completion
- Enable IPVS-mode load balancing
Environment
Host configuration
- k8s-master1: 192.168.100.20, 2 GB RAM, 2 CPU cores
- k8s-master2: 192.168.100.21, 2 GB RAM, 2 CPU cores
- k8s-master3: 192.168.100.22, 2 GB RAM, 2 CPU cores
- k8s-node1: 192.168.100.23, 1 GB RAM, 1 CPU core
- k8s-node2: 192.168.100.24, 1 GB RAM, 1 CPU core
Configure name resolution
[root@k8s-master1 ~]# vim /etc/hosts
192.168.100.200 k8svip
192.168.100.20 k8s-master1
192.168.100.21 k8s-master2
192.168.100.22 k8s-master3
192.168.100.23 k8s-node1
192.168.100.24 k8s-node2
Install dependency packages on all nodes (tip: Xshell's "send input to all sessions" feature can broadcast these commands to every node)
[root@k8s-master1 ~]# yum -y update
[root@k8s-master1 ~]# yum -y install conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
Disable the firewall and swap, reset iptables
// Disable the firewall
[root@k8s-master1 ~]# systemctl stop firewalld && systemctl disable firewalld
// Reset iptables
[root@k8s-master1 ~]# iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
// Disable swap
[root@k8s-master1 ~]# swapoff -a
[root@k8s-master1 ~]# sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
// Disable SELinux
[root@k8s-master1 ~]# setenforce 0
[root@k8s-master1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
// Stop dnsmasq (otherwise Docker containers may fail to resolve domain names)
[root@k8s-master1 ~]# service dnsmasq stop && systemctl disable dnsmasq
Kernel parameter settings
// Create the configuration file
[root@k8s-master1 ~]# cat > /etc/sysctl.d/kubernetes.conf <<EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> vm.swappiness=0
> vm.overcommit_memory = 1
> vm.panic_on_oom = 0
> fs.inotify.max_user_watches = 89100
> EOF
// Load the bridge netfilter module first (the net.bridge.* keys fail to apply without it)
[root@k8s-master1 ~]# modprobe br_netfilter
// Apply the settings
[root@k8s-master1 ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
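Note that modprobe only lasts until the next reboot. A minimal sketch to make the module load persistently, assuming a systemd-based system such as CentOS 7 (systemd reads /etc/modules-load.d/ at boot):
// Load br_netfilter automatically at every boot
[root@k8s-master1 ~]# cat > /etc/modules-load.d/br_netfilter.conf <<EOF
> br_netfilter
> EOF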
Configure IPVS
In Kubernetes, a Service can be proxied by one of two models: iptables or IPVS. IPVS performs noticeably better, but using it requires loading the IPVS kernel modules by hand.
// Write the modules to load into a script file
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack_ipv4
> EOF
// Make the script executable
[root@k8s-master1 ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
// Run the script
[root@k8s-master1 ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules
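To confirm the modules actually loaded, check lsmod (this list matches the stock CentOS 7 kernel; on kernels 4.19 and newer, nf_conntrack_ipv4 is merged into nf_conntrack):
[root@k8s-master1 ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4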
Time synchronization
// Install the time synchronization service
[root@k8s-master1 ~]# yum -y install chrony
// Start the service and enable it at boot
[root@k8s-master1 ~]# systemctl start chronyd
[root@k8s-master1 ~]# systemctl enable chronyd
Install Docker (all nodes)
// Remove old versions
[root@k8s-master1 ~]# yum remove docker \
> docker-client \
> docker-client-latest \
> docker-common \
> docker-latest \
> docker-latest-logrotate \
> docker-logrotate \
> docker-engine
// Install yum-utils
[root@k8s-master1 ~]# yum -y install yum-utils
// Add the yum repository (the official repo can be slow; a domestic mirror may be substituted)
[root@k8s-master1 ~]# yum-config-manager \
> --add-repo \
> https://download.docker.com/linux/centos/docker-ce.repo
// Install Docker
[root@k8s-master1 ~]# yum install docker-ce docker-ce-cli containerd.io
// Start Docker and enable it at boot
[root@k8s-master1 ~]# systemctl start docker
[root@k8s-master1 ~]# systemctl enable docker
// Configure an Alibaba Cloud registry mirror
// Docker uses the cgroupfs cgroup driver by default, while Kubernetes recommends systemd instead
[root@k8s-master1 ~]# mkdir -p /etc/docker
[root@k8s-master1 ~]# tee /etc/docker/daemon.json <<-'EOF'
> {
> "exec-opts": ["native.cgroupdriver=systemd"],
> "registry-mirrors": ["https://xxxxxxx.mirror.aliyuncs.com"]
> }
> EOF
// Replace xxxxxxx with your own accelerator address from the Alibaba Cloud console; JSON permits no comments, so keep the file itself comment-free
// Restart Docker
[root@k8s-master1 ~]# systemctl daemon-reload
[root@k8s-master1 ~]# systemctl restart docker
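A quick check that the daemon picked up the new driver; docker info should now report systemd:
[root@k8s-master1 ~]# docker info | grep -i "cgroup driver"
// expected output: Cgroup Driver: systemd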
Install the required tools (all nodes)
Tool descriptions:
- kubeadm: the command for bootstrapping the cluster
- kubelet: the component that runs on every machine in the cluster and manages the lifecycle of pods and containers
- kubectl: the cluster management CLI
Installation:
// Add the yum repository (Alibaba Cloud)
[root@k8s-master1 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
> http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
// Install, pinning the versions so the tools match the v1.21.2 images used below
[root@k8s-master1 ~]# yum -y install kubelet-1.21.2 kubeadm-1.21.2 kubectl-1.21.2
// Configure the kubelet cgroup driver
// Edit /etc/sysconfig/kubelet and add the configuration below
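The original omits the actual snippet. A minimal assumption, consistent with the systemd cgroup driver configured for Docker above (KUBELET_EXTRA_ARGS is the variable the kubeadm RPM's systemd drop-in reads from /etc/sysconfig/kubelet):
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"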
// Start kubelet and enable it at boot (it will restart in a loop until kubeadm init runs; that is expected)
[root@k8s-master1 ~]# systemctl start kubelet && systemctl enable kubelet
Prepare the cluster images (all nodes)
// The images the cluster needs must be ready before installing Kubernetes; list them with:
[root@k8s-master1 ~]# kubeadm config images list
The coredns/coredns:v1.8.0 image is not available from the Alibaba Cloud mirror, so it has to be handled separately.
// Pull the image from Docker Hub
[root@k8s-master1 ~]# docker pull coredns/coredns:1.8.0
// Retag it
[root@k8s-master1 ~]# docker tag coredns/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
// Remove the original tag
[root@k8s-master1 ~]# docker rmi -f coredns/coredns:1.8.0
The remaining images can be handled in one go with the script below:
images=(
kube-apiserver:v1.21.2
kube-controller-manager:v1.21.2
kube-scheduler:v1.21.2
kube-proxy:v1.21.2
pause:3.4.1
etcd:3.4.13-0
)
for imageName in ${images[@]} ; do
  # pull from the Alibaba Cloud mirror, retag to the name kubeadm expects, then drop the mirror tag
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
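To verify nothing is missing, the local tags can be compared against the kubeadm config images list output from earlier:
[root@k8s-master1 ~]# docker images | grep k8s.gcr.io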
keepalived + haproxy high availability (three masters)
// Install
[root@k8s-master1 ~]# yum -y install haproxy keepalived
// Back up the keepalived configuration
[root@k8s-master1 ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
// Back up the haproxy configuration
[root@k8s-master1 ~]# mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
Edit the keepalived configuration
[root@k8s-master1 ~]# cat /etc/keepalived/keepalived.conf
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script check_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI_1 {
state MASTER # MASTER on this node; BACKUP on the other two masters
interface ens32 # change to this host's NIC name
virtual_router_id 50 # must be identical on every node
priority 100 # 100 on the master, 98 and 96 on the two backups
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.100.200/24 # the virtual IP
}
track_script {
check_apiserver
}
}
# Parameters to adjust per node:
# state: MASTER/BACKUP (keepalived's term for the standby role is BACKUP)
# interface: name of the primary NIC
# virtual_router_id
# priority
# virtual_ipaddress: the virtual IP
Edit the health-check script
[root@k8s-master1 ~]# chmod +x /etc/keepalived/check_apiserver.sh
[root@k8s-master1 ~]# cat /etc/keepalived/check_apiserver.sh
#!/bin/bash
APISERVER_VIP=192.168.100.200 # the virtual IP
APISERVER_DEST_PORT=6443
errorExit() {
echo "*** $*" 1>&2
exit 1
}
curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP};then
curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi
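The script can be exercised by hand. Until kubeadm init actually starts the apiserver it exits non-zero, which is expected and simply makes keepalived lower this node's priority by the configured weight of 2:
[root@k8s-master1 ~]# bash /etc/keepalived/check_apiserver.sh; echo "exit code: $?"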
Edit the haproxy configuration
[root@k8s-master1 ~]# cat /etc/haproxy/haproxy.cfg
# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log /dev/log local0
log /dev/log local1 notice
daemon
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 1
timeout http-request 10s
timeout queue 20s
timeout connect 5s
timeout client 20s
timeout server 20s
timeout http-keep-alive 10s
timeout check 10s
#---------------------------------------------------------------------
# apiserver frontend which proxys to the masters
#---------------------------------------------------------------------
frontend apiserver
bind *:8443 # 8443 rather than 6443, since each master's local apiserver already occupies 6443
mode tcp
option tcplog
default_backend apiserver
#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
option httpchk GET /healthz
http-check expect status 200
mode tcp
option ssl-hello-chk
balance roundrobin
server k8s-master1 192.168.100.20:6443 check # the three masters' IP:port; add more lines here for additional masters
server k8s-master2 192.168.100.21:6443 check
server k8s-master3 192.168.100.22:6443 check
Start the services
// Enable and start keepalived
[root@k8s-master1 ~]# systemctl enable keepalived --now
// Enable and start haproxy
[root@k8s-master1 ~]# systemctl enable haproxy --now
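A quick sanity check (ens32 is the NIC name assumed in the keepalived config above): the VIP should appear on whichever master currently holds MASTER state, and haproxy should be listening on 8443 on all three masters:
[root@k8s-master1 ~]# ip addr show ens32 | grep 192.168.100.200
[root@k8s-master1 ~]# ss -lntp | grep 8443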
Initialize the cluster (master1)
// Initialize
[root@k8s-master1 ~]# kubeadm init \
--control-plane-endpoint k8svip:8443 \
--kubernetes-version=v1.21.2 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 --upload-certs
// When it finishes, run the commands from its output:
[root@k8s-master1 ~]# mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
// Join the other masters to the control plane
// Run the control-plane join command printed by master1 on master2 and master3 (the token, hash, and certificate key differ for every cluster)
[root@k8s-master2 ~]# kubeadm join k8svip:8443 --token 8u158h.weos3wwjo57wdbpg \
--discovery-token-ca-cert-hash sha256:18ce4062860b471d2cbae59975f1ebe655fccfe2197dddfd73a655c3af8b9ba4 \
--control-plane --certificate-key 31e79913e02c541d3712f375622b0a07b57dc4b2ad580171b68f7b7063d1dbd3
// If a node runs into trouble, reset it with:
[root@k8s-master2 ~]# kubeadm reset
// Then, as the join output instructs, run the following on master2 and master3:
[root@k8s-master2 ~]# mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
// Run the worker join command printed by master1 on node1 and node2 (again, the values differ per cluster)
[root@k8s-node1 ~]# kubeadm join k8svip:8443 --token 8u158h.weos3wwjo57wdbpg \
--discovery-token-ca-cert-hash sha256:18ce4062860b471d2cbae59975f1ebe655fccfe2197dddfd73a655c3af8b9ba4
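Bootstrap tokens expire after 24 hours by default, so a stale join command will fail. A fresh worker join command can be printed on master1 with a standard kubeadm subcommand:
[root@k8s-master1 ~]# kubeadm token create --print-join-command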
Install the network plugin (all nodes)
// Download kube-flannel.yml (only master1 needs this file; use the raw URL, since the GitHub blob page returns HTML instead of the manifest)
[root@k8s-master1 ~]# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
// Pull the image (all nodes)
[root@k8s-master1 ~]# docker pull lwolf/flannel:v0.14.0
// Retag it (all nodes)
[root@k8s-master1 ~]# docker tag lwolf/flannel:v0.14.0 quay.io/coreos/flannel:v0.14.0
// Remove the original tag (all nodes)
[root@k8s-master1 ~]# docker rmi lwolf/flannel:v0.14.0
// Create the network (run on master1 only)
[root@k8s-master1 ~]# kubectl create -f kube-flannel.yml
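Once the manifest is applied, a flannel pod should reach Running on every node. A hedged check (grep rather than a label selector, to avoid assuming the exact labels in this flannel version's manifest):
[root@k8s-master1 ~]# kubectl get pods -A | grep flannel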
Check node status (master1)
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane,master 54m v1.21.2
k8s-master2 Ready control-plane,master 18m v1.21.2
k8s-master3 Ready control-plane,master 18m v1.21.2
k8s-node1 Ready <none> 8m21s v1.21.2
k8s-node2 Ready <none> 8m18s v1.21.2
At this point only master1 can use kubectl. To make it work on every master, copy master1's $HOME/.kube directory to the other masters:
[root@k8s-master1 ~]# scp -r $HOME/.kube 192.168.100.21:$HOME/
[root@k8s-master1 ~]# scp -r $HOME/.kube 192.168.100.22:$HOME/
// Test from master2
[root@k8s-master2 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane,master 61m v1.21.2
k8s-master2 Ready control-plane,master 26m v1.21.2
k8s-master3 Ready control-plane,master 25m v1.21.2
k8s-node1 Ready <none> 15m v1.21.2
k8s-node2 Ready <none> 15m v1.21.2
// Test from master3
[root@k8s-master3 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane,master 62m v1.21.2
k8s-master2 Ready control-plane,master 26m v1.21.2
k8s-master3 Ready control-plane,master 25m v1.21.2
k8s-node1 Ready <none> 16m v1.21.2
k8s-node2 Ready <none> 16m v1.21.2
Extras
Enable kubectl tab completion
yum -y install bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
Enable IPVS-mode load balancing
// On a master node: edit the kube-proxy ConfigMap, set mode: "ipvs", then delete the kube-proxy pods so they restart in the new mode
kubectl edit cm kube-proxy -n kube-system
kubectl delete pod -l k8s-app=kube-proxy -n kube-system
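After the kube-proxy pods come back, the IPVS virtual-server table should be populated; ipvsadm was installed in the dependency step, and -Ln is its standard listing flag:
[root@k8s-master1 ~]# ipvsadm -Ln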