This article walks through building a Kubernetes cluster: downloading the required files, preparing the lab environment, initializing it, and installing a single-master Kubernetes 1.18.6 cluster.

Part 0. Downloads

1. The yaml files used below are in the following GitHub repository:

https://github.com/luckylucky421/kubernetes1.17.3/tree/master

2. The container images needed to initialize the cluster are in a Baidu Netdisk share:

Link: https://pan.baidu.com/s/1k1heJy8lLnDk2JEFyRyJdA  Extraction code: udkj

Part 1. Preparing the lab environment

1. Prepare two CentOS 7 virtual machines for the Kubernetes cluster

Operating system: CentOS 7.6 or later. Configuration: 2 CPU cores, 4 GB RAM, two 50 GB disks.

Network: bridged

master1 192.168.139.101
node1   192.168.139.102

Part 2. Initializing the lab environment

1. Configure static IP addresses

Configure the virtual machines (or physical hosts) with static IP addresses so that the address does not change after a reboot.

1.1 Configure the network on master1

Edit /etc/sysconfig/network-scripts/ifcfg-ens33 so that it reads as follows:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.139.101
NETMASK=255.255.255.0
GATEWAY=192.168.139.2
DNS1=8.8.8.8
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes

After editing the file, restart the network service for the change to take effect:

service network restart

Note: explanation of the ifcfg-ens33 fields:
IPADDR=192.168.0.6      # IP address; must be in the same subnet as your machine
NETMASK=255.255.255.0   # subnet mask; must match your subnet
GATEWAY=192.168.0.1     # gateway; on Windows, run ipconfig /all in cmd to find it
DNS1=192.168.0.1        # DNS server; on Windows, run ipconfig /all in cmd to find it
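
After restarting the network service, it is worth confirming the static address actually took effect. A minimal check, assuming the interface is named ens33 and uses the gateway configured above:

ip addr show ens33 | grep "inet "    # should show 192.168.139.101/24
ip route | grep default              # the default route should point at 192.168.139.2
ping -c 3 192.168.139.2              # basic reachability test to the gateway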

1.2 Configure the network on node1

Edit /etc/sysconfig/network-scripts/ifcfg-ens33 so that it reads as follows:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.139.102
NETMASK=255.255.255.0
GATEWAY=192.168.139.2
DNS1=8.8.8.8
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes

After editing the file, restart the network service for the change to take effect:

service network restart

2. Configure the yum repositories (run on every node)

(1) Back up the original yum repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

(2) Download the Aliyun repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

(3) Rebuild the yum cache
yum makecache fast

(4) Add the yum repository needed to install Kubernetes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

(5) Clean the yum cache
yum clean all

(6) Rebuild the yum cache
yum makecache fast

(7) Update the packages
yum -y update

(8) Install the prerequisite packages
yum -y install yum-utils device-mapper-persistent-data lvm2

(9) Add the Docker CE repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all
yum makecache fast

3. Install base packages (run on every node)

yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate

4. Disable the firewalld firewall (run on every node)

CentOS 7 uses firewalld by default. Stop the firewalld service and disable it:

systemctl stop firewalld && systemctl disable firewalld

5. Install iptables (run on every node, optional)

If you are not comfortable with firewalld you can install iptables instead. This step is optional; do it only if you actually need it.

5.1 Install iptables
yum install iptables-services -y
5.2 Stop and disable iptables
service iptables stop && systemctl disable iptables

6. Synchronize time (run on every node)

6.1 Synchronize the clock now
ntpdate cn.pool.ntp.org
6.2 Add a cron job so the clock is synchronized once per hour
1) crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
2) Restart the crond service:
service crond restart
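
A quick sanity check that the cron entry is registered and the clock is reasonable (ntpdate -q only queries the server and reports the offset without changing anything):

crontab -l                       # the ntpdate entry should be listed
ntpdate -q cn.pool.ntp.org       # query only; shows the current clock offset
date                             # confirm the local time looks correct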

7. Disable SELinux (run on every node)

Disable SELinux permanently so that it stays off after a reboot.
Edit /etc/sysconfig/selinux and /etc/selinux/config and change SELINUX=enforcing to SELINUX=disabled, or do it with sed:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

After changing the files, reboot the machine (a forced reboot is fine):
reboot -f

8. Disable swap (run on every node)

swapoff -a
# To disable swap permanently, comment out the swap line in /etc/fstab:
sed -i 's/.*swap.*/#&/' /etc/fstab
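
To confirm swap is really off on each node (kubelet refuses to start with swap enabled unless explicitly overridden):

free -m        # the Swap row should show 0 total
swapon -s      # prints nothing when no swap device is active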

9. Set kernel parameters (run on every node)

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
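
If the two bridge keys do not exist yet, load the br_netfilter module first and re-read the values; a minimal check:

modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables    # both should print 1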

10. Set the hostnames

On 192.168.0.6:
hostnamectl set-hostname master1
On 192.168.0.56:
hostnamectl set-hostname node1

11. Configure the hosts file (run on every node)

Add the following lines to /etc/hosts:
192.168.0.6 master1
192.168.0.56 node1
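
One way to append the entries on each node without opening an editor (a sketch; adjust the IP addresses to match your own environment):

cat >> /etc/hosts <<EOF
192.168.0.6 master1
192.168.0.56 node1
EOF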

12. Configure passwordless SSH login from master1 to node1

Run on master1:
ssh-keygen -t rsa      # press Enter through every prompt
cd /root && ssh-copy-id -i .ssh/id_rsa.pub root@node1
# Answer yes when prompted, then enter the root password of node1
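
To verify the key was installed correctly, this login should now succeed without a password prompt:

ssh root@node1 hostname    # should print "node1" with no password prompt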

Part 3. Installing a single-master Kubernetes 1.18.6 cluster

1. Install Docker 19.03 (run on every node)

1.1 List the available Docker versions

yum list docker-ce --showduplicates |sort -r

1.2 Install version 19.03.7

yum install -y docker-ce-19.03.7-3.el7

systemctl enable docker && systemctl start docker

# Check the Docker service status; active (running) means Docker is running normally

systemctl status docker

1.3 Edit the Docker daemon configuration

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

1.4 Restart Docker so the configuration takes effect

systemctl daemon-reload && systemctl restart docker
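
Because kubelet and Docker must agree on the cgroup driver, confirm that Docker picked up the systemd setting from daemon.json:

docker info 2>/dev/null | grep -i "cgroup driver"    # should report: Cgroup Driver: systemd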

1.5 Route bridged packets through iptables and make the settings persistent

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables

echo “”"

vm.swappiness = 0

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-ip6tables = 1

“”" > /etc/sysctl.conf

sysctl -p

1.6 Enable IPVS. Without IPVS, kube-proxy falls back to iptables mode, which is less efficient, so loading the IPVS kernel modules is recommended.

cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
    /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        /sbin/modprobe ${kernel_module}
    fi
done
EOF

modprobe ip_vs

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

2. Install Kubernetes 1.18.6

2.1 Install kubeadm and kubelet on master1 and node1

yum install kubeadm-1.18.6 kubelet-1.18.6 -y

systemctl enable kubelet
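
Before running kubeadm init, you can list (and optionally pre-pull) the control-plane images that this version needs. This is a sketch assuming the nodes can reach the Aliyun mirror; otherwise load the images from the Baidu Netdisk share mentioned at the top of this article:

kubeadm config images list --kubernetes-version v1.18.6 --image-repository registry.aliyuncs.com/google_containers
kubeadm config images pull --kubernetes-version v1.18.6 --image-repository registry.aliyuncs.com/google_containers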

Initialize the Kubernetes cluster (run on master1):

kubeadm init --kubernetes-version=v1.18.6 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.6 --image-repository registry.aliyuncs.com/google_containers

Note: --image-repository registry.aliyuncs.com/google_containers points kubeadm at the Aliyun mirror, from which any Kubernetes version can be pulled; --kubernetes-version=v1.18.6 pins the Kubernetes version.

When the command succeeds, kubeadm prints output like the following, which means the initialization worked:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.6:6443 --token si1c9n.3c5os94xcuzq6wl3 \
    --discovery-token-ca-cert-hash sha256:9d3a35eab0f6badba61ebb833d420902e4f9e0168ee1c1374121668ab382a596

Note: remember this kubeadm join ... command; it is what joins node1 (and any other worker) to the cluster and must be run on those nodes. The token and hash are different for every initialization, so record the command from your own output; it is used again below.
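
If the join command is lost or the token has expired (tokens are valid for 24 hours by default), a fresh one can be printed on master1 at any time:

kubeadm token create --print-join-command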

2.2 Run the following on master1 so that kubectl has permission to manage cluster resources

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run on master1:

kubectl get nodes

The output shows that master1 is NotReady:

NAME      STATUS     ROLES    AGE     VERSION
master1   NotReady   master   8m11s   v1.18.6

kubectl get pods -n kube-system

The output shows that the coredns pods are Pending:

coredns-7ff77c879f-j48h6   0/1   Pending   0   3m16s
coredns-7ff77c879f-lrb77   0/1   Pending   0   3m16s

The STATUS above is NotReady and coredns is Pending because no network plugin has been installed yet. You need Calico or Flannel; next we install the Calico network plugin on the master1 node.

Calico needs the images quay.io/calico/cni:v3.5.3 and quay.io/calico/node:v3.5.3, which are in the Baidu Netdisk share linked at the top of this article.

Upload the two image tarballs to every node and load them with docker load -i:

docker load -i cni.tar.gz
docker load -i calico-node.tar.gz

Run the following on master1:

kubectl apply -f calico.yaml

The contents of calico.yaml can be copied from the following link:

https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/calico.yaml

If the link above is not reachable, clone or download the GitHub repository below, extract it, and copy the file to master1:

https://github.com/luckylucky421/kubernetes1.17.3/tree/master

Run on master1:

kubectl get nodes

The output now shows STATUS Ready:

NAME STATUS ROLES AGE VERSION

master1 Ready master 98m v1.18.6

kubectl get pods -n kube-system

The coredns pods are also Running now, which means Calico is installed on master1:

NAME                              READY   STATUS    RESTARTS   AGE
calico-node-6rvqm                 1/1     Running   0          17m
coredns-7ff77c879f-j48h6          1/1     Running   0          97m
coredns-7ff77c879f-lrb77          1/1     Running   0          97m
etcd-master1                      1/1     Running   0          97m
kube-apiserver-master1            1/1     Running   0          97m
kube-controller-manager-master1   1/1     Running   0          97m
kube-proxy-njft6                  1/1     Running   0          97m
kube-scheduler-master1            1/1     Running   0          97m

2.3 Join node1 to the cluster (run on node1)

kubeadm join 192.168.0.6:6443 --token si1c9n.3c5os94xcuzq6wl3 \
    --discovery-token-ca-cert-hash sha256:9d3a35eab0f6badba61ebb833d420902e4f9e0168ee1c1374121668ab382a596

Note: this kubeadm join command is the one printed by kubeadm init in step 2.1; use the token and hash from your own output.

2.4 Check the cluster node status on master1

kubectl get nodes

The output looks like this:

NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   3m36s   v1.18.6
node1     Ready    <none>   3m36s   v1.18.6

node1 has joined the cluster; this completes the setup of the single-master Kubernetes cluster.

2.5 Install Traefik

Official documentation: https://docs.traefik.io/

Upload the Traefik image tarball to every node and load it with docker load -i; the image is in the Baidu Netdisk share linked at the top of this article, so you can download it yourself:

docker load -i traefik_1_7_9.tar.gz

The Traefik image used is k8s.gcr.io/traefik:1.7.9.

1) Generate the Traefik certificate (run on master1)

mkdir ~/ikube/tls/ -p

echo “”"

[req]

distinguished_name = req_distinguished_name

prompt = yes

[ req_distinguished_name ]

countryName = Country Name (2 letter code)

countryName_value = CN

stateOrProvinceName = State orProvince Name (full name)

stateOrProvinceName_value = Beijing

localityName = Locality Name (eg, city)

localityName_value =Haidian

organizationName =Organization Name (eg, company)

organizationName_value = Channelsoft

organizationalUnitName = OrganizationalUnit Name (eg, p)

organizationalUnitName_value = R & D Department

commonName = Common Name (eg, your name or your server’s hostname)

commonName_value =*.multi.io

emailAddress = Email Address

emailAddress_value =lentil1016@gmail.com

“”" > ~/ikube/tls/openssl.cnf

openssl req -newkey rsa:4096 -nodes -config ~/ikube/tls/openssl.cnf -days3650 -x509 -out ~/ikube/tls/tls.crt -keyout ~/ikube/tls/tls.key

kubectl create -n kube-system secret tls ssl --cert ~/ikube/tls/tls.crt–key ~/ikube/tls/tls.key

2) Apply the yaml file to create Traefik

kubectl apply -f traefik.yaml

The contents of traefik.yaml can be copied from the following link:

https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/traefik.yaml

If the link above is not reachable, clone or download the repository below and copy the yaml file to master1:

https://github.com/luckylucky421/kubernetes1.17.3

3) Check whether Traefik was deployed successfully:

kubectl get pods -n kube-system
traefik-ingress-controller-csbp8 1/1 Running 0 5s
traefik-ingress-controller-hqkwf 1/1 Running 0 5s

3. Install kubernetes-dashboard v2 (the Kubernetes web UI)

Upload the kubernetes-dashboard image tarballs to every node and load them with docker load -i; the images are in the Baidu Netdisk share linked at the top of this article:

docker load -i dashboard_2_0_0.tar.gz

docker load -i metrics-scrapter-1-0-1.tar.gz

The loaded images are kubernetesui/dashboard:v2.0.0-beta8 and kubernetesui/metrics-scraper:v1.0.1.

Run on master1:

kubectl apply -f kubernetes-dashboard.yaml

The contents of kubernetes-dashboard.yaml can be copied from: https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/kubernetes-dashboard.yaml

If the link above is not reachable, clone or download the repository below and copy the yaml file to master1:

https://github.com/luckylucky421/kubernetes1.17.3

Check whether the dashboard was installed successfully:

kubectl get pods -n kubernetes-dashboard

Output like the following means the dashboard is installed:

NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-694557449d-8xmtf 1/1 Running 0 60s
kubernetes-dashboard-5f98bdb684-ph9wg 1/1 Running 2 60s
Check the dashboard front-end service:

kubectl get svc -n kubernetes-dashboard

The output looks like this:

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.100.23.9      <none>        8000/TCP   50s
kubernetes-dashboard        ClusterIP   10.105.253.155   <none>        443/TCP    50s

Change the service type to NodePort:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change type: ClusterIP to type: NodePort, then save and exit.
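
If you prefer not to edit the service interactively, the same change can be made with a one-line patch (an equivalent sketch):

kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'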

kubectl get svc -n kubernetes-dashboard

The output looks like this:

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.100.23.9      <none>        8000/TCP        3m59s
kubernetes-dashboard        NodePort    10.105.253.155   <none>        443:31175/TCP   4m

The service type is now NodePort. The dashboard is reachable at the master1 node IP on port 31175; in my environment the URL is:

https://192.168.0.6:31175/

The dashboard login page should appear.

3.1 Log in to the dashboard with the default token defined in the yaml file

1) List the secrets in the kubernetes-dashboard namespace

kubectl get secret -n kubernetes-dashboard

The output looks like this:

NAME TYPE DATA AGE
default-token-vxd7t kubernetes.io/service-account-token 3 5m27s
kubernetes-dashboard-certs Opaque 0 5m27s
kubernetes-dashboard-csrf Opaque 1 5m27s
kubernetes-dashboard-key-holder Opaque 2 5m27s
kubernetes-dashboard-token-ngcmg kubernetes.io/service-account-token 3 5m27s
2) Find the secret that carries a token, kubernetes-dashboard-token-ngcmg

kubectl describe secret kubernetes-dashboard-token-ngcmg -n kubernetes-dashboard

The output contains the token:


token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA

Copy the value after token: and paste it into the token field on the dashboard login page:

eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA
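
Instead of copying the value out of the describe output by hand, the token can be extracted directly; the secret name below is the one from this environment, so substitute your own:

kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-token-ngcmg -o jsonpath='{.data.token}' | base64 -d; echo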

Click Sign in to log in. By default only resources in the default namespace are visible.

3.2 Create an admin binding so the dashboard can view every namespace

kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard

1) List the secrets in the kubernetes-dashboard namespace

kubectl get secret -n kubernetes-dashboard

The output looks like this:

NAME TYPE DATA AGE
default-token-vxd7t kubernetes.io/service-account-token 3 5m27s
kubernetes-dashboard-certs Opaque 0 5m27s
kubernetes-dashboard-csrf Opaque 1 5m27s
kubernetes-dashboard-key-holder Opaque 2 5m27s
kubernetes-dashboard-token-ngcmg kubernetes.io/service-account-token 3 5m27s
2) Find the secret that carries a token, kubernetes-dashboard-token-ngcmg

kubectl describe secret kubernetes-dashboard-token-ngcmg -n kubernetes-dashboard

The output contains the token:


token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA

Copy the value after token: and paste it into the token field on the dashboard login page:

eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA

Click Sign in again; this time you can view and manage resources in every namespace.

4. Install the metrics components

Upload the metrics-server-amd64_0_3_1.tar.gz and addon.tar.gz image tarballs to every node and load them with docker load -i; the images are in the Baidu Netdisk share linked at the top of this article:

docker load -i metrics-server-amd64_0_3_1.tar.gz

docker load -i addon.tar.gz

metrics-server is version 0.3.1; the image used is k8s.gcr.io/metrics-server-amd64:v0.3.1.

addon-resizer is version 1.8.4; the image used is k8s.gcr.io/addon-resizer:1.8.4.

Run on master1:

kubectl apply -f metrics.yaml

The contents of metrics.yaml can be copied from:

https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/metrics.yaml

If the link above is not reachable, clone or download the repository below and copy the yaml file to master1:

https://github.com/luckylucky421/kubernetes1.17.3

After all of the components above are installed, check that they are healthy; a STATUS of Running means the component is working, as shown below:

kubectl get pods -n kube-system -o wide

NAME                               READY   STATUS    RESTARTS   AGE   IP             NODE
calico-node-h66ll                  1/1     Running   0          51m   192.168.0.56   node1
calico-node-r4k6w                  1/1     Running   0          58m   192.168.0.6    master1
coredns-66bff467f8-2cj5k           1/1     Running   0          70m   10.244.0.3     master1
coredns-66bff467f8-nl9zt           1/1     Running   0          70m   10.244.0.2     master1
etcd-master1                       1/1     Running   0          70m   192.168.0.6    master1
kube-apiserver-master1             1/1     Running   0          70m   192.168.0.6    master1
kube-controller-manager-master1    1/1     Running   0          70m   192.168.0.6    master1
kube-proxy-qts4n                   1/1     Running   0          70m   192.168.0.6    master1
kube-proxy-x647c                   1/1     Running   0          51m   192.168.0.56   node1
kube-scheduler-master1             1/1     Running   0          70m   192.168.0.6    master1
metrics-server-8459f8db8c-gqsks    2/2     Running   0          16s   10.244.1.6     node1
traefik-ingress-controller-xhcfb   1/1     Running   0          39m   192.168.0.6    master1
traefik-ingress-controller-zkdpt   1/1     Running   0          39m   192.168.0.56   node1

If metrics-server-8459f8db8c-gqsks shows Running, the metrics-server component was deployed successfully.

You can now use kubectl top pods -n kube-system or kubectl top nodes on master1.
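
For example, on master1:

kubectl top nodes                     # per-node CPU and memory usage
kubectl top pods -n kube-system       # per-pod usage in the kube-system namespace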

Finally

This completes the walkthrough: downloading the materials, preparing and initializing the lab environment, and installing a single-master Kubernetes 1.18.6 cluster with Calico, Traefik, the dashboard, and the metrics components.
