Kubernetes Series: Installing the Master Node

Overview

Master Installation

gary@172.36.13.5's password:


Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-47-generic x86_64)

* Documentation: https://help.ubuntu.com

* Management: https://landscape.canonical.com

* Support: https://ubuntu.com/advantage

System information as of Wed Apr 3 02:03:17 UTC 2019

System load: 0.01 Processes: 168

Usage of /: 9.6% of 58.80GB Users logged in: 1

Memory usage: 7% IP address for ens33: 172.36.13.154

Swap usage: 0%

55 packages can be updated.

25 updates are security updates.

Last login: Wed Apr 3 01:56:52 2019

/usr/bin/xauth: file /home/gary/.Xauthority does not exist

Switch to the root user

gary@master:~$ su

Password:

root@master:/home/gary# cd ~

Disable swap

root@master:~# swapoff -a

Then comment out the swap line in /etc/fstab so swap stays off after a reboot:

root@master:~# vim /etc/fstab

UUID=13a888ae-2898-4531-a369-d8480435d121 / ext4 defaults 0 0

#/swap.img none swap sw 0 0
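If you prefer not to open vim, the same edit can be scripted with sed. A minimal sketch, run here against a sample file so it is safe to try anywhere; on the master you would point the sed at /etc/fstab after `swapoff -a`:

```shell
# Build a sample fstab with the two lines from this walkthrough.
printf '%s\n' \
  'UUID=13a888ae-2898-4531-a369-d8480435d121 / ext4 defaults 0 0' \
  '/swap.img none swap sw 0 0' > fstab.sample

# Comment out the swap entry in place (on a real host: sed -ri ... /etc/fstab).
sed -ri 's|^(/swap\.img[[:space:]])|#\1|' fstab.sample

cat fstab.sample
```

The root filesystem line is left untouched; only the line beginning with /swap.img gains a leading `#`.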


Set the hostname to master

root@master:~# vim /etc/hostname

Edit the hosts file, adding the master IP and the machine names:

root@master:~# vim /etc/hosts

root@master:~# cat /etc/hosts

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

172.36.13.5 master

172.36.13.8 slave2

# The following lines are desirable for IPv6 capable hosts

::1 localhost ip6-localhost ip6-loopback

fe00::0 ip6-localnet

ff02::1 ip6-allnodes

ff02::2 ip6-allrouters

ff02::3 ip6-allhosts
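For a repeatable setup, the same entries can be added without an editor. A sketch that stages the lines in a local file first (appending to /etc/hosts itself requires root):

```shell
# Stage the cluster entries from this walkthrough in a local file.
cat <<'EOF' > hosts.append
172.36.13.5 master
172.36.13.8 slave2
EOF

# Then, as root on each node:
#   cat hosts.append >> /etc/hosts

cat hosts.append
```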


Install Docker

root@master:~# apt-get install docker.io

Reading package lists… Done

Building dependency tree

Reading state information… Done

The following additional packages will be installed:

bridge-utils cgroupfs-mount libltdl7 pigz ubuntu-fan

Suggested packages:

ifupdown aufs-tools debootstrap docker-doc rinse zfs-fuse | zfsutils

The following NEW packages will be installed:

bridge-utils cgroupfs-mount docker.io libltdl7 pigz ubuntu-fan

0 upgraded, 6 newly installed, 0 to remove and 56 not upgraded.

Need to get 46.6 MB of archives.

After this operation, 235 MB of additional disk space will be used.

Do you want to continue? [Y/n] y

Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 pigz amd64 2.4-1 [57.4 kB]

Get:2 http://archive.ubuntu.com/ubuntu bionic/main amd64 bridge-utils amd64 1.5-15ubuntu1 [30.1 kB]

Get:3 http://archive.ubuntu.com/ubuntu bionic/universe amd64 cgroupfs-mount all 1.4 [6,320 B]

Get:4 http://archive.ubuntu.com/ubuntu bionic/main amd64 libltdl7 amd64 2.4.6-2 [38.8 kB]

Get:5 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 docker.io amd64 18.09.2-0ubuntu1~18.04.1 [46.4 MB]

Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.

Processing triggers for ureadahead (0.100.0-20) …

Processing triggers for libc-bin (2.27-3ubuntu1) …

Processing triggers for systemd (237-3ubuntu10.12) …

Docker is installed. Next, update the package index:

root@master:~# apt-get update

Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease

Hit:2 http://archive.ubuntu.com/ubuntu bionic-updates InRelease

Hit:3 http://archive.ubuntu.com/ubuntu bionic-backports InRelease

Hit:4 http://archive.ubuntu.com/ubuntu bionic-security InRelease

Reading package lists… Done


Before setting up the Kubernetes environment, run the following:

root@master:~# apt-get update && apt-get install -y apt-transport-https curl

Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease

Hit:2 http://archive.ubuntu.com/ubuntu bionic-updates InRelease

Hit:3 http://archive.ubuntu.com/ubuntu bionic-backports InRelease

Hit:4 http://archive.ubuntu.com/ubuntu bionic-security InRelease

Reading package lists… Done

Reading package lists… Done

Building dependency tree

Reading state information… Done

curl is already the newest version (7.58.0-2ubuntu3.6).

The following NEW packages will be installed:

apt-transport-https

0 upgraded, 1 newly installed, 0 to remove and 56 not upgraded.

Need to get 1,692 B of archives.

After this operation, 153 kB of additional disk space will be used.

Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 apt-transport-https all 1.6.10 [1,692 B]

Fetched 1,692 B in 1s (2,820 B/s)

Selecting previously unselected package apt-transport-https.

(Reading database … 66996 files and directories currently installed.)

Preparing to unpack …/apt-transport-https_1.6.10_all.deb …

Unpacking apt-transport-https (1.6.10) …

Setting up apt-transport-https (1.6.10) …

Add the apt key and repository

root@master:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

OK

root@master:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list

> deb http://apt.kubernetes.io/ kubernetes-xenial main

> EOF

root@master:~# apt-get update

Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease

Get:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]

Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [25.0 kB]

Hit:4 http://archive.ubuntu.com/ubuntu bionic-updates InRelease

Hit:5 http://archive.ubuntu.com/ubuntu bionic-backports InRelease

Hit:6 http://archive.ubuntu.com/ubuntu bionic-security InRelease

Fetched 34.0 kB in 3s (13.1 kB/s)

Reading package lists… Done

Install kubelet, kubeadm and kubectl

root@master:~# apt-get install -y kubelet kubeadm kubectl

Reading package lists… Done

Building dependency tree

Reading state information… Done

The following additional packages will be installed:

conntrack cri-tools kubernetes-cni socat

The following NEW packages will be installed:

conntrack cri-tools kubeadm kubectl kubelet kubernetes-cni socat

0 upgraded, 7 newly installed, 0 to remove and 56 not upgraded.

Need to get 50.6 MB of archives.

After this operation, 290 MB of additional disk space will be used.

Get:3 http://archive.ubuntu.com/ubuntu bionic/main amd64 conntrack amd64 1:1.4.4+snapshot20161117-6ubuntu2 [30.6 kB]

Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.12.0-00 [5,343 kB]

Get:7 http://archive.ubuntu.com/ubuntu bionic/main amd64 socat amd64 1.7.3.2-2ubuntu2 [342 kB]

Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.7.5-00 [6,473 kB]

Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.14.0-00 [21.5 MB]

Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.14.0-00 [8,801 kB]

Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.14.0-00 [8,147 kB]

Fetched 50.6 MB in 30s (1,694 kB/s)

Selecting previously unselected package conntrack.

(Reading database … 67000 files and directories currently installed.)

Preparing to unpack …/0-conntrack_1%3a1.4.4+snapshot20161117-6ubuntu2_amd64.deb …

Unpacking conntrack (1:1.4.4+snapshot20161117-6ubuntu2) …

Selecting previously unselected package cri-tools.

Preparing to unpack …/1-cri-tools_1.12.0-00_amd64.deb …

Unpacking cri-tools (1.12.0-00) …

Selecting previously unselected package kubernetes-cni.

Preparing to unpack …/2-kubernetes-cni_0.7.5-00_amd64.deb …

Unpacking kubernetes-cni (0.7.5-00) …

Selecting previously unselected package socat.

Preparing to unpack …/3-socat_1.7.3.2-2ubuntu2_amd64.deb …

Unpacking socat (1.7.3.2-2ubuntu2) …

Selecting previously unselected package kubelet.

Preparing to unpack …/4-kubelet_1.14.0-00_amd64.deb …

Unpacking kubelet (1.14.0-00) …

Selecting previously unselected package kubectl.

Preparing to unpack …/5-kubectl_1.14.0-00_amd64.deb …

Unpacking kubectl (1.14.0-00) …

Selecting previously unselected package kubeadm.

Preparing to unpack …/6-kubeadm_1.14.0-00_amd64.deb …

Unpacking kubeadm (1.14.0-00) …

Setting up conntrack (1:1.4.4+snapshot20161117-6ubuntu2) …

Setting up kubernetes-cni (0.7.5-00) …

Setting up cri-tools (1.12.0-00) …

Setting up socat (1.7.3.2-2ubuntu2) …

Setting up kubelet (1.14.0-00) …

Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.

Setting up kubectl (1.14.0-00) …

Processing triggers for man-db (2.8.3-2ubuntu0.1) …

Setting up kubeadm (1.14.0-00) …

Change Docker's cgroup driver

This is needed because Docker's cgroup driver and the kubelet's cgroup driver do not match.

Running kubeadm init --apiserver-advertise-address=172.36.13.5 --pod-network-cidr=192.168.0.0/16 also fails with an error because of this.


Fix:

1. Run systemctl enable docker.service

2. Change Docker's cgroup driver: create /etc/docker/daemon.json and add the following content

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

touch /etc/docker/daemon.json

root@master:~# vim /etc/docker/daemon.json

root@master:~# systemctl daemon-reload

root@master:~# systemctl restart docker
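Curly quotes pasted from a web page are a common way to silently break daemon.json, so it helps to stage the file and sanity-check it before installing it. A sketch (the staging filename is arbitrary):

```shell
# Stage daemon.json locally with the systemd cgroup driver setting.
cat <<'EOF' > daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Verify it parses as JSON before touching the real host config.
python3 -m json.tool daemon.json >/dev/null && echo "daemon.json: valid JSON"

# Then, as root:
#   install -D -m 644 daemon.json /etc/docker/daemon.json
#   systemctl daemon-reload && systemctl restart docker
```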

The steps above apply equally to every other node in the cluster.

The following steps are for the master node only.

root@master:~# kubeadm init --apiserver-advertise-address=172.36.13.5 --pod-network-cidr=192.168.0.0/16

[init] Using Kubernetes version: v1.14.0

[preflight] Running pre-flight checks

[preflight] Pulling images required for setting up a Kubernetes cluster

[preflight] This might take a minute or two, depending on the speed of your internet connection

[preflight] You can also perform this action in beforehand using ‘kubeadm config images pull’

[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”

[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”

[kubelet-start] Activating the kubelet service

[certs] Using certificateDir folder “/etc/kubernetes/pki”

[certs] Generating “etcd/ca” certificate and key

[certs] Generating “etcd/healthcheck-client” certificate and key

[certs] Generating “apiserver-etcd-client” certificate and key

[certs] Generating “etcd/server” certificate and key

[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [172.36.13.154 127.0.0.1 ::1]

[certs] Generating “etcd/peer” certificate and key

[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [172.36.13.154 127.0.0.1 ::1]

[certs] Generating “ca” certificate and key

[certs] Generating “apiserver-kubelet-client” certificate and key

[certs] Generating “apiserver” certificate and key

[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.36.13.154]

[certs] Generating “front-proxy-ca” certificate and key

[certs] Generating “front-proxy-client” certificate and key

[certs] Generating “sa” key and public key

[kubeconfig] Using kubeconfig folder “/etc/kubernetes”

[kubeconfig] Writing “admin.conf” kubeconfig file

[kubeconfig] Writing “kubelet.conf” kubeconfig file

[kubeconfig] Writing “controller-manager.conf” kubeconfig file

[kubeconfig] Writing “scheduler.conf” kubeconfig file

[control-plane] Using manifest folder “/etc/kubernetes/manifests”

[control-plane] Creating static Pod manifest for “kube-apiserver”

[control-plane] Creating static Pod manifest for “kube-controller-manager”

[control-plane] Creating static Pod manifest for “kube-scheduler”

[etcd] Creating static Pod manifest for local etcd in “/etc/kubernetes/manifests”

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”. This can take up to 4m0s

[apiclient] All control plane components are healthy after 17.503524 seconds

[upload-config] storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace

[kubelet] Creating a ConfigMap “kubelet-config-1.14” in namespace kube-system with the configuration for the kubelets in the cluster

[upload-certs] Skipping phase. Please see --experimental-upload-certs

[mark-control-plane] Marking the node master as control-plane by adding the label “node-role.kubernetes.io/master=’’”

[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

[bootstrap-token] Using token: fz878p.20232lq43xq9f2ru

[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles

[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstrap-token] creating the “cluster-info” ConfigMap in the “kube-public” namespace

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.36.13.154:6443 --token fz878p.20232lq43xq9f2ru --discovery-token-ca-cert-hash sha256:2f278d56b0845974405722f06a1bd7f514119d805f3f5493cf05a110f8c5d302

Write down the kubeadm join command above.

What if you forget the node-join command printed when the master was initialized?

# simplest method
kubeadm token create --print-join-command

# alternative method
token=$(kubeadm token generate)
kubeadm token create $token --print-join-command --ttl=0
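For reference, the value after --discovery-token-ca-cert-hash can also be recomputed at any time from the cluster CA certificate. A sketch, assuming the standard kubeadm PKI path (/etc/kubernetes/pki/ca.crt):

```shell
# Recompute the discovery-token CA cert hash from a CA certificate file.
ca_cert_hash() {
  # $1: path to the cluster CA certificate (on a master: /etc/kubernetes/pki/ca.crt)
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# On the master:
#   echo "sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt)"
```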

List all cluster nodes:

root@master:~# kubectl get nodes

The connection to the server localhost:8080 was refused - did you specify the right host or port?

If the command above fails with this error, run the following to fix it:

root@master:~# mkdir -p $HOME/.kube

root@master:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

root@master:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

root@master:~# kubectl get nodes

NAME STATUS ROLES AGE VERSION

master NotReady master 19m v1.14.0

root@master:~# apt-get update

Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease

Get:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]

Hit:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease

Get:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]

Get:5 http://archive.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]

Fetched 252 kB in 4s (64.6 kB/s)

Reading package lists… Done

root@master:~# kubectl get nodes

NAME STATUS ROLES AGE VERSION

master NotReady master 46m v1.14.0

slave1 NotReady <none> 14s v1.14.0

Don't worry about the NotReady status; it will clear once the commands below have been run.

Install the Calico network plugin

Get the status of every pod in the cluster:

root@master:~# kubectl get pods -o wide --all-namespaces

NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES

kube-system coredns-fb8b8dccf-c9ncm 0/1 Pending 0 108m

kube-system coredns-fb8b8dccf-qhptb 0/1 Pending 0 108m

kube-system etcd-master 1/1 Running 0 107m 172.36.13.154 master

kube-system kube-apiserver-master 1/1 Running 0 107m 172.36.13.154 master

kube-system kube-controller-manager-master 1/1 Running 0 106m 172.36.13.154 master

kube-system kube-proxy-lttst 1/1 Running 0 108m 172.36.13.154 master

kube-system kube-proxy-nxttj 1/1 Running 0 61m 172.36.13.180 slave1

kube-system kube-scheduler-master 1/1 Running 0 107m 172.36.13.154 master

The CoreDNS pods are Pending because no network plugin has been installed yet.

Install the Calico plugin with the following commands:

root@master:~# kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml

clusterrole.rbac.authorization.k8s.io/calico-node created

clusterrolebinding.rbac.authorization.k8s.io/calico-node created

root@master:~# kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

configmap/calico-config created

service/calico-typha created

deployment.apps/calico-typha created

poddisruptionbudget.policy/calico-typha created

daemonset.extensions/calico-node created

serviceaccount/calico-node created

customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created

Fetching the pod status again shows some pods are still being created; be patient:

root@master:~# kubectl get pods --all-namespaces

NAMESPACE NAME READY STATUS RESTARTS AGE

kube-system calico-node-2kv29 2/2 Running 0 47s

kube-system calico-node-gd2nh 0/2 ContainerCreating 0 47s

kube-system coredns-fb8b8dccf-c9ncm 0/1 ContainerCreating 0 112m

kube-system coredns-fb8b8dccf-qhptb 0/1 ContainerCreating 0 112m

kube-system etcd-master 1/1 Running 0 112m

kube-system kube-apiserver-master 1/1 Running 0 111m

kube-system kube-controller-manager-master 1/1 Running 0 111m

kube-system kube-proxy-lttst 1/1 Running 0 112m

kube-system kube-proxy-nxttj 1/1 Running 0 66m

kube-system kube-scheduler-master 1/1 Running 0 112m

A short while later, everything is Running:

root@master:~# kubectl get pods --all-namespaces

NAMESPACE NAME READY STATUS RESTARTS AGE

kube-system calico-node-2kv29 2/2 Running 0 4m35s

kube-system calico-node-gd2nh 2/2 Running 0 4m35s

kube-system coredns-fb8b8dccf-c9ncm 1/1 Running 0 116m

kube-system coredns-fb8b8dccf-qhptb 1/1 Running 0 116m

kube-system etcd-master 1/1 Running 0 115m

kube-system kube-apiserver-master 1/1 Running 0 115m

kube-system kube-controller-manager-master 1/1 Running 0 115m

kube-system kube-proxy-lttst 1/1 Running 0 116m

kube-system kube-proxy-nxttj 1/1 Running 0 70m

kube-system kube-scheduler-master 1/1 Running 0 115m

Checking all cluster nodes again, they are now Ready:

root@master:~# kubectl get nodes

NAME STATUS ROLES AGE VERSION

master Ready master 119m v1.14.0

slave1 Ready <none> 73m v1.14.0

Install the Dashboard UI

root@master:~# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml

secret/kubernetes-dashboard-certs created

secret/kubernetes-dashboard-csrf created

serviceaccount/kubernetes-dashboard created

role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created

rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created

deployment.apps/kubernetes-dashboard created

service/kubernetes-dashboard created

root@master:~# kubectl get pods --all-namespaces

NAMESPACE NAME READY STATUS RESTARTS AGE

kube-system calico-node-2kv29 2/2 Running 0 46m

kube-system calico-node-gd2nh 2/2 Running 0 46m

kube-system coredns-fb8b8dccf-c9ncm 1/1 Running 0 158m

kube-system coredns-fb8b8dccf-qhptb 1/1 Running 0 158m

kube-system etcd-master 1/1 Running 0 157m

kube-system kube-apiserver-master 1/1 Running 0 157m

kube-system kube-controller-manager-master 1/1 Running 0 157m

kube-system kube-proxy-lttst 1/1 Running 0 158m

kube-system kube-proxy-nxttj 1/1 Running 0 112m

kube-system kube-scheduler-master 1/1 Running 0 157m

kube-system kubernetes-dashboard-5f7b999d65-5rzm6 1/1 Running 0 15m

List all services in the kube-system namespace:

root@master:~# kubectl -n kube-system get svc

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

calico-typha ClusterIP 10.103.50.174 <none> 5473/TCP 56m

kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 168m

kubernetes-dashboard ClusterIP 10.102.124.6 <none> 443/TCP 25m

Modify the kubernetes-dashboard service

root@master:~# kubectl -n kube-system edit svc kubernetes-dashboard

Under ports, add nodePort: 30001

Change type to NodePort

If the save succeeds you will see:

service/kubernetes-dashboard edited
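As an aside, the same change can be made without an interactive editor via kubectl patch. A sketch using the service name and port from this walkthrough; note that the patch replaces the whole ports list, so targetPort is restated alongside the added nodePort:

```shell
# Strategic-merge patch: switch the service to NodePort and pin port 30001.
PATCH='{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30001}]}}'

# On the master:
#   kubectl -n kube-system patch svc kubernetes-dashboard -p "$PATCH"

# Sanity-check the patch string locally.
echo "$PATCH" | python3 -m json.tool >/dev/null && echo "patch: valid JSON"
```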

List the services in the kube-system namespace again:

root@master:~# kubectl -n kube-system get svc

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

calico-typha ClusterIP 10.103.50.174 <none> 5473/TCP 124m

kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 3h56m

kubernetes-dashboard NodePort 10.102.124.6 <none> 443:30001/TCP 93m

root@master:~# mkdir keys

root@master:~# cd keys

root@master:~/keys# pwd

/root/keys

root@master:~/keys# ls

root@master:~/keys# openssl genrsa -out dashboard.key 2048

Generating RSA private key, 2048 bit long modulus

…+++

…+++

e is 65537 (0x010001)

root@master:~/keys# openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=172.36.13.5'

root@master:~/keys# ls

dashboard.csr dashboard.key

root@master:~/keys# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt

Signature ok

subject=CN = 172.36.13.5

Getting Private key

root@master:~/keys# ls

dashboard.crt dashboard.csr dashboard.key
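The three openssl steps above can be wrapped into one reusable function. A sketch; note the added -days 365: the commands above relied on openssl's 30-day default, which is why the certificate's Not After date is only a month out.

```shell
# Generate a key, CSR, and self-signed certificate for the dashboard.
make_dashboard_cert() {
  # $1: CN to embed in the certificate (the master IP in this walkthrough)
  openssl genrsa -out dashboard.key 2048 2>/dev/null
  openssl req -new -key dashboard.key -out dashboard.csr -subj "/CN=$1"
  openssl x509 -req -in dashboard.csr -signkey dashboard.key \
    -out dashboard.crt -days 365 2>/dev/null
}

# Usage: make_dashboard_cert 172.36.13.5
```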

root@master:~/keys# openssl x509 -in dashboard.crt -text -noout

Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number:
            a6:c4:52:7d:b4:4e:9c:42
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = 172.36.13.154
        Validity
            Not Before: Apr  3 06:55:20 2019 GMT
            Not After : May  3 06:55:20 2019 GMT
        Subject: CN = 172.36.13.154
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus: (omitted)
                Exponent: 65537 (0x10001)
    Signature Algorithm: sha256WithRSAEncryption
         (signature omitted)

Download the kubernetes-dashboard.yaml file

root@master:~/keys# wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml

--2019-04-03 06:56:59-- https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml

Resolving raw.githubusercontent.com (raw.githubusercontent.com)… 151.101.192.133, 151.101.128.133, 151.101.64.133, …

Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.192.133|:443… connected.

HTTP request sent, awaiting response… 200 OK

Length: 4784 (4.7K) [text/plain]

Saving to: ‘kubernetes-dashboard.yaml’

kubernetes-dashboard.yaml 100%[===========================================================================================>] 4.67K --.-KB/s in 0s

2019-04-03 06:57:00 (40.9 MB/s) - ‘kubernetes-dashboard.yaml’ saved [4784/4784]

root@master:~/keys# ls

dashboard.crt dashboard.csr dashboard.key kubernetes-dashboard.yaml

root@master:~/keys# cat kubernetes-dashboard.yaml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secrets ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kube-system
type: Opaque
data:
  csrf: ""

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

root@master:~/keys# kubectl delete -f kubernetes-dashboard.yaml

secret "kubernetes-dashboard-certs" deleted

secret "kubernetes-dashboard-csrf" deleted

serviceaccount "kubernetes-dashboard" deleted

role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" deleted

rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" deleted

deployment.apps "kubernetes-dashboard" deleted

service "kubernetes-dashboard" deleted

root@master:~/keys# kubectl -n kube-system create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt

secret/kubernetes-dashboard-certs created

root@master:~/keys# vim kubernetes-dashboard.yaml

Delete the following section (the kubernetes-dashboard-certs Secret, which we have just created by hand):

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

Save and exit.

Create the Kubernetes Dashboard

root@master:~/keys# kubectl create -f kubernetes-dashboard.yaml

secret/kubernetes-dashboard-csrf created

serviceaccount/kubernetes-dashboard created

role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created

rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created

deployment.apps/kubernetes-dashboard created

service/kubernetes-dashboard created

root@master:~/keys# kubectl -n kube-system get svc

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

calico-typha ClusterIP 10.103.50.174 <none> 5473/TCP 3h28m

kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 5h20m

kubernetes-dashboard ClusterIP 10.102.178.17 <none> 443/TCP 74s

root@master:~/keys# kubectl -n kube-system edit svc kubernetes-dashboard

Under ports, add nodePort: 30001

Change type to NodePort

Save and exit.

Check the service information in the kube-system namespace:

root@master:~/keys# kubectl -n kube-system get svc

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

calico-typha ClusterIP 10.103.50.174 <none> 5473/TCP 3h31m

kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 5h23m

kubernetes-dashboard NodePort 10.102.178.17 <none> 443:30001/TCP 4m17s


root@master:~/keys# kubectl -n kube-system describe secret kubernetes-dashboard-certs

As shown above, the certificate secret has been added successfully.

Create the dashboard serviceaccount and bind it to cluster-admin

root@master:~/keys# kubectl create serviceaccount dashboard -n default

serviceaccount/dashboard created

root@master:~/keys# kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard

Get the token used to log in to the Kubernetes dashboard

root@master:~/keys# kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode

eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRhc2hib2FyZC10b2tlbi1ydDJmMiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkYXNoYm9hcmQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkZGI5YTA5Yy01NWU4LTExZTktYTNjYi0wMDBjMjk5ZDljZWEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkYXNoYm9hcmQifQ.QaXZYuN1j12vISzQXYhUJuO0BQquhdYNm7U_gHrIBBEqt0xekguYz2TQSAVO487Yo0aTi33bO8y52iGFqYrLlyLuQOq8fHjhQP2reCOaKZssq3b66R8_ANHHYXGICfqbp4bKGnV9ht-jFJyjoLS8pPxldxxwL9HIMxBf_WI0RgE7TxH_hr0I7JLLlx1czOtSJr1cCNbMiWBO-w9bvOb2kYfiwf2Zz09ivirPZA1fHNNujBtqUQiuWKDDthlujMhNZYMmpyB_Sbu36K-TJAoEtS1TL3m9nnJcz2rBIRPkGX06wRGKNs4VYYj_nmDUULIR-YOzDPhXZ0C_Ya0U0N9tNA

root@master:~/keys#

Copy the token above and save it; it is what you will use to log in to the dashboard.
