Kubernetes Complete Setup, Part 1: Environment Preparation

Overview

Environment Preparation

1. Edit the hosts file on all machines

On both the Master and Node machines:

Edit /etc/hosts and add the host entries:

192.168.182.128   Master

192.168.182.129   Node1

2. Disable the firewall, swap, and SELinux

systemctl stop firewalld      # stop the firewall

getenforce                    # check the current SELinux mode

setenforce 0                  # switch SELinux to permissive mode

swapoff -a                    # turn off swap
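The commands above only affect the running system. A minimal sketch of making the changes survive a reboot on a stock CentOS 7 machine (the file paths are the usual defaults; adjust if your layout differs):

systemctl disable firewalld                                              # keep firewalld off after reboot
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config    # persist the permissive SELinux mode
sed -i '/ swap / s/^/#/' /etc/fstab                                      # comment out the swap entry so swap stays off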

3. Install Docker and configure a domestic registry mirror

yum -y install docker

 

Configure the Docker registry mirror; /etc/docker/daemon.json should contain the following:

cat  /etc/docker/daemon.json

{

"registry-mirrors": ["http://68e02ab9.m.daocloud.io"]

}

Enable and start Docker

systemctl enable docker

systemctl start docker

 

Check the Docker service status

systemctl status docker

 

4. Modify the kernel bridge settings and restart the Docker service

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables

systemctl  restart  docker
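The echo commands above are also lost on reboot. A hedged sketch of persisting them via sysctl (the file name /etc/sysctl.d/k8s.conf is an arbitrary choice):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system    # reload settings from all sysctl configuration files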

5. Configure the yum repository for the Kubernetes packages

vi kubernetes.repo

 

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=0

 

mv kubernetes.repo /etc/yum.repos.d/
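Optionally, confirm that yum now sees the new repository (standard yum usage, not part of the original steps):

yum repolist | grep -i kubernetes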

 

6. Install the Kubernetes components

yum -y install kubectl kubelet kubeadm kubernetes-cni
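If the repository already ships a release newer than the v1.6.0 images pulled below, it may be safer to pin the package versions explicitly; a sketch, assuming the 1.6.0 packages are still available in the repository:

yum -y install kubectl-1.6.0 kubelet-1.6.0 kubeadm-1.6.0 kubernetes-cni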

 

7. Enable and start the kubelet service

sudo systemctl enable kubelet && sudo systemctl start kubelet

At this point the kubelet service may not be running yet; this is expected and requires no action here.

8. Pull the Docker images that the Kubernetes cluster depends on and re-tag them with the Google (gcr.io) names

docker pull warrior/pause-amd64:3.0

docker tag warrior/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0

 

docker pull warrior/etcd-amd64:3.0.17

docker tag warrior/etcd-amd64:3.0.17  gcr.io/google_containers/etcd-amd64:3.0.17

 

docker pull warrior/kube-apiserver-amd64:v1.6.0

docker tag warrior/kube-apiserver-amd64:v1.6.0 gcr.io/google_containers/kube-apiserver-amd64:v1.6.0

 

docker pull warrior/kube-scheduler-amd64:v1.6.0

docker tag warrior/kube-scheduler-amd64:v1.6.0  gcr.io/google_containers/kube-scheduler-amd64:v1.6.0

 

docker pull warrior/kube-controller-manager-amd64:v1.6.0

docker tag warrior/kube-controller-manager-amd64:v1.6.0  gcr.io/google_containers/kube-controller-manager-amd64:v1.6.0

 

docker pull warrior/kube-proxy-amd64:v1.6.0

docker tag warrior/kube-proxy-amd64:v1.6.0  gcr.io/google_containers/kube-proxy-amd64:v1.6.0

 

docker pull gysan/dnsmasq-metrics-amd64:1.0

docker tag gysan/dnsmasq-metrics-amd64:1.0  gcr.io/google_containers/dnsmasq-metrics-amd64:1.0

 

docker pull warrior/k8s-dns-kube-dns-amd64:1.14.1

docker tag warrior/k8s-dns-kube-dns-amd64:1.14.1 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4

 

docker pull warrior/k8s-dns-dnsmasq-nanny-amd64:1.14.1

docker tag warrior/k8s-dns-dnsmasq-nanny-amd64:1.14.1  gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4

 

docker pull warrior/k8s-dns-sidecar-amd64:1.14.1

docker tag warrior/k8s-dns-sidecar-amd64:1.14.1  gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4

 

docker pull awa305/kube-discovery-amd64:1.0

docker tag awa305/kube-discovery-amd64:1.0 gcr.io/google_containers/kube-discovery-amd64:1.0

 

docker pull gysan/exechealthz-amd64:1.2

docker tag gysan/exechealthz-amd64:1.2  gcr.io/google_containers/exechealthz-amd64:1.2

 

docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64:v1.6.0

 

docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64:v1.6.0 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0
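The pull-and-tag commands above are repetitive; as an alternative, a small shell loop can process source/target pairs. This is only a sketch: the pairs shown are copied from the commands above, and the remaining image pairs from this section would need to be appended to the list.

# loop over "source target" image pairs and pull/re-tag each one
while read -r src dst; do
  docker pull "$src"              # pull from the reachable mirror repository
  docker tag "$src" "$dst"        # re-tag with the name the cluster expects
done <<'EOF'
warrior/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
warrior/etcd-amd64:3.0.17 gcr.io/google_containers/etcd-amd64:3.0.17
warrior/kube-apiserver-amd64:v1.6.0 gcr.io/google_containers/kube-apiserver-amd64:v1.6.0
EOF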

 

9. Pull the Calico network images (not needed if you will not install the Calico network)

docker pull quay.io/calico/node:v2.6.5

docker pull quay.io/calico/kube-controllers:v1.0.2

docker pull quay.io/calico/cni:v1.11.2

Master Node Installation

1. Initialize the Kubernetes cluster

Run the init command on the Master node. The Kubernetes version specified during initialization must match the version of the Docker images downloaded earlier; otherwise kubeadm will try to pull the images from the public registry, and without access to it the download will fail and the process will hang.

1.1 When using the Calico network, run the following (Calico's default IP pool is 192.168.0.0/16, which may conflict with the local network, so it is changed to 192.168.111.0/24 here):

kubeadm init --kubernetes-version=v1.6.0  --pod-network-cidr=192.168.111.0/24

1.2 When using the Weave network, run:

kubeadm init --kubernetes-version=v1.6.0

 

2. A success message indicates the initialization finished

Run the following commands:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
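To confirm that kubectl can reach the new cluster with this kubeconfig, a quick standard check is:

kubectl cluster-info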

 

Record the join command printed after a successful initialization; it is needed later when adding Node machines:

kubeadm join --token 31bfa8.ae4b0c7836c127b1 192.168.x.x:6443    (very important)

 

Check the service status

 

Because no network add-on has been installed yet, the kube-dns pod will be in the Pending state at this point.
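The pod status can be listed with the standard command below; kube-dns should show Pending until a network add-on is installed:

kubectl get pods --all-namespaces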

3. Install a network add-on; choose one of the two options below

3.1 Install the Weave network

kubectl apply -f https://git.io/weave-kube-1.6

3.2 Install the Calico network

3.2.1 Install dependency tools

yum install ebtables ethtool

3.2.2 Download the calico.yaml file

       wget https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml

3.2.3 Edit the IP address in calico.yaml (the CALICO_IPV4POOL_CIDR value below) to match the pod network CIDR set when initializing the Kubernetes Master node

# Configure the IP Pool from which Pod IPs will be chosen.
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.111.0/24"
- name: CALICO_IPV4POOL_IPIP
  value: "always"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
  value: "false"

3.2.4 Deploy Calico

kubectl apply -f  ./calico.yaml

 

Wait about 10 minutes (the Calico images are being pulled and started during this time), then check the service status again. When the kube-dns pod reaches the Running state, the network has been installed successfully.
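One way to follow the progress is to watch the kube-system pods until they reach Running (standard kubectl usage):

kubectl get pods -n kube-system -w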

4. Install the Dashboard web UI

4.1 Download the Dashboard template file, or copy one yourself

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

4.2 Alternatively, copy the content below and save it as kubernetes-dashboard.yaml

# Copyright 2015 Google Inc. All Rights Reserved.

#

# Licensed under the Apache License, Version 2.0 (the "License");

# you may not use this file except in compliance with the License.

# You may obtain a copy of the License at

#

#     http://www.apache.org/licenses/LICENSE-2.0

#

# Unless required by applicable law or agreed to in writing, software

# distributed under the License is distributed on an "AS IS" BASIS,

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

# See the License for the specific language governing permissions and

# limitations under the License.

 

# Configuration to deploy release version of the Dashboard UI compatible with

# Kubernetes 1.6 (RBAC enabled).

#

# Example usage: kubectl create -f <this_file>

 

apiVersion: v1

kind: ServiceAccount

metadata:

  labels:

    app: kubernetes-dashboard

  name: kubernetes-dashboard

  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1beta1

kind: ClusterRoleBinding

metadata:

  name: kubernetes-dashboard

  labels:

    app: kubernetes-dashboard

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: cluster-admin

subjects:

- kind: ServiceAccount

  name: kubernetes-dashboard

  namespace: kube-system

---

kind: Deployment

apiVersion: extensions/v1beta1

metadata:

  labels:

    app: kubernetes-dashboard

  name: kubernetes-dashboard

  namespace: kube-system

spec:

  replicas: 1

  revisionHistoryLimit: 10

  selector:

    matchLabels:

      app: kubernetes-dashboard

  template:

    metadata:

      labels:

        app: kubernetes-dashboard

    spec:

      containers:

      - name: kubernetes-dashboard

        image:  registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64:v1.6.0

        imagePullPolicy: Always

        ports:

        - containerPort: 9090

          protocol: TCP

        args:

          # Uncomment the following line to manually specify Kubernetes API server Host

          # If not specified, Dashboard will attempt to auto discover the API server and connect

          # to it. Uncomment only if the default does not work.

          # - --apiserver-host=http://my-address:port

        livenessProbe:

          httpGet:

            path: /

            port: 9090

          initialDelaySeconds: 30

          timeoutSeconds: 30

      serviceAccountName: kubernetes-dashboard

      # Comment the following tolerations if Dashboard must not be deployed on master

      tolerations:

      - key: node-role.kubernetes.io/master

        effect: NoSchedule

---

kind: Service

apiVersion: v1

metadata:

  labels:

    app: kubernetes-dashboard

  name: kubernetes-dashboard

  namespace: kube-system

spec:

  type: NodePort

  ports:

  - port: 80

    targetPort: 9090

    nodePort: 31000

  selector:

    app: kubernetes-dashboard

4.3 Edit the kubernetes-dashboard.yaml file

Modify the parts shown below:

spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 31000  # externally exposed port

If type is ClusterIP, change it to NodePort.

Change the image address to one that is reachable (the default is a Google registry address that is not accessible without a proxy):

image: registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64:v1.6.0

4.4 Create the Dashboard service

kubectl create -f kubernetes-dashboard.yaml

After a few minutes, when the kubernetes-dashboard pod is in the Running state, the service has started successfully.
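A quick check, using the app label defined in the embedded manifest above (the upstream file labels the pod k8s-app instead):

kubectl get pods -n kube-system -l app=kubernetes-dashboard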

 

4.5 Access the web UI at http://<Master IP>:31000

Node Installation

1. Complete the environment preparation steps from the section above on the Node machines as well

2. On the Node machine, run the join command recorded during Master initialization

kubeadm join --token ede9af.c8aed8f4efc9571a 192.168.182.132:6443

 

3. After a few minutes, the newly added node is visible from the Master node; the environment setup is now complete.
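The node list can be verified on the Master with the standard command:

kubectl get nodes    # the new node should appear and eventually report Ready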

Miscellaneous

1. Viewing the logs of Kubernetes services

When the system misbehaves, the journalctl command can be used to inspect the component services.

On Linux, systemd manages the Kubernetes services and the journal captures their output, so the logs can be viewed with systemctl status <xxx> or journalctl -u <xxx> -f.

The Kubernetes components include:

k8s component              Relevant log content
kube-apiserver
kube-controller-manager    Pod scaling and ReplicationController (RC) related
kube-scheduler             Pod scaling and ReplicationController (RC) related
kubelet                    Pod lifecycle: creation, termination, etc.
etcd

For example: journalctl -u kubelet -f

2. Notes on the Calico network configuration

Changing Calico's default IP range:
Calico's default IP pool is 192.168.0.0/16. If the local network also uses 192.168.x.x, conflicts are likely, so it is best to redefine the Calico range.

Step 1: change the IP range when running kubeadm init, for example:
kubeadm init --pod-network-cidr=192.168.111.0/24

Step 2: edit the calico.yaml file accordingly:

# Configure the IP Pool from which Pod IPs will be chosen.
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.111.0/24"
- name: CALICO_IPV4POOL_IPIP
  value: "always"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
  value: "false"

3. Single-node deployment (one machine acting as both master and worker)

Normally the Master node does not run workloads. Run the following command to let the master schedule workloads as well:

kubectl taint nodes --all  node-role.kubernetes.io/master-
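To confirm the taint was removed (standard kubectl usage):

kubectl describe nodes | grep -i taint    # the master NoSchedule taint should no longer be listed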
