I'm 害怕砖头, a blogger at 靠谱客. This article, put together during recent development work, covers installing Kubernetes in a China-only network environment (no foreign network access). I hope it serves as a useful reference.

Overview

Early experimental code:
k8s installation commands (preliminary testing)

##############################################################################
############################ docker installation #############################
##############################################################################
yum update
yum install -y yum-utils
yum-config-manager     --add-repo     https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io
docker --version
systemctl start docker
cat > /etc/docker/daemon.json <<EOF
{  
 "registry-mirrors": ["https://q2hy3fzi.mirror.aliyuncs.com"],
 "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker
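One thing worth checking before that restart: a typo in daemon.json (a stray comma, or smart quotes pasted from a web page) leaves the docker daemon unable to start. A minimal sanity check, assuming python3 is available; this sketch validates a temp copy so it can run anywhere, but on a real host point `cfg` at /etc/docker/daemon.json:

```shell
# Validate daemon.json before restarting docker; python3 assumed available.
# A temp copy makes this sketch self-contained; on a real host set
# cfg=/etc/docker/daemon.json instead.
cfg=$(mktemp)
cat > "$cfg" <<'JSON'
{
 "registry-mirrors": ["https://q2hy3fzi.mirror.aliyuncs.com"],
 "exec-opts": ["native.cgroupdriver=systemd"]
}
JSON
json_status=$(python3 -m json.tool "$cfg" > /dev/null 2>&1 && echo OK || echo INVALID)
echo "daemon.json $json_status"
rm -f "$cfg"
```

Only restart docker once this prints OK; an invalid file means the daemon will refuse to come back up.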

############################################################################################################################################
############################ k8s installation ########################################################################
##############################################################################################################


#cat /sys/class/dmi/id/product_uuid
lsmod | grep br_netfilter
modprobe br_netfilter
#lsmod | grep br_netfilter
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

#Enable bridged traffic through iptables (per the official docs; no host-level configuration needed beyond this)
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system
#systemctl status firewalld
#ls /run
#yum update
#yum remove docker                   docker-client                   docker-client-latest                   docker-common                   docker-latest                   docker-latest-logrotate                   docker-logrotate                   docker-engine
yum install -y yum-utils
yum-config-manager     --add-repo     https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io
#yum list docker-ce --showduplicates | sort -r
docker --version
systemctl start docker
#Configure docker to use the Aliyun registry mirror
cat > /etc/docker/daemon.json <<EOF
{  
 "registry-mirrors": ["https://q2hy3fzi.mirror.aliyuncs.com"],
 "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker
######################### The ONLY difference from a machine with foreign network access (or an HTTP proxy) is here: such machines can pull these images directly.
#If something goes wrong, don't suspect these images; run kubectl describe pod xxxxx to see the actual problem.
#I once read a yaml, saw the image written as registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper:v1.0.4@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf
#and spent ages chasing the digest. It is just a built-in unique identifier and can safely be ignored.
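To make that digest concrete: the `@sha256:...` suffix can be split off with plain shell parameter expansion. A sketch, using the reference from the yaml above as sample input; nothing docker-specific happens here:

```shell
# Split an image reference into its name:tag part and its content digest.
ref='registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper:v1.0.4@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf'
name_tag=${ref%%@*}   # everything before '@': what you actually pull by
digest=${ref##*@}     # the digest; informational only, nothing to configure
echo "pull by tag: $name_tag"
echo "digest:      $digest"
```

Pulling by `name_tag` alone is fine; the runtime verifies the digest on its own.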
cat > /etc/yum.repos.d/kubernetes.repo  <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

#misc
#yum clean all
#yum makecache
#yum makecache fast
#misc
swapoff -a
sysctl --system
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables

#kubectl get cm -n kube-system | grep kubelet-config
systemctl enable docker.service

#List the images kubeadm needs, then pull each one from Aliyun and re-tag it under its k8s.gcr.io name
kubeadm config images list      

docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.2
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.2    k8s.gcr.io/kube-apiserver:v1.21.2
docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.2

docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.2
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.2    k8s.gcr.io/kube-controller-manager:v1.21.2
docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.2

docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.2
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.2    k8s.gcr.io/kube-scheduler:v1.21.2
docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.2

docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.21.2
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.21.2    k8s.gcr.io/kube-proxy:v1.21.2
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.21.2

docker pull registry.aliyuncs.com/google_containers/pause:3.4.1
docker tag registry.aliyuncs.com/google_containers/pause:3.4.1    k8s.gcr.io/pause:3.4.1
docker rmi registry.aliyuncs.com/google_containers/pause:3.4.1

docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
docker tag registry.aliyuncs.com/google_containers/etcd:3.4.13-0    k8s.gcr.io/etcd:3.4.13-0
docker rmi registry.aliyuncs.com/google_containers/etcd:3.4.13-0

docker pull coredns/coredns:1.8.0
docker tag docker.io/coredns/coredns:1.8.0    k8s.gcr.io/coredns/coredns:v1.8.0
docker rmi docker.io/coredns/coredns:1.8.0
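The pull/tag/rmi blocks above repeat the same three steps per image. A sketch of generating those commands with a loop, so they can be reviewed before running; the image names and versions are assumed to match the `kubeadm config images list` output for v1.21.2, and coredns is handled separately above because its mirror path differs:

```shell
# Generate the pull/tag/rmi commands for each core image; review, then pipe to sh.
MIRROR=registry.aliyuncs.com/google_containers
gen=$(
  for img in kube-apiserver:v1.21.2 kube-controller-manager:v1.21.2 \
             kube-scheduler:v1.21.2 kube-proxy:v1.21.2 \
             pause:3.4.1 etcd:3.4.13-0; do
    echo "docker pull $MIRROR/$img"
    echo "docker tag $MIRROR/$img k8s.gcr.io/$img"
    echo "docker rmi $MIRROR/$img"
  done
)
echo "$gen"   # 18 lines: 3 commands x 6 images
```

Once the printed commands look right, `echo "$gen" | sh` executes them.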
######################### Again, the ONLY difference from a foreign-network / HTTP-proxy setup is pulling those images.
##########No flags are needed below; the defaults are fine. The default advertise address is the internal (private) IP.
####Important: all k8s machines must be in the same Aliyun region AND the same zone.
kubeadm init

rm -f $HOME/.kube/config
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
#docker images
#docker ps
# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
kubectl get po -A

kubeadm join 172.19.1.193:6443 --token vg3uyn.37zies0xxm3bgm9r         --discovery-token-ca-cert-hash sha256:109645b4faaae35e16b68a448ef69d7bb9cdb03a832adfb4e04875d0f071cef8




kubeadm reset

############################################################################################################################################
############################################################################################################################################
############################################################################################################################################


###########Problem: diagnosed with the commands below
# Events:
#   Type     Reason            Age                From               Message
#   ----     ------            ----               ----               -------
#   Warning  FailedScheduling  65s (x2 over 66s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate.
#Literally: the node has a taint the pod does not tolerate. After kubectl get no -o yaml | grep taint -A 5, the node turned out to be unschedulable. (Note that the disk-pressure taint above is set automatically by the kubelet when disk space runs low and clears once space is freed; it is distinct from the master taint, which kubernetes applies by default so that, for safety, pods are not scheduled on the master.) To allow scheduling on the master:

kubectl taint nodes --all node-role.kubernetes.io/master-
##########Still not working: a worker node must be joined before the dashboard will install##############
####Useless: repeatedly deleting and re-creating the dashboard gets you nowhere.
kubectl delete deploy xxxxxxx
kubectl delete svc xxxxxxx
########################

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml

#kubectl proxy --address='0.0.0.0'  --accept-hosts='^*$'
#Compared with the command below, kubectl proxy is not worth it: more hassle, and it needs an nginx reverse proxy on top. port-forward below covers what nginx would have done.
kubectl port-forward --namespace kubernetes-dashboard --address 0.0.0.0 service/kubernetes-dashboard 443:443
#Open in a browser
https://47.102.187.234

#View / use the default token
kubectl get secret -n=kube-system
kubectl describe secret -n=kube-system default-xxxx-xxxxxxxxxxx
#Add a user and use its cluster-admin token
cat > dashboard-admin.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
kubectl apply -f dashboard-admin.yaml
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
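The `describe secret` output above is verbose when all that's needed is the token value. A sketch of picking out just that line with awk; sample text stands in for the real kubectl output here, and the `token:` field name follows kubectl's describe format:

```shell
# Extract only the token value from `kubectl describe secret` style output.
describe_output='Name:         admin-user-token-abc12
Type:         kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
token:      eyJhbGciOiJSUzI1NiIs.sample.payload'
token=$(echo "$describe_output" | awk '/^token:/ {print $2}')
echo "$token"
```

Paste the printed value straight into the dashboard login form.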


kubectl port-forward --namespace kubernetes-dashboard --address 0.0.0.0 service/kubernetes-dashboard 443:443







##########Problem 1:
# [kubelet-check] Initial timeout of 40s passed.
# [kubelet-check] It seems like the kubelet isn't running or healthy.
# [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

######### Cause: cgroup driver mismatch between docker and the kubelet. Sticking with the default cgroupfs on both sides works; a sloppy tutorial found via Baidu had changed docker's default driver.

docker info | grep Cgroup   # check which cgroup driver is in use
#Edit the file (it does not exist by default; the earlier contents were copied from the internet)
vim /etc/docker/daemon.json

cat > /etc/docker/daemon.json  <<EOF
EOF
kubeadm reset
kubeadm init 


#Error: node NotReady
#Inspect: kubectl describe nodes xxx   -- shows nothing wrong
#Back to pods: kubectl get po -A   -- many pods have problems; describe them one by one
: <<'EOF'   # no-op heredoc: sample events output
Events:
  Type     Reason                  Age                    From               Message
  ----     ------                  ----                   ----               -------
  Normal   Scheduled               22m                    default-scheduler  Successfully assigned kube-system/kube-proxy-wkdkp to ceshi1
  Warning  FailedCreatePodSandBox  7m24s (x28 over 22m)   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.4.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Normal   BackOff                 2m31s (x6 over 5m15s)  kubelet            Back-off pulling image "k8s.gcr.io/kube-proxy:v1.21.2"
EOF

################The pod was handed over to node ceshi1 -- this concept matters: images already downloaded on the master (via a proxy) have to be downloaded again on the node, because the pod actually runs on the node.








######################istio example
#These two commands are the usual starting point for troubleshooting
kubectl get nodes
kubectl get po -A

curl -L https://istio.io/downloadIstio | sh -
ls
cd istio-1.10.1
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo -y
kubectl label namespace default istio-injection=enabled
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl get pods
####Look at all the images. Key point: with a single worker node, the master still holds istio images; with two workers everything simply runs on the workers, and services with multiple replicas are spread across them -- e.g. istio/examples-bookinfo-reviews-v3 with two replicas runs one on each machine.
docker images -a        #run this on the worker node
#kubectl get pods
kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -s productpage:9080/productpage | grep -o "<title>.*</title>"
kubectl get svc istio-ingressgateway -n istio-system
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
echo "$GATEWAY_URL"
echo "http://$GATEWAY_URL/productpage"
#http://172.19.186.77:31500/productpage
#This is the internal IP of one of the worker nodes; with Minikube, and with a master+worker two-node setup, it showed the master's internal IP instead.
kubectl get svc
#To reach it from an external browser, only the single command below is needed (this cost me a long time before: I tried kubectl expose, an nginx forward proxy, and several other approaches)
kubectl port-forward --address 0.0.0.0 service/productpage 7080:9080
