Overview
Common Kubernetes commands
The examples below use the output of a single-node KubeSphere cluster.
Query the cluster nodes
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 Ready control-plane,master,worker 16h v1.22.12
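For more detail on a node, -o wide adds the internal IP, OS image, and container runtime, and kubectl describe node shows capacity, conditions, and allocated resources (commands only; output omitted here):
# kubectl get nodes -o wide
# kubectl describe node node1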
List pods in all namespaces. Without -A, kubectl shows only the default namespace; add -n <namespace> to select a specific namespace.
# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-846ddd49bc-srmvm 1/1 Running 0 16h
kube-system calico-node-dvcwm 1/1 Running 0 16h
kube-system coredns-558b97598-czbk8 1/1 Running 0 16h
kube-system coredns-558b97598-w7jff 1/1 Running 0 16h
kube-system kube-apiserver-node1 1/1 Running 0 16h
kube-system kube-controller-manager-node1 1/1 Running 3 (16h ago) 16h
kube-system kube-proxy-hg7jh 1/1 Running 0 16h
kube-system kube-scheduler-node1 1/1 Running 3 (16h ago) 16h
kube-system nodelocaldns-czd9c 1/1 Running 0 16h
kube-system openebs-localpv-provisioner-6f54869bc7-gvxq2 1/1 Running 4 (16h ago) 16h
kube-system snapshot-controller-0 1/1 Running 0 16h
kubesphere-controls-system default-http-backend-59d5cf569f-l5krv 1/1 Running 0 16h
kubesphere-controls-system kubectl-admin-7ffdf4596b-vwxvl 1/1 Running 0 16h
kubesphere-monitoring-system alertmanager-main-0 0/2 Pending 0 16h
kubesphere-monitoring-system kube-state-metrics-5474f8f7b-sfslv 3/3 Running 0 16h
kubesphere-monitoring-system node-exporter-r9svx 2/2 Running 0 16h
kubesphere-monitoring-system notification-manager-deployment-7b586bd8fb-tsn7h 2/2 Running 0 16h
kubesphere-monitoring-system notification-manager-operator-64ff97cb98-z58zq 2/2 Running 1 (16h ago) 16h
kubesphere-monitoring-system prometheus-k8s-0 0/2 Pending 0 16h
kubesphere-monitoring-system prometheus-operator-64b7b4db85-g4qgr 2/2 Running 0 16h
kubesphere-system ks-apiserver-6f6c79f4c4-lt2qc 1/1 Running 0 16h
kubesphere-system ks-console-7dbb8655fb-426h5 1/1 Running 0 16h
kubesphere-system ks-controller-manager-646cc9cdcb-d6b22 1/1 Running 0 16h
kubesphere-system ks-installer-d6dcd67b9-ksppf 1/1 Running 0 16h
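To list pods in a single namespace instead, pass -n, for example:
# kubectl get pod -n kube-system
Adding -o wide, as below, also shows each pod's IP and the node it runs on.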
# kubectl get pods -o wide -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-846ddd49bc-srmvm 1/1 Running 0 17h 10.233.90.5 node1 <none> <none>
kube-system calico-node-dvcwm 1/1 Running 0 17h 10.1.1.10 node1 <none> <none>
kube-system coredns-558b97598-czbk8 1/1 Running 0 17h 10.233.90.2 node1 <none> <none>
kube-system coredns-558b97598-w7jff 1/1 Running 0 17h 10.233.90.3 node1 <none> <none>
kube-system kube-apiserver-node1 1/1 Running 0 17h 10.1.1.10 node1 <none> <none>
kube-system kube-controller-manager-node1 1/1 Running 3 (17h ago) 17h 10.1.1.10 node1 <none> <none>
kube-system kube-proxy-hg7jh 1/1 Running 0 17h 10.1.1.10 node1 <none> <none>
kube-system kube-scheduler-node1 1/1 Running 3 (17h ago) 17h 10.1.1.10 node1 <none> <none>
kube-system nodelocaldns-czd9c 1/1 Running 0 17h 10.1.1.10 node1 <none> <none>
kube-system openebs-localpv-provisioner-6f54869bc7-gvxq2 1/1 Running 4 (17h ago) 17h 10.233.90.1 node1 <none> <none>
kube-system snapshot-controller-0 1/1 Running 0 17h 10.233.90.6 node1 <none> <none>
kubesphere-controls-system default-http-backend-59d5cf569f-l5krv 1/1 Running 0 17h 10.233.90.8 node1 <none> <none>
kubesphere-controls-system kubectl-admin-7ffdf4596b-vwxvl 1/1 Running 0 17h 10.233.90.15 node1 <none> <none>
kubesphere-monitoring-system alertmanager-main-0 0/2 Pending 0 17h <none> <none> <none> <none>
kubesphere-monitoring-system kube-state-metrics-5474f8f7b-sfslv 3/3 Running 0 17h 10.233.90.10 node1 <none> <none>
kubesphere-monitoring-system node-exporter-r9svx 2/2 Running 0 17h 10.1.1.10 node1 <none> <none>
kubesphere-monitoring-system notification-manager-deployment-7b586bd8fb-tsn7h 2/2 Running 0 17h 10.233.90.12 node1 <none> <none>
kubesphere-monitoring-system notification-manager-operator-64ff97cb98-z58zq 2/2 Running 1 (17h ago) 17h 10.233.90.11 node1 <none> <none>
kubesphere-monitoring-system prometheus-k8s-0 0/2 Pending 0 17h <none> <none> <none> <none>
kubesphere-monitoring-system prometheus-operator-64b7b4db85-g4qgr 2/2 Running 0 17h 10.233.90.9 node1 <none> <none>
kubesphere-system ks-apiserver-6f6c79f4c4-lt2qc 1/1 Running 0 17h 10.233.90.14 node1 <none> <none>
kubesphere-system ks-console-7dbb8655fb-426h5 1/1 Running 0 17h 10.233.90.7 node1 <none> <none>
kubesphere-system ks-controller-manager-646cc9cdcb-d6b22 1/1 Running 0 17h 10.233.90.13 node1 <none> <none>
kubesphere-system ks-installer-d6dcd67b9-ksppf 1/1 Running 0 17h 10.233.90.4 node1 <none> <none>
Watch pods for changes
# kubectl get pod -o wide -w -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-846ddd49bc-srmvm 1/1 Running 0 17h 10.233.90.5 node1 <none> <none>
kube-system calico-node-dvcwm 1/1 Running 0 17h 10.1.1.10 node1 <none> <none>
kube-system coredns-558b97598-czbk8 1/1 Running 0 17h 10.233.90.2 node1 <none> <none>
kube-system coredns-558b97598-w7jff 1/1 Running 0 17h 10.233.90.3 node1 <none> <none>
kube-system kube-apiserver-node1 1/1 Running 0 17h 10.1.1.10 node1 <none> <none>
kube-system kube-controller-manager-node1 1/1 Running 3 (17h ago) 17h 10.1.1.10 node1 <none> <none>
kube-system kube-proxy-hg7jh 1/1 Running 0 17h 10.1.1.10 node1 <none> <none>
kube-system kube-scheduler-node1 1/1 Running 3 (17h ago) 17h 10.1.1.10 node1 <none> <none>
kube-system nodelocaldns-czd9c 1/1 Running 0 17h 10.1.1.10 node1 <none> <none>
kube-system openebs-localpv-provisioner-6f54869bc7-gvxq2 1/1 Running 4 (17h ago) 17h 10.233.90.1 node1 <none> <none>
kube-system snapshot-controller-0 1/1 Running 0 17h 10.233.90.6 node1 <none> <none>
kubesphere-controls-system default-http-backend-59d5cf569f-l5krv 1/1 Running 0 17h 10.233.90.8 node1 <none> <none>
kubesphere-controls-system kubectl-admin-7ffdf4596b-vwxvl 1/1 Running 0 17h 10.233.90.15 node1 <none> <none>
kubesphere-monitoring-system alertmanager-main-0 0/2 Pending 0 17h <none> <none> <none> <none>
kubesphere-monitoring-system kube-state-metrics-5474f8f7b-sfslv 3/3 Running 0 17h 10.233.90.10 node1 <none> <none>
kubesphere-monitoring-system node-exporter-r9svx 2/2 Running 0 17h 10.1.1.10 node1 <none> <none>
kubesphere-monitoring-system notification-manager-deployment-7b586bd8fb-tsn7h 2/2 Running 0 17h 10.233.90.12 node1 <none> <none>
kubesphere-monitoring-system notification-manager-operator-64ff97cb98-z58zq 2/2 Running 1 (17h ago) 17h 10.233.90.11 node1 <none> <none>
kubesphere-monitoring-system prometheus-k8s-0 0/2 Pending 0 17h <none> <none> <none> <none>
kubesphere-monitoring-system prometheus-operator-64b7b4db85-g4qgr 2/2 Running 0 17h 10.233.90.9 node1 <none> <none>
kubesphere-system ks-apiserver-6f6c79f4c4-lt2qc 1/1 Running 0 17h 10.233.90.14 node1 <none> <none>
kubesphere-system ks-console-7dbb8655fb-426h5 1/1 Running 0 17h 10.233.90.7 node1 <none> <none>
kubesphere-system ks-controller-manager-646cc9cdcb-d6b22 1/1 Running 0 17h 10.233.90.13 node1 <none> <none>
kubesphere-system ks-installer-d6dcd67b9-ksppf 1/1 Running 0 17h 10.233.90.4 node1 <none> <none>
View pods and all services together
# kubectl get pods,services -o wide -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system pod/calico-kube-controllers-846ddd49bc-srmvm 1/1 Running 0 17h 10.233.90.5 node1 <none> <none>
kube-system pod/calico-node-dvcwm 1/1 Running 0 17h 10.1.1.10 node1 <none> <none>
kube-system pod/coredns-558b97598-czbk8 1/1 Running 0 17h 10.233.90.2 node1 <none> <none>
kube-system pod/coredns-558b97598-w7jff 1/1 Running 0 17h 10.233.90.3 node1 <none> <none>
kube-system pod/kube-apiserver-node1 1/1 Running 0 17h 10.1.1.10 node1 <none> <none>
kube-system pod/kube-controller-manager-node1 1/1 Running 3 (17h ago) 17h 10.1.1.10 node1 <none> <none>
kube-system pod/kube-proxy-hg7jh 1/1 Running 0 17h 10.1.1.10 node1 <none> <none>
kube-system pod/kube-scheduler-node1 1/1 Running 3 (17h ago) 17h 10.1.1.10 node1 <none> <none>
kube-system pod/nodelocaldns-czd9c 1/1 Running 0 17h 10.1.1.10 node1 <none> <none>
kube-system pod/openebs-localpv-provisioner-6f54869bc7-gvxq2 1/1 Running 4 (17h ago) 17h 10.233.90.1 node1 <none> <none>
kube-system pod/snapshot-controller-0 1/1 Running 0 17h 10.233.90.6 node1 <none> <none>
kubesphere-controls-system pod/default-http-backend-59d5cf569f-l5krv 1/1 Running 0 17h 10.233.90.8 node1 <none> <none>
kubesphere-controls-system pod/kubectl-admin-7ffdf4596b-vwxvl 1/1 Running 0 17h 10.233.90.15 node1 <none> <none>
kubesphere-monitoring-system pod/alertmanager-main-0 0/2 Pending 0 17h <none> <none> <none> <none>
kubesphere-monitoring-system pod/kube-state-metrics-5474f8f7b-sfslv 3/3 Running 0 17h 10.233.90.10 node1 <none> <none>
kubesphere-monitoring-system pod/node-exporter-r9svx 2/2 Running 0 17h 10.1.1.10 node1 <none> <none>
kubesphere-monitoring-system pod/notification-manager-deployment-7b586bd8fb-tsn7h 2/2 Running 0 17h 10.233.90.12 node1 <none> <none>
kubesphere-monitoring-system pod/notification-manager-operator-64ff97cb98-z58zq 2/2 Running 1 (17h ago) 17h 10.233.90.11 node1 <none> <none>
kubesphere-monitoring-system pod/prometheus-k8s-0 0/2 Pending 0 17h <none> <none> <none> <none>
kubesphere-monitoring-system pod/prometheus-operator-64b7b4db85-g4qgr 2/2 Running 0 17h 10.233.90.9 node1 <none> <none>
kubesphere-system pod/ks-apiserver-6f6c79f4c4-lt2qc 1/1 Running 0 17h 10.233.90.14 node1 <none> <none>
kubesphere-system pod/ks-console-7dbb8655fb-426h5 1/1 Running 0 17h 10.233.90.7 node1 <none> <none>
kubesphere-system pod/ks-controller-manager-646cc9cdcb-d6b22 1/1 Running 0 17h 10.233.90.13 node1 <none> <none>
kubesphere-system pod/ks-installer-d6dcd67b9-ksppf 1/1 Running 0 17h 10.233.90.4 node1 <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 17h <none>
kube-system service/coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 17h k8s-app=kube-dns
kube-system service/kube-controller-manager-svc ClusterIP None <none> 10257/TCP 17h component=kube-controller-manager
kube-system service/kube-scheduler-svc ClusterIP None <none> 10259/TCP 17h component=kube-scheduler
kube-system service/kubelet ClusterIP None <none> 10250/TCP,10255/TCP,4194/TCP 17h <none>
kubesphere-controls-system service/default-http-backend ClusterIP 10.233.61.93 <none> 80/TCP 17h app=kubesphere,component=kubesphere-router
kubesphere-monitoring-system service/alertmanager-main ClusterIP 10.233.57.209 <none> 9093/TCP,8080/TCP 17h app.kubernetes.io/component=alert-router,app.kubernetes.io/instance=main,app.kubernetes.io/name=alertmanager,app.kubernetes.io/part-of=kube-prometheus
kubesphere-monitoring-system service/alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 17h app.kubernetes.io/name=alertmanager
kubesphere-monitoring-system service/kube-state-metrics ClusterIP None <none> 8443/TCP,9443/TCP 17h app.kubernetes.io/component=exporter,app.kubernetes.io/name=kube-state-metrics,app.kubernetes.io/part-of=kube-prometheus
kubesphere-monitoring-system service/node-exporter ClusterIP None <none> 9100/TCP 17h app.kubernetes.io/component=exporter,app.kubernetes.io/name=node-exporter,app.kubernetes.io/part-of=kube-prometheus
kubesphere-monitoring-system service/notification-manager-controller-metrics ClusterIP 10.233.61.40 <none> 8443/TCP 17h control-plane=controller-manager
kubesphere-monitoring-system service/notification-manager-svc ClusterIP 10.233.29.95 <none> 19093/TCP 17h app=notification-manager,notification-manager=notification-manager
kubesphere-monitoring-system service/notification-manager-webhook ClusterIP 10.233.47.30 <none> 443/TCP 17h control-plane=controller-manager
kubesphere-monitoring-system service/prometheus-k8s ClusterIP 10.233.57.30 <none> 9090/TCP,8080/TCP 17h app.kubernetes.io/component=prometheus,app.kubernetes.io/instance=k8s,app.kubernetes.io/name=prometheus,app.kubernetes.io/part-of=kube-prometheus
kubesphere-monitoring-system service/prometheus-operated ClusterIP None <none> 9090/TCP 17h app.kubernetes.io/name=prometheus
kubesphere-monitoring-system service/prometheus-operator ClusterIP None <none> 8443/TCP 17h app.kubernetes.io/component=controller,app.kubernetes.io/name=prometheus-operator,app.kubernetes.io/part-of=kube-prometheus
kubesphere-system service/ks-apiserver ClusterIP 10.233.7.37 <none> 80/TCP 17h app=ks-apiserver,tier=backend
kubesphere-system service/ks-console NodePort 10.233.32.173 <none> 80:30880/TCP 17h app=ks-console,tier=frontend
kubesphere-system service/ks-controller-manager ClusterIP 10.233.49.71 <none> 443/TCP 17h app=ks-controller-manager,tier=backend
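Note that ks-console above is a NodePort service mapping port 80 to 30880, so the KubeSphere console should be reachable on that port of the node, e.g. (assuming the node IP 10.1.1.10 from the listings above):
# curl http://10.1.1.10:30880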
View a pod's runtime environment variables
# kubectl exec node-exporter-r9svx -n kubesphere-monitoring-system -- env
Defaulted container "node-exporter" out of: node-exporter, kube-rbac-proxy
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=node1
PROMETHEUS_K8S_PORT_9090_TCP_PORT=9090
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.233.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.233.0.1
PROMETHEUS_K8S_SERVICE_PORT=9090
PROMETHEUS_K8S_SERVICE_PORT_RELOADER_WEB=8080
PROMETHEUS_K8S_PORT_9090_TCP=tcp://10.233.57.30:9090
PROMETHEUS_K8S_PORT_8080_TCP=tcp://10.233.57.30:8080
PROMETHEUS_K8S_PORT_8080_TCP_PORT=8080
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_PORT=443
PROMETHEUS_K8S_PORT_8080_TCP_PROTO=tcp
PROMETHEUS_K8S_PORT_8080_TCP_ADDR=10.233.57.30
KUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443
PROMETHEUS_K8S_PORT_9090_TCP_PROTO=tcp
PROMETHEUS_K8S_PORT_9090_TCP_ADDR=10.233.57.30
KUBERNETES_SERVICE_HOST=10.233.0.1
PROMETHEUS_K8S_SERVICE_HOST=10.233.57.30
PROMETHEUS_K8S_SERVICE_PORT_WEB=9090
PROMETHEUS_K8S_PORT=tcp://10.233.57.30:9090
HOME=/home
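Besides env, kubectl exec can run any command in a container; -it attaches an interactive terminal, e.g. to open a shell (a sketch, assuming the image ships /bin/sh):
# kubectl exec -it node-exporter-r9svx -c node-exporter -n kubesphere-monitoring-system -- /bin/sh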
Describe a pod in detail (-n is required to select the pod's namespace)
# kubectl describe pod alertmanager-main-0 -n kubesphere-monitoring-system
Name: alertmanager-main-0
Namespace: kubesphere-monitoring-system
Priority: 0
Node: <none>
Labels: alertmanager=main
app.kubernetes.io/component=alert-router
app.kubernetes.io/instance=main
app.kubernetes.io/managed-by=prometheus-operator
app.kubernetes.io/name=alertmanager
app.kubernetes.io/part-of=kube-prometheus
app.kubernetes.io/version=0.23.0
controller-revision-hash=alertmanager-main-6d966d94b5
statefulset.kubernetes.io/pod-name=alertmanager-main-0
Annotations: kubectl.kubernetes.io/default-container: alertmanager
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/alertmanager-main
Containers:
alertmanager:
Image: dockerhub.kubekey.local/kubesphereio/alertmanager:v0.23.0
Ports: 9093/TCP, 9094/TCP, 9094/UDP
Host Ports: 0/TCP, 0/TCP, 0/UDP
Args:
--config.file=/etc/alertmanager/config/alertmanager.yaml
--storage.path=/alertmanager
--data.retention=120h
--cluster.listen-address=
--web.listen-address=:9093
--web.route-prefix=/
--cluster.peer=alertmanager-main-0.alertmanager-operated:9094
--cluster.reconnect-timeout=5m
Limits:
cpu: 200m
memory: 200Mi
Requests:
cpu: 20m
memory: 30Mi
Liveness: http-get http://:web/-/healthy delay=0s timeout=3s period=10s #success=1 #failure=10
Readiness: http-get http://:web/-/ready delay=3s timeout=3s period=5s #success=1 #failure=10
Environment:
POD_IP: (v1:status.podIP)
Mounts:
/alertmanager from alertmanager-main-db (rw)
/etc/alertmanager/certs from tls-assets (ro)
/etc/alertmanager/config from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z5rwd (ro)
config-reloader:
Image: dockerhub.kubekey.local/kubesphereio/prometheus-config-reloader:v0.55.1
Port: 8080/TCP
Host Port: 0/TCP
Command:
/bin/prometheus-config-reloader
Args:
--listen-address=:8080
--reload-url=http://localhost:9093/-/reload
--watched-dir=/etc/alertmanager/config
Limits:
cpu: 100m
memory: 50Mi
Requests:
cpu: 100m
memory: 50Mi
Environment:
POD_NAME: alertmanager-main-0 (v1:metadata.name)
SHARD: -1
Mounts:
/etc/alertmanager/config from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z5rwd (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: Secret (a volume populated by a Secret)
SecretName: alertmanager-main-generated
Optional: false
tls-assets:
Type: Projected (a volume that contains injected data from multiple sources)
SecretName: alertmanager-main-tls-assets-0
SecretOptionalName: <nil>
alertmanager-main-db:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-z5rwd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 16h default-scheduler 0/1 nodes are available: 1 Insufficient cpu.
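The FailedScheduling event above explains why this pod is Pending: the single node cannot satisfy its CPU requests. The node's allocatable capacity and currently requested resources can be checked with:
# kubectl describe node node1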
View a pod's logs
# kubectl logs node-exporter-r9svx -c node-exporter -n kubesphere-monitoring-system
ts=2022-12-22T09:54:46.419Z caller=node_exporter.go:182 level=info msg="Starting node_exporter" version="(version=1.3.1, branch=HEAD, revision=a2321e7b940ddcff26873612bccdf7cd4c42b6b6)"
ts=2022-12-22T09:54:46.419Z caller=node_exporter.go:183 level=info msg="Build context" build_context="(go=go1.17.3, user=root@243aafa5525c, date=20211205-11:09:49)"
ts=2022-12-22T09:54:46.420Z caller=filesystem_common.go:94 level=warn collector=filesystem msg="--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
ts=2022-12-22T09:54:46.420Z caller=filesystem_common.go:103 level=warn collector=filesystem msg="--collector.filesystem.ignored-fs-types is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.fs-types-exclude"
ts=2022-12-22T09:54:46.420Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|sys|var/lib/docker/.+)($|/)
ts=2022-12-22T09:54:46.420Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:108 level=info msg="Enabled collectors"
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=arp
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=bcache
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=bonding
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=btrfs
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=conntrack
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=cpu
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=cpufreq
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=diskstats
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=dmi
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=edac
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=entropy
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=fibrechannel
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=filefd
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=filesystem
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=infiniband
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=ipvs
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=loadavg
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=mdadm
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=meminfo
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=netclass
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=netdev
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=netstat
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=nfs
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=nfsd
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=nvme
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=os
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=powersupplyclass
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=pressure
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=rapl
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=schedstat
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=sockstat
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=softnet
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=stat
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=tapestats
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=textfile
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=thermal_zone
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=time
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=timex
ts=2022-12-22T09:54:46.421Z caller=node_exporter.go:115 level=info collector=udp_queues
ts=2022-12-22T09:54:46.430Z caller=node_exporter.go:115 level=info collector=uname
ts=2022-12-22T09:54:46.430Z caller=node_exporter.go:115 level=info collector=vmstat
ts=2022-12-22T09:54:46.430Z caller=node_exporter.go:115 level=info collector=xfs
ts=2022-12-22T09:54:46.430Z caller=node_exporter.go:115 level=info collector=zfs
ts=2022-12-22T09:54:46.430Z caller=node_exporter.go:199 level=info msg="Listening on" address=127.0.0.1:9100
ts=2022-12-22T09:54:46.431Z caller=tls_config.go:195 level=info msg="TLS is disabled." http2=false
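Useful variants: -f streams the log and --tail limits it to the last N lines, for example:
# kubectl logs -f --tail=100 node-exporter-r9svx -c node-exporter -n kubesphere-monitoring-system
--previous shows the log of the last terminated instance, which helps when a container is crash-looping.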
View cluster information
# kubectl cluster-info
Kubernetes control plane is running at https://lb.kubesphere.local:6443
coredns is running at https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Check the cluster's health status
# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}
controller-manager Healthy ok
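The Unhealthy scheduler result is a known quirk: ComponentStatus (deprecated since v1.19) probes the insecure port 10251, which newer kube-scheduler builds no longer serve. The API server's own health endpoints are a more reliable signal, e.g.:
# kubectl get --raw='/readyz?verbose'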
List all deployments
# kubectl get deployment -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system calico-kube-controllers 1/1 1 1 17h
kube-system coredns 2/2 2 2 17h
kube-system openebs-localpv-provisioner 1/1 1 1 17h
kubesphere-controls-system default-http-backend 1/1 1 1 17h
kubesphere-controls-system kubectl-admin 1/1 1 1 17h
kubesphere-monitoring-system kube-state-metrics 1/1 1 1 17h
kubesphere-monitoring-system notification-manager-deployment 1/1 1 1 17h
kubesphere-monitoring-system notification-manager-operator 1/1 1 1 17h
kubesphere-monitoring-system prometheus-operator 1/1 1 1 17h
kubesphere-system ks-apiserver 1/1 1 1 17h
kubesphere-system ks-console 1/1 1 1 17h
kubesphere-system ks-controller-manager 1/1 1 1 17h
kubesphere-system ks-installer 1/1 1 1 17h
Watch a deployment for updates
# kubectl get deployments coredns -n kube-system -w
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 2/2 2 2 17h
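Rollout progress can also be followed with the rollout subcommands, for example:
# kubectl rollout status deployment/coredns -n kube-system
# kubectl rollout history deployment/coredns -n kube-system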
View events
# kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
49s Normal Synced clusterrolebinding/admin-cluster-admin ClusterRoleBinding synced successfully
48s Normal Synced loginrecord/admin-cskmv LoginRecord synced successfully
48s Normal Synced loginrecord/admin-q87zb LoginRecord synced successfully
49s Normal Synced globalrolebinding/admin GlobalRoleBinding synced successfully
49s Normal Synced globalrolebinding/anonymous GlobalRoleBinding synced successfully
49s Normal Synced globalrolebinding/authenticated GlobalRoleBinding synced successfully
49s Normal Synced clusterrolebinding/calico-kube-controllers ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/calico-node ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/cluster-admin ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/ks-installer ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/kubeadm:get-nodes ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/kubeadm:kubelet-bootstrap ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/kubeadm:node-autoapprove-bootstrap ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/kubeadm:node-autoapprove-certificate-rotation ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/kubeadm:node-proxier ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/kubesphere-kube-state-metrics ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/kubesphere-node-exporter ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/kubesphere-prometheus-k8s ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/kubesphere-prometheus-operator ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/kubesphere ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/notification-manager-controller-rolebinding ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/notification-manager-proxy-rolebinding ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/notification-manager-tenant-sidecar-rolebinding ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/openebs-maya-operator ClusterRoleBinding synced successfully
49s Normal Synced globalrolebinding/pre-registration GlobalRoleBinding synced successfully
49s Normal Synced clusterrolebinding/snapshot-controller-role ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:basic-user ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:attachdetach-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:certificate-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:clusterrole-aggregation-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:cronjob-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:daemon-set-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:deployment-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:disruption-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:endpoint-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:endpointslice-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:endpointslicemirroring-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:ephemeral-volume-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:expand-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:generic-garbage-collector ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:horizontal-pod-autoscaler ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:job-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:namespace-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:node-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:persistent-volume-binder ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:pod-garbage-collector ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:pv-protection-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:pvc-protection-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:replicaset-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:replication-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:resourcequota-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:root-ca-cert-publisher ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:route-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:service-account-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:service-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:statefulset-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:ttl-after-finished-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:controller:ttl-controller ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:coredns ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:discovery ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:kube-controller-manager ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:kube-dns ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:kube-scheduler ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:kubesphere-cluster-admin ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:monitoring ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:nginx-ingress-clusterrole-nisa-binding ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:node-proxier ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:node ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:public-info-viewer ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:service-account-issuer-discovery ClusterRoleBinding synced successfully
49s Normal Synced clusterrolebinding/system:volume-scheduler ClusterRoleBinding synced successfully
50m Normal Synced namespace/test Synced successfully
39m Normal Synced namespace/test Synced successfully
33m Normal Synced namespace/test Synced successfully
28m Normal Synced namespace/test Synced successfully
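Events are returned in arbitrary order by default; sorting by creation time makes recent problems easier to spot:
# kubectl get events -A --sort-by=.metadata.creationTimestamp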
View a pod's full YAML
# kubectl get pods node-exporter-r9svx -n kubesphere-monitoring-system -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-12-22T09:54:17Z"
  generateName: node-exporter-
  labels:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/name: node-exporter
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 1.3.1
    controller-revision-hash: 57fb78877b
    pod-template-generation: "1"
  name: node-exporter-r9svx
  namespace: kubesphere-monitoring-system
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: DaemonSet
    name: node-exporter
    uid: 72151bcf-302e-412d-a9da-90ed5957ae04
  resourceVersion: "2531"
  uid: 0b479599-fce0-462f-a0a9-01e4efa87e9d
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchFields:
          - key: metadata.name
            operator: In
            values:
            - node1
  containers:
  - args:
    - --web.listen-address=127.0.0.1:9100
    - --path.procfs=/host/proc
    - --path.sysfs=/host/sys
    - --path.rootfs=/host/root
    - --no-collector.wifi
    - --no-collector.hwmon
    - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
    - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
    image: dockerhub.kubekey.local/kubesphereio/node-exporter:v1.3.1
    imagePullPolicy: IfNotPresent
    name: node-exporter
    resources:
      limits:
        cpu: "1"
        memory: 500Mi
      requests:
        cpu: 102m
        memory: 180Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /host/proc
      name: proc
      readOnly: true
    - mountPath: /host/sys
      name: sys
      readOnly: true
    - mountPath: /host/root
      mountPropagation: HostToContainer
      name: root
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-2zhsk
      readOnly: true
  - args:
    - --logtostderr
    - --secure-listen-address=[$(IP)]:9100
    - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
    - --upstream=http://127.0.0.1:9100/
    env:
    - name: IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    image: dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.11.0
    imagePullPolicy: IfNotPresent
    name: kube-rbac-proxy
    ports:
    - containerPort: 9100
      hostPort: 9100
      name: https
      protocol: TCP
    resources:
      limits:
        cpu: "1"
        memory: 100Mi
      requests:
        cpu: 10m
        memory: 20Mi
    securityContext:
      runAsGroup: 65532
      runAsNonRoot: true
      runAsUser: 65532
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-2zhsk
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostNetwork: true
  hostPID: true
  nodeName: node1
  nodeSelector:
    kubernetes.io/os: linux
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    runAsNonRoot: true
    runAsUser: 65534
  serviceAccount: node-exporter
  serviceAccountName: node-exporter
  terminationGracePeriodSeconds: 30
  tolerations:
  - operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/disk-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/pid-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/unschedulable
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/network-unavailable
    operator: Exists
  volumes:
  - hostPath:
      path: /proc
      type: ""
    name: proc
  - hostPath:
      path: /sys
      type: ""
    name: sys
  - hostPath:
      path: /
      type: ""
    name: root
  - name: kube-api-access-2zhsk
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-12-22T09:54:20Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-12-22T09:55:10Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-12-22T09:55:10Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-12-22T09:54:20Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://24f73b83600370fae47e93285b208aeb51a100e8de351c258678400700001e5c
    image: dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.11.0
    imageID: docker-pullable://dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy@sha256:9739e288f351b5839f4d8af506a2b8fc66769f55ae2672085356e275c809cba3
    lastState: {}
    name: kube-rbac-proxy
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2022-12-22T09:55:07Z"
  - containerID: docker://1827bab5f2076d0e85649a3a56341747f8a4b9bccfba00ca9cb8dc9708488f0e
    image: dockerhub.kubekey.local/kubesphereio/node-exporter:v1.3.1
    imageID: docker-pullable://dockerhub.kubekey.local/kubesphereio/node-exporter@sha256:ba742564ae074aec7a569dde0ccc6557a8e5de6e0394ad87932b7544d2d0d389
    lastState: {}
    name: node-exporter
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2022-12-22T09:54:46Z"
  hostIP: 10.1.1.10
  phase: Running
  podIP: 10.1.1.10
  podIPs:
  - ip: 10.1.1.10
  qosClass: Burstable
  startTime: "2022-12-22T09:54:20Z"
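To extract a single field instead of the whole object, -o jsonpath works well, e.g. the pod IP:
# kubectl get pod node-exporter-r9svx -n kubesphere-monitoring-system -o jsonpath='{.status.podIP}'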
Check the Kubernetes version
# kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.12", GitCommit:"b058e1760c79f46a834ba59bd7a3486ecf28237d", GitTreeState:"clean", BuildDate:"2022-07-13T14:59:18Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.12", GitCommit:"b058e1760c79f46a834ba59bd7a3486ecf28237d", GitTreeState:"clean", BuildDate:"2022-07-13T14:53:39Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
# kubectl version --short=true
Client Version: v1.22.12
Server Version: v1.22.12
View API version information
# kubectl api-versions
admissionregistration.k8s.io/v1
apiextensions.k8s.io/v1
apiregistration.k8s.io/v1
app.k8s.io/v1beta1
application.kubesphere.io/v1alpha1
apps/v1
authentication.k8s.io/v1
authorization.k8s.io/v1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1
cluster.kubesphere.io/v1alpha1
coordination.k8s.io/v1
crd.projectcalico.org/v1
discovery.k8s.io/v1
discovery.k8s.io/v1beta1
events.k8s.io/v1
events.k8s.io/v1beta1
flowcontrol.apiserver.k8s.io/v1beta1
gateway.kubesphere.io/v1alpha1
iam.kubesphere.io/v1alpha2
installer.kubesphere.io/v1alpha1
monitoring.coreos.com/v1
monitoring.coreos.com/v1alpha1
monitoring.kubesphere.io/v1alpha1
monitoring.kubesphere.io/v1alpha2
network.kubesphere.io/v1alpha1
networking.k8s.io/v1
node.k8s.io/v1
node.k8s.io/v1beta1
notification.kubesphere.io/v2beta1
notification.kubesphere.io/v2beta2
policy/v1
policy/v1beta1
quota.kubesphere.io/v1alpha2
rbac.authorization.k8s.io/v1
scheduling.k8s.io/v1
servicemesh.kubesphere.io/v1alpha2
snapshot.storage.k8s.io/v1
snapshot.storage.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
storage.kubesphere.io/v1alpha1
tenant.kubesphere.io/v1alpha1
tenant.kubesphere.io/v1alpha2
v1
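Related discovery commands: kubectl api-resources lists resource kinds with their short names, and kubectl explain documents a field, for example:
# kubectl api-resources
# kubectl explain pod.spec.containers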
View the kubeconfig configuration
# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://lb.kubesphere.local:6443
  name: cluster.local
contexts:
- context:
    cluster: cluster.local
    user: kubernetes-admin
  name: kubernetes-admin@cluster.local
current-context: kubernetes-admin@cluster.local
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
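The current context can be inspected and switched with:
# kubectl config current-context
# kubectl config use-context kubernetes-admin@cluster.local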
Create a new namespace
# kubectl create namespace test
namespace/test created
# kubectl get namespace
NAME STATUS AGE
default Active 17h
kube-node-lease Active 17h
kube-public Active 17h
kube-system Active 17h
kubekey-system Active 17h
kubesphere-controls-system Active 16h
kubesphere-monitoring-federated Active 16h
kubesphere-monitoring-system Active 17h
kubesphere-system Active 17h
test Active 13s
# kubectl get ns
NAME STATUS AGE
default Active 17h
kube-node-lease Active 17h
kube-public Active 17h
kube-system Active 17h
kubekey-system Active 17h
kubesphere-controls-system Active 16h
kubesphere-monitoring-federated Active 16h
kubesphere-monitoring-system Active 17h
kubesphere-system Active 17h
test Active 46s
Run an nginx pod in the test namespace
# kubectl run pod --image=nginx -n test
pod/pod created
# kubectl get pod -n test
NAME READY STATUS RESTARTS AGE
pod 1/1 Running 0 37s
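To reach the nginx pod from the local machine without creating a Service, port-forward maps a local port to the pod (the command runs until interrupted; 8080 is an arbitrary local port):
# kubectl port-forward pod/pod 8080:80 -n test
# curl http://127.0.0.1:8080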
View the new pod's details
# kubectl describe pod pod -n test
Name: pod
Namespace: test
Priority: 0
Node: node1/10.1.1.10
Start Time: Thu, 22 Dec 2022 21:44:32 -0500
Labels: run=pod
Annotations: cni.projectcalico.org/containerID: c3bfb686103ba79787679c76c4e0374f6332ecbf5fe0dabfcb23e3de296698a0
cni.projectcalico.org/podIP: 10.233.90.16/32
cni.projectcalico.org/podIPs: 10.233.90.16/32
Status: Running
IP: 10.233.90.16
IPs:
IP: 10.233.90.16
Containers:
pod:
Container ID: docker://20820d0f876a86033d6e15757cac13cbe73a49d195101fc79aaad72d1adf405d
Image: nginx
Image ID: docker-pullable://nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 22 Dec 2022 21:45:04 -0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cwdq5 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-cwdq5:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 91s default-scheduler Successfully assigned test/pod to node1
Normal Pulling 83s kubelet Pulling image "nginx"
Normal Pulled 64s kubelet Successfully pulled image "nginx" in 19.071692482s
Normal Created 61s kubelet Created container pod
Normal Started 60s kubelet Started container pod
Delete a pod
# kubectl delete pods pod -n test
pod "pod" deleted
# kubectl get pod -n test
No resources found in test namespace.
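Pods can also be deleted by label instead of name; kubectl run added the run=pod label seen in the describe output above, so the same pod could have been removed with:
# kubectl delete pod -n test -l run=pod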
Create resources from a YAML file
nginx_pod.yml
apiVersion: v1
kind: Namespace
metadata:
  name: test                     # name of the Namespace to create
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nginx-pod           # name of the pod
  namespace: test                # Namespace the pod belongs to
spec:
  containers:
  - name: test-nginx-container   # run a single nginx container
    image: nginx:1.17.9          # image name and tag
Create, view, and delete the resources
# kubectl create -f nginx_pod.yml
namespace/test created
pod/test-nginx-pod created
# kubectl get ns
NAME STATUS AGE
default Active 17h
kube-node-lease Active 17h
kube-public Active 17h
kube-system Active 17h
kubekey-system Active 17h
kubesphere-controls-system Active 17h
kubesphere-monitoring-federated Active 17h
kubesphere-monitoring-system Active 17h
kubesphere-system Active 17h
test Active 9s
# kubectl get pod -n test
NAME READY STATUS RESTARTS AGE
test-nginx-pod 1/1 Running 0 14s
# kubectl describe pod test-nginx-pod -n test
Name: test-nginx-pod
Namespace: test
Priority: 0
Node: node1/10.1.1.10
Start Time: Thu, 22 Dec 2022 22:00:21 -0500
Labels: <none>
Annotations: cni.projectcalico.org/containerID: 61d247724ac33f1d4dce38140eac18daaa9f02d4c539fc53ac1b9229e5427eeb
cni.projectcalico.org/podIP: 10.233.90.19/32
cni.projectcalico.org/podIPs: 10.233.90.19/32
Status: Running
IP: 10.233.90.19
IPs:
IP: 10.233.90.19
Containers:
test-nginx-container:
Container ID: docker://2bba63869e1952ebe913761155474afb99d99e154263f67717301b7ed6eebc6b
Image: nginx:1.17.9
Image ID: docker-pullable://nginx@sha256:88ea86df324b03b3205cbf4ca0d999143656d0a3394675630e55e49044d38b50
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 22 Dec 2022 22:00:22 -0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zzmk9 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-zzmk9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13s default-scheduler Successfully assigned test/test-nginx-pod to node1
Normal Pulled 12s kubelet Container image "nginx:1.17.9" already present on machine
Normal Created 12s kubelet Created container test-nginx-container
Normal Started 12s kubelet Started container test-nginx-container
# kubectl get -f nginx_pod.yml
NAME STATUS AGE
namespace/test Active 59s
NAME READY STATUS RESTARTS AGE
pod/test-nginx-pod 1/1 Running 0 59s
# kubectl delete -f nginx_pod.yml
namespace "test" deleted
pod "test-nginx-pod" deleted
Difference between kubectl create and kubectl apply
kubectl create:
- (1) kubectl create builds each resource from scratch exactly as described in the YAML file, so the file must contain a complete definition.
- (2) Running kubectl create a second time with the same YAML file fails, because the resources already exist (AlreadyExists).
kubectl apply:
- kubectl apply updates existing resources to match what the configuration file lists, so the YAML only needs to contain the attributes being changed; resources that do not exist yet are created. A short demonstration follows.
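A quick way to observe the difference (a sketch; the exact messages vary by version):
# kubectl apply -f nginx_pod.yml
# kubectl apply -f nginx_pod.yml
# kubectl create -f nginx_pod.yml
The first apply creates the namespace and pod, the second apply succeeds and reports them unchanged, and the final create fails with an AlreadyExists error because the resources are already present.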
Create and delete resources with kubectl apply
# kubectl apply -f nginx_pod.yml
namespace/test created
pod/test-nginx-pod created
# kubectl get pod -n test
NAME READY STATUS RESTARTS AGE
test-nginx-pod 1/1 Running 0 10s
# kubectl delete -f nginx_pod.yml
namespace "test" deleted
pod "test-nginx-pod" deleted