This article explains how to extend the certificates of a kubeadm-installed Kubernetes cluster by 10 years using a script.

Overview

Preface

Kubernetes cluster certificates are the credentials the components use to authenticate to each other; once they expire, the cluster becomes unusable. Since kubeadm issues certificates with a default validity of only one year, extending them is a duty we cannot shirk.

Check the current certificate validity

[root@k8s-master ]# kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 May 19, 2022 02:32 UTC   364d                                    no      
apiserver                  May 19, 2022 02:22 UTC   364d            ca                      no      
apiserver-etcd-client      May 19, 2022 02:22 UTC   364d            etcd-ca                 no      
apiserver-kubelet-client   May 19, 2022 02:22 UTC   364d            ca                      no      
controller-manager.conf    May 19, 2022 02:32 UTC   364d                                    no      
etcd-healthcheck-client    May 19, 2022 02:22 UTC   364d            etcd-ca                 no      
etcd-peer                  May 19, 2022 02:22 UTC   364d            etcd-ca                 no      
etcd-server                May 19, 2022 02:22 UTC   364d            etcd-ca                 no      
front-proxy-client         May 19, 2022 02:22 UTC   364d            front-proxy-ca          no      
scheduler.conf             May 19, 2022 02:32 UTC   364d                                    no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jan 13, 2031 09:14 UTC   9y              no      
etcd-ca                 Jan 13, 2031 09:14 UTC   9y              no      
front-proxy-ca          Jan 13, 2031 09:14 UTC   9y              no      
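Besides `kubeadm alpha certs check-expiration`, you can inspect any certificate's expiry directly with openssl. A minimal self-contained sketch (it generates a throwaway certificate so it can run anywhere; in production you would point openssl at e.g. /etc/kubernetes/pki/apiserver.crt):

```shell
#!/bin/bash
set -euo pipefail

# Throwaway certificate so the example is self-contained; in production
# you would inspect the files under /etc/kubernetes/pki instead
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=demo" -keyout "${tmpdir}/demo.key" -out "${tmpdir}/demo.crt" 2>/dev/null

# Print the expiry date (notAfter=...)
expiry=$(openssl x509 -enddate -noout -in "${tmpdir}/demo.crt")
echo "${expiry}"

# -checkend N exits 0 if the cert is still valid N seconds from now
if openssl x509 -checkend 86400 -noout -in "${tmpdir}/demo.crt" >/dev/null; then
  status="valid for at least one more day"
else
  status="expiring within a day"
fi
echo "certificate is ${status}"
rm -rf "${tmpdir}"
```

`-checkend` makes this easy to wire into a cron job or monitoring check that alerts well before the one-year mark.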

A lucky find

Most of the resources I found either recompile kubeadm to change the certificate validity, or keep the one-year default and periodically renew with `kubeadm alpha certs renew all` and `kubeadm init phase kubeconfig all`. Luckily I came across this author's work; for someone like me who never quite mastered CA tooling, it was a real treasure. The script below extends the certificates by 10 years; the original source is https://github.com/yuyicai/update-kube-cert

Extending the certificates

The script only needs to be run on the master node. It renews the following certificates and kubeconfig files:

/etc/kubernetes
├── admin.conf
├── controller-manager.conf
├── scheduler.conf
├── kubelet.conf
└── pki
    ├── apiserver.crt
    ├── apiserver-etcd-client.crt
    ├── apiserver-kubelet-client.crt
    ├── front-proxy-client.crt
    └── etcd
        ├── healthcheck-client.crt
        ├── peer.crt
        └── server.crt
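What the script does for each of the files above boils down to: keep the existing private key, build a new CSR carrying the old subject, and have the cluster CA sign it for 3650 days. A self-contained sketch of that flow with a throwaway CA (real runs use the keys already under /etc/kubernetes/pki; names here are illustrative):

```shell
#!/bin/bash
set -euo pipefail
work=$(mktemp -d); cd "${work}"

# Throwaway CA standing in for /etc/kubernetes/pki/ca.crt and ca.key
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=demo-ca" -keyout ca.key -out ca.crt 2>/dev/null

# The script reuses the component key already on disk; simulate one
openssl genrsa -out apiserver.key 2048 2>/dev/null

# New CSR carrying the old subject, signed by the CA for 3650 days
openssl req -new -key apiserver.key -subj "/CN=kube-apiserver" -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 3650 -out apiserver.crt 2>/dev/null

# Still valid roughly ten years out (3600 days < 3650 days)
openssl x509 -checkend $((3600 * 86400)) -noout -in apiserver.crt >/dev/null \
  && echo "renewed certificate valid for ~10 years"
```

Because the private keys are reused, only the certificates change; the components pick up the new certificates after a restart.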
[root@k8s-master ]# chmod  +x update-kubeadm-cert.sh 

[root@k8s-master ]# ./update-kubeadm-cert.sh  all
[2021-05-19T13:41:58.207905547+0800]: INFO: backup /etc/kubernetes to /etc/kubernetes.old-20210519
Signature ok
subject=/CN=etcd-server
Getting CA Private Key
[2021-05-19T13:41:58.246820810+0800]: INFO: generated /etc/kubernetes/pki/etcd/server.crt
Signature ok
subject=/CN=etcd-peer
Getting CA Private Key
[2021-05-19T13:41:58.284009861+0800]: INFO: generated /etc/kubernetes/pki/etcd/peer.crt
Signature ok
subject=/O=system:masters/CN=kube-etcd-healthcheck-client
Getting CA Private Key
[2021-05-19T13:41:58.309274517+0800]: INFO: generated /etc/kubernetes/pki/etcd/healthcheck-client.crt
Signature ok
subject=/O=system:masters/CN=kube-apiserver-etcd-client
Getting CA Private Key
[2021-05-19T13:41:58.341147233+0800]: INFO: generated /etc/kubernetes/pki/apiserver-etcd-client.crt
1d131fa2fae5
[2021-05-19T13:41:58.804074417+0800]: INFO: restarted etcd
Signature ok
subject=/CN=kube-apiserver
Getting CA Private Key
[2021-05-19T13:41:58.908819578+0800]: INFO: generated /etc/kubernetes/pki/apiserver.crt
Signature ok
subject=/O=system:masters/CN=kube-apiserver-kubelet-client
Getting CA Private Key
[2021-05-19T13:41:58.977676808+0800]: INFO: generated /etc/kubernetes/pki/apiserver-kubelet-client.crt
Signature ok
subject=/CN=system:kube-controller-manager
Getting CA Private Key
[2021-05-19T13:41:59.108761696+0800]: INFO: generated /etc/kubernetes/controller-manager.crt
[2021-05-19T13:41:59.132627523+0800]: INFO: generated new /etc/kubernetes/controller-manager.conf
Signature ok
subject=/CN=system:kube-scheduler
Getting CA Private Key
[2021-05-19T13:41:59.232466354+0800]: INFO: generated /etc/kubernetes/scheduler.crt
[2021-05-19T13:41:59.243551338+0800]: INFO: generated new /etc/kubernetes/scheduler.conf
Signature ok
subject=/O=system:masters/CN=kubernetes-admin
Getting CA Private Key
[2021-05-19T13:41:59.335915735+0800]: INFO: generated /etc/kubernetes/admin.crt
[2021-05-19T13:41:59.344848003+0800]: INFO: generated new /etc/kubernetes/admin.conf
[2021-05-19T13:41:59.371736364+0800]: INFO: copy the admin.conf to ~/.kube/config for kubectl
Signature ok
subject=/O=system:nodes/CN=system:node:k8s-master
Getting CA Private Key
[2021-05-19T13:41:59.465989449+0800]: INFO: generated /etc/kubernetes/kubelet.crt
[2021-05-19T13:41:59.479313708+0800]: INFO: generated new /etc/kubernetes/kubelet.conf
Signature ok
subject=/CN=front-proxy-client
Getting CA Private Key
[2021-05-19T13:41:59.536852348+0800]: INFO: generated /etc/kubernetes/pki/front-proxy-client.crt
4baa892e89b0
[2021-05-19T13:42:02.121464114+0800]: INFO: restarted kube-apiserver
cc7bd70945c5
[2021-05-19T13:42:02.852348773+0800]: INFO: restarted kube-controller-manager
b5cfc74d4bd9
[2021-05-19T13:42:03.710168773+0800]: INFO: restarted kube-scheduler
[2021-05-19T13:42:03.850652072+0800]: INFO: restarted kubelet

Verify

[root@k8s-master ]# kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 May 17, 2031 05:41 UTC   9y                                      no      
apiserver                  May 17, 2031 05:41 UTC   9y              ca                      no      
apiserver-etcd-client      May 17, 2031 05:41 UTC   9y              etcd-ca                 no      
apiserver-kubelet-client   May 17, 2031 05:41 UTC   9y              ca                      no      
controller-manager.conf    May 17, 2031 05:41 UTC   9y                                      no      
etcd-healthcheck-client    May 17, 2031 05:41 UTC   9y              etcd-ca                 no      
etcd-peer                  May 17, 2031 05:41 UTC   9y              etcd-ca                 no      
etcd-server                May 17, 2031 05:41 UTC   9y              etcd-ca                 no      
front-proxy-client         May 17, 2031 05:41 UTC   9y              front-proxy-ca          no      
scheduler.conf             May 17, 2031 05:41 UTC   9y                                      no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jan 13, 2031 09:14 UTC   9y              no      
etcd-ca                 Jan 13, 2031 09:14 UTC   9y              no      
front-proxy-ca          Jan 13, 2031 09:14 UTC   9y              no      

[root@k8s-master ]# kubectl get pod   # confirm the certificates work
NAME                                      READY   STATUS    RESTARTS   AGE
configmap-demo-pod                        1/1     Running   30         10d
nfs-client-provisioner-7f8dd584cb-lnb9t   1/1     Running   42         25h
web-5899d78c9-msx8h                       1/1     Running   0          6d3h

Rolling back on failure

The script automatically backs up the /etc/kubernetes directory to /etc/kubernetes.old-$(date +%Y%m%d) (example backup directory name: kubernetes.old-20210519).

If the renewal fails and you need to roll back, manually copy the /etc/kubernetes.old-$(date +%Y%m%d) backup directory over /etc/kubernetes.
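The rollback is just replacing the live directory with the backup. A sketch of the pattern using temporary directories (the paths and date suffix are illustrative; substitute /etc/kubernetes and your actual backup name in a real rollback):

```shell
#!/bin/bash
set -euo pipefail

# Stand-ins for /etc/kubernetes and its backup; names are illustrative
live=$(mktemp -d)/kubernetes
backup="${live}.old-20210519"
mkdir -p "${live}" "${backup}"
echo "broken" > "${live}/admin.conf"
echo "good"   > "${backup}/admin.conf"

# Roll back: set the broken tree aside, then restore the backup
mv "${live}" "${live}.broken"
cp -rp "${backup}" "${live}"

cat "${live}/admin.conf"   # prints: good
```

After restoring, restart etcd, the kube-apiserver/controller-manager/scheduler containers, and kubelet so they pick up the restored certificates.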

The script

#!/bin/bash
# Script reproduced from https://github.com/yuyicai/update-kube-cert

set -o errexit
set -o pipefail
# set -o xtrace

log::err() {
  printf "[$(date +'%Y-%m-%dT%H:%M:%S.%N%z')]: \033[31mERROR: \033[0m$@\n"
}

log::info() {
  printf "[$(date +'%Y-%m-%dT%H:%M:%S.%N%z')]: \033[32mINFO: \033[0m$@\n"
}

log::warning() {
  printf "[$(date +'%Y-%m-%dT%H:%M:%S.%N%z')]: \033[33mWARNING: \033[0m$@\n"
}

check_file() {
  if [[ ! -r  ${1} ]]; then
    log::err "can not find ${1}"
    exit 1
  fi
}

# get x509v3 subject alternative name from the old certificate
cert::get_subject_alt_name() {
  local cert=${1}.crt
  check_file "${cert}"
  local alt_name=$(openssl x509 -text -noout -in ${cert} | grep -A1 'Alternative' | tail -n1 | sed 's/[[:space:]]*Address//g')
  printf "${alt_name}\n"
}

# get subject from the old certificate
cert::get_subj() {
  local cert=${1}.crt
  check_file "${cert}"
  local subj=$(openssl x509 -text -noout -in ${cert} | grep "Subject:" | sed 's/Subject:/\//g;s/,/\//;s/[[:space:]]//g')
  printf "${subj}\n"
}

cert::backup_file() {
  local file=${1}
  if [[ ! -e ${file}.old-$(date +%Y%m%d) ]]; then
    cp -rp ${file} ${file}.old-$(date +%Y%m%d)
    log::info "backup ${file} to ${file}.old-$(date +%Y%m%d)"
  else
    log::warning "does not backup, ${file}.old-$(date +%Y%m%d) already exists"
  fi
}

# generate certificate with client, server or peer type
# Args:
#   $1 (the name of certificate)
#   $2 (the type of certificate, must be one of client, server, peer)
#   $3 (the subject of certificates)
#   $4 (the validity of certificates) (days)
#   $5 (the x509v3 subject alternative name of certificate when the type of certificate is server or peer)
cert::gen_cert() {
  local cert_name=${1}
  local cert_type=${2}
  local subj=${3}
  local cert_days=${4}
  local alt_name=${5}
  local cert=${cert_name}.crt
  local key=${cert_name}.key
  local csr=${cert_name}.csr
  local csr_conf="distinguished_name = dn\n[dn]\n[v3_ext]\nkeyUsage = critical, digitalSignature, keyEncipherment\n"

  check_file "${key}"
  check_file "${cert}"

  # backup certificate when certificate not in ${kubeconf_arr[@]}
  # kubeconf_arr=("controller-manager.crt" "scheduler.crt" "admin.crt" "kubelet.crt")
  # if [[ ! "${kubeconf_arr[@]}" =~ "${cert##*/}" ]]; then
  #   cert::backup_file "${cert}"
  # fi

  case "${cert_type}" in
    client)
      openssl req -new -key ${key} -subj "${subj}" -reqexts v3_ext \
        -config <(printf "${csr_conf} extendedKeyUsage = clientAuth\n") -out ${csr}
      openssl x509 -in ${csr} -req -CA ${CA_CERT} -CAkey ${CA_KEY} -CAcreateserial -extensions v3_ext \
        -extfile <(printf "${csr_conf} extendedKeyUsage = clientAuth\n") -days ${cert_days} -out ${cert}
      log::info "generated ${cert}"
    ;;
    server)
      openssl req -new -key ${key} -subj "${subj}" -reqexts v3_ext \
        -config <(printf "${csr_conf} extendedKeyUsage = serverAuth\nsubjectAltName = ${alt_name}\n") -out ${csr}
      openssl x509 -in ${csr} -req -CA ${CA_CERT} -CAkey ${CA_KEY} -CAcreateserial -extensions v3_ext \
        -extfile <(printf "${csr_conf} extendedKeyUsage = serverAuth\nsubjectAltName = ${alt_name}\n") -days ${cert_days} -out ${cert}
      log::info "generated ${cert}"
    ;;
    peer)
      openssl req -new -key ${key} -subj "${subj}" -reqexts v3_ext \
        -config <(printf "${csr_conf} extendedKeyUsage = serverAuth, clientAuth\nsubjectAltName = ${alt_name}\n") -out ${csr}
      openssl x509 -in ${csr} -req -CA ${CA_CERT} -CAkey ${CA_KEY} -CAcreateserial -extensions v3_ext \
        -extfile <(printf "${csr_conf} extendedKeyUsage = serverAuth, clientAuth\nsubjectAltName = ${alt_name}\n") -days ${cert_days} -out ${cert}
      log::info "generated ${cert}"
    ;;
    *)
      log::err "unknown, unsupported etcd certs type: ${cert_type}, supported type: client, server, peer"
      exit 1
  esac

  rm -f ${csr}
}

cert::update_kubeconf() {
  local cert_name=${1}
  local kubeconf_file=${cert_name}.conf
  local cert=${cert_name}.crt
  local key=${cert_name}.key

  # generate certificate
  check_file ${kubeconf_file}
  # get the key from the old kubeconf
  grep "client-key-data" ${kubeconf_file} | awk '{print $2}' | base64 -d > ${key}
  # get the old certificate from the old kubeconf
  grep "client-certificate-data" ${kubeconf_file} | awk '{print $2}' | base64 -d > ${cert}
  # get subject from the old certificate
  local subj=$(cert::get_subj ${cert_name})
  cert::gen_cert "${cert_name}" "client" "${subj}" "${CAER_DAYS}"
  # get certificate base64 code
  local cert_base64=$(base64 -w 0 ${cert})

  # backup kubeconf
  # cert::backup_file "${kubeconf_file}"

  # set certificate base64 code to kubeconf
  sed -i 's/client-certificate-data:.*/client-certificate-data: '${cert_base64}'/g' ${kubeconf_file}

  log::info "generated new ${kubeconf_file}"
  rm -f ${cert}
  rm -f ${key}

  # set config for kubectl
  if [[ ${cert_name##*/} == "admin" ]]; then
    mkdir -p ~/.kube
    cp -fp ${kubeconf_file} ~/.kube/config
    log::info "copy the admin.conf to ~/.kube/config for kubectl"
  fi
}

cert::update_etcd_cert() {
  PKI_PATH=${KUBE_PATH}/pki/etcd
  CA_CERT=${PKI_PATH}/ca.crt
  CA_KEY=${PKI_PATH}/ca.key

  check_file "${CA_CERT}"
  check_file "${CA_KEY}"

  # generate etcd server certificate
  # /etc/kubernetes/pki/etcd/server
  CART_NAME=${PKI_PATH}/server
  subject_alt_name=$(cert::get_subject_alt_name ${CART_NAME})
  cert::gen_cert "${CART_NAME}" "peer" "/CN=etcd-server" "${CAER_DAYS}" "${subject_alt_name}"

  # generate etcd peer certificate
  # /etc/kubernetes/pki/etcd/peer
  CART_NAME=${PKI_PATH}/peer
  subject_alt_name=$(cert::get_subject_alt_name ${CART_NAME})
  cert::gen_cert "${CART_NAME}" "peer" "/CN=etcd-peer" "${CAER_DAYS}" "${subject_alt_name}"

  # generate etcd healthcheck-client certificate
  # /etc/kubernetes/pki/etcd/healthcheck-client
  CART_NAME=${PKI_PATH}/healthcheck-client
  cert::gen_cert "${CART_NAME}" "client" "/O=system:masters/CN=kube-etcd-healthcheck-client" "${CAER_DAYS}"

  # generate apiserver-etcd-client certificate
  # /etc/kubernetes/pki/apiserver-etcd-client
  check_file "${CA_CERT}"
  check_file "${CA_KEY}"
  PKI_PATH=${KUBE_PATH}/pki
  CART_NAME=${PKI_PATH}/apiserver-etcd-client
  cert::gen_cert "${CART_NAME}" "client" "/O=system:masters/CN=kube-apiserver-etcd-client" "${CAER_DAYS}"

  # restart etcd
  docker ps | awk '/k8s_etcd/{print$1}' | xargs -r -I '{}' docker restart {} || true
  log::info "restarted etcd"
}

cert::update_master_cert() {
  PKI_PATH=${KUBE_PATH}/pki
  CA_CERT=${PKI_PATH}/ca.crt
  CA_KEY=${PKI_PATH}/ca.key

  check_file "${CA_CERT}"
  check_file "${CA_KEY}"

  # generate apiserver server certificate
  # /etc/kubernetes/pki/apiserver
  CART_NAME=${PKI_PATH}/apiserver
  subject_alt_name=$(cert::get_subject_alt_name ${CART_NAME})
  cert::gen_cert "${CART_NAME}" "server" "/CN=kube-apiserver" "${CAER_DAYS}" "${subject_alt_name}"

  # generate apiserver-kubelet-client certificate
  # /etc/kubernetes/pki/apiserver-kubelet-client
  CART_NAME=${PKI_PATH}/apiserver-kubelet-client
  cert::gen_cert "${CART_NAME}" "client" "/O=system:masters/CN=kube-apiserver-kubelet-client" "${CAER_DAYS}"

  # generate kubeconf for controller-manager,scheduler,kubectl and kubelet
  # /etc/kubernetes/controller-manager,scheduler,admin,kubelet.conf
  cert::update_kubeconf "${KUBE_PATH}/controller-manager"
  cert::update_kubeconf "${KUBE_PATH}/scheduler"
  cert::update_kubeconf "${KUBE_PATH}/admin"
  # check kubelet.conf
  # https://github.com/kubernetes/kubeadm/issues/1753
  set +e
  grep kubelet-client-current.pem /etc/kubernetes/kubelet.conf > /dev/null 2>&1
  kubelet_cert_auto_update=$?
  set -e
  if [[ "$kubelet_cert_auto_update" == "0" ]]; then
    log::warning "does not need to update kubelet.conf"
  else
    cert::update_kubeconf "${KUBE_PATH}/kubelet"
  fi

  # generate front-proxy-client certificate
  # use front-proxy-client ca
  CA_CERT=${PKI_PATH}/front-proxy-ca.crt
  CA_KEY=${PKI_PATH}/front-proxy-ca.key
  check_file "${CA_CERT}"
  check_file "${CA_KEY}"
  CART_NAME=${PKI_PATH}/front-proxy-client
  cert::gen_cert "${CART_NAME}" "client" "/CN=front-proxy-client" "${CAER_DAYS}"

  # restart apiserve, controller-manager, scheduler and kubelet
  docker ps | awk '/k8s_kube-apiserver/{print$1}' | xargs -r -I '{}' docker restart {} || true
  log::info "restarted kube-apiserver"
  docker ps | awk '/k8s_kube-controller-manager/{print$1}' | xargs -r -I '{}' docker restart {} || true
  log::info "restarted kube-controller-manager"
  docker ps | awk '/k8s_kube-scheduler/{print$1}' | xargs -r -I '{}' docker restart {} || true
  log::info "restarted kube-scheduler"
  systemctl restart kubelet
  log::info "restarted kubelet"
}

main() {
  local node_type=$1

  KUBE_PATH=/etc/kubernetes
  CAER_DAYS=3650

  # backup $KUBE_PATH to $KUBE_PATH.old-$(date +%Y%m%d)
  cert::backup_file "${KUBE_PATH}"

  case ${node_type} in
    etcd)
	  # update etcd certificates
      cert::update_etcd_cert
    ;;
    master)
	  # update master certificates and kubeconf
      cert::update_master_cert
    ;;
    all)
      # update etcd certificates
      cert::update_etcd_cert
      # update master certificates and kubeconf
      cert::update_master_cert
    ;;
    *)
      log::err "unknown, unsupported certs type: ${1}, supported type: all, etcd, master"
      printf "Documentation: https://github.com/yuyicai/update-kube-cert
  example:
    '\033[32m./update-kubeadm-cert.sh all\033[0m' update all etcd certificates, master certificates and kubeconf
      /etc/kubernetes
      ├── admin.conf
      ├── controller-manager.conf
      ├── scheduler.conf
      ├── kubelet.conf
      └── pki
          ├── apiserver.crt
          ├── apiserver-etcd-client.crt
          ├── apiserver-kubelet-client.crt
          ├── front-proxy-client.crt
          └── etcd
              ├── healthcheck-client.crt
              ├── peer.crt
              └── server.crt
    '\033[32m./update-kubeadm-cert.sh etcd\033[0m' update only etcd certificates
      /etc/kubernetes
      └── pki
          ├── apiserver-etcd-client.crt
          └── etcd
              ├── healthcheck-client.crt
              ├── peer.crt
              └── server.crt
    '\033[32m./update-kubeadm-cert.sh master\033[0m' update only master certificates and kubeconf
      /etc/kubernetes
      ├── admin.conf
      ├── controller-manager.conf
      ├── scheduler.conf
      ├── kubelet.conf
      └── pki
          ├── apiserver.crt
          ├── apiserver-kubelet-client.crt
          └── front-proxy-client.crt
"
      exit 1
    esac
}

main "$@"

To extend the certificates for an even longer period, simply adjust the value of CAER_DAYS=3650 inside the script.
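For example, to sign for roughly 20 years instead, change that one line before running the script. A sketch that patches a copy (the file here is a stand-in for update-kubeadm-cert.sh, and 7300 days is just an example value):

```shell
#!/bin/bash
set -euo pipefail

# Stand-in for update-kubeadm-cert.sh containing only the line we edit
script=$(mktemp)
printf 'CAER_DAYS=3650\n' > "${script}"

# 20 years is roughly 7300 days (GNU sed -i, as on the CentOS host above)
sed -i 's/^CAER_DAYS=3650$/CAER_DAYS=7300/' "${script}"
grep '^CAER_DAYS=' "${script}"   # prints: CAER_DAYS=7300
```

Note that no renewed certificate can outlive its CA; the CA certificates themselves (valid 10 years from cluster creation) still cap the effective lifetime.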

Note: the list above does not include the kubelet client certificate, because kubeadm configures the kubelet to renew its certificate automatically.
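You can check whether your kubelet is set up for automatic rotation the same way the script does: a rotating kubelet.conf points at kubelet-client-current.pem instead of embedding certificate data. A sketch against an abbreviated sample file (the real file lives at /etc/kubernetes/kubelet.conf; the fields below are illustrative):

```shell
#!/bin/bash
set -euo pipefail

# Abbreviated sample of a kubelet.conf written by kubeadm with
# client-certificate rotation enabled (illustrative content)
conf=$(mktemp)
cat > "${conf}" <<'EOF'
users:
- name: system:node:k8s-master
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
EOF

# The same check the script performs before touching kubelet.conf
if grep -q kubelet-client-current.pem "${conf}"; then
  echo "kubelet rotates its own client certificate; nothing to update"
else
  echo "kubelet.conf embeds certificate data; it must be regenerated"
fi
```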

Reference: https://github.com/yuyicai/update-kube-cert
