This article introduces the eviction mechanism in Kubernetes.

Eviction means driving Pods away: when a node misbehaves, Kubernetes has mechanisms to evict the Pods running on that node. A similar eviction concept also exists in OpenStack's nova component.

Kubernetes currently has two eviction mechanisms, implemented by kube-controller-manager and kubelet respectively.

1. Eviction in kube-controller-manager

kube-controller-manager is made up of multiple controllers; the eviction functionality is implemented mainly by the node controller.

kube-controller-manager exposes the following flags to control eviction:

  • pod-eviction-timeout: once a node has been down for this long, the eviction mechanism kicks in and evicts the Pods on the dead node; defaults to 5 minutes
  • node-eviction-rate: the eviction rate, i.e. the rate at which nodes are evicted, implemented with a token-bucket rate limiter; defaults to 0.1, meaning 0.1 nodes are evicted per second. Note that this is the rate of evicting nodes, not Pods: it amounts to draining one node every 10 seconds (see the sketch after this list)
  • secondary-node-eviction-rate: the secondary eviction rate; when too many nodes in the cluster are down, eviction slows down to this rate; defaults to 0.01
  • unhealthy-zone-threshold: the unhealthy-zone threshold, which determines when the secondary eviction rate takes effect; defaults to 0.55, i.e. a zone is considered unhealthy once more than 55% of its nodes are down
  • large-cluster-size-threshold: the large-cluster threshold; a zone with more nodes than this threshold is considered a large cluster. When more than 55% of a large cluster's nodes are down, the eviction rate is reduced to 0.01; in a small cluster the rate drops straight to 0
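
A default of 0.1 means the limiter hands out one eviction token every 10 seconds. Here is a minimal sketch of that behavior using the real client-go token-bucket package; the burst value of 1 and the surrounding main() are assumptions for illustration only:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// --node-eviction-rate=0.1: the bucket refills at 0.1 tokens/second,
	// so with a burst of 1 (an assumption here) one node can be drained
	// every 10 seconds.
	limiter := flowcontrol.NewTokenBucketRateLimiter(0.1, 1)
	for i := 0; i < 3; i++ {
		start := time.Now()
		limiter.Accept() // blocks until a token is available
		fmt.Printf("node %d drained after waiting %v\n", i, time.Since(start).Round(time.Second))
	}
}
```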

The node controller code lives mainly under pkg/controller/node.

1.1 zone

To control eviction, Kubernetes groups nodes into zones, mainly by attaching labels to the nodes:

  • failure-domain.beta.kubernetes.io/zone
  • failure-domain.beta.kubernetes.io/region

The zone name is composed of the zone and region labels above. Two nodes are in the same zone if both their zone and region labels match; otherwise they are in different zones. If both labels are empty, the node falls into the default zone.
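
A hedged sketch of the idea (the real helper is GetZoneKey in pkg/util/node; the exact key format below is illustrative, not the real one):

```go
package main

import "fmt"

// zoneKey derives a zone identity from the two labels: nodes with the same
// region and zone labels share a key, and a node with neither label gets the
// empty key, i.e. the default zone.
func zoneKey(labels map[string]string) string {
	region := labels["failure-domain.beta.kubernetes.io/region"]
	zone := labels["failure-domain.beta.kubernetes.io/zone"]
	if region == "" && zone == "" {
		return "" // default zone
	}
	return region + "::" + zone
}

func main() {
	a := map[string]string{
		"failure-domain.beta.kubernetes.io/region": "us-east-1",
		"failure-domain.beta.kubernetes.io/zone":   "us-east-1a",
	}
	fmt.Println(zoneKey(a))   // "us-east-1::us-east-1a"
	fmt.Println(zoneKey(nil)) // "" (default zone)
}
```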

A zone can be in one of four states:

  • stateInitial
  • stateNormal
  • stateFullDisruption
  • statePartialDisruption

The initial state is easy to understand: when a node has just joined the cluster and its zone has just been discovered, the zone's state is stateInitial. That state is very short-lived; the remaining states are determined by the following function:

```go
func (nc *NodeController) ComputeZoneState(nodeReadyConditions []*v1.NodeCondition) (int, zoneState) {
	readyNodes := 0
	notReadyNodes := 0
	for i := range nodeReadyConditions {
		if nodeReadyConditions[i] != nil && nodeReadyConditions[i].Status == v1.ConditionTrue {
			readyNodes++
		} else {
			notReadyNodes++
		}
	}
	switch {
	case readyNodes == 0 && notReadyNodes > 0:
		return notReadyNodes, stateFullDisruption
	case notReadyNodes > 2 && float32(notReadyNodes)/float32(notReadyNodes+readyNodes) >= nc.unhealthyZoneThreshold:
		return notReadyNodes, statePartialDisruption
	default:
		return notReadyNodes, stateNormal
	}
}
```

Note that this counts node states within a single zone, not across all nodes. When a zone has zero ready nodes and at least one notReady node, all of its nodes are considered down, so the state is stateFullDisruption. When there are more than two notReady nodes and the notReady fraction reaches unhealthyZoneThreshold (0.55), the state is statePartialDisruption; for example, with 1 ready node and 3 notReady nodes the fraction is 0.75 ≥ 0.55, so the zone is statePartialDisruption. Every other case is stateNormal.

How do these four states affect the eviction rate? See the following function:

```go
func (nc *NodeController) setLimiterInZone(zone string, zoneSize int, state zoneState) {
	switch state {
	case stateNormal:
		if nc.useTaintBasedEvictions {
			nc.zoneNotReadyOrUnreachableTainer[zone].SwapLimiter(nc.evictionLimiterQPS)
		} else {
			nc.zonePodEvictor[zone].SwapLimiter(nc.evictionLimiterQPS)
		}
	case statePartialDisruption:
		if nc.useTaintBasedEvictions {
			nc.zoneNotReadyOrUnreachableTainer[zone].SwapLimiter(
				nc.enterPartialDisruptionFunc(zoneSize))
		} else {
			nc.zonePodEvictor[zone].SwapLimiter(
				nc.enterPartialDisruptionFunc(zoneSize))
		}
	case stateFullDisruption:
		if nc.useTaintBasedEvictions {
			nc.zoneNotReadyOrUnreachableTainer[zone].SwapLimiter(
				nc.enterFullDisruptionFunc(zoneSize))
		} else {
			nc.zonePodEvictor[zone].SwapLimiter(
				nc.enterFullDisruptionFunc(zoneSize))
		}
	}
}
```

Here enterPartialDisruptionFunc is the function ReducedQPSFunc:

```go
func (nc *NodeController) ReducedQPSFunc(nodeNum int) float32 {
	if int32(nodeNum) > nc.largeClusterThreshold {
		return nc.secondaryEvictionLimiterQPS
	}
	return 0
}
```

And enterFullDisruptionFunc is the function HealthyQPSFunc:

```go
func (nc *NodeController) HealthyQPSFunc(nodeNum int) float32 {
	return nc.evictionLimiterQPS
}
```

In other words: if the zone state is normal, the rate is 0.1; if the zone state is FullDisruption, the rate is also 0.1; if the zone is PartialDisruption, the rate is 0.01 for a large cluster and drops straight to 0 for a small cluster.
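
The whole decision can be distilled into a small pure function. This is a hedged summary of setLimiterInZone and the two QPS functions above, not code from the tree; the constants mirror the flag defaults discussed earlier (50 is the --large-cluster-size-threshold default):

```go
package main

import "fmt"

// effectiveEvictionQPS folds the state-dependent limiter selection into a
// single pure function, using the default flag values as constants.
func effectiveEvictionQPS(state string, zoneSize int) float32 {
	const (
		evictionLimiterQPS          = 0.1  // --node-eviction-rate
		secondaryEvictionLimiterQPS = 0.01 // --secondary-node-eviction-rate
		largeClusterThreshold       = 50   // --large-cluster-size-threshold
	)
	if state == "PartialDisruption" {
		if zoneSize > largeClusterThreshold {
			return secondaryEvictionLimiterQPS
		}
		return 0 // small clusters stop evicting entirely
	}
	// Normal and FullDisruption both run at the base rate.
	return evictionLimiterQPS
}

func main() {
	fmt.Println(effectiveEvictionQPS("Normal", 100))            // 0.1
	fmt.Println(effectiveEvictionQPS("PartialDisruption", 100)) // 0.01
	fmt.Println(effectiveEvictionQPS("PartialDisruption", 10))  // 0
}
```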

1.2 Two different eviction methods

The node controller currently supports two different eviction methods: taint-based eviction or the traditional approach:

```go
if nc.useTaintBasedEvictions {
	go wait.Until(nc.doTaintingPass, nodeEvictionPeriod, wait.NeverStop)
} else {
	go wait.Until(nc.doEvictionPass, nodeEvictionPeriod, wait.NeverStop)
}
```

Here nodeEvictionPeriod is 100ms, so doEvictionPass or doTaintingPass runs every 100ms.

1.2.1 The traditional eviction method

zonePodEvictor is of type map[string]*RateLimitedTimedQueue: each zone gets its own rate-limited queue holding the unready nodes whose Pods need to be evicted.

```go
func (nc *NodeController) doEvictionPass() {
	nc.evictorLock.Lock()
	defer nc.evictorLock.Unlock()
	for k := range nc.zonePodEvictor {
		nc.zonePodEvictor[k].Try(func(value TimedValue) (bool, time.Duration) {
			node, err := nc.nodeLister.Get(value.Value)
			...
			nodeUid, _ := value.UID.(string)
			remaining, err := deletePods(nc.kubeClient, nc.recorder, value.Value, nodeUid, nc.daemonSetStore)
			...
			if remaining {
				glog.Infof("Pods awaiting deletion due to NodeController eviction")
			}
			return true, 0
		})
	}
}
```

deletePods is the main function for evicting a node's Pods:

```go
func deletePods(kubeClient clientset.Interface, recorder record.EventRecorder, nodeName, nodeUID string, daemonStore extensionslisters.DaemonSetLister) (bool, error) {
	remaining := false
	selector := fields.OneTermEqualSelector(api.PodHostField, nodeName).String()
	options := metav1.ListOptions{FieldSelector: selector}
	pods, err := kubeClient.Core().Pods(metav1.NamespaceAll).List(options)
	var updateErrList []error
	...
	for _, pod := range pods.Items {
		// Defensive check, also needed for tests.
		if pod.Spec.NodeName != nodeName {
			continue
		}
		// Set the Pod's termination reason
		if _, err = setPodTerminationReason(kubeClient, &pod, nodeName); err != nil {
			if errors.IsConflict(err) {
				updateErrList = append(updateErrList, fmt.Errorf("update status failed for pod %q: %v", format.Pod(&pod), err))
				continue
			}
		}
		// The Pod is already being deleted; skip it
		if pod.DeletionGracePeriodSeconds != nil {
			remaining = true
			continue
		}
		// If the Pod is managed by a daemonset, skip it
		_, err := daemonStore.GetPodDaemonSets(&pod)
		if err == nil {
			continue
		}
		if err := kubeClient.Core().Pods(pod.Namespace).Delete(pod.Name, nil); err != nil {
			return false, err
		}
		remaining = true
	}
	...
	return remaining, nil
}
```

Under the hood this simply deletes the Pods on the node. Pods managed by a daemonset are skipped, because even if they were deleted, the daemonset would recreate them on the same node.

1.2.2 The taint mechanism

The taint mechanism is still experimental and disabled by default; to enable it, set --feature-gates=TaintBasedEvictions=true on all components.

When a node's status is unready, it is tainted with node.alpha.kubernetes.io/notReady; when its status is unknown, it is tainted with node.alpha.kubernetes.io/unreachable.
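
For reference, these are roughly the two taints expressed as v1.Taint values. This is a hedged sketch: both taints carry the NoExecute effect, TimeAdded is omitted, and the import path reflects the modern k8s.io/api layout:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// The taint applied when the node condition is unready.
	notReady := v1.Taint{
		Key:    "node.alpha.kubernetes.io/notReady",
		Effect: v1.TaintEffectNoExecute,
	}
	// The taint applied when the node condition is unknown.
	unreachable := v1.Taint{
		Key:    "node.alpha.kubernetes.io/unreachable",
		Effect: v1.TaintEffectNoExecute,
	}
	fmt.Printf("%s:%s\n", notReady.Key, notReady.Effect)
	fmt.Printf("%s:%s\n", unreachable.Key, unreachable.Effect)
}
```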

Once the taints are in place, a controller has to act on them:

```go
// Run starts NoExecuteTaintManager which will run in loop until `stopCh` is closed.
func (tc *NoExecuteTaintManager) Run(stopCh <-chan struct{}) {
	go func(stopCh <-chan struct{}) {
		for {
			item, shutdown := tc.nodeUpdateQueue.Get()
			if shutdown {
				break
			}
			nodeUpdate := item.(*nodeUpdateItem)
			select {
			case <-stopCh:
				break
			case tc.nodeUpdateChannel <- nodeUpdate:
			}
		}
	}(stopCh)
	go func(stopCh <-chan struct{}) {
		for {
			item, shutdown := tc.podUpdateQueue.Get()
			if shutdown {
				break
			}
			podUpdate := item.(*podUpdateItem)
			select {
			case <-stopCh:
				break
			case tc.podUpdateChannel <- podUpdate:
			}
		}
	}(stopCh)
	for {
		select {
		case <-stopCh:
			break
		case nodeUpdate := <-tc.nodeUpdateChannel:
			tc.handleNodeUpdate(nodeUpdate)
		case podUpdate := <-tc.podUpdateChannel:
			// If we found a Pod update we need to empty Node queue first.
		priority:
			for {
				select {
				case nodeUpdate := <-tc.nodeUpdateChannel:
					tc.handleNodeUpdate(nodeUpdate)
				default:
					break priority
				}
			}
			// After Node queue is emptied we process podUpdate.
			tc.handlePodUpdate(podUpdate)
		}
	}
}

func deletePodHandler(c clientset.Interface, emitEventFunc func(types.NamespacedName)) func(args *WorkArgs) error {
	return func(args *WorkArgs) error {
		ns := args.NamespacedName.Namespace
		name := args.NamespacedName.Name
		glog.V(0).Infof("NoExecuteTaintManager is deleting Pod: %v", args.NamespacedName.String())
		if emitEventFunc != nil {
			emitEventFunc(args.NamespacedName)
		}
		var err error
		for i := 0; i < retries; i++ {
			err = c.Core().Pods(ns).Delete(name, &metav1.DeleteOptions{})
			if err == nil {
				break
			}
			time.Sleep(10 * time.Millisecond)
		}
		return err
	}
}
```

In essence, this still just adds the Pods on the node being evicted to a deletion queue.

2. The kubelet eviction mechanism

kube-controller-manager's eviction is coarse-grained: it evicts every Pod on a node. The kubelet's is fine-grained: it evicts only some of the Pods on a node, and which Pods get picked is tied to the Pod QoS mechanism covered earlier.

The kubelet's eviction mechanism only kicks in when the node is running short on memory or disk; its goal is to reclaim node resources. As mentioned before, the kubelet also relies on the oom-killer to reclaim resources, so why is eviction needed at all? Because after the oom-killer kills a Pod's processes, the kubelet will restart the Pod on the same node a little later if its RestartPolicy is Always, whereas kubelet eviction removes the Pod from the node entirely.

The kubelet exposes the following flags to control eviction (an example invocation follows the list):

  • eviction-hard: a set of thresholds, e.g. memory.available<1Gi; when the node's available memory drops below 1Gi, a pod eviction is triggered immediately
  • eviction-max-pod-grace-period: the grace period allowed for terminating Pods during a soft eviction
  • eviction-minimum-reclaim: the minimum amount of resources every eviction must reclaim
  • eviction-pressure-transition-period: defaults to 5 minutes; how long the kubelet waits before transitioning out of a pressure condition. When a threshold is crossed, the node is marked with memory pressure or disk pressure and pod eviction begins
  • eviction-soft: the counterpart of hard, also a set of thresholds, e.g. memory.available<1.5Gi; it does not trigger pod eviction immediately, but waits for eviction-soft-grace-period, and if the soft threshold is still exceeded after that time, a pod eviction is triggered
  • eviction-soft-grace-period: defaults to 90 seconds
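
Putting these together, a hedged example invocation; the values are illustrative, not recommendations:

```sh
kubelet \
  --eviction-hard=memory.available<1Gi,nodefs.available<10% \
  --eviction-soft=memory.available<1.5Gi \
  --eviction-soft-grace-period=memory.available=90s \
  --eviction-max-pod-grace-period=60 \
  --eviction-minimum-reclaim=memory.available=0Mi,nodefs.available=500Mi \
  --eviction-pressure-transition-period=5m
```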

2.1 Core code

The core of kubelet eviction is the code below; synchronize is the key function:

```go
func (m *managerImpl) Start(diskInfoProvider DiskInfoProvider, podFunc ActivePodsFunc, podCleanedUpFunc PodCleanedUpFunc, nodeProvider NodeProvider, monitoringInterval time.Duration) {
	// start the eviction manager monitoring
	go func() {
		for {
			if evictedPods := m.synchronize(diskInfoProvider, podFunc, nodeProvider); evictedPods != nil {
				glog.Infof("eviction manager: pods %s evicted, waiting for pod to be cleaned up", format.Pods(evictedPods))
				m.waitForPodsCleanup(podCleanedUpFunc, evictedPods)
			} else {
				time.Sleep(monitoringInterval)
			}
		}
	}()
}
```

2.2 When the eviction conditions are checked

Two mechanisms currently check whether the eviction trigger conditions are met:

  • Periodic checks: the synchronize call above sits inside a for loop whose monitoringInterval is 10s, so the trigger conditions are checked every 10 seconds
  • cgroup subscription: when available memory falls below the threshold, the cgroup notifies the kubelet, which then runs synchronize. The kernel notifies user space through eventfd:
```go
if m.config.KernelMemcgNotification && !m.notifiersInitialized {
	glog.Infof("eviction manager attempting to integrate with kernel memcg notification api")
	m.notifiersInitialized = true
	// start soft memory notification
	err = startMemoryThresholdNotifier(m.config.Thresholds, observations, false, func(desc string) {
		glog.Infof("soft memory eviction threshold crossed at %s", desc)
		// TODO wait grace period for soft memory limit
		m.synchronize(diskInfoProvider, podFunc, nodeProvider)
	})
	if err != nil {
		glog.Warningf("eviction manager: failed to create hard memory threshold notifier: %v", err)
	}
	// start hard memory notification
	err = startMemoryThresholdNotifier(m.config.Thresholds, observations, true, func(desc string) {
		glog.Infof("hard memory eviction threshold crossed at %s", desc)
		m.synchronize(diskInfoProvider, podFunc, nodeProvider)
	})
	if err != nil {
		glog.Warningf("eviction manager: failed to create soft memory threshold notifier: %v", err)
	}
}
```

2.3 Resource reclamation

kubelet eviction mainly reclaims two kinds of resources, memory and disk (a sketch of the overall strategy follows the list):

  • Disk reclamation: mainly by deleting terminated containers and unused images
  • Memory reclamation: mainly by terminating running Pods
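
A hedged sketch of the overall strategy: for disk signals the eviction manager first attempts node-level reclaim (container GC, image GC) before killing any Pod. The names below are illustrative, not the kubelet's real API:

```go
package main

import "fmt"

// reclaimFunc is an illustrative stand-in for a node-level reclaim action.
type reclaimFunc struct {
	name string
	run  func() (enough bool)
}

// reclaimNodeLevel runs node-level reclaim actions in order and reports
// whether they freed enough resources to avoid evicting any Pod.
func reclaimNodeLevel(funcs []reclaimFunc) bool {
	for _, f := range funcs {
		fmt.Println("trying node-level reclaim:", f.name)
		if f.run() {
			return true // enough reclaimed, no Pod eviction needed
		}
	}
	return false // fall through to Pod eviction
}

func main() {
	diskReclaim := []reclaimFunc{
		{"delete terminated containers", func() bool { return false }},
		{"delete unused images", func() bool { return true }},
	}
	if !reclaimNodeLevel(diskReclaim) {
		fmt.Println("node-level reclaim insufficient; evicting Pods")
	}
}
```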

2.4 How QoS affects eviction

The eviction manager collects all Pods on the node and ranks them with a specific ordering. Here is how the ranking works under memory pressure:

```go
// rankMemoryPressure orders the input pods for eviction in response to memory pressure.
func rankMemoryPressure(pods []*v1.Pod, stats statsFunc) {
	orderedBy(qosComparator, memory(stats)).Sort(pods)
}

// qosComparator compares pods by QoS (BestEffort < Burstable < Guaranteed)
func qosComparator(p1, p2 *v1.Pod) int {
	qosP1 := v1qos.GetPodQOS(p1)
	qosP2 := v1qos.GetPodQOS(p2)
	// its a tie
	if qosP1 == qosP2 {
		return 0
	}
	// if p1 is best effort, we know p2 is burstable or guaranteed
	if qosP1 == v1.PodQOSBestEffort {
		return -1
	}
	// we know p1 and p2 are not besteffort, so if p1 is burstable, p2 must be guaranteed
	if qosP1 == v1.PodQOSBurstable {
		if qosP2 == v1.PodQOSGuaranteed {
			return -1
		}
		return 1
	}
	// ok, p1 must be guaranteed.
	return 1
}
```

As qosComparator shows, Pods are ordered as:

PodQOSBestEffort < PodQOSBurstable < PodQOSGuaranteed

That is, BestEffort Pods are reclaimed first, then Burstable, and Guaranteed Pods last.
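
For context, here is a hedged, simplified distillation of how a Pod lands in one of those QoS classes. The real rules live in v1qos.GetPodQOS and consider every container plus the defaulting of requests from limits; this sketch assumes a single container and ignores that defaulting:

```go
package main

import "fmt"

// qosClass mimics the QoS classification rules for a single-container Pod:
// no requests or limits at all => BestEffort; requests equal to limits for
// every resource => Guaranteed; anything in between => Burstable.
func qosClass(requests, limits map[string]string) string {
	if len(requests) == 0 && len(limits) == 0 {
		return "BestEffort"
	}
	if len(limits) > 0 && len(requests) == len(limits) {
		for res, lim := range limits {
			if requests[res] != lim {
				return "Burstable"
			}
		}
		return "Guaranteed"
	}
	return "Burstable"
}

func main() {
	fmt.Println(qosClass(nil, nil))                                // BestEffort: evicted first
	fmt.Println(qosClass(map[string]string{"memory": "1Gi"}, nil)) // Burstable
	fmt.Println(qosClass(
		map[string]string{"cpu": "1", "memory": "1Gi"},
		map[string]string{"cpu": "1", "memory": "1Gi"},
	)) // Guaranteed: evicted last
}
```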

2.5 What eviction boils down to

```go
// we kill at most a single pod during each eviction interval
for i := range activePods {
	pod := activePods[i]
	// If the pod is marked as critical and static, and support for critical pod annotations is enabled,
	// do not evict such pods. Static pods are not re-admitted after evictions.
	// https://github.com/kubernetes/kubernetes/issues/40573 has more details.
	if utilfeature.DefaultFeatureGate.Enabled(features.ExperimentalCriticalPodAnnotation) && kubelettypes.IsCriticalPod(pod) && kubepod.IsStaticPod(pod) {
		continue
	}
	status := v1.PodStatus{
		Phase:   v1.PodFailed,
		Message: fmt.Sprintf(message, resourceToReclaim),
		Reason:  reason,
	}
	// record that we are evicting the pod
	m.recorder.Eventf(pod, v1.EventTypeWarning, reason, fmt.Sprintf(message, resourceToReclaim))
	gracePeriodOverride := int64(0)
	if softEviction {
		gracePeriodOverride = m.config.MaxPodGracePeriodSeconds
	}
	// this is a blocking call and should only return when the pod and its containers are killed.
	err := m.killPodFunc(pod, status, &gracePeriodOverride)
	if err != nil {
		glog.Warningf("eviction manager: error while evicting pod %s: %v", format.Pod(pod), err)
	}
	return []*v1.Pod{pod}
}
```

At most one Pod is reclaimed in each eviction interval.

For a hard eviction, the grace period override is set to 0, i.e. the Pod is deleted immediately; for a soft eviction it is set to MaxPodGracePeriodSeconds. killPodFunc is then called to kill the Pod.

3. Caveats

Note that when Kubernetes evicts a Pod, it does not recreate it. Recreating the Pod requires a mechanism such as a ReplicationController, ReplicaSet, or Deployment. In other words, if you create a bare Pod directly and it gets evicted, the Pod is simply deleted and never rebuilt; with a ReplicationController or similar, the controller notices that a Pod is missing and creates a new one.

Originally published at: http://licyhust.com/%E5%AE%B9%E5%99%A8%E6%8A%80%E6%9C%AF/2017/10/24/eviction/
