k8s StorageClass: dynamically creating PVs

This post walks through provisioning PVs dynamically with a Kubernetes StorageClass, using an etcd cluster as the example. It covers four steps: RBAC permissions (auth.yaml), creating the provisioner, the StorageClass yaml, and the etcd yaml, followed by a summary.
Contents
- 1. RBAC permissions: auth.yaml
- 2. Creating the provisioner
- 3. StorageClass yaml
- 4. etcd yaml
- Summary
1. RBAC permissions: auth.yaml
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nacos-ns  # replace with namespace where provisioner is deployed
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
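One detail worth checking before applying this: the ServiceAccount manifest carries no namespace, while the ClusterRoleBinding binds the ServiceAccount in `nacos-ns`. If the file is applied to any other namespace, the binding points at a ServiceAccount that does not exist there. A minimal sketch of pinning the ServiceAccount explicitly, assuming `nacos-ns` really is the target namespace:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nacos-ns  # must match the namespace in the ClusterRoleBinding subject
```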
2. Creating the provisioner
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      tolerations:  # tolerate the master node taint
        - key: node-role.kubernetes.io/master
          operator: Equal
          value: "true"
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/jiayu-kubernetes/nfs-subdir-external-provisioner:v4.0.0
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: etcdcluster-nfs-storage-provisioner
            - name: NFS_SERVER
              value: 192.168.7.2  # NFS server IP
            - name: NFS_PATH
              value: /www/data/etcdcluster  # path exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.7.2  # NFS server IP
            path: /www/data/etcdcluster  # path exported by the NFS server
```
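A caveat on the toleration above: the master taint that kubeadm applies (`node-role.kubernetes.io/master:NoSchedule`) typically has an empty value, so an `Equal` match against the value `"true"` may never fire and the pod will still avoid the master. If scheduling onto the master is actually intended, a sketch of the more common `Exists` form, as a drop-in replacement for the `tolerations` block above:

```yaml
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists    # matches the taint regardless of its value
    effect: NoSchedule
```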
3. StorageClass yaml
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: etcdcluster-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"  # whether this is the default StorageClass
provisioner: etcdcluster-nfs-storage-provisioner  # must match the deployment's PROVISIONER_NAME env value
allowVolumeExpansion: true
parameters:
  archiveOnDelete: "true"  # "false": data is discarded when the PVC is deleted; "true": data is kept
```
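Once this StorageClass exists, any PVC that names it gets a PV provisioned automatically under the NFS export; nothing etcd-specific is required. A minimal sketch of such a claim (`test-claim` is a hypothetical name, for illustration only):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim  # hypothetical name, for illustration only
spec:
  storageClassName: etcdcluster-nfs-storage  # must match the StorageClass above
  accessModes:
    - ReadWriteMany  # NFS-backed volumes can be mounted read-write by many nodes
  resources:
    requests:
      storage: 1Gi
```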
4. etcd yaml
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: etcd-config-map
data:
  ALLOW_NONE_AUTHENTICATION: "yes"  # allow connecting without authentication
  ETCD_LISTEN_PEER_URLS: "http://0.0.0.0:2380"  # URLs to listen on for peer traffic
  ETCD_LISTEN_CLIENT_URLS: "http://0.0.0.0:2379"  # URLs to listen on for client traffic
  ETCD_INITIAL_CLUSTER_TOKEN: "etcd-cluster"  # initial cluster token used during bootstrap
  ETCD_INITIAL_CLUSTER_STATE: new  # initial cluster state
  ETCD_INITIAL_CLUSTER: "etcd-0=http://etcd-0.etcd-hs:2380,etcd-1=http://etcd-1.etcd-hs:2380,etcd-2=http://etcd-2.etcd-hs:2380"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: etcd
spec:
  selector:
    matchLabels:
      app: etcd
  serviceName: "etcd-hs"
  replicas: 3
  template:
    metadata:
      labels:
        app: etcd
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: etcd
          image: bitnami/etcd:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 2379
              name: "client"  # port names must contain a letter; the original placeholder names were invalid
            - containerPort: 2380
              name: "peer"
          envFrom:
            - configMapRef:
                name: etcd-config-map
          env:
            - name: ETCD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ETCD_INITIAL_ADVERTISE_PEER_URLS
              value: "http://$(ETCD_NAME).etcd-hs:2380"
            - name: ETCD_ADVERTISE_CLIENT_URLS
              value: "http://$(ETCD_NAME).etcd-hs:2379"
          volumeMounts:
            - name: etcd-persistent-storage
              mountPath: /bitnami/etcd
  volumeClaimTemplates:
    - metadata:
        name: etcd-persistent-storage
        annotations:
          volume.beta.kubernetes.io/storage-class: "etcdcluster-nfs-storage"
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 1Gi
---
# headless service (provides stable DNS names for the StatefulSet's pods)
apiVersion: v1
kind: Service
metadata:
  name: etcd-hs
spec:
  ports:
    - port: 2380
      name: "peer"
      targetPort: 2380
    - port: 2379
      name: "client"
      targetPort: 2379
  clusterIP: None
  selector:
    app: etcd
```
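The `volume.beta.kubernetes.io/storage-class` annotation used in the `volumeClaimTemplates` is the legacy beta form; on current Kubernetes versions the `storageClassName` field in the claim spec is the supported way to select the class. A sketch of the equivalent template entry, assuming the same claim sizing as above:

```yaml
volumeClaimTemplates:
  - metadata:
      name: etcd-persistent-storage
    spec:
      storageClassName: etcdcluster-nfs-storage  # replaces the beta annotation
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
```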
Summary
This example uses an etcd cluster to show dynamic PV creation through a StorageClass, but there is one pitfall. The example runs three etcd pods; when I deleted one of them, Kubernetes automatically recreated it, yet the recreated pod's PV appeared to stop working and could no longer be read or written. At the time I had to delete the StatefulSet together with its PVCs and PVs, then recreate the StatefulSet, before the cluster worked again. I don't know the exact root cause; my understanding of PVs and PVCs is still incomplete, and it will take more experimentation to pin down.