10-5 Shared Storage --- PV, PVC and StorageClass (Part 2)
Label the nodes gluster-01, gluster-02 and gluster-03:
kubectl label node gluster-01 storagenode=glusterfs
kubectl label node gluster-02 storagenode=glusterfs
kubectl label node gluster-03 storagenode=glusterfs
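To confirm the labels are in place (an extra check, not part of the original notes):
kubectl get nodes -l storagenode=glusterfs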
Create the GlusterFS DaemonSet:
kubectl apply -f glusterfs-daemonset.yaml
Check that the pods have started successfully:
kubectl get pods -o wide
If pod creation is slow here, pull the image on each node first:
docker pull gluster/gluster-centos:latest
Initialize the disks through Heketi
Create heketi-security.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heketi-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: heketi-clusterrole
subjects:
- kind: ServiceAccount
  name: heketi-service-account
  namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: heketi-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/status
  - pods/exec
  verbs:
  - get
  - list
  - watch
  - create   # truncated to "cre" in the original; "create" assumed
Create heketi-deployment.yaml:
kind: Service
apiVersion: v1
metadata:
  name: heketi
  labels:
    glusterfs: heketi-service
    deploy-heketi: support
  annotations:
    description: Exposes Heketi Service
spec:
  selector:
    name: heketi
  ports:
  - name: heketi
    port: 80
    targetPort: 8080
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "30001": default/heketi:80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: heketi
  labels:
    glusterfs: heketi-deployment
  annotations:
    description: Defines how to deploy Heketi
spec:
  replicas: 1
  template:
    metadata:
      name: heketi
      labels:
        name: heketi
        glusterfs: heketi-pod
    spec:
      serviceAccountName: heketi-service-account
      containers:
      - image: heketi/heketi:dev
        imagePullPolicy: Always
        name: heketi
        env:
        - name: HEKETI_EXECUTOR
          value: "kubernetes"
        - name: HEKETI_DB_PATH
          value: "/var/lib/heketi/heketi.db"
        - name: HEKETI_FSTAB
          value: "/var/lib/heketi/fstab"
        - name: HEKETI_SNAPSHOT_LIMIT
          value: "14"
        - name: HEKETI_KUBE_GLUSTER_DAEMONSET
          value: "y"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: db
          mountPath: /var/lib/heketi
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 3
          httpGet:
            path: /hello
            port: 8080
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 30
          httpGet:
            path: /hello
            port: 8080
      volumes:
      - name: db
        hostPath:
          path: "/heketi-data"   # truncated to "/heketi-da" in the original; "/heketi-data" assumed
Apply both manifests:
kubectl apply -f heketi-security.yaml
kubectl apply -f heketi-deployment.yaml
kubectl get pods -o wide
The Heketi pod was scheduled on gluster-01, so check it there:
docker ps |grep heketi
docker exec -it be85ba51835d bash
Inside the container, run:
export HEKETI_CLI_SERVER=http://localhost:8080
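As a quick sanity check (my addition; /hello is the same endpoint the readiness and liveness probes use), Heketi should respond from inside the container:
curl http://localhost:8080/hello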
Edit topology.json as follows, adjusting the IPs and device names for your environment:
{ "clusters": [ { "nodes": [ { "node": { "hostnames": { "manage": [ "gluster-01" ], "storage": [ "192.168.10.165" ] }, "zone": 1 }, "devices": [ { "name": "/dev/sdc", "destroydata": false } ] }, { "node": { "hostnames": { "manage": [ "gluster-02" ], "storage": [ "192.168.10.166" ] }, "zone": 1 }, "devices": [ { "name": "/dev/sdc", "destroydata": false } ] }, { "node": { "hostnames": { "manage": [ "gluster-03" ], "storage": [ "192.168.10.167" ] }, "zone": 1 }, "devices": [ { "name": "/dev/sdc", "destroydata": false } ] } ] } ] }
In the container you just logged into, create a file named topology.json with the contents shown above.
Run the initialization inside the container:
# heketi-cli topology load --json=topology.json
Newer Heketi versions require the username and password when creating the GFS cluster; the corresponding values are configured in heketi.json:
heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' topology load --json=topology.json
heketi-cli --user admin --secret 'My Secret' topology info
This prints the cluster information. The Heketi port (30001, exposed through the ingress-nginx tcp-services ConfigMap) is now up.
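A hedged way to verify the exposed port from outside the cluster, assuming 192.168.10.150 is the ingress-nginx address that the StorageClass resturl points at later:
curl http://192.168.10.150:30001/hello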
Enter one of the glusterd containers and check the cluster status, as sketched below.
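For example (a sketch only; the pod name is a placeholder, since glusterfs-daemonset.yaml is not reproduced in these notes):
kubectl get pods -o wide | grep gluster
kubectl exec -it <glusterfs-pod-name> -- gluster peer status
kubectl exec -it <glusterfs-pod-name> -- gluster volume list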
Create glusterfs-storage-class.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage-class
provisioner: kubernetes.io/glusterfs
parameters:
  # ingress address and the port exposed for Heketi
  resturl: "http://192.168.10.150:30001"
  restauthenabled: "false"
kubectl apply -f glusterfs-storage-class.yaml
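Optionally confirm the StorageClass exists (a check not shown in the original notes):
kubectl get storageclass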
Create a PVC:
glusterfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-pvc
spec:
  storageClassName: glusterfs-storage-class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
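Apply it (the original notes jump straight to the resulting error, but the claim is submitted the same way as the other manifests):
kubectl apply -f glusterfs-pvc.yaml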
Partway through I hit a problem: the PVC stayed in Pending. Checking its status showed an error (the screenshot of the message is not reproduced here). Because this test environment had already been torn down and rebuilt several times, the Gluster IPs had probably changed.
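The Pending reason shows up in the PVC events, for example:
kubectl describe pvc glusterfs-pvc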
Modify glusterfs-storage-class.yaml as follows:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage-class
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.10.150:30001"
  # replace this with your own cluster id
  clusterid: "5565c348a317750e6432c6d27fc2dd30"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
The clusterid is obtained by entering the heketi container and running heketi-cli --user admin --secret 'My Secret' topology info.
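The modified StorageClass also references a Secret named heketi-secret, which these notes never show being created. A minimal sketch, assuming the admin key is the same 'My Secret' used with heketi-cli above (the glusterfs provisioner expects a secret of type kubernetes.io/glusterfs whose key field holds the password):
kubectl create secret generic heketi-secret \
  --type="kubernetes.io/glusterfs" \
  --from-literal=key='My Secret' \
  --namespace=default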
Check the PV and PVC:
kubectl get pv
kubectl get pvc
View the PVC details:
kubectl get pvc glusterfs-pvc -o yaml
Create the application pods via a Deployment:
web-deploy.yaml
# deploy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  selector:
    matchLabels:
      app: web-deploy
  replicas: 2
  template:
    metadata:
      labels:
        app: web-deploy
    spec:
      containers:
      - name: web-deploy
        image: harbor.pdabc.com/kubernetes/springboot-web:v1
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: gluster-volume
          mountPath: "/mooc-data"
          readOnly: false
      volumes:
      - name: gluster-volume
        persistentVolumeClaim:
          claimName: glusterfs-pvc
kubectl apply -f web-deploy.yaml
On gluster-01, enter the container and create a file a under /mooc-data (the mount path from web-deploy.yaml) containing hello.
In the container on gluster-03, the file a exists with the same content, confirming the data is shared; a kubectl-based sketch of this check follows.
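A sketch of that check using kubectl instead of docker exec (the pod names are placeholders taken from kubectl get pods -o wide):
kubectl exec -it <web-deploy-pod-on-gluster-01> -- sh -c 'echo hello > /mooc-data/a'
kubectl exec -it <web-deploy-pod-on-gluster-03> -- cat /mooc-data/a
The second command should print hello, since both pods mount the same replicated Gluster volume.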
The PV and PVC lifecycle
Reclaiming here is a delete operation: dynamically provisioned PVs default to the Delete reclaim policy, so removing the PVC also removes the PV and the backing Gluster volume.
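One way to confirm the policy on the dynamically provisioned volume (my addition; the PV name is a placeholder from kubectl get pv):
kubectl get pv <pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'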