How To Set Up ReadWriteMany (RWX) Persistent Volumes with NFS on DigitalOcean Kubernetes
Introduction
With the distributed and dynamic nature of containers, managing and configuring storage statically has become a difficult problem on Kubernetes, with workloads now being able to move from one Virtual Machine (VM) to another in a matter of seconds. To address this, Kubernetes manages volumes with a system of Persistent Volumes (PV), API objects that represent a storage configuration/volume, and PersistentVolumeClaims (PVC), a request for storage to be satisfied by a Persistent Volume. Additionally, Container Storage Interface (CSI) drivers can help automate and manage the handling and provisioning of storage for containerized workloads. These drivers are responsible for provisioning, mounting, unmounting, removing, and snapshotting volumes.
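To make the PV/PVC relationship concrete, the following is a minimal sketch of a PersistentVolumeClaim; the claim name and requested size are hypothetical, and do-block-storage is the default storage class on DigitalOcean Kubernetes:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim         # hypothetical name, for illustration only
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 1Gi            # illustrative size

When a claim like this is submitted, the CSI driver provisions a matching volume and binds it to the claim, so the workload never has to reference a specific disk.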
The digitalocean-csi integrates a Kubernetes cluster with the DigitalOcean Block Storage product. A developer can use this to dynamically provision Block Storage volumes for containerized applications in Kubernetes. However, applications can sometimes require data to be persisted and shared across multiple Droplets. DigitalOcean’s default Block Storage CSI solution is unable to support mounting one block storage volume to many Droplets simultaneously. This means that this is a ReadWriteOnce (RWO) solution, since the volume is confined to one node. The Network File System (NFS) protocol, on the other hand, does support exporting the same share to many consumers. This is called ReadWriteMany (RWX), because many nodes can mount the volume as read-write. We can therefore use an NFS server within our cluster to provide storage that can leverage the reliable backing of DigitalOcean Block Storage with the flexibility of NFS shares.
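The distinction is expressed in the accessModes field of a claim or volume. The fragments below are illustrative only and are not complete manifests:

# Backed by a single block storage volume: only one node may mount it read-write.
accessModes:
  - ReadWriteOnce

# Backed by an NFS export: many nodes may mount it read-write at the same time.
accessModes:
  - ReadWriteMany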
In this tutorial, you will configure dynamic provisioning for NFS volumes within a DigitalOcean Kubernetes (DOKS) cluster in which the exports are stored on DigitalOcean Block Storage volumes. You will then deploy multiple instances of a demo Nginx application and test the data sharing between each instance.
Note: The deployment of nfs-server described in this tutorial is not highly available, and therefore is not recommended for use in production. Instead, the setup described is meant as a lighter-weight option for development, or for testing ReadWriteMany (RWX) Persistent Volumes for educational purposes.
Prerequisites
Before you begin this guide you’ll need the following:
- The kubectl command-line interface installed on your local machine. You can read more about installing and configuring kubectl in its official documentation.
- A DigitalOcean Kubernetes cluster with your connection configured as the kubectl default. To create a Kubernetes cluster on DigitalOcean, see our Kubernetes Quickstart. Instructions on how to configure kubectl are shown under the Connect to your Cluster step when you create your cluster.
- The Helm package manager installed on your local machine, and Tiller installed on your cluster. To do this, complete Steps 1 and 2 of the How To Install Software on Kubernetes Clusters with the Helm Package Manager tutorial.
Note: Starting with Helm version 3.0, Tiller no longer needs to be installed for Helm to work. If you are using the latest version of Helm, see the Helm installation documentation for instructions.
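If you are unsure which Helm version you are running, you can check before continuing; the commands in this tutorial use the Helm 3 syntax (for example, helm install release-name chart-name):

- helm version --short

Output beginning with v3 indicates Helm 3, in which case you can skip the Tiller setup.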
Step 1 — Deploying the NFS Server with Helm
To deploy the NFS server, you will use a Helm chart. Deploying a Helm chart is an automated solution that is faster and less error-prone than creating the NFS server deployment by hand.
First, make sure that the default chart repository stable is available to you by adding the repo:
- helm repo add stable https://kubernetes-charts.storage.googleapis.com/
Next, pull the metadata for the repository you just added. This will ensure that the Helm client is updated:
- helm repo update
To verify access to the stable repo, perform a search on the charts:
- helm search repo stable
This will give you list of available charts, similar to the following:
Output
NAME                            CHART VERSION   APP VERSION   DESCRIPTION
stable/acs-engine-autoscaler    2.2.2           2.1.1         DEPRECATED Scales worker nodes within agent pools
stable/aerospike                0.3.2           v4.5.0.5      A Helm chart for Aerospike in Kubernetes
stable/airflow                  5.2.4           1.10.4        Airflow is a platform to programmatically autho...
stable/ambassador               5.3.0           0.86.1        A Helm chart for Datawire Ambassador
...
This result means that your Helm client is running and up-to-date.
Now that you have Helm set up, install the nfs-server-provisioner Helm chart to set up the NFS server. If you would like to examine the contents of the chart, take a look at its documentation on GitHub.
When you deploy the Helm chart, you are going to set a few variables for your NFS server to further specify the configuration for your application. You can also investigate other configuration options and tweak them to fit the application’s needs.
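For example, rather than passing each option with --set on the command line, you could collect the same settings in a values file and pass it to Helm with the -f flag. This is only a sketch of the three options used in this tutorial; see the chart documentation for the full list of values:

# values.yaml - equivalent to the --set flags used in the install command below
persistence:
  enabled: true                   # back the NFS server with a PersistentVolume
  storageClass: do-block-storage  # provision that volume from DigitalOcean Block Storage
  size: 200Gi                     # total capacity available to split into NFS exports

You would then run helm install nfs-server stable/nfs-server-provisioner -f values.yaml instead of the command shown next.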
To install the Helm chart, use the following command:
- helm install nfs-server stable/nfs-server-provisioner --set persistence.enabled=true,persistence.storageClass=do-block-storage,persistence.size=200Gi
This command provisions an NFS server with the following configuration options:
- Adds a persistent volume for the NFS server with the --set flag. This ensures that all NFS shared data persists across pod restarts.
- For the persistent storage, uses the do-block-storage storage class.
- Provisions a total of 200Gi for the NFS server to be able to split into exports.
Note: The persistence.size option will determine the total capacity of all the NFS volumes you can provision. At the time of this publication, only DOKS version 1.16.2-do.3 and later support volume expanding, so resizing this volume will be a manual task if you are on an earlier version. If this is the case, make sure to set this size with your future needs in mind.
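If you are unsure whether your cluster supports expansion, you can inspect the do-block-storage storage class and look for the allowVolumeExpansion field:

- kubectl get storageclass do-block-storage -o yaml

If the output includes allowVolumeExpansion: true, the volume backing the NFS server can later be resized by editing its PersistentVolumeClaim; otherwise, plan the 200Gi figure with future growth in mind.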
After this command completes, you will get output similar to the following:
Output
NAME: nfs-server
LAST DEPLOYED: Thu Feb 13 19:30:07 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The NFS Provisioner service has now been installed.
A storage class named 'nfs' has now been created
and is available to provision dynamic volumes.
You can use this storageclass by creating a PersistentVolumeClaim with the
correct storageClassName attribute. For example:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-dynamic-volume-claim
spec:
  storageClassName: "nfs"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
To see the NFS server you provisioned, run the following command:
- kubectl get pods
This will show the following:
Output
NAME                                  READY   STATUS    RESTARTS   AGE
nfs-server-nfs-server-provisioner-0   1/1     Running   0          11m
Next, check for the storageclass you created:
- kubectl get storageclass
This will give output similar to the following:
Output
NAME                         PROVISIONER                                        AGE
do-block-storage (default)   dobs.csi.digitalocean.com                          90m
nfs                          cluster.local/nfs-server-nfs-server-provisioner    3m
You now have an NFS server running, as well as a storageclass that you can use for dynamic provisioning of volumes. Next, you can create a deployment that will use this storage and share it across multiple instances.
Step 2 — Deploying an Application Using a Shared PersistentVolumeClaim
In this step, you will create an example deployment on your DOKS cluster in order to test your storage setup. This will be an Nginx web server app named web.
To deploy this application, first write the YAML file to specify the deployment. Open up an nginx-test.yaml file with your text editor; this tutorial will use nano:
- nano nginx-test.yaml
In this file, add the following lines to define the deployment with a PersistentVolumeClaim named nfs-data:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        resources: {}
        volumeMounts:
        - mountPath: /data
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfs-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs
Save the file and exit the text editor.
This deployment is configured to use the accompanying PersistentVolumeClaim nfs-data and mount it at /data.
In the PVC definition, you will find that the storageClassName is set to nfs. This tells the cluster to satisfy this storage using the rules of the nfs storageClass you created in the previous step. The new PersistentVolumeClaim will be processed, and then an NFS share will be provisioned to satisfy the claim in the form of a Persistent Volume. The pod will attempt to mount that PVC once it has been provisioned. Once it has finished mounting, you will verify the ReadWriteMany (RWX) functionality.
Run the deployment with the following command:
- kubectl apply -f nginx-test.yaml
This will give the following output:
Output
deployment.apps/web created
persistentvolumeclaim/nfs-data created
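Optionally, before checking the pods, you can confirm that the claim was bound to a newly provisioned NFS-backed volume (the volume name in your output will differ):

- kubectl get pvc nfs-data

A STATUS of Bound means the nfs storage class satisfied the claim.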
Next, check to see the web pod spinning up:
- kubectl get pods
This will output the following:
Output
NAME                                  READY   STATUS    RESTARTS   AGE
nfs-server-nfs-server-provisioner-0   1/1     Running   0          23m
web-64965fc79f-b5v7w                  1/1     Running   0          4m
Now that the example deployment is up and running, you can scale it out to three instances using the kubectl scale command:
- kubectl scale deployment web --replicas=3
This will give the output:
Output
deployment.extensions/web scaled
Now run the kubectl get command again:
- kubectl get pods
You will find the scaled-up instances of the deployment:
Output
NAME                                  READY   STATUS    RESTARTS   AGE
nfs-server-nfs-server-provisioner-0   1/1     Running   0          24m
web-64965fc79f-q9626                  1/1     Running   0          5m
web-64965fc79f-qgd2w                  1/1     Running   0          17s
web-64965fc79f-wcjxv                  1/1     Running   0          17s
You now have three instances of your Nginx deployment that are connected to the same Persistent Volume. In the next step, you will make sure that they can share data between each other.
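As an optional sanity check, you can also confirm the replica count at the deployment level rather than pod by pod:

- kubectl get deployment web

The READY column should report 3/3 once all of the replicas are running.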
Step 3 — Validating NFS Data Sharing
For the final step, you will validate that the data is shared across all the instances that are mounted to the NFS share. To do this, you will create a file under the /data directory in one of the pods, then verify that the file exists in another pod’s /data directory.
To validate this, you will use the kubectl exec command. This command lets you specify a pod and perform a command inside that pod. To learn more about inspecting resources using kubectl, take a look at our kubectl Cheat Sheet.
To create a file named hello_world within one of your web pods, use kubectl exec to pass along the touch command. Note that the number after web in the pod name will be different for you, so make sure to replace the highlighted pod name with one of your own pods that you found as the output of kubectl get pods in the last step.
- kubectl exec web-64965fc79f-q9626 -- touch /data/hello_world
Next, change the name of the pod and use the ls command to list the files in the /data directory of a different pod:
- kubectl exec web-64965fc79f-qgd2w -- ls /data
Your output will show the file you created within the first pod:
Output
hello_world
This shows that all the pods share data using NFS and that your setup is working properly.
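If you want to clean up the test resources afterwards, you can optionally delete the demo deployment and claim, and then remove the Helm release. Note that on Helm 2 the equivalent of helm uninstall is helm delete, and that deleting the claims discards the data stored on them:

- kubectl delete -f nginx-test.yaml
- helm uninstall nfs-server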
Conclusion
In this tutorial, you created an NFS server that was backed by DigitalOcean Block Storage. The NFS server then used that block storage to provision and export NFS shares to workloads in an RWX-compatible protocol. In doing this, you were able to get around a technical limitation of DigitalOcean Block Storage and share the same PVC data across many pods. Your DOKS cluster is now set up to accommodate a much wider set of deployment use cases.
If you’d like to learn more about Kubernetes, check out our Kubernetes for Full-Stack Developers curriculum, or look through the product documentation for DigitalOcean Kubernetes.
Source: https://www.digitalocean.com/community/tutorials/how-to-set-up-readwritemany-rwx-persistent-volumes-with-nfs-on-digitalocean-kubernetes