Hands-on with the Officially Released OSM v1.0.0

Overview

OSM 1.0 RC was released in October 2021 [1], and over the past few months OSM's contributors have been hard at work preparing for the v1.0.0 release. On February 1, 2022, the OSM team officially shipped version 1.0.0 [2]. OSM has come a long way since its first release, and the team continues to focus on the key capabilities the community needs. Open Service Mesh (OSM) is a lightweight, extensible service mesh that aims to manage and secure APIs inside a Kubernetes cluster while keeping complexity low. It is built on the Envoy proxy, which it injects as a sidecar container into each observed application; the sidecar in turn performs traffic management, routing policy, metrics capture, and so on.

Microsoft donated Open Service Mesh to the Cloud Native Computing Foundation (CNCF) to ensure it is community-led with open governance; OSM is currently a CNCF Sandbox project.

Version 1.0 already supports running OSM in multi-cluster and hybrid environments. Some of the new features in 1.0:

  • A new internal control-plane event management framework for handling changes to Kubernetes clusters and policies

  • Validation that rejects/ignores invalid SMI TrafficTarget resources

  • Improved control-plane memory utilization; OSM can now autoscale based on memory usage.

  • Support for TCP server-first protocols for in-mesh traffic. appProtocol: tcp-server-first can now be specified on a service port in the mesh, in addition to services specified in Egress policies, reducing latency for protocols such as MySQL and PostgreSQL (see the sketch after this list).

  • The Grafana dashboards that ship with OSM are more accurate and consistent.

  • The OSM control-plane images are now multi-architecture, supporting linux/amd64 and linux/arm64.
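
To illustrate the protocol hint, here is a minimal sketch of a Service port carrying it; this is modeled on the MySQL service deployed later in this demo, but the exact manifest shown here is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: bookwarehouse
spec:
  selector:
    app: mysql
  ports:
  - name: mysql
    port: 3306
    targetPort: 3306
    # Marks this port as a server-first protocol so the Envoy sidecar
    # does not wait for the client to send the first bytes
    appProtocol: tcp-server-first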

The osm CLI has also seen some improvements since the last release:

  • The osm support bug-report command, which collects logs and other information useful for debugging, can now gather logs from OSM's control plane in addition to pods in the mesh.

  • For users managing OSM's lifecycle without Helm, the osm uninstall command now supports optionally cleaning up the CustomResourceDefinitions, webhook configurations, and resources created by the control plane, simplifying uninstalls.

  • The osm version command now shows the version of OSM installed on the cluster as well as the version of the CLI.

See the newly updated documentation site [3] to learn more about features, demos, and architecture.

Notable Features

Compared with Istio, Open Service Mesh really is lightweight. Through SMI it handles all the standard service mesh functionality you would expect, including securing service-to-service communication with mTLS, managing access control policies, service monitoring, and more.

  • Define and enforce fine-grained access control policies for services, based on its Service Mesh Interface (SMI) implementation, chiefly Traffic Access Control, Traffic Specs, Traffic Split, and Traffic Metrics;

  • Secure service-to-service communication by enabling mutual TLS (mTLS);

  • Define and enforce access control policies between services;

  • Observability through Prometheus and Grafana;

  • Integration with external certificate management services;

  • Onboard applications onto the OSM mesh through automatic Envoy sidecar injection;

Hands-on

Here I use Rancher Desktop [4] as my local experimental environment to try it out for myself.

Installation is very simple; per the setup docs [5], just download a pre-built binary from the Releases page and add it to your $PATH.

wget https://github.com/openservicemesh/osm/releases/download/v1.0.0/osm-v1.0.0-windows-amd64.zip -O osm.zip
unzip osm.zip
osm.exe version
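
For Linux or macOS the steps are analogous; a sketch, assuming the release artifacts follow the same naming pattern (osm-v1.0.0-linux-amd64.tar.gz, extracting to ./linux-amd64/osm):

# Download and unpack the CLI, then move it onto the PATH
curl -L https://github.com/openservicemesh/osm/releases/download/v1.0.0/osm-v1.0.0-linux-amd64.tar.gz | tar -xz
sudo mv ./linux-amd64/osm /usr/local/bin/osm
osm version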

The command below shows how to install OSM on a Kubernetes cluster. It enables the Prometheus, Grafana, and Jaeger integrations. The osm.enablePermissiveTrafficPolicy chart parameter in the values.yaml file instructs OSM to ignore any policies and let traffic flow freely between pods. In OSM's permissive traffic policy mode, SMI traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are part of the mesh and programs traffic policy rules on each Envoy proxy sidecar so it can communicate with those services.

osm install --mesh-name "osm-system" --osm-namespace "osm" --set=osm.enablePermissiveTrafficPolicy=true --set=osm.deployPrometheus=true --set=osm.deployGrafana=true --set=osm.deployJaeger=true

After the default installation completes, there are six pods in the osm namespace (we set --osm-namespace "osm" above; osm-system here is the mesh name):

[Screenshot: the six OSM pods in the osm namespace, viewed in Lens]

The screenshot above was taken with Lens (https://k8slens.dev/). Briefly: Lens is a powerful Kubernetes IDE that shows cluster state and streams logs in real time, which makes troubleshooting convenient and substantially speeds up day-to-day work and iteration. Lens can manage multiple clusters; it uses its built-in kubectl to access clusters via kubeconfig, and supports local as well as external clusters (EKS, AKS, GKE, Pharos, UCP, Rancher, etc.), even OpenShift.

  • osm-controller: the OSM controller;

  • osm-grafana: dashboards, which can be opened with the osm dashboard command;

  • osm-prometheus: metrics collection;

  • osm-injector: the sidecar injector;

  • osm-bootstrap: bootstrapping;

  • jaeger: distributed tracing.

Check the OSM controller Deployment, Pod, and Service:

kubectl get deployment,pod,service -n osm --selector app=osm-controller

A healthy OSM controller looks like this:

[Screenshot: osm-controller Deployment, Pod, and Service]

Check the OSM injector Deployment, Pod, and Service:

kubectl get deployment,pod,service -n osm --selector app=osm-injector

A healthy OSM injector looks like this:

[Screenshot: osm-injector Deployment, Pod, and Service]

Check the OSM bootstrap Deployment, Pod, and Service:

kubectl get deployment,pod,service -n osm --selector app=osm-bootstrap

[Screenshot: osm-bootstrap Deployment, Pod, and Service]

Check the validating and mutating webhooks:

kubectl get ValidatingWebhookConfiguration --selector app=osm-controller

A healthy OSM validating webhook looks like this:

[Screenshot: OSM ValidatingWebhookConfiguration]

Check the validating webhook's service and CA bundle:

kubectl get ValidatingWebhookConfiguration osm-validator-mesh-osm-system -o json | jq '.webhooks[0].clientConfig.service'

A correctly configured validating webhook looks like this:

{
  "name": "osm-validator",
  "namespace": "osm",
  "path": "/validate",
  "port": 9093
}
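
The heading above also mentions the mutating webhook (the one that injects sidecars). A sketch of the analogous check, assuming the injector's mutating webhook configuration carries the app=osm-injector label:

kubectl get MutatingWebhookConfiguration --selector app=osm-injector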

Check the osm-mesh-config resource

Check that the MeshConfig resource exists: kubectl get meshconfig osm-mesh-config -n osm

Check the contents of the OSM MeshConfig:

kubectl get meshconfig osm-mesh-config -n osm -o yaml

PS C:\Users\zsygz> kubectl get meshconfig osm-mesh-config -n osm -o yaml
apiVersion: config.openservicemesh.io/v1alpha1
kind: MeshConfig
metadata:
   creationTimestamp: "2022-02-03T07:47:42Z"
   generation: 1
   name: osm-mesh-config
   namespace: osm
   resourceVersion: "230958"
   uid: 2701cf39-02dd-4d8d-b920-30120f52dc66
spec:
   certificate:
     certKeyBitSize: 2048
     serviceCertValidityDuration: 24h
   featureFlags:
     enableAsyncProxyServiceMapping: false
     enableEgressPolicy: true
     enableEnvoyActiveHealthChecks: false
     enableIngressBackendPolicy: true
     enableMulticlusterMode: false
     enableRetryPolicy: false
     enableSnapshotCacheMode: false
     enableWASMStats: true
   observability:
     enableDebugServer: false
     osmLogLevel: info
     tracing:
       enable: false
   sidecar:
     configResyncInterval: 0s
     enablePrivilegedInitContainer: false
     logLevel: error
     resources: {}
   traffic:
     enableEgress: false
     enablePermissiveTrafficPolicyMode: true
     inboundExternalAuthorization:
       enable: false
       failureModeAllow: false
       statPrefix: inboundExtAuthz
       timeout: 1s
     inboundPortExclusionList: []
     outboundIPRangeExclusionList: []
     outboundPortExclusionList: []

Along with a whole series of CRDs (note that most of these come from Rancher, Traefik, cert-manager, and Dapr already on this cluster; the openservicemesh.io and smi-spec.io ones at the end belong to OSM):

PS C:\Users\zsygz> kubectl get crds
NAME                                                            CREATED AT
addons.k3s.cattle.io                                            2022-01-03T02:00:57Z
helmcharts.helm.cattle.io                                       2022-01-03T02:00:57Z
helmchartconfigs.helm.cattle.io                                 2022-01-03T02:00:57Z
middlewaretcps.traefik.containo.us                              2022-01-03T02:03:26Z
ingressrouteudps.traefik.containo.us                            2022-01-03T02:03:26Z
tlsstores.traefik.containo.us                                   2022-01-03T02:03:26Z
serverstransports.traefik.containo.us                           2022-01-03T02:03:26Z
traefikservices.traefik.containo.us                             2022-01-03T02:03:26Z
ingressroutetcps.traefik.containo.us                            2022-01-03T02:03:26Z
middlewares.traefik.containo.us                                 2022-01-03T02:03:26Z
tlsoptions.traefik.containo.us                                  2022-01-03T02:03:26Z
ingressroutes.traefik.containo.us                               2022-01-03T02:03:26Z
challenges.acme.cert-manager.io                                 2022-01-03T10:05:42Z
certificaterequests.cert-manager.io                             2022-01-03T10:05:42Z
clusterissuers.cert-manager.io                                  2022-01-03T10:05:42Z
issuers.cert-manager.io                                         2022-01-03T10:05:42Z
orders.acme.cert-manager.io                                     2022-01-03T10:05:42Z
certificates.cert-manager.io                                    2022-01-03T10:05:42Z
features.management.cattle.io                                   2022-01-03T11:35:16Z
navlinks.ui.cattle.io                                           2022-01-03T11:35:19Z
clusters.management.cattle.io                                   2022-01-03T11:35:20Z
apiservices.management.cattle.io                                2022-01-03T11:35:20Z
clusterregistrationtokens.management.cattle.io                  2022-01-03T11:35:20Z
settings.management.cattle.io                                   2022-01-03T11:35:20Z
preferences.management.cattle.io                                2022-01-03T11:35:20Z
clusterrepos.catalog.cattle.io                                  2022-01-03T11:35:20Z
operations.catalog.cattle.io                                    2022-01-03T11:35:20Z
apps.catalog.cattle.io                                          2022-01-03T11:35:20Z
fleetworkspaces.management.cattle.io                            2022-01-03T11:35:20Z
managedcharts.management.cattle.io                              2022-01-03T11:35:20Z
clusters.provisioning.cattle.io                                 2022-01-03T11:35:21Z
rkeclusters.rke.cattle.io                                       2022-01-03T11:35:21Z
rkecontrolplanes.rke.cattle.io                                  2022-01-03T11:35:21Z
rkebootstraps.rke.cattle.io                                     2022-01-03T11:35:21Z
rkebootstraptemplates.rke.cattle.io                             2022-01-03T11:35:21Z
custommachines.rke.cattle.io                                    2022-01-03T11:35:21Z
clusters.cluster.x-k8s.io                                       2022-01-03T11:35:21Z
machinedeployments.cluster.x-k8s.io                             2022-01-03T11:35:21Z
machinehealthchecks.cluster.x-k8s.io                            2022-01-03T11:35:21Z
machines.cluster.x-k8s.io                                       2022-01-03T11:35:22Z
machinesets.cluster.x-k8s.io                                    2022-01-03T11:35:22Z
authconfigs.management.cattle.io                                2022-01-03T11:35:22Z
groupmembers.management.cattle.io                               2022-01-03T11:35:22Z
groups.management.cattle.io                                     2022-01-03T11:35:22Z
tokens.management.cattle.io                                     2022-01-03T11:35:22Z
userattributes.management.cattle.io                             2022-01-03T11:35:22Z
users.management.cattle.io                                      2022-01-03T11:35:22Z
catalogs.management.cattle.io                                   2022-01-03T11:35:23Z
clusterroletemplatebindings.management.cattle.io                2022-01-03T11:35:23Z
catalogtemplates.management.cattle.io                           2022-01-03T11:35:23Z
dynamicschemas.management.cattle.io                             2022-01-03T11:35:23Z
catalogtemplateversions.management.cattle.io                    2022-01-03T11:35:23Z
etcdbackups.management.cattle.io                                2022-01-03T11:35:23Z
clusteralerts.management.cattle.io                              2022-01-03T11:35:23Z
globalrolebindings.management.cattle.io                         2022-01-03T11:35:23Z
clusteralertgroups.management.cattle.io                         2022-01-03T11:35:23Z
clustercatalogs.management.cattle.io                            2022-01-03T11:35:23Z
globalroles.management.cattle.io                                2022-01-03T11:35:23Z
clusterloggings.management.cattle.io                            2022-01-03T11:35:23Z
kontainerdrivers.management.cattle.io                           2022-01-03T11:35:23Z
clusteralertrules.management.cattle.io                          2022-01-03T11:35:23Z
apps.project.cattle.io                                          2022-01-03T11:35:23Z
nodedrivers.management.cattle.io                                2022-01-03T11:35:23Z
clustermonitorgraphs.management.cattle.io                       2022-01-03T11:35:23Z
clusterscans.management.cattle.io                               2022-01-03T11:35:23Z
apprevisions.project.cattle.io                                  2022-01-03T11:35:23Z
pipelineexecutions.project.cattle.io                            2022-01-03T11:35:23Z
nodepools.management.cattle.io                                  2022-01-03T11:35:23Z
nodetemplates.management.cattle.io                              2022-01-03T11:35:23Z
pipelinesettings.project.cattle.io                              2022-01-03T11:35:23Z
composeconfigs.management.cattle.io                             2022-01-03T11:35:23Z
nodes.management.cattle.io                                      2022-01-03T11:35:23Z
podsecuritypolicytemplateprojectbindings.management.cattle.io   2022-01-03T11:35:24Z
multiclusterapps.management.cattle.io                           2022-01-03T11:35:24Z
pipelines.project.cattle.io                                     2022-01-03T11:35:23Z
podsecuritypolicytemplates.management.cattle.io                 2022-01-03T11:35:24Z
sourcecodecredentials.project.cattle.io                         2022-01-03T11:35:24Z
multiclusterapprevisions.management.cattle.io                   2022-01-03T11:35:24Z
projectnetworkpolicies.management.cattle.io                     2022-01-03T11:35:24Z
sourcecodeproviderconfigs.project.cattle.io                     2022-01-03T11:35:24Z
monitormetrics.management.cattle.io                             2022-01-03T11:35:24Z
sourcecoderepositories.project.cattle.io                        2022-01-03T11:35:24Z
notifiers.management.cattle.io                                  2022-01-03T11:35:24Z
projectroletemplatebindings.management.cattle.io                2022-01-03T11:35:24Z
projects.management.cattle.io                                   2022-01-03T11:35:24Z
projectalerts.management.cattle.io                              2022-01-03T11:35:24Z
projectalertgroups.management.cattle.io                         2022-01-03T11:35:24Z
rkek8ssystemimages.management.cattle.io                         2022-01-03T11:35:24Z
projectcatalogs.management.cattle.io                            2022-01-03T11:35:24Z
projectloggings.management.cattle.io                            2022-01-03T11:35:24Z
rkek8sserviceoptions.management.cattle.io                       2022-01-03T11:35:24Z
projectalertrules.management.cattle.io                          2022-01-03T11:35:24Z
rkeaddons.management.cattle.io                                  2022-01-03T11:35:24Z
roletemplates.management.cattle.io                              2022-01-03T11:35:24Z
projectmonitorgraphs.management.cattle.io                       2022-01-03T11:35:24Z
samltokens.management.cattle.io                                 2022-01-03T11:35:24Z
clustertemplates.management.cattle.io                           2022-01-03T11:35:24Z
clustertemplaterevisions.management.cattle.io                   2022-01-03T11:35:24Z
cisconfigs.management.cattle.io                                 2022-01-03T11:35:24Z
cisbenchmarkversions.management.cattle.io                       2022-01-03T11:35:24Z
templates.management.cattle.io                                  2022-01-03T11:35:24Z
templateversions.management.cattle.io                           2022-01-03T11:35:24Z
templatecontents.management.cattle.io                           2022-01-03T11:35:24Z
globaldnses.management.cattle.io                                2022-01-03T11:35:24Z
globaldnsproviders.management.cattle.io                         2022-01-03T11:35:24Z
prometheuses.monitoring.coreos.com                              2022-01-03T11:35:29Z
prometheusrules.monitoring.coreos.com                           2022-01-03T11:35:29Z
alertmanagers.monitoring.coreos.com                             2022-01-03T11:35:29Z
servicemonitors.monitoring.coreos.com                           2022-01-03T11:35:29Z
azureconfigs.rke-machine-config.cattle.io                       2022-01-03T11:35:32Z
vmwarevsphereconfigs.rke-machine-config.cattle.io               2022-01-03T11:35:32Z
digitaloceanconfigs.rke-machine-config.cattle.io                2022-01-03T11:35:32Z
harvesterconfigs.rke-machine-config.cattle.io                   2022-01-03T11:35:32Z
linodeconfigs.rke-machine-config.cattle.io                      2022-01-03T11:35:32Z
amazonec2configs.rke-machine-config.cattle.io                   2022-01-03T11:35:32Z
digitaloceanmachines.rke-machine.cattle.io                      2022-01-03T11:35:32Z
azuremachines.rke-machine.cattle.io                             2022-01-03T11:35:32Z
linodemachines.rke-machine.cattle.io                            2022-01-03T11:35:32Z
vmwarevspheremachines.rke-machine.cattle.io                     2022-01-03T11:35:32Z
harvestermachines.rke-machine.cattle.io                         2022-01-03T11:35:32Z
amazonec2machines.rke-machine.cattle.io                         2022-01-03T11:35:32Z
digitaloceanmachinetemplates.rke-machine.cattle.io              2022-01-03T11:35:32Z
azuremachinetemplates.rke-machine.cattle.io                     2022-01-03T11:35:32Z
linodemachinetemplates.rke-machine.cattle.io                    2022-01-03T11:35:32Z
amazonec2machinetemplates.rke-machine.cattle.io                 2022-01-03T11:35:32Z
vmwarevspheremachinetemplates.rke-machine.cattle.io             2022-01-03T11:35:32Z
harvestermachinetemplates.rke-machine.cattle.io                 2022-01-03T11:35:32Z
bundles.fleet.cattle.io                                         2022-01-03T11:35:20Z
bundledeployments.fleet.cattle.io                               2022-01-03T11:36:37Z
bundlenamespacemappings.fleet.cattle.io                         2022-01-03T11:36:37Z
clustergroups.fleet.cattle.io                                   2022-01-03T11:36:37Z
clusters.fleet.cattle.io                                        2022-01-03T11:35:20Z
clusterregistrationtokens.fleet.cattle.io                       2022-01-03T11:36:37Z
gitrepos.fleet.cattle.io                                        2022-01-03T11:36:37Z
clusterregistrations.fleet.cattle.io                            2022-01-03T11:36:37Z
gitreporestrictions.fleet.cattle.io                             2022-01-03T11:36:37Z
contents.fleet.cattle.io                                        2022-01-03T11:36:37Z
imagescans.fleet.cattle.io                                      2022-01-03T11:36:37Z
gitjobs.gitjob.cattle.io                                        2022-01-03T11:36:37Z
components.dapr.io                                              2022-01-07T10:13:43Z
configurations.dapr.io                                          2022-01-07T10:13:44Z
subscriptions.dapr.io                                           2022-01-07T10:13:45Z
meshconfigs.config.openservicemesh.io                           2022-02-03T07:46:15Z
multiclusterservices.config.openservicemesh.io                  2022-02-03T07:46:15Z
egresses.policy.openservicemesh.io                              2022-02-03T07:46:15Z
trafficsplits.split.smi-spec.io                                 2022-02-03T07:46:15Z
tcproutes.specs.smi-spec.io                                     2022-02-03T07:46:15Z
ingressbackends.policy.openservicemesh.io                       2022-02-03T07:46:15Z
traffictargets.access.smi-spec.io                               2022-02-03T07:46:15Z
httproutegroups.specs.smi-spec.io                               2022-02-03T07:46:15Z

Use the following command to get the installed SMI CRD versions:

PS C:\Users\zsygz> osm mesh list

MESH NAME    MESH NAMESPACE   VERSION   ADDED NAMESPACES
osm-system   osm              v1.0.0

MESH NAME    MESH NAMESPACE   SMI SUPPORTED
osm-system   osm              HTTPRouteGroup:v1alpha4,TCPRoute:v1alpha4,TrafficSplit:v1alpha2,TrafficTarget:v1alpha3

To list the OSM controller pods for a mesh, please run the following command passing in the mesh's namespace
         kubectl get pods -n <osm-mesh-namespace> -l app=osm-controller

In Practice

Now let's deploy an application and test what OSM means by "observable": you choose which applications (namespaces) fall under OSM's management, and OSM monitors those applications without affecting anything else!

  • Create the namespaces for the experiment and bring them under management with osm namespace add:

kubectl create namespace bookstore
kubectl create namespace bookbuyer
kubectl create namespace bookthief
kubectl create namespace bookwarehouse

osm namespace add bookstore --mesh-name=osm-system
osm namespace add bookbuyer --mesh-name=osm-system
osm namespace add bookthief --mesh-name=osm-system
osm namespace add bookwarehouse --mesh-name=osm-system

osm metrics enable --namespace bookstore
osm metrics enable --namespace bookbuyer
osm metrics enable --namespace bookthief
osm metrics enable --namespace bookwarehouse

Each of the four namespaces is now labeled with openservicemesh.io/monitored-by: osm-system (the mesh name) and annotated with openservicemesh.io/sidecar-injection: enabled. The OSM controller notices these labels and annotations on the namespaces and starts injecting all pods with Envoy sidecars. You can verify this as sketched below.
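
A quick sketch of how to verify this on one of the namespaces (plain kubectl; the label and annotation keys are the ones named above):

# Show the labels OSM added to the namespace
kubectl get namespace bookbuyer --show-labels

# Dump the annotations, which should include openservicemesh.io/sidecar-injection: enabled
kubectl get namespace bookbuyer -o jsonpath='{.metadata.annotations}'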

  • Deploy the demo applications

PS C:\Users\zsygz> kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.0/manifests/apps/bookbuyer.yaml
serviceaccount/bookbuyer created
deployment.apps/bookbuyer created
PS C:\Users\zsygz> kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.0/manifests/apps/bookthief.yaml
serviceaccount/bookthief created
deployment.apps/bookthief created
PS C:\Users\zsygz> kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.0/manifests/apps/bookstore.yaml
service/bookstore created
serviceaccount/bookstore created
deployment.apps/bookstore created
PS C:\Users\zsygz> kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.0/manifests/apps/bookwarehouse.yaml
serviceaccount/bookwarehouse created
service/bookwarehouse created
deployment.apps/bookwarehouse created
PS C:\Users\zsygz> kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.0/manifests/apps/mysql.yaml
serviceaccount/mysql created
service/mysql created
statefulset.apps/mysql created

PS C:\Users\zsygz> kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.0/manifests/apps/bookstore-v2.yaml
service/bookstore-v2 created
serviceaccount/bookstore-v2 created
deployment.apps/bookstore-v2 created
traffictarget.access.smi-spec.io/bookstore-v2 created

Check the installed resources with the following commands:

kubectl get pods,deployments,serviceaccounts -n bookbuyer
kubectl get pods,deployments,serviceaccounts -n bookthief

kubectl get pods,deployments,serviceaccounts,services,endpoints -n bookstore
kubectl get pods,deployments,serviceaccounts,services,endpoints -n bookwarehouse

[Screenshot: resources in the bookbuyer, bookthief, bookstore, and bookwarehouse namespaces]

The demo creates a Kubernetes ServiceAccount for each application. The ServiceAccount serves as the application's identity and will be used later in the demo to create service-to-service access control policies.

  • Local access

You can reach the applications we just deployed locally via kubectl port-forward, as sketched below; alternatively, Rancher Desktop can set up the forwarding for us:
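
A minimal sketch of the port-forward approach, assuming the demo UIs listen on port 14001 as in the OSM getting-started docs:

# Forward local port 8080 to the bookbuyer pod's UI port
kubectl port-forward -n bookbuyer deploy/bookbuyer 8080:14001

# Then browse to http://localhost:8080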

[Screenshot: port forwarding set up in Rancher Desktop]

Visit http://localhost:62300/ (the local port Rancher Desktop assigned for the forward) to see the sample app. For example:

[Screenshot: the bookbuyer demo UI]

Running osm dashboard --osm-namespace=osm launches the local browser directly and port-forwards Grafana:

PS C:\Users\zsygz> osm dashboard --osm-namespace=osm
[+] Starting Dashboard forwarding
[+] Issuing open browser http://localhost:3000

The default Grafana username and password are admin/admin.

[Screenshot: OSM Grafana dashboard]

  • Access control policies

Once the applications are up and running, they can interact with each other using either permissive traffic policy mode or SMI traffic policy mode. In permissive mode, traffic between application services is configured automatically by osm-controller, and the access control policies defined by SMI TrafficTargets are not enforced. In SMI policy mode, all traffic is denied by default unless explicitly allowed through a combination of SMI access and routing policies.

When we installed OSM earlier, the --set=osm.enablePermissiveTrafficPolicy=true flag selected permissive traffic policy mode, which lets applications connect to each other without SMI traffic access policies.

kubectl edit meshconfig osm-mesh-config -n osm

Set spec.traffic.enablePermissiveTrafficPolicyMode to false and save, which disables permissive mode and enables SMI traffic policies; or patch it directly as sketched below.
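
The same change as a one-liner, merge-patching the MeshConfig as the OSM docs do:

kubectl patch meshconfig osm-mesh-config -n osm -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":false}}}' --type=merge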

SMI traffic policies can be used for the following:

  1. SMI access control policies, to authorize traffic between service identities

  2. SMI traffic spec policies, to define routing rules to associate with access control policies

  3. SMI traffic split policies, to direct client traffic across multiple backends based on weights (see the sketch after this list)
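
As an illustration of point 3, here is a sketch of a TrafficSplit using the v1alpha2 schema reported by osm mesh list above; the service names follow the bookstore demo, and the 50/50 weights are an arbitrary choice:

apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: bookstore-split
  namespace: bookstore
spec:
  # Root (apex) service that clients address, as <service>.<namespace>
  service: bookstore.bookstore
  backends:
  # Traffic to the root service is split across these backends by weight
  - service: bookstore
    weight: 50
  - service: bookstore-v2
    weight: 50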

Now let's deploy the SMI TrafficTarget and HTTPRouteGroup policies:

kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/release-v1.0/manifests/access/traffic-access-v1.yaml

kind: TrafficTarget
apiVersion: access.smi-spec.io/v1alpha3
metadata:
  name: bookstore
  namespace: bookstore
spec:
  destination:
    kind: ServiceAccount
    name: bookstore
    namespace: bookstore
  rules:
  - kind: HTTPRouteGroup
    name: bookstore-service-routes
    matches:
    - buy-a-book
    - books-bought
  sources:
  - kind: ServiceAccount
    name: bookbuyer
    namespace: bookbuyer
---
apiVersion: specs.smi-spec.io/v1alpha4
kind: HTTPRouteGroup
metadata:
  name: bookstore-service-routes
  namespace: bookstore
spec:
  matches:
  - name: books-bought
    pathRegex: /books-bought
    methods:
    - GET
    headers:
    - "user-agent": ".*-http-client/*.*"
    - "client-app": "bookbuyer"
  - name: buy-a-book
    pathRegex: ".*a-book.*new"
    methods:
    - GET

This defines two SMI resources, a TrafficTarget and an HTTPRouteGroup, which control inbound traffic; once applied, the bookbuyer service account is allowed to access the corresponding bookstore routes.
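
To confirm the policies landed, you can list them with plain kubectl (using the CRD plural names from the list above):

kubectl get traffictargets,httproutegroups -n bookstore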

Cleanup

List all namespaces in the mesh:

osm ns list --mesh-name=osm-system

Remove the namespaces from the mesh:

osm namespace remove bookbuyer --mesh-name=osm-system
osm namespace remove bookstore --mesh-name=osm-system
osm namespace remove bookthief --mesh-name=osm-system
osm namespace remove bookwarehouse --mesh-name=osm-system

Restart the deployments to remove the Envoy sidecars:

kubectl rollout restart deployment bookbuyer -n bookbuyer
kubectl rollout restart deployment bookstore -n bookstore
kubectl rollout restart deployment bookthief -n bookthief
kubectl rollout restart deployment bookwarehouse -n bookwarehouse

Uninstall OSM from the Kubernetes cluster:

osm uninstall mesh --mesh-name=osm-system --osm-namespace=osm

Summary

Open Service Mesh really is comparatively lightweight. The access control, traffic splitting, and other features you need are driven by SMI resources you create yourself, and Dapr plus OSM make an excellent combination for practicing a multi-runtime architecture.

[Image: Dapr + OSM]

References

[1] First release candidate: https://github.com/openservicemesh/osm/releases/tag/v1.0.0-rc.1

[2] First 1.0 GA release: https://github.com/openservicemesh/osm/releases/tag/v1.0.0

[3] Documentation site: https://docs.openservicemesh.io/

[4] Running K8s on the desktop with Rancher Desktop: https://www.cnblogs.com/shanyou/p/15759035.html

[5] Setting up OSM: https://release-v1-0.docs.openservicemesh.io/docs/getting_started/setup_osm/
