Building a Cloud Computing Platform Based on OpenStack
Contents
- 1. Hardware requirements (all nodes)
- 2. Install the base environment & services (controller node)
- 3. Install the keystone identity service (controller node)
- 4. Install the glance image service (controller node)
- 5. Install the placement service (controller node)
- 6. Install the nova compute service (controller node)
- 7. Install the nova compute service (compute node)
- 8. Install the neutron network service (controller node)
- 9. Install the neutron network service (compute node)
- 10. Install the horizon service (compute node)
- 11. Launch an instance (controller node)
- 12. Install the cinder block storage service (controller node)
- 13. Install and configure the storage node (compute node)
1. Hardware requirements (all nodes)
If you have not yet prepared the virtual environment, see the separate document "Virtual Environment Configuration" and set up CentOS first, so that later steps are not affected.
1.1. Configuration requirements
CPU: 64-bit x86 processor supporting the Intel 64 or AMD64 extensions, with AMD-V or Intel VT hardware virtualization enabled
OS version: CentOS Linux release 7.9.2009 (Core)
Memory: choose according to your test environment, >= 4 GB
Disk space: choose according to your test environment, >= 50 GB
Firewall: must be disabled
SELinux: must be disabled
Virtualization software: VMware Workstation Pro 16.1.0
1.2. IP addresses, hostnames and sizing
IP address	Hostname	Configuration
192.168.136.128	controller	4 vCPUs, 4 GB RAM, 20 GB disk, virtualization enabled
192.168.136.129	compute1	4 vCPUs, 4 GB RAM, 20 GB disk, virtualization enabled
1.3. The hosts file on both machines
vi /etc/hosts
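The file contents appear only as a screenshot in the original; based on the IP plan above, both hosts would contain entries like:
192.168.136.128 controller
192.168.136.129 compute1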
1.4. Disable the firewall and SELinux
vi /etc/selinux/config
After editing the file, run setenforce 0 to switch to permissive mode for the current session.
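The edit itself is shown only as a screenshot in the original; it amounts to setting the following line in /etc/selinux/config:
SELINUX=disabled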
Stop the firewall and disable it at boot:
systemctl stop firewalld
systemctl disable firewalld
2. Install the base environment & services (controller node)
2.1. Install the NTP time synchronization service
NTP on the controller node keeps time in sync across the nodes; if the clocks drift apart, you may be unable to create instances.
[root@controller ~]# yum install chrony -y
Edit the /etc/chrony.conf file and modify the allow line:
[root@controller ~]# vi /etc/chrony.conf
allow 192.168.0.0/16
Start the service and enable it at boot:
[root@controller ~]# systemctl restart chronyd.service
[root@controller ~]# systemctl enable chronyd.service
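As an optional extra check (not part of the original steps), chronyc can confirm that time sources are reachable:
[root@controller ~]# chronyc sources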
2.2. Install the Train release yum repository
[root@controller ~]# yum install centos-release-openstack-train -y
2.3. Install the OpenStack client
[root@controller ~]# yum install python-openstackclient -y
2.4. Install the database
[root@controller ~]# yum install mariadb mariadb-server python2-PyMySQL -y
Edit the /etc/my.cnf.d/openstack.cnf file:
[root@controller ~]# vi /etc/my.cnf.d/openstack.cnf
Press i to enter insert mode and add the following:
[mysqld]
bind-address = 192.168.136.128          # the controller's static IP address
default-storage-engine = innodb         # default storage engine
innodb_file_per_table = on              # one tablespace file per table
max_connections = 4096                  # maximum number of connections
collation-server = utf8_general_ci      # default collation
character-set-server = utf8             # default character set
After checking the contents, type :wq! to save and quit.
Enable the service at boot and start it:
[root@controller ~]# systemctl enable mariadb.service
[root@controller ~]# systemctl start mariadb.service
[root@controller ~]# mysql_secure_installation
Answer the prompts in this order: Enter -> n -> y -> y -> y -> y. If this is your first deployment, it is recommended to run this hardening step so that it does not interfere with the rest of the installation.
When it finishes, a completion message is printed (screenshot omitted here).
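As a quick sanity check (not in the original steps), confirm that you can still log in to MariaDB:
[root@controller ~]# mysql -u root -p -e "SELECT VERSION();"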
2.5. Install the message queue service
[root@controller ~]# yum install rabbitmq-server -y
[root@controller ~]# systemctl enable rabbitmq-server.service
[root@controller ~]# systemctl start rabbitmq-server.service
Create the openstack user:
[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS
Grant the user full permissions:
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"
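Optionally confirm the user and its permissions (an extra check, not in the original):
[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions -p /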
After installation, check with netstat -tnlup: if ports 25672 and 5672 are listening, the installation succeeded. Note: netstat requires the net-tools package (yum install net-tools -y).
Optional: enable the RabbitMQ management plugin to make later monitoring easier.
# the management UI listens on port 15672 once the plugin is enabled
systemctl enable rabbitmq-server.service
systemctl restart rabbitmq-server.service
rabbitmq-plugins enable rabbitmq_management
# with the plugin enabled, port 15672 is listening in addition to 5672 and 25672
[root@controller ~]# netstat -lntup | egrep '5672|25672'
tcp        0      0 0.0.0.0:25672      0.0.0.0:*       LISTEN      56252/beam.smp
tcp6       0      0 :::5672            :::*            LISTEN      56252/beam.smp
tcp        0      0 0.0.0.0:15672      0.0.0.0:*       LISTEN      56252/beam.smp
2.6. Install memcached
[root@controller ~]# yum install memcached python-memcached -y
[root@controller ~]# sed -i '/OPTIONS/cOPTIONS="-l 0.0.0.0"' /etc/sysconfig/memcached
[root@controller ~]# systemctl enable memcached.service
[root@controller ~]# systemctl start memcached.service
After it is installed and started, check with netstat -tnlup again: if something is listening on port 11211, memcached is working.
2.7. Install etcd
[root@controller ~]# yum install etcd -y
[root@controller ~]# cp -a /etc/etcd/etcd.conf{,.bak}
[root@controller ~]# cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.136.128:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.136.128:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.136.128:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.136.128:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.136.128:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
[root@controller ~]# systemctl enable etcd
[root@controller ~]# systemctl start etcd
After it is installed and started, check with netstat -tnlup: if ports 2379 and 2380 are listening, etcd is working.
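A simple extra check (not in the original) is to query etcd's version endpoint:
[root@controller ~]# curl http://192.168.136.128:2379/version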
3. Install the keystone identity service (controller node)
3.1. Create the keystone database and grant privileges
[root@controller ~]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
When finished, type exit to leave the database.
3.2. Install the keystone packages
[root@controller ~]# yum install openstack-keystone httpd mod_wsgi -y
3.3. Edit the configuration file
[root@controller ~]# cp -a /etc/keystone/keystone.conf{,.bak}
[root@controller ~]# grep -Ev '^$|#' /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf
…
5. Install the placement service (controller node)
…
[root@controller ~]# cp -a /etc/placement/placement.conf{,.bak}
[root@controller ~]# grep -Ev '^$|#' /etc/placement/placement.conf.bak > /etc/placement/placement.conf
[root@controller ~]# openstack-config --set /etc/placement/placement.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
[root@controller ~]# vim /etc/placement/placement.conf
[api]
…
auth_strategy = keystone
[keystone_authtoken]
…
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
5.7. Populate the placement database
[root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement
Modify the placement Apache configuration (a pitfall of the official documentation: this step is not mentioned there, and skipping it makes the later compute service checks fail):
[root@controller ~]# vim /etc/httpd/conf.d/00-placement-api.conf
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
5.8. Restart the Apache service
[root@controller ~]# systemctl restart httpd
Check that the service started: with netstat -tnlup, if port 8778 is listening, the placement service started successfully.
[root@controller ~]# lsof -i:8778
As a further check, run curl http://controller:8778 to access the placement API directly and confirm that it returns JSON.
5.9. Check the health status
[root@controller ~]# placement-status upgrade check
6. Install the nova compute service (controller node)
The nova compute service is somewhat more involved than the previous services (though less so than the neutron network service): it must be installed on both the controller node and the compute node.
The controller node runs nova-api (the main API service), nova-scheduler (the scheduler), nova-conductor (the database proxy service) and nova-novncproxy (the VNC proxy that provides instance consoles).
The compute node runs nova-compute (the actual compute service).
This section covers the installation on the controller node; the next section covers the compute node.
6.1. Create the nova_api, nova and nova_cell0 databases and grant privileges
[root@controller ~]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
6.2. Create the nova user
[root@controller ~]# openstack user create --domain default --password NOVA_PASS nova
6.3. Add the admin role to the nova user
[root@controller ~]# openstack role add --project service --user nova admin
6.4. Create the nova service entity
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
6.5. Create the Compute API service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
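As an optional check (not part of the original steps), list the endpoints just created:
[root@controller ~]# openstack endpoint list --service compute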
6.6. Install the nova packages
[root@controller ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y
6.7. Edit the configuration file
The nova configuration file is /etc/nova/nova.conf:
[root@controller ~]# cp -a /etc/nova/nova.conf{,.bak}
[root@controller ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.136.128
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
[root@controller ~]# openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[root@controller ~]# openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[root@controller ~]# openstack-config --set /etc/nova/nova.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
[root@controller ~]# openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
[root@controller ~]# openstack-config --set /etc/nova/nova.conf vnc enabled true
[root@controller ~]# openstack-config --set /etc/nova/nova.conf vnc server_listen '$my_ip'
[root@controller ~]# openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
…
7. Install the nova compute service (compute node)
…
[root@compute1 ~]# cp -a /etc/nova/nova.conf{,.bak}
[root@compute1 ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.136.129
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf vnc enabled true
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
…
8. Install the neutron network service (controller node)
…
[root@controller ~]# cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true
A flat network uses no network isolation technology at all: it is one large layer-2 domain.
A vlan network is a virtual network built on VLANs; multiple vlan networks on the same physical network are isolated from each other, which makes multi-tenant scenarios possible.
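For reference, after the openstack-config commands above, /etc/neutron/plugins/ml2/ml2_conf.ini should contain roughly the following:
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true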
Configure the Linux bridge agent
The Linux bridge agent builds the layer-2 (bridging and switching) virtual network infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file:
[root@controller etc]# cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
[root@controller etc]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
…
Configure the DHCP agent. Edit the /etc/neutron/dhcp_agent.ini file:
[root@controller etc]# cp -a /etc/neutron/dhcp_agent.ini{,.bak}
[root@controller etc]# grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
[root@controller etc]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
[root@controller etc]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
[root@controller etc]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true
Verify that the file now matches the configuration set above.
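The screenshot is not reproduced here; based on the three commands above, /etc/neutron/dhcp_agent.ini should contain roughly:
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true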
Configure the metadata agent so that it can communicate with nova.
The metadata agent provides configuration information, such as credentials, to instances.
Edit the /etc/neutron/metadata_agent.ini file and configure the metadata host and the shared secret:
[root@controller etc]# cp -a /etc/neutron/metadata_agent.ini{,.bak}
[root@controller etc]# grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
[root@controller etc]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host controller
[root@controller etc]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET
Configure the compute service to use the network service
The nova compute service must already be installed to complete this step.
# Edit the /etc/nova/nova.conf file
[root@controller etc]# openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
[root@controller etc]# openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
[root@controller etc]# openstack-config --set /etc/nova/nova.conf neutron auth_type password
[root@controller etc]# openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
[root@controller etc]# openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
[root@controller etc]# openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
[root@controller etc]# openstack-config --set /etc/nova/nova.conf neutron project_name service
[root@controller etc]# openstack-config --set /etc/nova/nova.conf neutron username neutron
[root@controller etc]# openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
[root@controller etc]# openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
[root@controller etc]# openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET
- Initialize the database
1. The network service initialization scripts expect a symbolic link pointing to the ML2 plug-in configuration file. Create /etc/neutron/plugin.ini as a link to the ML2 configuration:
[root@controller neutron]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
2. Populate the database:
[root@controller neutron]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
3. Restart the nova compute API service:
[root@controller etc]# systemctl restart openstack-nova-api.service
4. Enable and start the neutron services:
[root@controller etc]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
[root@controller etc]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
For self-service networks, you would also enable and start the layer-3 agent; see the self-service networks section of the official guide.
Once everything is started, check each service with systemctl status and confirm with netstat -tnlup that port 9696 is listening.
[root@controller ~]# netstat -lntup|grep 9696
b. Self-service networks
The self-service network option augments the provider network option with layer-3 (routing) services, enabling self-service networks that use overlay segmentation methods such as VXLAN. Essentially, it uses NAT to route virtual networks onto the physical network. It also lays the groundwork for advanced services such as LBaaS and FWaaS.
OpenStack users can create virtual networks without any knowledge of the underlying data-network infrastructure. This can also include VLAN networks, if the layer-2 plug-in is configured accordingly.
For detailed configuration steps, see the official documentation on self-service networks.
9. Install the neutron network service (compute node)
9.1. Install the components
[root@compute1 ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y
9.2. Edit the configuration files
(1) Edit the main neutron configuration file
[root@compute1 ~]# cp -a /etc/neutron/neutron.conf{,.bak}
[root@compute1 ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
…
(2) Edit the Linux bridge agent configuration file
[root@compute1 ~]# cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
[root@compute1 ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@compute1 ~]# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth0
[root@compute1 ~]# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan false
[root@compute1 ~]# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
[root@compute1 ~]# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
(3) Set the relevant Linux kernel bridge parameters to 1
[root@compute1 ~]# echo 'net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1' >> /etc/sysctl.conf
[root@compute1 ~]# modprobe br_netfilter
[root@compute1 ~]# sysctl -p
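To confirm that the parameters took effect (an extra check, not in the original):
[root@compute1 ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables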
9.3. Configure the compute service to use the network service
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf neutron auth_type password
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf neutron project_name service
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf neutron username neutron
[root@compute1 ~]# openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
9.4. Restart the nova compute service
systemctl restart openstack-nova-compute.service
9.5. Enable and start the neutron Linux bridge agent
[root@compute1 ~]# systemctl enable neutron-linuxbridge-agent.service
[root@compute1 ~]# systemctl start neutron-linuxbridge-agent.service
The neutron service on the compute node is now installed; switch back to the controller node to verify the whole neutron deployment.
9.6. Verify the neutron network services
Run the following commands on the controller node to verify neutron.
List the loaded extensions to verify that the neutron-server process started successfully:
[root@controller ~]# source ~/.bashrc
[root@controller ~]# openstack extension list --network
List the agents to verify that they registered successfully:
[root@controller ~]# openstack network agent list
Make sure the list contains 4 records (the Linux bridge agents on controller and compute1, plus the DHCP agent and metadata agent on the controller), that Alive shows a smiley :-), and that State is UP.
10. Install the horizon service (compute node)
The project name of the OpenStack dashboard service is Horizon; the only service it strictly requires is the keystone identity service, and it is written in Python using the Django web framework.
Install the dashboard service horizon on the compute node (compute1).
Because horizon needs Apache, it is installed on the compute node so as not to interfere with the Apache instance used by keystone and the other services on the controller node. Before installing, confirm that the previously installed services are running normally.
10.1. System requirements
Installing the Train release of Horizon has the following requirements:
10.1.1. Language environment
Python 2.7, 3.6 or 3.7
Django 1.11, 2.0 or 2.2
Support for Django 2.0 and 2.2 is experimental in the Train release.
The Ussuri release (the next release after Train) will use Django 2.2 as the primary Django version, and Django 2.0 support will be dropped.
10.1.2. A reachable keystone endpoint
10.1.3. Optional services
Since the Stein release, Horizon supports the following services:
cinder: block storage
glance: image management
neutron: networking
nova: compute
swift: object storage
If the keystone endpoint for a service is configured, Horizon detects it and enables its support automatically.
Horizon also supports many other OpenStack services through plug-ins.
10.2. Installation
First install the dashboard package:
[root@compute1 ~]# yum install openstack-dashboard -y
Edit the configuration file /etc/openstack-dashboard/local_settings:
[root@compute1 ~]# vi /etc/openstack-dashboard/local_settings
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
ALLOWED_HOSTS = ['*']
LOCAL_PATH = '/tmp'
SECRET_KEY = '60eeac4448ab9733b7d8'
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    'enable_auto_allocated_network': False,
    'enable_distributed_router': False,
    'enable_fip_topology_check': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_ipv6': True,
    'enable_quotas': False,
    'enable_rbac_policy': True,
    'enable_router': False,
    'default_dns_nameservers': [],
    'supported_provider_types': ['*'],
    'segmentation_id_range': {},
    'extra_provider_types': {},
    'supported_vnic_types': ['*'],
    'physical_networks': [],
}
TIME_ZONE = "Asia/Shanghai"
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'console': {
            'format': '%(levelname)s %(name)s %(message)s'
        },
        'operation': {
            'format': '%(message)s'
        },
    },
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'logging.NullHandler',
        },
        'console': {
            'level': 'DEBUG' if DEBUG else 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'console',
        },
        'operation': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'operation',
        },
    },
    'loggers': {
        'horizon': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'horizon.operation_log': {
            'handlers': ['operation'],
            'level': 'INFO',
            'propagate': False,
        },
        'openstack_dashboard': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'novaclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'cinderclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneauth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'glanceclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'neutronclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'swiftclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'oslo_policy': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_auth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django.db.backends': {
            'handlers': ['null'],
            'propagate': False,
        },
        'requests': {
            'handlers': ['null'],
            'propagate': False,
        },
        'urllib3': {
            'handlers': ['null'],
            'propagate': False,
        },
        'chardet.charsetprober': {
            'handlers': ['null'],
            'propagate': False,
        },
        'iso8601': {
            'handlers': ['null'],
            'propagate': False,
        },
        'scss': {
            'handlers': ['null'],
            'propagate': False,
        },
    },
}
SECURITY_GROUP_RULES = {
    'all_tcp': {
        'name': _('All TCP'),
        'ip_protocol': 'tcp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_udp': {
        'name': _('All UDP'),
        'ip_protocol': 'udp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_icmp': {
        'name': _('All ICMP'),
        'ip_protocol': 'icmp',
        'from_port': '-1',
        'to_port': '-1',
    },
    'ssh': {
        'name': 'SSH',
        'ip_protocol': 'tcp',
        'from_port': '22',
        'to_port': '22',
    },
    'smtp': {
        'name': 'SMTP',
        'ip_protocol': 'tcp',
        'from_port': '25',
        'to_port': '25',
    },
    'dns': {
        'name': 'DNS',
        'ip_protocol': 'tcp',
        'from_port': '53',
        'to_port': '53',
    },
    'http': {
        'name': 'HTTP',
        'ip_protocol': 'tcp',
        'from_port': '80',
        'to_port': '80',
    },
    'pop3': {
        'name': 'POP3',
        'ip_protocol': 'tcp',
        'from_port': '110',
        'to_port': '110',
    },
    'imap': {
        'name': 'IMAP',
        'ip_protocol': 'tcp',
        'from_port': '143',
        'to_port': '143',
    },
    'ldap': {
        'name': 'LDAP',
        'ip_protocol': 'tcp',
        'from_port': '389',
        'to_port': '389',
    },
    'https': {
        'name': 'HTTPS',
        'ip_protocol': 'tcp',
        'from_port': '443',
        'to_port': '443',
    },
    'smtps': {
        'name': 'SMTPS',
        'ip_protocol': 'tcp',
        'from_port': '465',
        'to_port': '465',
    },
    'imaps': {
        'name': 'IMAPS',
        'ip_protocol': 'tcp',
        'from_port': '993',
        'to_port': '993',
    },
    'pop3s': {
        'name': 'POP3S',
        'ip_protocol': 'tcp',
        'from_port': '995',
        'to_port': '995',
    },
    'ms_sql': {
        'name': 'MS SQL',
        'ip_protocol': 'tcp',
        'from_port': '1433',
        'to_port': '1433',
    },
    'mysql': {
        'name': 'MYSQL',
        'ip_protocol': 'tcp',
        'from_port': '3306',
        'to_port': '3306',
    },
    'rdp': {
        'name': 'RDP',
        'ip_protocol': 'tcp',
        'from_port': '3389',
        'to_port': '3389',
    },
}
10.3. Regenerate the Apache dashboard configuration
The following two steps are not in the official documentation, but they are required; otherwise the dashboard will not open or will render incorrectly.
[root@compute1 ~]# cd /usr/share/openstack-dashboard
[root@compute1 openstack-dashboard]# python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
10.4. Fallback fix
If the dashboard still cannot be accessed normally, do the following:
Create a symbolic link for the policy files (policy.json); otherwise logging in to the dashboard produces permission errors and a garbled display.
ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf
10.5. Restart the Apache service on the compute node
[root@compute1 ~]# systemctl enable httpd.service
[root@compute1 ~]# systemctl restart httpd.service
Because the dashboard regenerates its site files on restart (deleting and re-copying everything under the web root), restarting httpd takes a while.
Restart the memcached service on the controller node (controller):
[root@controller ~]# systemctl restart memcached.service
10.6. Verification
Open the dashboard in a browser at the IP of the node where it was installed, e.g. http://192.168.136.129 (note: unlike earlier releases, there is no /dashboard suffix).
11. Launch an instance (controller node)
11.1. Check network connectivity between the nodes
Run ping from the controller node:
ping compute1
ping 192.168.136.129
11.2. Remove the NetworkManager package
Run on both the controller node and the compute node:
yum remove NetworkManager -y
11.3. Create the network on the controller node
neutron net-create --shared --provider:physical_network provider --provider:network_type flat WAN
Parameter notes:
--shared: every project may use this network; otherwise only the creator can use it.
--external: would mark the network as an external network (not used in the command above).
--provider:physical_network provider: names the physical network provider and must match the label in the neutron configuration below; "provider" is just a label and can be changed, but the two places must agree:
[ml2_type_flat]
flat_networks = provider
--provider:network_type flat: the network being created is a flat network, i.e. instances attached to it sit on the same segment as the physical network, with no VLAN or other segmentation.
WAN: the network name.
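For reference, an equivalent command with the unified openstack client (assuming the admin credentials are loaded) would be:
openstack network create --share --provider-physical-network provider --provider-network-type flat WAN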
11.4. Create the subnet on the controller node
neutron subnet-create --name subnet-wan --allocation-pool start=10.0.0.100,end=10.0.0.200 --dns-nameserver 223.5.5.5 --gateway 10.0.0.254 WAN 10.0.0.0/24
Parameter notes:
--name subnet-wan: the subnet name.
--allocation-pool start=10.0.0.100,end=10.0.0.200: the first and last addresses the subnet hands out.
--dns-nameserver: the DNS server.
--gateway: the gateway address.
WAN: the parent network.
10.0.0.0/24: the subnet CIDR.
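For reference, the equivalent openstack client command would be:
openstack subnet create --network WAN --allocation-pool start=10.0.0.100,end=10.0.0.200 --dns-nameserver 223.5.5.5 --gateway 10.0.0.254 --subnet-range 10.0.0.0/24 subnet-wan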
11.5. Check the network configuration
Run the checks on the controller node.
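The exact commands appear only as a screenshot in the original; a rough sketch of the checks:
[root@controller ~]# openstack network list
[root@controller ~]# openstack subnet list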
11.6. Create a flavor on the controller node
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
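Optionally confirm that the flavor was created:
[root@controller ~]# openstack flavor list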
11.7. Create an instance from the dashboard
In the launch-instance dialog, select an item and click the upward arrow to allocate it, then click Next.
Make your selection on each step first, then click Next.
Finally, click Create Instance.
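The dashboard steps are shown only as screenshots in the original. A rough CLI equivalent (the image name is a placeholder for whatever image was uploaded in the glance section; zdw is the instance name used later in this guide):
[root@controller ~]# openstack server create --flavor m1.nano --network WAN --image <image-name> zdw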
12. Install the cinder block storage service (controller node)
12.1. Create the cinder database and grant privileges
[root@controller ~]# mysql
MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.012 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
Query OK, 0 rows affected (0.000 sec)
12.2. Create the cinder user
[root@controller ~]# openstack user create --domain default --password CINDER_PASS cinder
12.3. Add the admin role to the cinder user
[root@controller ~]# openstack role add --project service --user cinder admin
12.4. Create the cinderv2 and cinderv3 service entities
[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
[root@controller ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
12.5. Create the block storage service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
12.6. Install the cinder packages and edit the configuration file
yum install openstack-cinder -y
cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
grep -Ev '#|^$' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
…
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
…
13. Install and configure the storage node (compute node)
…
[root@compute1 ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
[root@compute1 ~]# grep -Ev '#|^$' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
Edit the /etc/cinder/cinder.conf file and add the following:
[root@compute1 ~]# vi /etc/cinder/cinder.conf
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 192.168.136.129
enabled_backends = lvm
glance_api_servers = http://controller:9292
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
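Note that the [lvm] backend above expects an LVM volume group named cinder-volumes on the storage node; the steps that create it fall in the part missing from this copy. A minimal sketch, assuming a spare disk /dev/sdb (the device name is an assumption):
[root@compute1 ~]# yum install lvm2 device-mapper-persistent-data targetcli -y
[root@compute1 ~]# pvcreate /dev/sdb
[root@compute1 ~]# vgcreate cinder-volumes /dev/sdb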
13.8. Enable and start the services
[root@compute1 ~]# systemctl enable openstack-cinder-volume.service target.service
[root@compute1 ~]# systemctl start openstack-cinder-volume.service target.service
13.9. Verify the cinder block storage service (controller node)
[root@controller ~]# openstack volume service list
13.10. Provide a data disk to an instance with the block storage service
Create a volume
Create a 10 GB volume:
[root@controller ~]# openstack volume create --size 10 volume1
After a short while, the volume status should change from creating to available:
[root@controller ~]# openstack volume list
13.11. Attach the volume to an instance
Attach the volume1 volume to the zdw instance:
[root@controller ~]# openstack server add volume zdw volume1
Run openstack volume list again to check the volume list;
volume1 should now show as attached to zdw.
13.12. Access the instance over SSH
Use fdisk to verify that the volume is present as the /dev/vdb block device.
Note: the command is sudo fdisk -l.
Partition and format the newly added /dev/vdb:
fdisk /dev/vdb
mkfs.ext4 /dev/vdb1
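Inside the instance, the formatted volume can then be mounted (the mount point is arbitrary):
mkdir /mnt/data
mount /dev/vdb1 /mnt/data
df -h /mnt/data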