Overview
Table of Contents
- MGR Single-Primary Mode Deployment
- 1. Create the replication account
- 2. Modify configuration parameters
- 3. Configure the primary node
- 4. Start Group Replication on the primary node and check the logs
- 5. Check the current cluster status via views on the primary node
- 6. Turn off group_replication_bootstrap_group on the primary node
- 7. Configure the secondary nodes
- 8. Start Group Replication on the two secondary nodes
- 9. How to find the Primary member
- 10. Common errors
- 11. Cluster failures
MGR Single-Primary Mode Deployment
1. Create the replication account
Create the replication account on every MySQL node:
set sql_log_bin=0;
#When set to 0, statements executed in this session are not written to the binary log
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.136.*' IDENTIFIED BY 'repl';
CHANGE MASTER TO MASTER_USER='repl',MASTER_PASSWORD='repl' FOR CHANNEL 'group_replication_recovery';
set sql_log_bin=1;
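Optionally, you can verify on each node that the account exists and has the expected privilege. This is just a quick sanity check using the account and host pattern from the GRANT above:
#Optional sanity check on each node
SELECT user, host FROM mysql.user WHERE user = 'repl';
SHOW GRANTS FOR 'repl'@'192.168.136.*';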
2. Modify configuration parameters
Edit the my.cnf configuration file on every MySQL node, add the following parameters, and restart the database instance:
[mysqld]
master-info-repository=TABLE
#Store replication metadata in system tables instead of files
relay-log-info-repository=TABLE
#Store relay log metadata in tables instead of files
gtid_mode=on
#Enable global transaction IDs (GTIDs)
enforce_gtid_consistency=on
#Only allow statements that are safe with GTID-based replication
slave_preserve_commit_order=on
#Make the replica commit transactions in the same order as the source when applying in parallel
binlog_checksum=NONE
#Disable binary log event checksums (required by Group Replication in MySQL 5.7)
transaction_write_set_extraction=XXHASH64
#Make the server collect write sets and record them in the binary log. Write sets are based on each row's primary key and uniquely identify the changed rows; they are used for conflict detection.
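After the restart, you can optionally confirm that the key settings took effect; a minimal check:
#Optional: verify the prerequisites after restarting
SHOW GLOBAL VARIABLES WHERE Variable_name IN
('gtid_mode','enforce_gtid_consistency','binlog_checksum','transaction_write_set_extraction','slave_preserve_commit_order');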
3. Configure the primary node
Pick one MySQL node as the primary, install the Group Replication plugin, and configure the Group Replication parameters; also write the parameters into the my.cnf file.
#Open port 23306
firewall-cmd --zone=public --add-port=23306/tcp --permanent
firewall-cmd --reload
#Install the Group Replication plugin
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
#Generate a UUID with SELECT UUID()
mysql> select uuid();
+--------------------------------------+
| uuid()                              |
+--------------------------------------+
| 3a90016b-afe0-11ea-ab97-00505636b68c |
+--------------------------------------+
1 row in set (0.00 sec)
set global group_replication_group_name='3a90016b-afe0-11ea-ab97-00505636b68c';
#The group name can be anything, but it must be a valid UUID and must not be the server UUID of any member. All nodes must use exactly the same group name. This UUID is used internally when GTIDs are assigned to Group Replication events in the binary log. Generate one with SELECT UUID().
set global group_replication_local_address='192.168.136.136:23306';
#Each node sets its own IP address and port here; this group-communication port is separate from MySQL's own 3306 port
set global group_replication_group_seeds='192.168.136.136:23306,192.168.136.135:23306,192.168.136.137:23306';
#The nodes (IP:Port) that a newly joining member can contact to obtain the data it needs for recovery
set global group_replication_start_on_boot=off;
#Whether Group Replication starts automatically when the instance starts
set global group_replication_bootstrap_group=on;
#Only used when bootstrapping the group; set it on the primary node only, the other nodes must not set it
set global group_replication_ip_whitelist='192.168.136.0/24';
#IP whitelist that controls which hosts the group communication system accepts connections from
Note: when writing these into my.cnf, the parameters beginning with group_replication should be prefixed with loose-, for example loose-group_replication_group_name; the loose- prefix lets the server start even before the group_replication plugin has been installed.
loose-group_replication_group_name='3a90016b-afe0-11ea-ab97-00505636b68c'
loose-group_replication_local_address='192.168.136.136:23306'
loose-group_replication_group_seeds='192.168.136.136:23306,192.168.136.135:23306,192.168.136.137:23306'
loose-group_replication_start_on_boot=off
#loose-group_replication_bootstrap_group=on
loose-group_replication_ip_whitelist='192.168.136.0/24'
4. Start Group Replication on the primary node and check the logs
start group_replication;
If the log shows: Plugin group_replication reported: 'This server is working as primary member.' then Group Replication started successfully.
2020-06-16T14:49:09.898607Z 17 [Note] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
2020-06-16T14:49:09.899050Z 17 [Warning] Plugin group_replication reported: '[GCS] Automatically adding IPv4 localhost address to the whitelist. It is mandatory that it is added.'
2020-06-16T14:49:09.899464Z 17 [Note] Plugin group_replication reported: '[GCS] SSL was not enabled'
2020-06-16T14:49:09.899496Z 17 [Note] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: "3a90016b-afe0-11ea-ab97-00505636b68c"; group_replication_local_address: "192.168.136.136:23306"; group_replication_group_seeds: "192.168.136.136:23306,192.168.136.135:23306,192.168.136.137:23306"; group_replication_bootstrap_group: true; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: "192.168.136.0/24"'
2020-06-16T14:49:09.899566Z 17 [Note] Plugin group_replication reported: '[GCS] Configured number of attempts to join: 0'
2020-06-16T14:49:09.899580Z 17 [Note] Plugin group_replication reported: '[GCS] Configured time between attempts to join: 5 seconds'
2020-06-16T14:49:09.899632Z 17 [Note] Plugin group_replication reported: 'Member configuration: member_id: 1363306; member_uuid: "839d3796-a8a1-11ea-b1db-00505636b68c"; single-primary mode: "true"; group_replication_auto_increment_increment: 7; '
2020-06-16T14:49:09.899961Z 20 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 373, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2020-06-16T14:49:09.914952Z 17 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2020-06-16T14:49:09.915006Z 17 [Note] Plugin group_replication reported: 'auto_increment_increment is set to 7'
2020-06-16T14:49:09.915033Z 17 [Note] Plugin group_replication reported: 'auto_increment_offset is set to 1363306'
2020-06-16T14:49:09.915005Z 23 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './ip136-relay-bin-group_replication_applier.000002' position: 4
2020-06-16T14:49:09.915432Z 0 [Note] Plugin group_replication reported: 'XCom protocol version: 3'
2020-06-16T14:49:09.915447Z 0 [Note] Plugin group_replication reported: 'XCom initialized and ready to accept incoming connections on port 23306'
2020-06-16T14:49:10.921514Z 0 [Note] Plugin group_replication reported: 'Group membership changed to ip136:3306 on view 15923189509206964:1.'
2020-06-16T14:49:10.921627Z 30 [Note] Plugin group_replication reported: 'Only one server alive. Declaring this server as online within the replication group'
2020-06-16T14:49:11.231107Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'
2020-06-16T14:49:11.231322Z 0 [Note] Plugin group_replication reported: 'A new primary with address ip136:3306 was elected, enabling conflict detection until the new primary applies all relay logs.'
2020-06-16T14:49:11.231425Z 32 [Note] Plugin group_replication reported: 'This server is working as primary member.'
5. Check the current cluster status via views on the primary node
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 839d3796-a8a1-11ea-b1db-00505636b68c | ip136       |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)
mysql> select VARIABLE_VALUE from performance_schema.global_status where VARIABLE_NAME='group_replication_primary_member';
+--------------------------------------+
| VARIABLE_VALUE                       |
+--------------------------------------+
| 839d3796-a8a1-11ea-b1db-00505636b68c |
+--------------------------------------+
1 row in set (0.00 sec)
6. Turn off group_replication_bootstrap_group on the primary node
set global group_replication_bootstrap_group=off;
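Optionally confirm that the bootstrap flag is back to OFF:
#Optional: verify bootstrap mode is disabled again
show global variables like 'group_replication_bootstrap_group';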
7. Configure the secondary nodes
On each of the two nodes that will join the group, install the Group Replication plugin, configure the Group Replication parameters, and also write them into the my.cnf file.
#Open port 23306
firewall-cmd --zone=public --add-port=23306/tcp --permanent
firewall-cmd --reload
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
#Node 1
set global group_replication_group_name='3a90016b-afe0-11ea-ab97-00505636b68c';
set global group_replication_local_address='192.168.136.135:23306';
set global group_replication_group_seeds='192.168.136.136:23306,192.168.136.135:23306,192.168.136.137:23306';
set global group_replication_start_on_boot=off;
set global group_replication_ip_whitelist='192.168.136.0/24';
loose-group_replication_group_name='3a90016b-afe0-11ea-ab97-00505636b68c'
loose-group_replication_local_address='192.168.136.135:23306'
loose-group_replication_group_seeds='192.168.136.136:23306,192.168.136.135:23306,192.168.136.137:23306'
loose-group_replication_start_on_boot=off
loose-group_replication_ip_whitelist='192.168.136.0/24'
#Node 2
set global group_replication_group_name='3a90016b-afe0-11ea-ab97-00505636b68c';
set global group_replication_local_address='192.168.136.137:23306';
set global group_replication_group_seeds='192.168.136.136:23306,192.168.136.135:23306,192.168.136.137:23306';
set global group_replication_start_on_boot=off;
set global group_replication_ip_whitelist='192.168.136.0/24';
loose-group_replication_group_name='3a90016b-afe0-11ea-ab97-00505636b68c'
loose-group_replication_local_address='192.168.136.137:23306'
loose-group_replication_group_seeds='192.168.136.136:23306,192.168.136.135:23306,192.168.136.137:23306'
loose-group_replication_start_on_boot=off
loose-group_replication_ip_whitelist='192.168.136.0/24'
8. Start Group Replication on the two secondary nodes
Start Group Replication on the two nodes one after another so that they join the group. You can confirm whether they joined successfully by watching the logs or querying the views.
#Start Group Replication
start group_replication;
#Query the view
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 839d3796-a8a1-11ea-b1db-00505636b68c | ip136       |        3306 | ONLINE       |
| group_replication_applier | e741120b-a7e9-11ea-9af7-0050563bbd20 | ip135       |        3306 | ONLINE       |
| group_replication_applier | f44ecd83-a8d1-11ea-ad97-0050562512fc | ip137       |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)
#If the log prints the following, the node started successfully
2020-06-16T14:56:17.265752Z 2 [Note] Plugin group_replication reported: 'This server is working as secondary member with primary member address ip136:3306.'
...
2020-06-16T14:56:17.344919Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'
Detailed startup log:
2020-06-16T14:56:12.522704Z 2 [Note] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
2020-06-16T14:56:12.523859Z 2 [Warning] Plugin group_replication reported: '[GCS] Automatically adding IPv4 localhost address to the whitelist. It is mandatory that it is added.'
2020-06-16T14:56:12.524045Z 2 [Note] Plugin group_replication reported: '[GCS] SSL was not enabled'
2020-06-16T14:56:12.524102Z 2 [Note] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: "3a90016b-afe0-11ea-ab97-00505636b68c"; group_replication_local_address: "192.168.136.135:23306"; group_replication_group_seeds: "192.168.136.136:23306,192.168.136.135:23306,192.168.136.137:23306"; group_replication_bootstrap_group: false; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: "192.168.136.0/24"'
2020-06-16T14:56:12.524147Z 2 [Note] Plugin group_replication reported: '[GCS] Configured number of attempts to join: 0'
2020-06-16T14:56:12.524156Z 2 [Note] Plugin group_replication reported: '[GCS] Configured time between attempts to join: 5 seconds'
2020-06-16T14:56:12.524185Z 2 [Note] Plugin group_replication reported: 'Member configuration: member_id: 1353306; member_uuid: "e741120b-a7e9-11ea-9af7-0050563bbd20"; single-primary mode: "true"; group_replication_auto_increment_increment: 7; '
2020-06-16T14:56:12.530898Z 4 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2020-06-16T14:56:12.557787Z 7 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './ip135-relay-bin-group_replication_applier.000001' position: 4
2020-06-16T14:56:12.557788Z 2 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2020-06-16T14:56:12.557874Z 2 [Note] Plugin group_replication reported: 'auto_increment_increment is set to 7'
2020-06-16T14:56:12.557881Z 2 [Note] Plugin group_replication reported: 'auto_increment_offset is set to 1353306'
2020-06-16T14:56:12.591272Z 0 [Note] Plugin group_replication reported: 'XCom protocol version: 3'
2020-06-16T14:56:12.591345Z 0 [Note] Plugin group_replication reported: 'XCom initialized and ready to accept incoming connections on port 23306'
2020-06-16T14:56:17.265752Z 2 [Note] Plugin group_replication reported: 'This server is working as secondary member with primary member address ip136:3306.'
2020-06-16T14:56:17.267305Z 14 [Note] Plugin group_replication reported: 'Establishing group recovery connection with a possible donor. Attempt 1/10'
2020-06-16T14:56:17.267876Z 0 [Note] Plugin group_replication reported: 'Group membership changed to ip136:3306, ip135:3306 on view 15923189509206964:2.'
2020-06-16T14:56:17.283582Z 14 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='ip136', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2020-06-16T14:56:17.292194Z 14 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor 839d3796-a8a1-11ea-b1db-00505636b68c at ip136 port: 3306.'
2020-06-16T14:56:17.292856Z 16 [Warning]
2020-06-16T14:56:17.300970Z 16 [Note] Slave I/O thread for channel 'group_replication_recovery': connected to master 'repl@ip136:3306',replication started in log 'FIRST' at position 4
2020-06-16T14:56:17.309664Z 17 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log './ip135-relay-bin-group_replication_recovery.000001' position: 4
2020-06-16T14:56:17.322972Z 14 [Note] Plugin group_replication reported: 'Terminating existing group replication donor connection and purging the corresponding logs.'
2020-06-16T14:56:17.323386Z 17 [Warning] Slave SQL for channel 'group_replication_recovery': Coordinator thread of multi-threaded slave is being stopped in the middle of assigning a group of events; deferring to exit until the group completion ... , Error_code: 0
2020-06-16T14:56:17.324925Z 17 [Note] Slave SQL thread for channel 'group_replication_recovery' exiting, replication stopped in log 'mysql-binlog.000010' at position 1442
2020-06-16T14:56:17.325910Z 16 [Note] Slave I/O thread killed while reading event for channel 'group_replication_recovery'
2020-06-16T14:56:17.325935Z 16 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'mysql-binlog.000010', position 1442
2020-06-16T14:56:17.337845Z 14 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='ip136', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2020-06-16T14:56:17.344919Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'
9. How to find the Primary member
Once the cluster is up, you can find the Primary member by querying the views, or by running show variables like '%read_only%'; because MGR runs in single-primary mode by default, the secondary nodes automatically switch to read-only mode (see the check after the queries below).
mysql> select VARIABLE_VALUE from performance_schema.global_status where VARIABLE_NAME='group_replication_primary_member';
+--------------------------------------+
| VARIABLE_VALUE                       |
+--------------------------------------+
| 839d3796-a8a1-11ea-b1db-00505636b68c |
+--------------------------------------+
1 row in set (0.00 sec)
#Find the Primary member's row in the membership view
mysql> select * from performance_schema.replication_group_members
    -> where member_id = (select variable_value from performance_schema.global_status
    -> where VARIABLE_NAME = 'group_replication_primary_member');
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 839d3796-a8a1-11ea-b1db-00505636b68c | ip136       |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)
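As noted above, the read-only flags are another quick way to tell whether the node you are connected to is the primary. A minimal check, run on the node you want to inspect (on a secondary, read_only and super_read_only are expected to be ON; on the primary both stay OFF):
#Check the read-only flags on the current node
show variables like '%read_only%';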
10. Common errors
1) Problem 1
mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
ERROR 1126 (HY000):
Cause: the *.so file you are trying to install cannot be found in the plugin_dir specified in the MySQL configuration file.
#Check the plugin_dir path
mysql> show global variables like 'plugin_dir';
+---------------+-------------------------------+
| Variable_name | Value                         |
+---------------+-------------------------------+
| plugin_dir    | /home/mysql/mysql/lib/plugin/ |
+---------------+-------------------------------+
1 row in set (0.00 sec)
#Copy the *.so file into that directory
cp /home/mysql/mysql/mysql-5.7.30/lib/plugin/group_replication.so /home/mysql/mysql/lib/plugin/
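After copying the file, retry the install and confirm the plugin is active; a quick check:
#Retry the install, then confirm the plugin status is ACTIVE
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SELECT plugin_name, plugin_status FROM information_schema.plugins WHERE plugin_name = 'group_replication';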
2) Problem 2
Plugin group_replication reported: '[GCS] Error on opening a connection to 192.168.136.136:23306 on local port: 23306.'
Plugin group_replication reported: '[GCS] Error on opening a connection to 192.168.136.137:23306 on local port: 23306.'
Cause: port 23306 is not open.
Common firewalld commands on CentOS 7
Option 1: stop the firewall
# Start
systemctl start firewalld
# Stop
systemctl stop firewalld
# Check status
systemctl status firewalld
# Disable at boot
systemctl disable firewalld
# Enable at boot
systemctl enable firewalld
# List all open ports
firewall-cmd --zone=public --list-ports
Option 2: open port 23306
firewall-cmd --zone=public --add-port=23306/tcp --permanent
firewall-cmd --reload
Errors when a node joins the group
Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 23306'
Plugin group_replication reported: 'Timeout on wait for view after joining group'
#Open port 23306
firewall-cmd --zone=public --add-port=23306/tcp --permanent
firewall-cmd --reload
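If the error persists after opening the port, check basic connectivity from the joining node to the seed nodes on the group-communication port. A minimal check, assuming telnet or nc is available on the host:
#From the joining node, test whether a seed node's port 23306 is reachable
telnet 192.168.136.136 23306
#or
nc -zv 192.168.136.136 23306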
Common iptables commands on CentOS 6
# Check firewall status
service iptables status
# Stop the firewall
service iptables stop
# Start the firewall
service iptables start
# Restart the firewall
service iptables restart
# Disable the firewall at boot
chkconfig iptables off
# Enable the firewall at boot
chkconfig iptables on
#Open port 23306
vim /etc/sysconfig/iptables
-A INPUT -m state --state NEW -m tcp -p tcp --dport 23306 -j ACCEPT
service iptables restart
11. Cluster failures
#After a primary or secondary node fails, rejoin it to the group
stop group_replication;
start group_replication;
#Check the progress of the recovery phase
show slave status for channel 'group_replication_recovery'\G
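If every member has gone down (for example after a full outage), running start group_replication alone is not enough: the group has to be bootstrapped again from one node, normally the one with the most recent data. A minimal sketch of that procedure:
#On the node chosen to re-bootstrap the group (run this block on that node only)
set global group_replication_bootstrap_group=on;
start group_replication;
set global group_replication_bootstrap_group=off;
#Then start Group Replication on each of the remaining nodes
start group_replication;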