Data Migration Between Databases

Preface

This section discusses migration approaches that do not depend on a third-party programming language and do not go through application business logic; they are aimed mainly at operations staff. They are typically used for one-off, wholesale migration of historical data, where writing a pile of throwaway code just for a single migration is not a very elegant way to handle it.

MySQL To MySQL

Core idea: use mysqldump to dump the source tables and their data, then use the source command to recreate them as temporary tables in the target instance (illustrated by the shell script below; change the connection settings and table structures to your own and run it in one shot. The other migration schemes follow the same pattern).
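When the source and target tables are structurally identical and no field mapping is needed, the same idea collapses to a single pipe. A minimal sketch with placeholder hosts, credentials, and table names:

# Dump one table from the source instance and replay the dump directly on the target database
mysqldump -hSRC_HOST -P3307 -uroot -proot source_db --tables some_table | mysql -hDST_HOST -P3306 -uroot -proot target_db

The full script below takes the longer route (dump, recreate as temporary tables, transform with INSERT ... SELECT, then drop) because the column layouts differ between the two schemas.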

#!/bin/bash
echo "Start move Data."
# Use mysqldump to dump the source tables' structure and data into temporary script files
mysqldump -h192.168.137.129 -P3307 -uroot -proot parauser --tables t_ssh_accounts t_user > ./parauser.sql;
mysqldump -h192.168.137.129 -P3307 -uroot -proot ngbilling --tables b_user b_group b_cluster b_cluster_user > ./ngbilling.sql;
# Connect to the target database and run the migration; the whole SQL script is wrapped in ""
mysql -h192.168.137.129 -P3306 -uroot -proot -e"
# Switch to the target database
use console;
# Set the character set; keep it consistent with the source tables
set names utf8;
# Run the dumped scripts to create temporary copies of the source tables and their data; they are dropped once the migration is done
source ./parauser.sql;
source ./ngbilling.sql;
# Migration statements; adapt these to your own business needs
INSERT IGNORE INTO user_info (id,name,email,phone,real_email,group_id,user_type)
SELECT * FROM (
    SELECT id as user_id,name as username,email,phone as mphone,real_email AS para_email,group_id,user_type FROM user_info
    UNION ALL
    SELECT a.user_id,a.username,a.email,a.mphone,a.para_email,a.group_id,
           IF(b.origin_channel = 'bscc' OR b.origin_channel = 'bscc-cstcloud' OR b.origin_channel = 'bscc-carsi',2,1) as user_type
    FROM b_user a LEFT JOIN t_user b ON a.user_id=b.user_id
) tb GROUP BY user_id HAVING count(*) = 1 ORDER BY user_id;

INSERT IGNORE INTO account_group (id,pay_user_id,master_user_id,name)
SELECT * FROM (
    SELECT id as group_id,pay_user_id as pay_user_id, master_user_id as master_user_id,name as group_name FROM account_group
    UNION ALL
    SELECT group_id,pay_user_id, master_user_id,group_name FROM b_group
) tb GROUP BY group_id HAVING count(*) = 1 ORDER BY group_id;

INSERT IGNORE INTO cluster (id,name,online,owned,res_type)
SELECT * FROM (
    SELECT id as name,name as alias,online as is_online,owned as is_proxy,res_type as resource_type FROM cluster
    UNION ALL
    SELECT name,alias,is_online,is_proxy,resource_type FROM b_cluster where is_name_valid = 1
) tb GROUP BY name HAVING count(*) = 1 ORDER BY name;

DELETE a FROM cluster_user a
LEFT JOIN b_cluster_user b ON a.cluster_id = b.cluster AND a.user = b.username
WHERE b.is_bound = 0;

INSERT IGNORE INTO cluster_user ( cluster_id, user, pay_user_id )
SELECT distinct cluster, username, user_id FROM b_cluster_user
WHERE NOT EXISTS (
    SELECT cluster_id, user FROM cluster_user WHERE cluster_id = cluster AND user = username
) AND is_bound=1 ORDER BY id desc;

DELETE a FROM cluster_usage a
LEFT JOIN t_ssh_accounts b ON a.cluster_id = b.cluster_code AND a.user_id = b.user_id
WHERE b.cluster_code is null AND b.user_id is null;

INSERT IGNORE INTO cluster_usage ( cluster_id, user_id, cluster_user )
SELECT cluster_code, user_id, cluster_login_name FROM t_ssh_accounts
WHERE NOT EXISTS (
    SELECT cluster_id, user_id FROM cluster_usage WHERE cluster_id = cluster_code AND user_id = user_id
);

# Drop the temporary source tables
DROP TABLES t_ssh_accounts,t_user,b_user,b_group,b_cluster,b_cluster_user;
commit;"
# Remove the temporary dump files produced by mysqldump
rm ./ngbilling.sql ./parauser.sql -f;
echo "Check Data Successful."
exit;
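After the script runs, a quick sanity check of the target tables confirms that rows actually landed. A minimal sketch, assuming the script above was saved as migrate.sh (a hypothetical file name) and reusing the target connection details from the script:

chmod +x migrate.sh && ./migrate.sh

# Hypothetical post-migration check: row counts of the target tables touched by the script
mysql -h192.168.137.129 -P3306 -uroot -proot -Dconsole -e "
SELECT 'user_info' AS tbl, COUNT(*) AS cnt FROM user_info
UNION ALL SELECT 'account_group', COUNT(*) FROM account_group
UNION ALL SELECT 'cluster', COUNT(*) FROM cluster
UNION ALL SELECT 'cluster_user', COUNT(*) FROM cluster_user
UNION ALL SELECT 'cluster_usage', COUNT(*) FROM cluster_usage;"

Because the INSERT ... HAVING count(*) = 1 pattern only copies rows that are not already present in the target, these counts need not match the source tables exactly.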

MongoDB To MySQL

Core idea: export the data with the mongoexport command and import it with LOAD DATA INFILE (the temporary table's columns match the MongoDB document structure).

#1. Export the userPref data with mongoexport (or another tool) in .csv format
mongoexport --port 27001 -u admin -p 123456 --authenticationDatabase=admin -d userpref -c userpref -f userId,userPref.billingShowMoney,userPref.billingShowCoreTime --type=csv -o /usr/local/userpref/userpref.csv  # export to whatever location you need

#2. MySQL does not allow Load File by default. To avoid changing the MySQL configuration, place the csv file in the designated secure_file_priv folder; first look up the path
show global variables like 'local_infile';      # check whether local import is enabled
show global variables like 'secure_file_priv';  # check the protected local-import folder

#3. Move the file exported in step 1 into the folder that secure_file_priv points to and adjust its permissions
chmod 777 userpref.csv

#4. In the exported csv, change true/false to upper case, then replace TRUE with 1 and FALSE with 0 (a sed sketch follows after this block)

#5. Create a table userpref in the console database whose columns match the exported Mongo fields
CREATE TABLE `console`.`userpref` (
  `userId` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `userpref.billingShowMoney` int(1) NULL DEFAULT NULL,
  `userpref.billingShowCoreTime` int(1) NULL DEFAULT NULL
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Dynamic;

#6. Load the data exported from MongoDB into the userpref table
LOAD DATA LOCAL INFILE '<secure_file_priv folder path>/*.csv'
REPLACE INTO TABLE userpref
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;

#7. Run the following SQL against the console database to update the user preference
UPDATE user_info a
INNER JOIN userpref b ON a.id = b.userId
SET a.user_pref = ( CASE WHEN b.`userPref.billingShowCoreTime` = 1 AND b.`userPref.billingShowMoney` = 0 THEN 2 ELSE 1 END );
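Step 4 can be scripted rather than edited by hand. A minimal sketch using GNU sed (the I flag makes the match case-insensitive, so it covers both true/false and TRUE/FALSE in one pass); the file name assumes the export from step 1:

# Net effect of step 4: true/TRUE -> 1, false/FALSE -> 0, edited in place
sed -i 's/\btrue\b/1/gI; s/\bfalse\b/0/gI' userpref.csv

The \b word boundaries keep the substitution from touching field values that merely contain the letters "true" or "false".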

MySQL To Redis

Core idea: use Redis pipe mode (redis-cli --pipe) to bulk-load the MySQL data into Redis.

#1. Build a Redis command stream from a MySQL query result; see mysql_to_redis.sql below
#2. Write that query's result to a file (set the host, database, account and password as needed; the mysql_to_redis.sql and mtr.txt paths are paths inside the MySQL container)
docker exec mysql /bin/bash -c 'mysql -h192.168.137.128 -P3306 -uroot -proot -Dngbilling --skip-column-names --raw < /mydir/mysql_to_redis.sql > /mydir/mtr.txt'
#3. cat the data file and pipe the result into redis-cli so it bulk-loads Redis (set the host, database and password as needed; the mtr.txt path here is relative to the Redis container root)
cat mtr.txt |redis-cli -h 192.168.137.128 -n 0 --pipe -a root
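When the pipe finishes, redis-cli --pipe reports how many replies it received and how many were errors. A quick follow-up check on the target set, reusing the same example host, database index and password:

# Number of cluster:user members loaded into the set
redis-cli -h 192.168.137.128 -n 0 -a root SCARD console:biz:cluster:appclouduser
# Spot-check five random members
redis-cli -h 192.168.137.128 -n 0 -a root SRANDMEMBER console:biz:cluster:appclouduser 5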

mysql_to_redis.sql

SELECT DISTINCT CONCAT(
    "*3\r\n",                                                    # 3 = the number of arguments below; pipe mode uses it to split commands
    '$', LENGTH('SADD'),'\r\n','SADD','\r\n',                    # Redis command
    '$', LENGTH('console:biz:cluster:appclouduser'),'\r\n',
         'console:biz:cluster:appclouduser','\r\n',              # Redis key
    '$', LENGTH(CONCAT(m_cluster,':',m_user)),'\r\n',
         CONCAT(m_cluster,':',m_user),'\r'                       # Redis value
)
FROM ngbilling.b_cloud_user_map WHERE effective = 1;
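Each row the query emits is one complete SADD command in the Redis wire protocol. With hypothetical values m_cluster = 'csp' and m_user = 'alice', a single emitted row looks roughly like this (escape sequences written out; MySQL turns \r and \n into real CR/LF bytes in the output file):

*3\r\n$4\r\nSADD\r\n$32\r\nconsole:biz:cluster:appclouduser\r\n$9\r\ncsp:alice\r

The value ends with a bare \r because the mysql client appends its own newline to every output row, which supplies the final LF and completes the CRLF that --pipe expects.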
