I'm 暴躁戒指, a blogger at 靠谱客. This article, which I collected during recent development work, covers Sqoop's import and export commands. I found it quite good, so I'm sharing it here as a reference.

Overview

Sqoop import and export commands

I. The import command

  1. Importing data from MySQL into HDFS

    Prerequisite: first create a user table in MySQL.

    create table user(id int primary key auto_increment,name varchar(20),addr varchar(200));

    insert into user(id,name,addr) values(1,"long","beijing");

    Run the command:

[root@linux01 sqoop]# bin/sqoop import --connect jdbc:mysql://linux02:3306/sq --username root --password 123456 --table user --target-dir /sqoop/datas --delete-target-dir --num-mappers 1 --fields-terminated-by "\t"
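
Once the job finishes, you can check the result on HDFS. A quick sanity check, assuming the single mapper produced the default output file part-m-00000:

[root@linux01 sqoop]# hadoop fs -ls /sqoop/datas
[root@linux01 sqoop]# hadoop fs -cat /sqoop/datas/part-m-00000
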
  2. Importing filtered data from MySQL into HDFS with a query

    bin/sqoop import --connect jdbc:mysql://linux02:3306/sq --username root --password 123456 --target-dir /sqoop/selectdemo --delete-target-dir --num-mappers 1 --fields-terminated-by "\t" --query 'select * from user where id<=1 and $CONDITIONS'
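
    Note that $CONDITIONS must appear in every --query import so Sqoop can split the query across mappers. If you wrap the query in double quotes instead of single quotes, the dollar sign has to be escaped; an equivalent variant of the command above would be:

    bin/sqoop import --connect jdbc:mysql://linux02:3306/sq --username root --password 123456 --target-dir /sqoop/selectdemo --delete-target-dir --num-mappers 1 --fields-terminated-by "\t" --query "select * from user where id<=1 and \$CONDITIONS"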
    
  3. Filtering with --where


[root@linux01 sqoop]# bin/sqoop import --connect jdbc:mysql://linux02:3306/sq --username root --password 123456 --target-dir /sqoop/selectdemo2 --delete-target-dir --num-mappers 1 --fields-terminated-by "\t" --table user --where "id<=1"
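
You can also limit which columns are pulled with Sqoop's --columns option. A small sketch along the same lines (the target directory /sqoop/columndemo is just an example name):

[root@linux01 sqoop]# bin/sqoop import --connect jdbc:mysql://linux02:3306/sq --username root --password 123456 --target-dir /sqoop/columndemo --delete-target-dir --num-mappers 1 --fields-terminated-by "\t" --table user --columns "id,name"
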
  4. Importing data from MySQL into Hive

    Prerequisite: create the Hive table first.

    
    hive (default)> create table user_sqoop(id int,name string) row format delimited fields terminated by "\t";
    OK
    Time taken: 0.376 seconds
    hive (default)> select * from user_sqoop;
    OK
    user_sqoop.id	user_sqoop.name
    Time taken: 0.513 seconds
    

    Create the MySQL table user1:

    
    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | mysql              |
    | performance_schema |
    | sq                 |
    | test               |
    +--------------------+
    5 rows in set (0.00 sec)
    mysql> use sq;
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A
    Database changed
    mysql> show tables;
    +--------------+
    | Tables_in_sq |
    +--------------+
    | user         |
    +--------------+
    1 row in set (0.00 sec)
    mysql> create table user1(id int primary key auto_increment,name varchar(20),age varchar(30));
    Query OK, 0 rows affected (0.01 sec)
    mysql> insert into user1(id,name) values(1,"yangmai");
    Query OK, 1 row affected (0.00 sec)
    mysql> insert into user1(id,name) values(2,"long");
    Query OK, 1 row affected (0.00 sec)
    
    Possible problem: the --hive-import step fails because Sqoop cannot find the Hive classes (HiveConf).
    Solution: add the Hive libraries to the Hadoop classpath:
    vi ~/.bash_profile
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/root/hd/hive/lib/*
    If you hit a MySQL privilege problem, grant remote access:
    grant all privileges on *.* to root@'%' identified by "password";
    flush privileges;
    

    Import the data:

     bin/sqoop import --connect jdbc:mysql://linux02:3306/sq --username root --password 123456 --table user1 --num-mappers 1 --hive-import --fields-terminated-by "\t" --hive-overwrite --hive-table user_sqoop
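
    With --hive-import, Sqoop first copies the rows into a temporary HDFS directory and then loads them into the Hive warehouse. Assuming the default warehouse location /user/hive/warehouse, the loaded file can be inspected directly:

    [root@linux01 sqoop]# hadoop fs -ls /user/hive/warehouse/user_sqoop
    [root@linux01 sqoop]# hadoop fs -cat /user/hive/warehouse/user_sqoop/part-m-00000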
    

    Query the table in Hive:

    hive (default)> select * from user_sqoop;
    OK
    user_sqoop.id	user_sqoop.name
    1	yangmai
    2	long
    Time taken: 0.53 seconds, Fetched: 2 row(s)
    

II. The export command

    Exporting from Hive to MySQL

    Prerequisite: make sure user1 is an empty table.

    mysql> select * from user1;
    Empty set (0.00 sec)
    

    Run the command:

[root@linux01 sqoop]# bin/sqoop export --connect jdbc:mysql://linux02:3306/sq --username root --password 123456 --table user1 --num-mappers 1 --export-dir /user/hive/warehouse/user_sqoop --input-fields-terminated-by "\t"
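
Note that export only appends rows by default, and the target MySQL table must already exist with compatible column types. If the job may be re-run, one option is Sqoop's --update-key flag, which turns the export into updates keyed on the given column; a sketch reusing the same connection details:

[root@linux01 sqoop]# bin/sqoop export --connect jdbc:mysql://linux02:3306/sq --username root --password 123456 --table user1 --num-mappers 1 --export-dir /user/hive/warehouse/user_sqoop --input-fields-terminated-by "\t" --update-key id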

Query the result:

mysql> select * from user1;
+----+---------+
| id | name    |
+----+---------+
|  1 | yangmai |
|  2 | long    |
+----+---------+
2 rows in set (0.00 sec)

III. Using a Sqoop options-file script

Exporting from HDFS to MySQL

Prerequisite: make sure the user1 table in MySQL is empty.

mysql> select * from user1;
Empty set (0.00 sec)

The data in HDFS:


[root@linux03 ~]# hadoop fs -cat /user/hive/warehouse/user_sqoop/part-m-00000
1	yangmai
2	long

Write the script:


[root@linux01 sqoopjob]# cat job_hdfs2mysql.opt
export
--connect
jdbc:mysql://linux02:3306/sq
--username
root
--password
123456
--table
user1
--num-mappers
1
--export-dir
/user/hive/warehouse/user_sqoop
--input-fields-terminated-by
"t"

Run the script:


[root@linux01 sqoop]# bin/sqoop --options-file /root/sqoopjob/job_hdfs2mysql.opt
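
An alternative to an options file is a saved Sqoop job, which stores the definition in Sqoop's metastore so it can be re-run by name. A minimal sketch, assuming the job name hdfs2mysql is not already taken (note the space after the standalone --):

[root@linux01 sqoop]# bin/sqoop job --create hdfs2mysql -- export --connect jdbc:mysql://linux02:3306/sq --username root --password 123456 --table user1 --num-mappers 1 --export-dir /user/hive/warehouse/user_sqoop --input-fields-terminated-by "\t"
[root@linux01 sqoop]# bin/sqoop job --list
[root@linux01 sqoop]# bin/sqoop job --exec hdfs2mysql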

Check the data in user1 in MySQL:


mysql> select * from user1;
+----+---------+
| id | name    |
+----+---------+
|  1 | yangmai |
|  2 | long    |
+----+---------+
2 rows in set (0.00 sec)

IV. Common Sqoop commands

Command             Description
import              Import data into the cluster
export              Export data out of the cluster
codegen             Generate a Java bean for a database table and package it as a jar
eval                View the result of a SQL statement
create-hive-table   Create a Hive table
import-all-tables   Import all tables of a database into HDFS
list-tables         List all tables in a database
merge               Merge data from different HDFS directories
version             Show the Sqoop version
help                Show help information

For reference, this is the tool list printed by bin/sqoop help:

  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  import-mainframe   Import datasets from a mainframe server to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information
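
For example, list-databases and eval are handy for sanity-checking the connection before running an import; a quick sketch using the same connection details as above:

[root@linux01 sqoop]# bin/sqoop list-databases --connect jdbc:mysql://linux02:3306/ --username root --password 123456
[root@linux01 sqoop]# bin/sqoop eval --connect jdbc:mysql://linux02:3306/sq --username root --password 123456 --query "select * from user"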

V. Common Sqoop parameters

Parameter             Description
--connect             JDBC URL of the relational database
--connection-manager  Connection manager class to use
--driver              JDBC driver class
--username            Username for connecting to the database
--password            Password for connecting to the database
--verbose             Print verbose information to the console
--help                Show help
--hive-import         Import from the relational database into a Hive table
--hive-overwrite      Overwrite existing data in the Hive table
--create-hive-table   Create the Hive table
--hive-table          Target Hive table
--table               Table name in the relational database
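
Several of these can be combined. For instance, --create-hive-table lets Sqoop create the Hive table itself instead of requiring it to exist beforehand; a sketch reusing the user table from section I (the Hive table name user_auto is just an example):

[root@linux01 sqoop]# bin/sqoop import --connect jdbc:mysql://linux02:3306/sq --username root --password 123456 --table user --num-mappers 1 --hive-import --create-hive-table --hive-table user_auto --fields-terminated-by "\t"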
