Overview
-------------------------- Configuration -------------------------------------
Edit the sqoop.properties and catalina.properties files under
/home/appleyuchi/bigdata/sqoop-1.99.5-bin-hadoop200/server/conf
In catalina.properties, make the following change:
common.loader=${catalina.base}/lib,${catalina.base}/lib/*.jar,${catalina.home}/lib,${catalina.home}/lib/*.jar,${catalina.home}/../lib/*.jar,/home/appleyuchi/bigdata/hadoop-3.0.3/share/hadoop/common/*.jar,/home/appleyuchi/bigdata/hadoop-3.0.3/share/hadoop/common/lib/*.jar,/home/appleyuchi/bigdata/hadoop-3.0.3/share/hadoop/hdfs/*.jar,/home/appleyuchi/bigdata/hadoop-3.0.3/share/hadoop/hdfs/lib/*.jar,/home/appleyuchi/bigdata/hadoop-3.0.3/share/hadoop/mapreduce/*.jar,/home/appleyuchi/bigdata/hadoop-3.0.3/share/hadoop/mapreduce/lib/*.jar,/home/appleyuchi/bigdata/hadoop-3.0.3/share/hadoop/yarn/*.jar,/home/appleyuchi/bigdata/hadoop-3.0.3/share/hadoop/yarn/lib/*.jar
In sqoop.properties, make the following change:
org.apache.sqoop.submission.engine.mapreduce.configuration.directory=/home/appleyuchi/bigdata/hadoop-3.0.3/etc/hadoop
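A quick sanity check before moving on: confirm that the Hadoop jar directories referenced in common.loader and the configuration directory above actually exist (paths taken from the two settings above), for example:
ls /home/appleyuchi/bigdata/hadoop-3.0.3/share/hadoop/common/*.jar | head -n 3
ls /home/appleyuchi/bigdata/hadoop-3.0.3/etc/hadoop/core-site.xml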
-------------------------- Configuration check -------------------------------------
First run ./sqoop2-tool verify
to check the configuration. If it reports a failure,
inspect the following log to see where the problem is:
/home/appleyuchi/bigdata/sqoop-1.99.5-bin-hadoop200/bin/@LOGDIR@/sqoop.log
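If verification fails, one quick way to surface the relevant entries is to grep the log for errors (the path, including the literal @LOGDIR@ segment, is the one given above), for example:
grep -iE "error|exception" "/home/appleyuchi/bigdata/sqoop-1.99.5-bin-hadoop200/bin/@LOGDIR@/sqoop.log" | tail -n 20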
--------------------------- Startup -------------------------------------
Startup takes two steps: first start the server, then start the client shell:
1.
(python2.7) appleyuchi@ubuntu:~/bigdata/sqoop-1.99.5-bin-hadoop200/bin$ sqoop2-server start
2.
(python2.7) appleyuchi@ubuntu:~/bigdata/sqoop-1.99.5-bin-hadoop200/bin$ sqoop2-shell
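To confirm the server is actually up before opening the shell, one option is to query its REST version endpoint (assuming the default port 12000 and the sqoop webapp name used in the set server command below):
curl http://127.0.0.1:12000/sqoop/version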
--------------------------- Common basic operations ---------------------------------
set server --host 127.0.0.1 --port 12000 --webapp sqoop
At the sqoop:000> prompt, the following commands can be run:
show version
show version --all   (if this reports an exception, the configuration above was not successful)
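When an exception does show up, enabling verbose output in the shell prints the full stack trace, which makes the root cause easier to locate. A minimal example (this option is part of the standard Sqoop2 shell):
set option --name verbose --value true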
sqoop:000> show connector
+----+------------------------+---------+------------------------------------------------------+----------------------+
| Id |          Name          | Version |                        Class                         | Supported Directions |
+----+------------------------+---------+------------------------------------------------------+----------------------+
| 1  | generic-jdbc-connector | 1.99.5  | org.apache.sqoop.connector.jdbc.GenericJdbcConnector | FROM/TO              |
| 2  | kite-connector         | 1.99.5  | org.apache.sqoop.connector.kite.KiteConnector        | FROM/TO              |
| 3  | hdfs-connector         | 1.99.5  | org.apache.sqoop.connector.hdfs.HdfsConnector        | FROM/TO              |
| 4  | kafka-connector        | 1.99.5  | org.apache.sqoop.connector.kafka.KafkaConnector      | TO                   |
+----+------------------------+---------+------------------------------------------------------+----------------------+
What is this table actually for?
The values 1, 2, 3, 4 in the Id column are exactly the cid values used later, so the cid cannot be picked at random: whichever kind of data store you want Sqoop2 to work with, you must use the cid of the matching connector.
The so-called cid is simply the connector's id.
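If it is unclear which connector to pick, the shell can print each connector's full description, including the directions and configuration inputs it supports; for example (--all is a standard flag of this command):
show connector --all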
sqoop:000> show link
+----+-----------+-----------+---------+
| Id |   Name    | Connector | Enabled |
+----+-----------+-----------+---------+
| 1  | hdfs_link | 3         | true    |
+----+-----------+-----------+---------+
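Before removing a link, it can help to review its full stored configuration rather than just this summary; the shell's --all flag prints every field, for example:
show link --all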
sqoop:000> delete link --lid 1
sqoop:000> show link
+----+------+-----------+---------+
| Id | Name | Connector | Enabled |
+----+------+-----------+---------+
+----+------+-----------+---------+
sqoop:000> show job
+----+------+----------------+--------------+---------+
| Id | Name | From Connector | To Connector | Enabled |
+----+------+----------------+--------------+---------+
+----+------+----------------+--------------+---------+
------------------------------ The main workflow --------------------------------------------------------------------------
The content has two main parts:
1.
Importing MySQL into Hive
2.
Importing Hive into MySQL
☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆
sqoop:000> set server --host 127.0.0.1 --port 12000 --webapp sqoop
☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆ Create the first link ☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆
sqoop:000> create link --cid 1
Note that the cid above is not written at random: it is taken from the Id column of the table returned by the
sqoop:000> show connector command earlier.
Whichever data store you want to use, put the id of the matching connector after --cid here.
Creating link for connector with id 1
Please fill following values to create new link object
Name: mysql
Link configuration
JDBC Driver Class:com.mysql.jdbc.Driver
JDBC Connection String: jdbc:mysql://127.0.0.1:3306/employees
Username: root
Password: **********
JDBC Connection Properties:
There are currently 0 values in the map:
entry#
New link was successfully created with validation status OK and persistent id 2
sqoop:000> show link
+----+-------+-----------+---------+
| Id | Name  | Connector | Enabled |
+----+-------+-----------+---------+
| 2  | mysql | 1         | true    |
+----+-------+-----------+---------+
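A link's settings can be revised later without recreating it; update link replays the same prompts as create link. For example, to edit the link with id 2:
update link --lid 2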
☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆ Create the second link ☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆
sqoop:000> create link --cid 3
Creating link for connector with id 3
Please fill following values to create new link object
Name: hdfs
Link configuration
HDFS URI: hdfs://localhost:9000/user/appleyuchi/test.txt
New link was successfully created with validation status OK and persistent id 3
sqoop:000> show link
+----+-------+-----------+---------+
| Id | Name  | Connector | Enabled |
+----+-------+-----------+---------+
| 2  | mysql | 1         | true    |
| 3  | hdfs  | 3         | true    |
+----+-------+-----------+---------+
☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆ Note the chain of connections: database - driver (connector) - link - job - link - driver (connector) - database ☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆
sqoop:000> create job -f 2 -t 3
Here -f and -t take the ids of the FROM and TO links (2 is the mysql link, 3 is the hdfs link created above). Among the prompts that follow, the required fields are:
Schema name: employees
Table name: departments
Choose: 0
Choose: 0
Output directory: ~jdbc2hdfs
Everything else can simply be skipped by pressing Enter.
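After the prompts are answered, the shell reports the new job's persistent id (1 here, matching the start job --jid 1 call below). It can be confirmed before starting, and its progress checked after starting, with standard shell commands such as:
show job
status job --jid 1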
☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆ Start the job ☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆☆
This fails with an error:
sqoop:000> start job --jid 1
Exception has occurred during processing command
Exception: java.lang.RuntimeException Message: java.lang.ClassNotFoundException: org.apache.sqoop.driver.DriverError
I could not find this problem anywhere online; neither Baidu nor Google turned up anything, so I had to give up...
In the end I simply uninstalled Sqoop2.
According to
https://stackoverflow.com/questions/41388979/what-does-sqoop-2-provide-that-sqoop-1-does-not
the current trend is to use Sqoop 1; Sqoop 2 is on its way to being phased out.
------------------------------------ Appendix ----------------------------------------------------------------
HDFS resource URI format:
Usage: scheme://authority/path
Components:
scheme -> protocol name, file or hdfs
authority -> namenode hostname
path -> path
Example: hdfs://localhost:54310/user/hadoop/test.txt
If fs.default.name=hdfs://localhost:54310 has already been set in /home/hadoop/hadoop-1.1.1/conf/core-site.xml, then just /user/hadoop/test.txt is enough. The default HDFS working directory is /user/$USER, where $USER is the current login username.
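As a concrete illustration of that default-URI behaviour (assuming fs.default.name is set to hdfs://localhost:54310 as in the example above), the following two commands refer to the same file:
hdfs dfs -cat hdfs://localhost:54310/user/hadoop/test.txt
hdfs dfs -cat /user/hadoop/test.txt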