I am 幽默西装, a blogger at Kaopuke (靠谱客). This article, which I collected during recent development, mainly covers whether data transferred from MySQL to HDFS needs a format change: syncing a MySQL table to HDFS with Sqoop (Part 2), setting the storage format. I found it quite good and am sharing it here in the hope that it serves as a useful reference.

Overview

[root@centos02 bin]# sqoop import --connect jdbc:mysql://centos02:3306/OfficialCashMid --driver com.mysql.cj.jdbc.Driver --username root --password sa123_ADMIN. --table tadminoperationlog -m 2 --target-dir /jdbcHDFS/TAdminLog_avro --as-avrodatafile
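The storage format is selected with one of Sqoop's `--as-*` flags on `sqoop import`; note that the flag must be written without a space after the dashes. As a reference sketch, the same import with each supported format (the connection details are this post's example values, the alternative `--target-dir` names are hypothetical, and `-P` prompts for the password instead of passing it on the command line, as the warning in the log below suggests):

```shell
# Same import, varying only the storage format flag and the (hypothetical)
# target directory. Run on the cluster node.
sqoop import --connect jdbc:mysql://centos02:3306/OfficialCashMid \
  --driver com.mysql.cj.jdbc.Driver --username root -P \
  --table tadminoperationlog -m 2 \
  --target-dir /jdbcHDFS/TAdminLog_text --as-textfile   # default: delimited text

# SequenceFile (binary key/value records):
#   ... --target-dir /jdbcHDFS/TAdminLog_seq --as-sequencefile
# Avro (schema embedded in the data file, as in this post):
#   ... --target-dir /jdbcHDFS/TAdminLog_avro --as-avrodatafile
# Parquet (columnar):
#   ... --target-dir /jdbcHDFS/TAdminLog_parquet --as-parquetfile
```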

Warning: /opt/bigdata/sqoop/sqoop-1.4.7/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.

Warning: /opt/bigdata/sqoop/sqoop-1.4.7/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.

Warning: /opt/bigdata/sqoop/sqoop-1.4.7/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.

Warning: /opt/bigdata/sqoop/sqoop-1.4.7/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.

19/09/04 01:11:25 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7

19/09/04 01:11:25 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
19/09/04 01:11:25 WARN sqoop.ConnFactory: Parameter --driver is set to an explicit driver however appropriate connection manager is not being set (via --connection-manager). Sqoop is going to fall back to org.apache.sqoop.manager.GenericJdbcManager. Please specify explicitly which connection manager should be used next time.
19/09/04 01:11:25 INFO manager.SqlManager: Using default fetchSize of 1000
19/09/04 01:11:25 INFO tool.CodeGenTool: Beginning code generation
19/09/04 01:11:26 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM tadminoperationlog AS t WHERE 1=0
19/09/04 01:11:27 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM tadminoperationlog AS t WHERE 1=0
19/09/04 01:11:27 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/bigdata/hadoop/hadoop-2.8.5
Note: /tmp/sqoop-root/compile/64a3b4f66eb537e9b11f0416dbc6d58d/tadminoperationlog.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
19/09/04 01:11:30 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/64a3b4f66eb537e9b11f0416dbc6d58d/tadminoperationlog.jar
19/09/04 01:11:30 INFO mapreduce.ImportJobBase: Beginning import of tadminoperationlog
19/09/04 01:11:31 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
19/09/04 01:11:31 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM tadminoperationlog AS t WHERE 1=0
19/09/04 01:11:32 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
19/09/04 01:11:32 INFO client.RMProxy: Connecting to ResourceManager at centos02/192.168.122.1:8032
19/09/04 01:11:37 INFO db.DBInputFormat: Using read commited transaction isolation
19/09/04 01:11:37 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(FID), MAX(FID) FROM tadminoperationlog
19/09/04 01:11:37 INFO db.IntegerSplitter: Split size: 16058; Num splits: 2 from: 21 to: 32138
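The split line comes from Sqoop's IntegerSplitter: it queries MIN and MAX of the split-by column (here the FID key of the table) and divides that range evenly among the mappers. A minimal sketch of the arithmetic, using the values reported in the log:

```shell
# Reproduce Sqoop's split-size calculation (integer division) from the
# MIN/MAX values the BoundingValsQuery returned.
MIN=21
MAX=32138
MAPPERS=2
SPLIT_SIZE=$(( (MAX - MIN) / MAPPERS ))
echo "Split size: $SPLIT_SIZE; Num splits: $MAPPERS from: $MIN to: $MAX"
# → Split size: 16058; Num splits: 2 from: 21 to: 32138
```

Each of the two map tasks then imports roughly one half of the FID range in parallel.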

19/09/04 01:11:37 INFO mapreduce.JobSubmitter: number of splits:2

19/09/04 01:11:38 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1567503661837_0005
19/09/04 01:11:39 INFO impl.YarnClientImpl: Submitted application application_1567503661837_0005
19/09/04 01:11:39 INFO mapreduce.Job: The url to track the job: http://centos02:8088/proxy/application_1567503661837_0005/
19/09/04 01:11:39 INFO mapreduce.Job: Running job: job_1567503661837_0005
19/09/04 01:11:51 INFO mapreduce.Job: Job job_1567503661837_0005 running in uber mode : false

19/09/04 01:11:51 INFO mapreduce.Job: map 0% reduce 0%

19/09/04 01:12:07 INFO mapreduce.Job: map 100% reduce 0%

19/09/04 01:12:09 INFO mapreduce.Job: Job job_1567503661837_0005 completed successfully
19/09/04 01:12:10 INFO mapreduce.Job: Counters: 30
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=357730
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=206
        HDFS: Number of bytes written=3753700
        HDFS: Number of read operations=8
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=4
    Job Counters
        Launched map tasks=2
        Other local map tasks=2
        Total time spent by all maps in occupied slots (ms)=25703
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=25703
        Total vcore-milliseconds taken by all map tasks=25703
        Total megabyte-milliseconds taken by all map tasks=26319872
    Map-Reduce Framework
        Map input records=12122
        Map output records=12122
        Input split bytes=206
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=443
        CPU time spent (ms)=11370
        Physical memory (bytes) snapshot=399974400
        Virtual memory (bytes) snapshot=4243742720
        Total committed heap usage (bytes)=198180864
    File Input Format Counters
        Bytes Read=0
    File Output Format Counters
        Bytes Written=3753700
19/09/04 01:12:10 INFO mapreduce.ImportJobBase: Transferred 3.5798 MB in 37.8235 seconds (96.9166 KB/sec)
19/09/04 01:12:10 INFO mapreduce.ImportJobBase: Retrieved 12122 records.
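The summary figures are internally consistent: 3753700 bytes were written to HDFS over 37.8235 seconds. A quick check of the reported MB and KB/sec values (awk handles the floating-point math; the result agrees with the log to within rounding):

```shell
# Verify the transfer summary: bytes written -> MB (1 MB = 1048576 bytes)
# and bytes / seconds -> KB/sec.
BYTES=3753700
SECS=37.8235
awk -v b="$BYTES" -v s="$SECS" 'BEGIN {
    printf "Transferred %.4f MB in %s seconds (%.4f KB/sec)\n",
           b / 1048576, s, b / 1024 / s
}'
```

So the 12122 imported rows produced about 3.58 MB of Avro output across the two map tasks' files.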

[root@centos02 bin]#

Final words

That is everything 幽默西装 collected on whether MySQL data needs a format change when transferred to HDFS, i.e. syncing a MySQL table to HDFS with Sqoop (Part 2), setting the storage format. I hope this article helps you solve the development problems you have run into.


This content was provided by a community contributor, or collected and organized from the web, for learning and reference; copyright remains with the original author.