Flume: Connecting to HDFS and Hive

Overview

Connecting Flume to HDFS

Open the Flume configuration


Configure flume.conf


# Name the components on this agent

a1.sources = r1

a1.sinks = k1

a1.channels = c1

# sources

a1.sources.r1.type = netcat

a1.sources.r1.bind = 0.0.0.0

a1.sources.r1.port = 41414

# sinks

a1.sinks.k1.type = hdfs

a1.sinks.k1.hdfs.path = hdfs://slave1/flume/events/%y-%m-%d/%H%M/%S

a1.sinks.k1.hdfs.filePrefix = events-

a1.sinks.k1.hdfs.round = true

a1.sinks.k1.hdfs.roundValue = 10

a1.sinks.k1.hdfs.roundUnit = minute

a1.sinks.k1.hdfs.useLocalTimeStamp=true

a1.sinks.k1.hdfs.batchSize = 10

a1.sinks.k1.hdfs.fileType = DataStream

# channels

a1.channels.c1.type = memory

a1.channels.c1.capacity = 1000

a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel

a1.sources.r1.channels = c1

a1.sinks.k1.channel = c1
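To make the path escapes concrete, here is a small Python sketch (not Flume code; `bucket_path` is a hypothetical helper) of how `%y-%m-%d/%H%M/%S` in `hdfs.path` interacts with `round = true`, `roundValue = 10`, `roundUnit = minute`: the event timestamp is rounded down to the nearest 10 minutes before the escapes are substituted.

```python
# Sketch (not Flume code): how the %y-%m-%d/%H%M/%S escapes in hdfs.path
# combine with round = true, roundValue = 10, roundUnit = minute.
from datetime import datetime

def bucket_path(ts: datetime, round_minutes: int = 10) -> str:
    """Round the event timestamp down to the nearest 10 minutes,
    then substitute the path escapes, as the HDFS sink does."""
    rounded = ts.replace(minute=ts.minute - ts.minute % round_minutes, second=0)
    return rounded.strftime("/flume/events/%y-%m-%d/%H%M/%S")

print(bucket_path(datetime(2017, 8, 1, 14, 37, 25)))
# -> /flume/events/17-08-01/1430/00
```

This is why events arriving between 14:30 and 14:39 all land in the same `…/1430/00` directory.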

Test the connection with telnet

telnet slave1 41414


Check the agent log to find the HDFS file


View the file contents; the test succeeds.


Connecting Flume on Windows to Hive


# Name the components on this agent

a1.sources=r1

a1.sinks=k1

a1.channels=c1

# source

a1.sources.r1.type=avro

a1.sources.r1.bind=0.0.0.0

a1.sources.r1.port=43434

# sink

a1.sinks.k1.type = hive

a1.sinks.k1.hive.metastore = thrift://192.168.18.33:9083

a1.sinks.k1.hive.database = bd14

a1.sinks.k1.hive.table = flume_log

a1.sinks.k1.useLocalTimeStamp = true

a1.sinks.k1.serializer = DELIMITED

a1.sinks.k1.serializer.delimiter = "\t"

a1.sinks.k1.serializer.serdeSeparator = '\t'

a1.sinks.k1.serializer.fieldnames = id,time,context

a1.sinks.k1.hive.txnsPerBatchAsk = 5

# channel

a1.channels.c1.type=memory

a1.channels.c1.capacity=1000

a1.channels.c1.transactionCapacity=100

# Bind the source and sink to the channel

a1.sources.r1.channels=c1

a1.sinks.k1.channel=c1
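For reference, the DELIMITED serializer expects each event body to carry the fields named in `serializer.fieldnames`, in order, joined by the delimiter. A minimal Python sketch of one such body (`make_event_body` is a hypothetical helper, not part of Flume):

```python
# Sketch: the shape of one event body the Hive sink's DELIMITED serializer
# parses -- fields in the order of serializer.fieldnames (id, time, context),
# joined by the tab delimiter configured above.
def make_event_body(id_, time_, context):
    return "\t".join([str(id_), time_, context])

body = make_event_body(1, "2017-08-01 14:30:00", "login ok")
print(body)  # three tab-separated fields: id, time, context
```

Log lines dropped into the source must already be tab-separated in this field order, or the sink cannot map them onto the Hive columns.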

Configure Flume on Windows

# Name the components on this agent

a1.sources = r1

a1.sinks = k1

a1.channels = c1

# source

a1.sources.r1.type = spooldir

a1.sources.r1.spoolDir = F:\test

a1.sources.r1.fileHeader = true

# sink

a1.sinks.k1.type = avro

a1.sinks.k1.hostname = 192.168.18.34

a1.sinks.k1.port = 43434

# channel

a1.channels.c1.type = memory

a1.channels.c1.capacity = 1000

a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel

a1.sources.r1.channels = c1

a1.sinks.k1.channel = c1

Create the log table in Hive


The Flume documentation requires the Hive sink's target table to be bucketed and stored as ORC. In testing, if the table is not declared as ORC, Hive receives no data.

create table flume_log(

id int

,time string

,context string

)

clustered by (id) into 3 buckets

stored as orc;
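As a rough illustration of what the bucketing clause does: Hive assigns each row to a bucket by hashing the clustering column, and for an int column the hash is the value itself, so a row lands in bucket `id % 3`. A Python sketch:

```python
# Sketch: how "clustered by (id) into 3 buckets" spreads rows.
# For Hive's int type the hash of a value is the value itself,
# so the bucket index is simply id % 3.
def bucket_of(id_: int, num_buckets: int = 3) -> int:
    return id_ % num_buckets

print([bucket_of(i) for i in range(6)])  # -> [0, 1, 2, 0, 1, 2]
```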

Create a log file in the monitored directory F:\test


On Windows, start Flume from its bin directory:

flume-ng.cmd agent -conf-file ../conf/windows.conf -name a1 -property flume.root.logger=INFO,console

Find a log file on Windows and drag it into F:\test; its contents look like this:


When Flume has finished reading a file, it renames the file with a .COMPLETED suffix.
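The rename can be simulated locally; a Python sketch, assuming the spooling-directory source's default fileSuffix of `.COMPLETED` (the temp directory stands in for F:\test):

```python
# Sketch: what the spooling-directory source does after ingesting a file --
# it renames the file in place with the completion suffix; it does not delete it.
import os
import tempfile

spool_dir = tempfile.mkdtemp()  # stands in for F:\test
path = os.path.join(spool_dir, "app.log")
with open(path, "w") as f:
    f.write("1\t2017-08-01 14:30:00\tlogin ok\n")

# after Flume has read the whole file:
os.rename(path, path + ".COMPLETED")
print(os.listdir(spool_dir))  # -> ['app.log.COMPLETED']
```

Because the renamed file stays in the directory, re-dropping a file with the same name as an already-completed one will make the source fail, so use fresh file names when testing.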


Check the Hive table


The test succeeds. I had originally planned to query the Hive table through Impala, but Impala does not support ORC-format Hive tables, and the Flume sink requires ORC for its output, so Impala had to be dropped for now. I will post a follow-up if I solve this later.

Problems Encountered

Cause: in CDH's Flume, the HDFS path only needs the IP address (or host name); the port does not need to be configured.

Garbled content in the HDFS file


Fix: add the following to the Flume configuration

a1.sinks.k1.hdfs.fileType = DataStream

Cause: hdfs.fileType defaults to SequenceFile, which wraps events in Hadoop's binary SequenceFile container, so the raw file looks garbled when read as plain text.


AvroRuntimeException: Excessively large list allocation request detected: 825373449 items!


Fix: increase the Flume agent's Java heap size

Cause: the Flume agent ran out of memory
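One common way to raise the heap in a plain Flume install is through conf/flume-env.sh (in CDH the equivalent setting is usually exposed in the Flume service configuration instead). The sizes below are examples, not recommendations; tune them to your load:

```shell
# conf/flume-env.sh -- example heap sizes only, adjust for your workload
export JAVA_OPTS="-Xms512m -Xmx1024m"
```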

NoClassDefFoundError: org/apache/hive/hcatalog/streaming/RecordWriter


Fix:

Locate the directory containing Hive's jars, locate the directory containing Flume's jars, and copy the Hive jars over:

cp /opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/jars/hive-* /opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/flume-ng/lib/

Cause: Flume is missing Hive's jars; they need to be copied from the CDH parcel.

EventDeliveryException: java.lang.NullPointerException: Expected timestamp in the Flume event headers, but it was null


Cause: the timestamp parameter was not set correctly

Fix:

Configure the sink in the Flume conf file (note the property sits directly on the sink, not under the hive. prefix, matching the working configuration above):

a1.sinks.k1.useLocalTimeStamp = true
