Big Data Platform Operations: Flume

Overview

Flume

51. Install and start the Flume component on the master node, then open a Linux shell and run the flume-ng help command to view flume-ng's usage information. The output is shown below.

[root@master ~]# flume-ng help

Usage: /usr/hdp/2.4.3.0-227/flume/bin/flume-ng.distro <command> [options]...

commands:
  help                    display this help text
  agent                   run a Flume agent
  avro-client             run an avro Flume client
  password                create a password file for use in flume config
  version                 show Flume version info

global options:
  --conf,-c <conf>        use configs in <conf> directory
  --classpath,-C <cp>     append to the classpath
  --dryrun,-d             do not actually start Flume, just print the command
  --plugins-path <dirs>   colon-separated list of plugins.d directories. See the
                          plugins.d section in the user guide for more details.
                          Default: $FLUME_HOME/plugins.d
  -Dproperty=value        sets a Java system property value
  -Xproperty=value        sets a Java -X option

agent options:
  --conf-file,-f <file>   specify a config file (required)
  --name,-n <name>        the name of this agent (required)
  --help,-h               display help text

avro-client options:
  --rpcProps,-P <file>    RPC client properties file with server connection params
  --host,-H <host>        hostname to which events will be sent
  --port,-p <port>        port of the avro source
  --dirname <dir>         directory to stream to avro source
  --filename,-F <file>    text file to stream to avro source (default: std input)
  --headerFile,-R <file>  File containing event headers as key/value pairs on each new line
  --help,-h               display help text

  Either --rpcProps or both --host and --port must be specified.

password options:
  --outfile               The file in which encoded password is stored

Note that if <conf> directory is specified, then it is always included first
in the classpath.
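
Task 51 also asks for the Flume component to be installed and started. A quick sanity check that is not part of the capture above, and that prints the Flume build bundled with the HDP stack, is:

[root@master ~]# flume-ng version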

 

52. Using the provided log-example.conf template, use the Flume NG tool to collect the master node's system log /var/log/secure. Name the collected log files with the prefix "xiandian-sec", store them in the /1daoyun/file/flume directory of the HDFS file system, and set the timestamp rounding for files generated in HDFS to 10 minutes. After collection, list the contents of /1daoyun/file/flume in the HDFS file system. Submit the commands, the resulting output, and the modified log-example.conf file to the answer box.

[root@master ~]# hadoop fs -ls /1daoyun/file/flume
Found 1 items
-rw-r--r--   3 root hdfs       1142 2017-05-08 10:29 /1daoyun/file/flume/xiandian-sec.1494239316323

 

[root@master ~]# cat log-example.conf
# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/secure
a1.sources.r1.channels = c1

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000

# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://master:8020/1daoyun/file/flume
a1.sinks.k1.hdfs.filePrefix = xiandian-sec
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
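
The capture above omits the command that starts the agent for this configuration. A plausible invocation, following the same pattern as the task 53 command below and assuming log-example.conf sits in the current directory, is:

[root@master ~]# flume-ng agent --conf-file log-example.conf --name a1 -Dflume.root.logger=INFO,console

One caveat on the rounding settings: hdfs.round, hdfs.roundValue and hdfs.roundUnit only affect time escape sequences such as %H%M inside hdfs.path. With the literal path used here they are accepted but have no visible effect, and a path that does use time escapes would also need a timestamp header on each event (for example a1.sinks.k1.hdfs.useLocalTimeStamp = true), since the exec source does not set one.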

 

53. Using the provided hdfs-example.conf template, use the Flume NG tool to set the master node's local path /opt/xiandian/ as a directory whose files are uploaded in real time to the HDFS file system. Set the HDFS storage path to /data/flume/, keep the uploaded file names unchanged, set the file type to DataStream, and then start the flume-ng agent. Submit the commands and the modified hdfs-example.conf file to the answer box.

[root@master ~]# flume-ng agent --conf-file hdfs-example.conf --name master -Dflume.root.logger=INFO,console

Warning: No configuration directory set! Use --conf <dir> to override.
Info: Including Hadoop libraries found via (/bin/hadoop) for HDFS access
Info: Excluding /usr/hdp/2.4.3.0-227/hadoop/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/hadoop/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/tez/lib/slf4j-api-1.7.5.jar from classpath
Info: Including HBASE libraries found via (/bin/hbase) for HBASE access
Info: Excluding /usr/hdp/2.4.3.0-227/hbase/lib/slf4j-api-1.7.7.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/hadoop/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/hadoop/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/tez/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/hadoop/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/hadoop/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/zookeeper/lib/slf4j-api-1.6.1.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/zookeeper/lib/slf4j-log4j12-1.6.1.jar from classpath
Info: Including Hive libraries found via () for Hive access

 

[root@master ~]# cat hdfs-example.conf
# example.conf: A single-node Flume configuration

# Name the components on this agent
master.sources = webmagic
master.sinks = k1
master.channels = c1

# Describe/configure the source
master.sources.webmagic.type = spooldir
master.sources.webmagic.fileHeader = true
master.sources.webmagic.fileHeaderKey = fileName
master.sources.webmagic.fileSuffix = .COMPLETED
master.sources.webmagic.deletePolicy = never
master.sources.webmagic.spoolDir = /opt/xiandian/
master.sources.webmagic.ignorePattern = ^$
master.sources.webmagic.consumeOrder = oldest
master.sources.webmagic.deserializer = org.apache.flume.sink.solr.morphline.BlobDeserializer$Builder
master.sources.webmagic.batchSize = 5
master.sources.webmagic.channels = c1

# Use a channel which buffers events in memory
master.channels.c1.type = memory

# Describe the sink
master.sinks.k1.type = hdfs
master.sinks.k1.channel = c1
master.sinks.k1.hdfs.path = hdfs://master:8020/data/flume/%{dicName}
master.sinks.k1.hdfs.filePrefix = %{fileName}
master.sinks.k1.hdfs.fileType = DataStream
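
As a minimal smoke test, assuming the agent started above is running (the file name below is illustrative, not from the original capture), drop a file into the spool directory and list the HDFS target:

[root@master ~]# echo "flume spooldir test" > /opt/xiandian/test.log
[root@master ~]# ls /opt/xiandian/
[root@master ~]# hadoop fs -ls /data/flume/

Once the file is consumed, the local listing shows it renamed to test.log.COMPLETED (per fileSuffix), and the HDFS listing shows a file whose prefix comes from the fileName header. Two details of this template are worth flagging: fileHeader = true stores the source file's absolute path (here under the fileName key), so %{fileName} expands to the full path rather than the bare name (basenameHeader = true with basenameHeaderKey would give just the base name); and %{dicName} in hdfs.path refers to a header that nothing in this configuration sets, so that path component may need adjusting for files to land directly under /data/flume/.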
