
Overview

1. Compiling and Installing Mahout 0.9 for Hadoop 2.2.0

1.1. Prerequisites

1) Install the JDK
2) Install Maven
See the companion article 《基础准备(Hadoop/Spark/Mahout安装准备)》 (basic preparation for installing Hadoop/Spark/Mahout).

1.2. Downloading the Mahout Source

Official Apache archive for Mahout releases:

http://archive.apache.org/dist/mahout/

You can download the prebuilt binary package, mahout-distribution-0.9.zip, but it only supports distributed computation on Hadoop 1. To support the Hadoop 2.x series, you need to download the source and recompile. mahout-distribution-0.9-src.tar.gz is available at:
http://archive.apache.org/dist/mahout/0.9/mahout-distribution-0.9-src.tar.gz

1.3. Compiling Mahout 0.9

1) Patching the Mahout 0.9 source

Out of the box, Mahout 0.9 only supports Hadoop 1. The fix is tracked at https://issues.apache.org/jira/browse/MAHOUT-1329; it essentially modifies the pom files to change Mahout's Hadoop dependencies.

Download 1329-3.patch and copy it to the server:

https://issues.apache.org/jira/secure/attachment/12630146/1329-3.patch

Download and extract the mahout-distribution-0.9-src.tar.gz source on the server, then apply the patch from the source root directory:
[root@master mahout-distribution-0.9]$ patch -p0 < ../1329-3.patch
patching file core/pom.xml
patching file integration/pom.xml
patching file pom.xml

2) Compiling the Mahout 0.9 source for Hadoop 2.x

[root@master mahout-distribution-0.9]$ mvn package -Prelease -Dhadoop2 -Dhadoop2.version=2.2.0 -DskipTests=true

……a long wait……

[INFO] Reactor Summary:
[INFO] 
[INFO] Mahout Build Tools ................................. SUCCESS [  1.680 s]
[INFO] Apache Mahout ...................................... SUCCESS [  2.056 s]
[INFO] Mahout Math ........................................ SUCCESS [ 24.012 s]
[INFO] Mahout Core ........................................ SUCCESS [ 32.697 s]
[INFO] Mahout Integration ................................. SUCCESS [  7.977 s]
[INFO] Mahout Examples .................................... SUCCESS [ 20.199 s]
[INFO] Mahout Release Package ............................. SUCCESS [ 34.697 s]
[INFO] Mahout Math/Scala wrappers ......................... SUCCESS [  4.728 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:08 min
[INFO] Finished at: 2014-12-11T14:56:44+08:00
[INFO] Final Memory: 87M/1320M
[INFO] ------------------------------------------------------------------------

The reactor summary above means the build succeeded.
Check the generated distribution packages:
[root@master mahout-distribution-0.9]$ cd distribution/target
[root@master target]$ ls -la
drwxr-xr-x 2 root            root       4096 Nov 27 09:27 archive-tmp
drwxr-xr-x 3 root            root       4096 Nov 27 09:27 mahout-distribution-0.9
drwxr-xr-x 3 root            root       4096 Nov 27 09:27 mahout-distribution-0.9-src
-rw-r--r-- 1 root            root    3671793 Dec 11 14:56 mahout-distribution-0.9-src.tar.gz
-rw-r--r-- 1 root            root    5064136 Dec 11 14:56 mahout-distribution-0.9-src.zip
-rw-r--r-- 1 root            root  120579010 Dec 11 14:56 mahout-distribution-0.9.tar.gz
-rw-r--r-- 1 root            root  148440568 Dec 11 14:56 mahout-distribution-0.9.zip
mahout-distribution-0.9.tar.gz is the binary package to deploy.

3) Installing and Deploying Mahout 0.9

Take the freshly built mahout-distribution-0.9.tar.gz package and simply extract it:
[root@master target]$ cp mahout-distribution-0.9.tar.gz /usr/lib/
[root@master target]$ cd /usr/lib/
[root@master lib]$ tar -zxvf mahout-distribution-0.9.tar.gz

Configure the environment variables (note: on Hadoop 2.x the default configuration directory is $HADOOP_HOME/etc/hadoop rather than $HADOOP_HOME/conf, so point HADOOP_CONF_DIR at whichever your installation actually uses):
[root@master lib]$ vi /etc/profile
Append:
#mahout
export MAHOUT_HOME=/usr/lib/mahout-distribution-0.9
export HADOOP_CONF_DIR=$HADOOP_HOME/conf
export MAHOUT_CONF_DIR=$MAHOUT_HOME/conf
export PATH=$HADOOP_HOME/bin:$MAHOUT_HOME/bin:$PATH
export CLASSPATH=.:$MAHOUT_HOME/lib:$HADOOP_CONF_DIR:$MAHOUT_CONF_DIR:$CLASSPATH

Save and exit, then run the following to apply the changes immediately:
[root@master lib]$ source /etc/profile
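A quick sanity check that the variables expand as intended (a minimal sketch; the paths assume the installation layout used above):

```shell
# Paths assume the install location used above; adjust if yours differs
export MAHOUT_HOME=/usr/lib/mahout-distribution-0.9
export MAHOUT_CONF_DIR=$MAHOUT_HOME/conf
echo "$MAHOUT_CONF_DIR"
```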

Configure Mahout:
[root@master lib]$ vi mahout-distribution-0.9/bin/mahout
Add:
MAHOUT_JAVA_HOME=/usr/local/jdk1.7.0_03

To run Mahout locally (standalone, without Hadoop), also add:
MAHOUT_LOCAL=true
If MAHOUT_LOCAL is left unset, Mahout runs on Hadoop by default.
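The behaviour can be pictured with a simplified sketch of the check bin/mahout performs (an illustration, not the script's literal code):

```shell
# Simplified sketch of the mode check inside bin/mahout:
# a non-empty MAHOUT_LOCAL means "run locally"; unset or empty means "run on Hadoop".
MAHOUT_LOCAL=""
if [ -n "$MAHOUT_LOCAL" ]; then
  echo "MAHOUT_LOCAL is set, running locally"
else
  echo "MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath."
fi
```

The second message is exactly what the real script prints in the test run below.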

Verify that Mahout is installed correctly:
[root@master lib]$ mahout --help
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Running on hadoop, using /usr/local/hadoop/hadoop-2.2.0/bin/hadoop and HADOOP_CONF_DIR=/usr/local/hadoop/hadoop-2.2.0/conf
MAHOUT-JOB: /usr/lib/mahout-distribution-0.9/mahout-examples-0.9-job.jar
Unknown program '--help' chosen.
Valid program names are:
  arff.vector: : Generate Vectors from an ARFF file or directory
  baumwelch: : Baum-Welch algorithm for unsupervised HMM training
  canopy: : Canopy clustering
  cat: : Print a file or resource as the logistic regression models would see it
  cleansvd: : Cleanup and verification of SVD output
  clusterdump: : Dump cluster output to text
  clusterpp: : Groups Clustering Output In Clusters
  cmdump: : Dump confusion matrix in HTML or text formats
  concatmatrices: : Concatenates 2 matrices of same cardinality into a single matrix
  cvb: : LDA via Collapsed Variation Bayes (0th deriv. approx)
  cvb0_local: : LDA via Collapsed Variation Bayes, in memory locally.
  evaluateFactorization: : compute RMSE and MAE of a rating matrix factorization against probes
  fkmeans: : Fuzzy K-means clustering
  hmmpredict: : Generate random sequence of observations by given HMM
  itemsimilarity: : Compute the item-item-similarities for item-based collaborative filtering
  kmeans: : K-means clustering
  lucene.vector: : Generate Vectors from a Lucene index
  lucene2seq: : Generate Text SequenceFiles from a Lucene index
  matrixdump: : Dump matrix in CSV format
  matrixmult: : Take the product of two matrices
  parallelALS: : ALS-WR factorization of a rating matrix
  qualcluster: : Runs clustering experiments and summarizes results in a CSV
  recommendfactorized: : Compute recommendations using the factorization of a rating matrix
  recommenditembased: : Compute recommendations using item-based collaborative filtering
  regexconverter: : Convert text files on a per line basis based on regular expressions
  resplit: : Splits a set of SequenceFiles into a number of equal splits
  rowid: : Map SequenceFile<Text,VectorWritable> to {SequenceFile<IntWritable,VectorWritable>, SequenceFile<IntWritable,Text>}
  rowsimilarity: : Compute the pairwise similarities of the rows of a matrix
  runAdaptiveLogistic: : Score new production data using a probably trained and validated AdaptivelogisticRegression model
  runlogistic: : Run a logistic regression model against CSV data
  seq2encoded: : Encoded Sparse Vector generation from Text sequence files
  seq2sparse: : Sparse Vector generation from Text sequence files
  seqdirectory: : Generate sequence files (of Text) from a directory
  seqdumper: : Generic Sequence File dumper
  seqmailarchives: : Creates SequenceFile from a directory containing gzipped mail archives
  seqwiki: : Wikipedia xml dump to sequence file
  spectralkmeans: : Spectral k-means clustering
  split: : Split Input data into test and train sets
  splitDataset: : split a rating dataset into training and probe parts
  ssvd: : Stochastic SVD
  streamingkmeans: : Streaming k-means clustering
  svd: : Lanczos Singular Value Decomposition
  testnb: : Test the Vector-based Bayes classifier
  trainAdaptiveLogistic: : Train an AdaptivelogisticRegression model
  trainlogistic: : Train a logistic regression using stochastic gradient descent
  trainnb: : Train the Vector-based Bayes classifier
  transpose: : Take the transpose of a matrix
  validateAdaptiveLogistic: : Validate an AdaptivelogisticRegression model against hold-out data set
  vecdist: : Compute the distances between a set of Vectors (or Cluster or Canopy, they must fit in memory) and a list of Vectors
  vectordump: : Dump vectors from a sequence file to text
  viterbi: : Viterbi decoding of hidden states from given output states sequence

The output above confirms a successful installation: note the message "Running on hadoop", followed by the list of programs Mahout supports.

4) Testing Mahout 0.9 with an Example

Use Naive Bayes text classification to test Mahout 0.9's compatibility with Hadoop 2.2.0.
The example follows the bundled script:
mahout-distribution-0.9/examples/bin/classify-20newsgroups.sh

Step 1: upload the 20news files to HDFS, then verify they are there:
 hadoop fs -ls /workspace/mahout/week4/data/20news
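If the corpus is not on HDFS yet, the upload preceding that listing would look like this (the local directory name ./20news-all is an assumption; substitute wherever you extracted the dataset). These commands require a running Hadoop 2.x cluster:

```shell
# Create the target directory on HDFS and upload the locally extracted corpus.
# ./20news-all is a hypothetical local path; adjust to your layout.
hadoop fs -mkdir -p /workspace/mahout/week4/data
hadoop fs -put ./20news-all /workspace/mahout/week4/data/20news
```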
Step 2: create sequence files from the data:
 ./mahout seqdirectory -i /workspace/mahout/week4/data/20news -o /workspace/mahout/week4/data/20news_seq
Step 3: convert the sequence files to TF-IDF vectors:
./mahout seq2sparse -i /workspace/mahout/week4/data/20news_seq/ -o /workspace/mahout/week4/data/20news_vectors -lnorm -nv -wt tfidf
Step 4: split the vector set into training and test data:
./mahout split -i /workspace/mahout/week4/data/20news_vectors/tfidf-vectors -tr /workspace/mahout/week4/data/train-vectors -te /workspace/mahout/week4/data/test-vectors -rp 20 -ow -seq -xm sequential
Step 5: train the model:
 ./mahout trainnb -i /workspace/mahout/week4/data/train-vectors -el -o /workspace/mahout/week4/nbmodel -li /workspace/mahout/week4/labindex -ow -c
Step 6: test the model:

./mahout testnb -i /workspace/mahout/week4/data/test-vectors -m /workspace/mahout/week4/nbmodel -l /workspace/mahout/week4/labindex -ow -o /workspace/mahout/week4/20news-test-result -c


Reposted from (please credit the original source):

http://blog.csdn.net/sunbow0/article/details/41962071
