Overview
Install the JDK
http://blog.csdn.net/stanely_hwang/article/details/18883599
Hadoop single-node pseudo-distributed installation
http://blog.csdn.net/stanely_hwang/article/details/18884181
Mahout Installation and Configuration
1: Download the binary distribution and extract it:
http://www.apache.org/dyn/closer.cgi/mahout/
Once downloaded, Mahout only needs to be extracted. I downloaded it to /opt/hadoop; change into that directory and extract:
$ cd /opt/hadoop
$ tar -zxvf mahout-distribution-0.9.tar.gz
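A quick way to confirm the archive unpacked cleanly is to list the new directory (contents as shipped in the 0.9 binary distribution):
$ ls mahout-distribution-0.9    # expect bin/, lib/, and mahout-examples-0.9-job.jar among others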
2: Configure the environment variables (e.g., in ~/.bashrc):
JAVA_HOME=/opt/java/jdk
JRE_HOME=/opt/java/jdk
export JAVA_HOME
export JRE_HOME
export HADOOP_HOME=/home/andy/hadoop-2.2.0
export HADOOP_CONF_DIR=/home/andy/hadoop-2.2.0/conf
export MAHOUT_HOME=/opt/hadoop/mahout-distribution-0.9
export PATH=$HADOOP_HOME/bin:$MAHOUT_HOME/bin:$PATH
Note that a stock Hadoop 2.2.0 layout keeps its configuration in $HADOOP_HOME/etc/hadoop; point HADOOP_CONF_DIR there unless you have created a conf directory yourself.
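After editing ~/.bashrc, reload it and sanity-check the variables (a minimal check, using the paths assumed above):
$ source ~/.bashrc
$ echo $MAHOUT_HOME      # should print /opt/hadoop/mahout-distribution-0.9
$ which mahout           # should resolve to $MAHOUT_HOME/bin/mahout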
3: Start Hadoop (in Hadoop 2.x the daemon scripts live under $HADOOP_HOME/sbin):
$ cd $HADOOP_HOME/sbin
$ ./hadoop-daemon.sh start namenode
$ ./hadoop-daemon.sh start datanode
$ ./yarn-daemon.sh start resourcemanager
$ ./yarn-daemon.sh start nodemanager
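To confirm the four daemons came up, jps (shipped with the JDK) should list them (a quick check; process names as in Hadoop 2.2.0):
$ jps    # expect NameNode, DataNode, ResourceManager and NodeManager, plus Jps itself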
4: mahout --help # verify the Mahout installation by checking that it lists the available algorithms
$ cd $MAHOUT_HOME/bin
$ ./mahout --help
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Running on hadoop, using /home/andy/hadoop-2.2.0/bin/hadoop and HADOOP_CONF_DIR=/home/andy/hadoop-2.2.0/conf
MAHOUT-JOB: /opt/hadoop/mahout-distribution-0.9/mahout-examples-0.9-job.jar
Unknown program '--help' chosen.
Valid program names are:
arff.vector: : Generate Vectors from an ARFF file or directory
baumwelch: : Baum-Welch algorithm for unsupervised HMM training
canopy: : Canopy clustering
cat: : Print a file or resource as the logistic regression models would see it
cleansvd: : Cleanup and verification of SVD output
clusterdump: : Dump cluster output to text
clusterpp: : Groups Clustering Output In Clusters
cmdump: : Dump confusion matrix in HTML or text formats
concatmatrices: : Concatenates 2 matrices of same cardinality into a single matrix
cvb: : LDA via Collapsed Variation Bayes (0th deriv. approx)
cvb0_local: : LDA via Collapsed Variation Bayes, in memory locally.
evaluateFactorization: : compute RMSE and MAE of a rating matrix factorization against probes
fkmeans: : Fuzzy K-means clustering
hmmpredict: : Generate random sequence of observations by given HMM
itemsimilarity: : Compute the item-item-similarities for item-based collaborative filtering
kmeans: : K-means clustering
lucene.vector: : Generate Vectors from a Lucene index
lucene2seq: : Generate Text SequenceFiles from a Lucene index
matrixdump: : Dump matrix in CSV format
matrixmult: : Take the product of two matrices
parallelALS: : ALS-WR factorization of a rating matrix
qualcluster: : Runs clustering experiments and summarizes results in a CSV
recommendfactorized: : Compute recommendations using the factorization of a rating matrix
recommenditembased: : Compute recommendations using item-based collaborative filtering
regexconverter: : Convert text files on a per line basis based on regular expressions
resplit: : Splits a set of SequenceFiles into a number of equal splits
rowid: : Map SequenceFile<Text,VectorWritable> to {SequenceFile<IntWritable,VectorWritable>, SequenceFile<IntWritable,Text>}
rowsimilarity: : Compute the pairwise similarities of the rows of a matrix
runAdaptiveLogistic: : Score new production data using a probably trained and validated AdaptivelogisticRegression model
runlogistic: : Run a logistic regression model against CSV data
seq2encoded: : Encoded Sparse Vector generation from Text sequence files
seq2sparse: : Sparse Vector generation from Text sequence files
seqdirectory: : Generate sequence files (of Text) from a directory
seqdumper: : Generic Sequence File dumper
seqmailarchives: : Creates SequenceFile from a directory containing gzipped mail archives
seqwiki: : Wikipedia xml dump to sequence file
spectralkmeans: : Spectral k-means clustering
split: : Split Input data into test and train sets
splitDataset: : split a rating dataset into training and probe parts
ssvd: : Stochastic SVD
streamingkmeans: : Streaming k-means clustering
svd: : Lanczos Singular Value Decomposition
testnb: : Test the Vector-based Bayes classifier
trainAdaptiveLogistic: : Train an AdaptivelogisticRegression model
trainlogistic: : Train a logistic regression using stochastic gradient descent
trainnb: : Train the Vector-based Bayes classifier
transpose: : Take the transpose of a matrix
validateAdaptiveLogistic: : Validate an AdaptivelogisticRegression model against hold-out data set
vecdist: : Compute the distances between a set of Vectors (or Cluster or Canopy, they must fit in memory) and a list of Vectors
vectordump: : Dump vectors from a sequence file to text
viterbi: : Viterbi decoding of hidden states from given output states sequence
[andy@localhost bin]$
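As the first line of that output notes, MAHOUT_LOCAL is unset, so Mahout submits its jobs to Hadoop. If you want to experiment on the local filesystem without Hadoop, the mahout script switches to local mode when MAHOUT_LOCAL is set to any non-empty value:
$ export MAHOUT_LOCAL=true   # run locally, ignoring HADOOP_CONF_DIR
$ unset MAHOUT_LOCAL         # go back to running on Hadoop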
5: Preparing to use Mahout:
- Prepare the data:
Download the test data from:
http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data
After downloading, place the data file under $MAHOUT_HOME.
- Create a test directory
Create the directory testdata on HDFS and upload the data into it:
$ cd $HADOOP_HOME/bin/
$ hadoop fs -mkdir testdata
$ hadoop fs -put $MAHOUT_HOME/synthetic_control.data testdata
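Before running the example, it is worth confirming the upload landed where expected (paths as created above):
$ hadoop fs -ls testdata     # should list synthetic_control.data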
- Run the k-means example
$ hadoop jar $MAHOUT_HOME/mahout-examples-0.9-job.jar org.apache.mahout.clustering.syntheticcontrol.kmeans.Job
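Run without arguments, this example Job falls back to built-in defaults: it reads its input from the HDFS directory testdata and writes the clustering results to output, which is why the upload above used those names. The same class can likely also be launched through the mahout driver script, which in 0.9 treats an unrecognized program name as a fully qualified class name, though I have only verified the hadoop jar form above:
$ mahout org.apache.mahout.clustering.syntheticcontrol.kmeans.Job   # assumes $MAHOUT_HOME/bin is on PATH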
- View the results
$ hadoop fs -lsr output
$ hadoop fs -get output $MAHOUT_HOME/result
$ cd $MAHOUT_HOME/result
$ ls
If you see a result listing like the above, the installation succeeded!
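For a human-readable view of the clusters, the clusterdump program from the step 4 listing can be pointed at the final k-means output. A sketch, assuming Mahout 0.9's -i/-p/-o options and a converged run; the iteration number in clusters-10-final varies, so check the actual directory name with hadoop fs -ls output first:
$ mahout clusterdump \
    -i output/clusters-10-final \
    -p output/clusteredPoints \
    -o $MAHOUT_HOME/result/clusterdump.txt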