Hadoop Common Errors

Overview

1. Cannot initialize Cluster when submitting a job (full stack trace below). Solution: the relevant JARs were missing from the classpath, which the error message makes hard to guess. Either add them to the classpath, as sketched after the stack trace, or copy the relevant JARs directly into the bin directory.

JAR location: hadoop-2.6.0-cdh5.8.0/share/hadoop/mapreduce2

 

Tracking this down cost quite a bit of time; a real pitfall.

./hadoop jar /opt/hadoop-2.6.0-cdh5.8.0/share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.8.0.jar wordcount  /user/hadoop/input /user/hadoop/inputcount

17/03/08 05:59:29 WARN security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.

java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.

     at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)

     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)

     at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)

     at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1277)

     at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1273)

     at java.security.AccessController.doPrivileged(Native Method)

     at javax.security.auth.Subject.doAs(Subject.java:415)

     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)

     at org.apache.hadoop.mapreduce.Job.connect(Job.java:1272)

     at org.apache.hadoop.mapreduce.Job.submit(Job.java:1301)

     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1325)

     at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)

     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

     at java.lang.reflect.Method.invoke(Method.java:606)

     at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)

     at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)

     at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)

     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

     at java.lang.reflect.Method.invoke(Method.java:606)

     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)

     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
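
A minimal sketch of the classpath fix, assuming the CDH install lives under /opt/hadoop-2.6.0-cdh5.8.0 as in the command above and that HADOOP_CLASSPATH is used to pull the JARs in (copying them into bin, as mentioned above, works as well):

# Add the mapreduce2 JARs to the client classpath before submitting the job
export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/opt/hadoop-2.6.0-cdh5.8.0/share/hadoop/mapreduce2/*"
# Re-run the wordcount example to confirm the Cluster now initializes
./hadoop jar /opt/hadoop-2.6.0-cdh5.8.0/share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.8.0.jar wordcount /user/hadoop/input /user/hadoop/inputcount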

2. When a Hadoop program is run from Eclipse on Windows, it runs as the Windows login user (Administrator) by default. If the HDFS directory being written to belongs to a different user or group, the upload fails with an exception like the following:

org.apache.hadoop.security.AccessControlException: Permission denied: user=Administrator, access=WRI

 

Solutions:

a. Open up permissions on the target HDFS directory:

./hdfs dfs -chmod 777 /user/hadoop
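
To confirm the new mode took effect, a quick listing of the parent directory shows the owner, group, and permission bits (a hypothetical verification step, not part of the original fix):

# The /user/hadoop entry should now show rwxrwxrwx
./hdfs dfs -ls /user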

 

b. Add the following to hdfs-site.xml:

<property>
<name>dfs.permissions</name>
<value>false</value>
<description>
If "true", enable permission checking in HDFS.
If "false", permission checking is turned off,
but all other behavior is unchanged.
Switching from one parameter value to the other does not change the mode,
owner or group of files or directories.
</description>
</property>

 

c. Run the program as a user that has the required permissions.

 

Set the environment variable

HADOOP_USER_NAME

or set it directly in code:

System.setProperty("HADOOP_USER_NAME", "hadoop");
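
For the environment-variable route, a minimal sketch (the user name hadoop is taken from the code line above; on Windows the variable can also be set in the Eclipse run configuration's environment settings before launching):

rem Make the Hadoop client act as the "hadoop" user instead of the Windows login user
set HADOOP_USER_NAME=hadoop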

3. java.lang.UnsatisfiedLinkError in NativeIO$Windows, which is mainly caused by one of two possible reasons.

One possibility is that a local Windows hadoop.dll was loaded, making Hadoop think it is running in a cluster environment, while Windows does not support a Hadoop cluster environment (a quick check of which hadoop.dll is being picked up is sketched after the stack trace below).

Running in local mode on a Windows 7 machine, the following exception was thrown:

 

An exception or error caused a run to abort: org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0(Ljava/lang/String;JJJI)Ljava/io/FileDescriptor; 
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0(Ljava/lang/String;JJJI)Ljava/io/FileDescriptor;
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileWithMode0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.createFileOutputStreamWithMode(NativeIO.java:559)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:219)
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
    at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:295)
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:388)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:451)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:430)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:920)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:901)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:798)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:368)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:341)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:292)
    at org.apache.hadoop.fs.LocalFileSystem.copyFromLocalFile(LocalFileSystem.java:82)
    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1882)
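
Since the suspicion above is that a locally installed hadoop.dll is being picked up, a hypothetical quick check from a Windows command prompt lists every copy of the library visible on the PATH (which is where the JVM looks for native libraries on Windows):

rem Each hit is a candidate for the hadoop.dll that actually got loaded
where hadoop.dll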
