Overview
I searched the web extensively for this problem, and here is a rough summary of what I found.
The symptom is this: if, as in the screenshot below, the browseDirectory.jsp page does not exist, the DataNode probably failed to start.
So the next thing to check is the NameNode status page:
if Live Nodes is 0, and the block count above it is also 0, then it is worth trying to restart the DataNode:
bin/hadoop-daemon.sh start datanode
[hadoop@mylinux bin]$ jps
8654 NameNode
8889 Jps
8808 JobTracker
[hadoop@mylinux bin]$ ./hadoop-daemon.sh start datanode
starting datanode, logging to /usr/myhadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-mylinux.out
[hadoop@mylinux bin]$ jps
9027 Jps
8654 NameNode
8941 DataNode
8808 JobTracker
[hadoop@mylinux bin]$
As the commands above show, after restarting the DataNode and running jps again, the DataNode process is there. Clicking Browse the filesystem again then works fine.
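Besides jps and the web UI, you can confirm the DataNode actually registered with the NameNode from the command line. This is a sketch assuming the Hadoop 0.x/1.x command set used in this post and that you run it from the Hadoop install directory:

```shell
# In Hadoop 0.x/1.x, `dfsadmin -report` prints cluster-wide DataNode status.
# If this still reports 0 available DataNodes after the restart, the next
# step is to look at the DataNode log.
bin/hadoop dfsadmin -report | grep -i "datanodes available"
```

On a healthy single-node setup this line should report 1 live DataNode rather than 0.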
There is another explanation floating around online; I will just give the links here:
http://yymmiinngg.iteye.com/blog/706909
http://blog.sina.com.cn/s/blog_6d932f2a0101fswv.html
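Without reproducing those posts here, the cause commonly described for this symptom, and the one consistent with the mailing-list thread quoted below (where the problem appeared right after reformatting HDFS), is an "Incompatible namespaceIDs" error: reformatting gives the NameNode a new namespaceID while the DataNode's storage directory keeps the old one. A hedged sketch of the commonly cited workaround follows; the dfs.data.dir path is an assumption based on the old default, and this approach destroys the DataNode's local block data, so it is only suitable for a test cluster:

```shell
# WARNING: this wipes the DataNode's local block storage. Only do this on a
# test cluster holding nothing you need to keep.
# dfs.data.dir is assumed to be the old default /tmp/hadoop-${USER}/dfs/data;
# check conf/hdfs-site.xml (or hadoop-site.xml) for the real location.
bin/stop-all.sh
rm -rf "/tmp/hadoop-${USER}/dfs/data"
bin/hadoop namenode -format
bin/start-all.sh
```

An alternative sometimes suggested is to edit the VERSION file under the DataNode's storage directory so its namespaceID matches the NameNode's, which avoids deleting data.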
For this problem I also checked the mailing-list archive at http://comments.gmane.org/gmane.comp.jakarta.lucene.hadoop.user/6649:
I was exactly following the Hadoop 0.16.4 quickstart guide to run a Pseudo-distributed operation on my Fedora 8 machine. The first time I did it, everything ran successfully (formated a new hdfs, started hadoop daemons, then ran the grep example). A moment later, I decided to redo everything again. Reformating the hdfs and starting the daemons seemed to have no problem; but from the homepage of the namenode's web interface (http://localhost:50070/), when I clicked "Browse the filesystem", it said the following:

HTTP ERROR: 404
/browseDirectory.jsp
RequestURI=/browseDirectory.jsp

Then when I tried to copy files to the hdfs to re-run the grep example, I couldn't, with the following long list of exceptions (looks like some replication or block allocation issue):

# bin/hadoop dfs -put conf input
08/06/29 09:38:42 INFO dfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/input/hadoop-env.sh could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1127)
    at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:312)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:901)
    at org.apache.hadoop.ipc.Client.call(Client.java:512)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
    at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
9 Jul 2008 07:13
Re: Failed to repeat the Quickstart guide for Pseudo-distributed operation
> # bin/hadoop dfs -put conf input
>
> 08/06/29 09:38:42 INFO dfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/input/hadoop-env.sh could only be replicated to 0 nodes, instead of 1

Looks like your datanode didn't come up, anything in the logs? http://wiki.apache.org/hadoop/Help

Arun
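Arun's advice ("anything in the logs?") is the right first step. A sketch of what that check might look like; the log path mirrors the .out file printed by hadoop-daemon.sh earlier in this post (the matching .log file is usually where errors land), and the specific messages grepped for are assumptions about likely failures, not an exhaustive list:

```shell
# Scan the DataNode log for common startup failures: generic errors,
# namespaceID mismatches after a reformat, and port-in-use BindExceptions.
# Adjust the user/host parts of the filename to your machine.
grep -E "ERROR|Incompatible namespaceIDs|BindException" \
  /usr/myhadoop/hadoop/logs/hadoop-hadoop-datanode-mylinux.log
```

Whatever this turns up tells you whether you are looking at the namespaceID problem above or something else entirely.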
Finally
That is everything on the problem of the dead "Browse the filesystem" link on the Hadoop status page; I hope it helps you resolve the same issue.
This content was collected from the web and is provided for learning and reference; copyright remains with the original authors.