Overview
Contents
1. Change the hostname: /etc/hostname
2. Configure /etc/hosts with the IP mapping
3. Passwordless SSH: ssh-keygen -t rsa
4. Install JDK 1.8 (required for Hadoop 3.2.1)
5. Edit the configuration files
5.1 core-site.xml
5.2 hdfs-site.xml
5.3 hadoop-env.sh
5.4 workers (hostname)
6. Format the NameNode
7. Start Hadoop
Problem:
Failed to retrieve data from /webhdfs/v1/?op=LISTSTATUS: Server Error
Solution:
This is a single-node Hadoop deployment, intended mainly for running small demo tests (keeping several virtual machines open at once makes the computer very sluggish).
For the detailed procedure, see the fully distributed Hadoop guide: https://blog.csdn.net/yang_zzu/article/details/108171482
1. Change the hostname: /etc/hostname
2. Configure /etc/hosts with the IP mapping
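Steps 1 and 2 can be sketched as the commands below. The hostname yang10 and the IP 192.168.44.10 come from this guide (the IP matches the NameNode web UI address shown later); substitute your own VM's address.

```shell
# Step 1: set the hostname that the Hadoop config files will refer to
hostnamectl set-hostname yang10

# Step 2: map the VM's IP to that hostname so HDFS can resolve it
echo "192.168.44.10 yang10" >> /etc/hosts

# Verify the mapping resolves
ping -c 1 yang10
```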
3. Passwordless SSH: ssh-keygen -t rsa
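Even on a single node, start-dfs.sh logs in to every host listed in workers over SSH, so the machine must be able to SSH to itself without a password. A minimal sketch:

```shell
# Generate an RSA key pair with no passphrase (accept the default path)
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

# Authorize the key for login to this same machine
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Verify: this should succeed without a password prompt
ssh yang10 true
```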
4. Install JDK 1.8 (required for Hadoop 3.2.1)
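After installing JDK 8, it is worth confirming the version at the exact path that will be set as JAVA_HOME in step 5.3; this catches the JDK 11 mismatch described in the Problem section before Hadoop ever starts.

```shell
# Confirm the JDK that Hadoop will use; the path matches
# the JAVA_HOME exported in hadoop-env.sh (step 5.3)
/usr/java/jdk1.8.0_271-amd64/bin/java -version
# The first line of output should report a 1.8.x version, not 11
```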
5. Edit the configuration files
5.1 core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://yang10:9820</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/big/hadoopdata/single</value>
    </property>
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>root</value>
    </property>
</configuration>
5.2 hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
5.3 hadoop-env.sh
# Hadoop 3.2.1 needs JDK 8; this path matches the JDK installed in step 4
export JAVA_HOME=/usr/java/jdk1.8.0_271-amd64
# Allow the HDFS daemons to be started and run as root
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HADOOP_SHELL_EXECNAME=root
5.4 workers (hostname)
yang10
6. Format the NameNode
hdfs namenode -format
7. Start Hadoop
start-dfs.sh
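Once start-dfs.sh returns, a quick sanity check confirms the deployment came up. On a single node all three HDFS daemons run on the same host:

```shell
# List the running JVM processes; a healthy single-node HDFS shows
# NameNode, DataNode and SecondaryNameNode (plus Jps itself)
jps

# Smoke test: listing the HDFS root should succeed without errors
hdfs dfs -ls /
```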
Problem:
Failed to retrieve data from /webhdfs/v1/?op=LISTSTATUS: Server Error
Checking the NameNode log reveals the stack trace below.
Log location: http://192.168.44.10:9870/logs/
2020-11-27 10:12:15,714 WARN org.eclipse.jetty.servlet.ServletHandler: /webhdfs/v1/
java.lang.NullPointerException
at com.sun.jersey.spi.container.ContainerRequest.<init>(ContainerRequest.java:189)
at com.sun.jersey.spi.container.servlet.WebComponent.createRequest(WebComponent.java:446)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:373)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:90)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1624)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:539)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.base/java.lang.Thread.run(Thread.java:834)
Solution:
JDK 11 had been installed earlier for Elasticsearch, and this turned out to be a version-compatibility problem: the single-node deployment here was also running on JDK 11, but JDK 11 is not compatible with Hadoop 3.2.1. Switch to JDK 1.8.
JDK 1.8 download: https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html
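After swapping in JDK 8 (and updating JAVA_HOME in hadoop-env.sh accordingly), restarting HDFS and retrying the same WebHDFS call that failed above should confirm the fix. The address and port come from the NameNode web UI used earlier in this guide:

```shell
# Restart HDFS so the daemons pick up the new JAVA_HOME
stop-dfs.sh
start-dfs.sh

# The request that previously returned "Server Error" should now
# return a JSON FileStatuses listing
curl "http://192.168.44.10:9870/webhdfs/v1/?op=LISTSTATUS"
```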