Overview
Software version: CentOS Linux release 7.5.1804 (Core)
Preparation:
1. Install the JDK
2. Hostnames must not contain an underscore (_)
3. Configure hosts
4. Configure passwordless SSH login from the master host to each node
5. Disable the firewall and SELinux
Installation outline:
1. Unpack the installation archive
2. Configure environment variables
3. Edit the configuration files
4. Copy the directory to all nodes
5. Format the NameNode
6. Start Hadoop
7. Verify
1. Install the JDK
Install it with yum. First, check which versions are available:
yum list java*
Install:
sudo yum install java-1.8.0-openjdk.x86_64
Verify the installation:
java -version
Output:
openjdk version "1.8.0_312"
OpenJDK Runtime Environment (build 1.8.0_312-b07)
OpenJDK 64-Bit Server VM (build 25.312-b07, mixed mode)
Installation complete.
2. Configure a static IP address
When installing Hadoop on virtual machines, every host should first be given a static IP address.
Check which network interfaces exist:
ip addr
Output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:12:57:fe brd ff:ff:ff:ff:ff:ff
inet 10.15.53.138/24 brd 10.15.53.255 scope global noprefixroute dynamic enp0s3
valid_lft 86377sec preferred_lft 86377sec
inet6 fe80::7a51:3d6f:39e8:6a7d/64 scope link noprefixroute
valid_lft forever preferred_lft forever
The host's actual interface name is enp0s3. Go to the network-script directory /etc/sysconfig/network-scripts/, find this host's interface file, and edit it:
sudo vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
Modify or add the following entries:
ONBOOT=yes
BOOTPROTO=static
IPADDR=
NETMASK=
GATEWAY=
DNS1=
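For reference, a filled-in sketch for one host: the IP matches hadoop1 in the hosts table below and the netmask follows the /24 seen above, while the gateway and DNS entries are placeholders to adjust for your own network.
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.15.53.241
NETMASK=255.255.255.0
# the gateway and DNS below are assumed values for this subnet
GATEWAY=10.15.53.1
DNS1=10.15.53.1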
After editing, restart the network service:
service network restart
Configuration complete.
3. Configure hosts
sudo vi /etc/hosts
Add the following entries:
10.15.53.240 app
10.15.53.241 hadoop1
10.15.53.242 hadoop2
10.15.53.243 hadoop3
Test by reaching a host through one of the configured names; if it responds, the configuration works:
ping hadoop3
PING hadoop3 (127.0.0.1) 56(84) bytes of data.
64 bytes from hadoop3 (127.0.0.1): icmp_seq=1 ttl=64 time=0.034 ms
64 bytes from hadoop3 (127.0.0.1): icmp_seq=2 ttl=64 time=0.043 ms
4. Configure passwordless SSH login
The basic idea, for host A logging into host B without a password:
1. A generates a key pair
2. B creates the file .ssh/authorized_keys
3. Append the contents of A's id_rsa.pub to B's authorized_keys
4. Set permissions on B: 700 for .ssh and 600 for authorized_keys
Concrete steps
On hadoop1, generate id_rsa.pub:
ssh-keygen
On hadoop2, create .ssh/authorized_keys:
mkdir ~/.ssh
vi ~/.ssh/authorized_keys
Edit authorized_keys and append the contents of hadoop1's id_rsa.pub to hadoop2's authorized_keys.
Then set the permissions on the file and the directory:
chmod 600 authorized_keys
cd ..
chmod 700 .ssh
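If ssh-copy-id is available (it ships with openssh-clients on CentOS 7), steps 2 through 4 can be done in a single command from hadoop1:
ssh-copy-id lht@hadoop2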
Test by logging into hadoop2 from hadoop1:
ssh hadoop2
Output:
The authenticity of host 'hadoop2 (10.15.53.242)' can't be established.
ECDSA key fingerprint is SHA256:gQherZTQAY5P7B8wP9Y/R5ZIn33r84/M84R87G9HXlo.
ECDSA key fingerprint is MD5:5e:b5:d4:f7:60:07:34:48:d8:6a:c9:de:0d:c5:db:0c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop2,10.15.53.242' (ECDSA) to the list of known hosts.
Last login: Fri Nov 19 14:55:50 2021 from 10.15.53.64
[lht@hadoop2 ~]$
If you can log in by hostname without a password, the setup succeeded.
Note: hadoop1 also needs an authorized_keys file containing its own id_rsa.pub, with the same permissions; otherwise Hadoop will report errors at startup later (the start scripts ssh into every node, including the local one).
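On hadoop1 that boils down to:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
chmod 700 ~/.ssh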
5. Disable the firewall and SELinux
Check the firewall status:
systemctl status firewalld
Output:
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2021-11-19 14:28:57 CST; 1h 23min ago
Docs: man:firewalld(1)
Main PID: 654 (firewalld)
CGroup: /system.slice/firewalld.service
└─654 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
Nov 19 14:28:56 hadoop1 systemd[1]: Starting firewalld - dynamic firewall daemon...
Nov 19 14:28:57 hadoop1 systemd[1]: Started firewalld - dynamic firewall daemon.
Stop the firewall:
systemctl stop firewalld      # stop the service
systemctl disable firewalld   # disable autostart
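A quick double-check that firewalld is really stopped:
firewall-cmd --state    # should print "not running"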
Check the SELinux status:
getenforce
Enforcing means SELinux is on; Permissive means it is not enforcing (violations are only logged).
Turn SELinux off for the current session:
setenforce 0
Disable it at boot as well:
vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled in the config file.
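Equivalently, a one-line edit (assuming the line currently reads SELINUX=enforcing):
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config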
6. Install Hadoop
6.1. Download
Download URL:
https://downloads.apache.org/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz
curl -o hadoop3.tar.gz https://downloads.apache.org/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz
tar -xvf hadoop3.tar.gz
mv hadoop-3.3.1 ../app/hadoop3
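Optionally, verify the download; this assumes the matching .sha512 file is still published next to the archive, which is the usual Apache layout:
curl -o hadoop3.tar.gz.sha512 https://downloads.apache.org/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz.sha512
cat hadoop3.tar.gz.sha512
sha512sum hadoop3.tar.gz    # compare the two hashes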
6.2. Configure environment variables
The Hadoop home directory is /home/lht/app/hadoop3.
Create a symbolic link:
ln -s hadoop3 hadoop
Configure the environment variables:
sudo vi /etc/profile
Add the following two lines:
export HADOOP_HOME=/home/lht/app/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
Reload the environment:
source /etc/profile
Verify the configuration:
hadoop version
It fails:
[lht@hadoop1 app]$ hadoop version
ERROR: JAVA_HOME is not set and could not be found.
This means the Java environment variable has not been set.
Configure the Java environment variable.
First find the Java home directory:
whereis jvm
The directories found are:
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64/jre # actual directory
/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64 # symlink
Add it to the environment variables:
sudo vi /etc/profile
Add:
export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64
export PATH=$JAVA_HOME/bin:$PATH
Verify again:
hadoop version
Output:
[lht@hadoop1 ~]$ hadoop version
Hadoop 3.3.1
Source code repository https://github.com/apache/hadoop.git -r a3b9c37a397ad4188041dd80621bdeefc46885f2
Compiled by ubuntu on 2021-06-15T05:13Z
Compiled with protoc 3.7.1
From source with checksum 88a4ddb2299aca054416d6b7f81ca55
This command was run using /home/lht/app/hadoop3/share/hadoop/common/hadoop-common-3.3.1.jar
Hadoop is installed successfully.
6.3. Edit the Hadoop configuration files
First decide on the deployment layout:

     | hadoop1            | hadoop2                      | hadoop3
HDFS | NameNode, DataNode | SecondaryNameNode, DataNode  | DataNode
YARN | NodeManager        | ResourceManager, NodeManager | NodeManager

Files that need to be modified:
cd $HADOOP_HOME/etc/hadoop
core-site.xml
hadoop-env.sh
hdfs-site.xml
yarn-env.sh
yarn-site.xml
mapred-env.sh
mapred-site.xml
workers
cd $HADOOP_HOME/sbin
start-dfs.sh
stop-dfs.sh
start-yarn.sh
stop-yarn.sh
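Before editing, it can help to back up the originals; a small sketch:
cd $HADOOP_HOME/etc/hadoop
for f in core-site.xml hdfs-site.xml yarn-site.xml mapred-site.xml hadoop-env.sh yarn-env.sh mapred-env.sh workers; do
  cp "$f" "$f.bak"
done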
6.3.1. core-site.xml
Add:
<!-- Address of the HDFS NameNode -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop1:9000</value>
</property>
<!-- Base directory for files Hadoop generates at runtime; Hadoop does not expand shell variables such as $HADOOP_HOME in *-site.xml, so use a literal path -->
<property>
<name>hadoop.tmp.dir</name>
<value>/home/lht/app/hadoop/tmp</value>
</property>
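It does no harm to create that directory up front (it should otherwise be created on demand):
mkdir -p /home/lht/app/hadoop/tmp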
6.3.2. hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64
6.3.3. hdfs-site.xml
Add:
<!-- NameNode HTTP (web UI) address -->
<property>
<name>dfs.http.address</name>
<value>hadoop1:50070</value>
</property>
<!-- Directory for the NameNode metadata -->
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/lht/app/hadoop/name</value>
</property>
<!-- HDFS replication factor -->
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<!-- Secondary NameNode host -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop2:50090</value>
</property>
6.3.4. yarn-env.sh
export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64
6.3.5. yarn-site.xml
Add:
<!-- Site specific YARN configuration properties -->
<!-- Shuffle service so reducers can fetch map output -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Hostname of the YARN ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop2</value>
</property>
<!-- Enable log aggregation -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- Keep aggregated logs for 7 days -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
6.3.6. mapred-env.sh
export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64
6.3.7. mapred-site.xml
Add:
<!-- Run MapReduce on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- JobHistory server address -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop1:10020</value>
</property>
<!-- JobHistory server web UI address -->
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop1:19888</value>
</property>
6.3.8. start-dfs.sh and stop-dfs.sh
Add:
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
6.3.9. start-yarn.sh and stop-yarn.sh
Add:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
6.3.10. workers
Add:
hadoop1
hadoop2
hadoop3
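The workers file should contain only the hostnames, one per line; stray blank lines or trailing whitespace tend to produce ssh errors at startup. A quick check:
cat $HADOOP_HOME/etc/hadoop/workers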
6.4. Copy the directory to all nodes
First look at the directory:
[lht@hadoop1 app]$ ll
total 0
lrwxrwxrwx 1 lht lht 7 Nov 22 17:14 hadoop -> hadoop3
drwxr-xr-x 10 lht lht 215 Jun 15 13:52 hadoop3
This directory and the symlink need to be copied to the other two hosts:
scp -r hadoop3 lht@hadoop2:/home/lht/app/hadoop3
After a burst of output, log in to the destination host and check /home/lht/app/:
[lht@hadoop3 app]$ ll
total 0
drwxr-xr-x. 10 lht lht 215 Nov 24 15:21 hadoop3
Once the copy finishes, create the symlink:
ln -s hadoop3 hadoop
Then copy the environment variables over as well:
cat /etc/profile
At the end of the file, find the variables added earlier:
####DIY
export HADOOP_HOME=/home/lht/app/hadoop
export PATH=/home/lht/app/mongodb5/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64
export PATH=$JAVA_HOME/bin:$PATH
Copy these lines into /etc/profile on hadoop2.
Do the same on the third host, hadoop3 (or script the whole thing, as sketched below).
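The copy can also be scripted from hadoop1; a sketch assuming the same lht user and ~/app layout already exist on every host:
for h in hadoop2 hadoop3; do
  scp -r ~/app/hadoop3 lht@$h:~/app/
  ssh lht@$h 'cd ~/app && ln -sfn hadoop3 hadoop'
done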
Finally, run source /etc/profile on both hosts and verify Hadoop:
[lht@hadoop2 app]$ hadoop version
Hadoop 3.3.1
Source code repository https://github.com/apache/hadoop.git -r a3b9c37a397ad4188041dd80621bdeefc46885f2
Compiled by ubuntu on 2021-06-15T05:13Z
Compiled with protoc 3.7.1
From source with checksum 88a4ddb2299aca054416d6b7f81ca55
This command was run using /home/lht/app/hadoop3/share/hadoop/common/hadoop-common-3.3.1.jar
Seeing the version information means the installation succeeded.
7. Format the NameNode
hdfs namenode -format
Output:
…………
2021-11-24 16:06:32,186 INFO common.Storage: Storage directory /home/lht/app/hadoop/name has been successfully formatted.
2021-11-24 16:06:32,214 INFO namenode.FSImageFormatProtobuf: Saving image file /home/lht/app/hadoop/name/current/fsimage.ckpt_0000000000000000000 using no compression
2021-11-24 16:06:32,301 INFO namenode.FSImageFormatProtobuf: Image file /home/lht/app/hadoop/name/current/fsimage.ckpt_0000000000000000000 of size 398 bytes saved in 0 seconds .
2021-11-24 16:06:32,314 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2021-11-24 16:06:32,333 INFO namenode.FSNamesystem: Stopping services started for active state
2021-11-24 16:06:32,334 INFO namenode.FSNamesystem: Stopping services started for standby state
2021-11-24 16:06:32,337 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2021-11-24 16:06:32,337 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1/127.0.0.1
************************************************************/
Formatting succeeded.
8. Start Hadoop
Start Hadoop (HDFS) on the hadoop1 host:
start-dfs.sh
If hadoop1 itself was skipped when configuring passwordless login, the errors look like this:
[lht@hadoop1 app]$ start-dfs.sh
Starting namenodes on [hadoop1]
hadoop1: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Starting datanodes
hadoop1: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Starting secondary namenodes [hadoop2]
The normal output looks like this:
[lht@hadoop1 .ssh]$ start-dfs.sh
Starting namenodes on [hadoop1]
Starting datanodes
Starting secondary namenodes [hadoop2]
If you see this output, the startup succeeded.
9. Verify
Check the processes on the three hosts.
An error first:
[lht@hadoop1 ~]$ jps
bash: jps: command not found
jps is part of the JDK; the java-1.8.0-openjdk package installed earlier only provides the JRE, so the command is not available (an optional fix is sketched below).
For now, use ps instead.
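If you want jps itself, installing the JDK package and pointing JAVA_HOME at the full JDK should work; a sketch, not verified in this walkthrough:
sudo yum install java-1.8.0-openjdk-devel.x86_64
# then, in /etc/profile, point JAVA_HOME at the JDK instead of the JRE, e.g.:
# export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64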
Processes on hadoop1:
[lht@hadoop1 ~]$ ps -ef|grep hadoop
lht 9353 1 0 17:46 ? 00:00:04 /usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64/bin/java -Dproc_namenode -Djava.net.preferIPv4Stack=true -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dyarn.log.dir=/home/lht/app/hadoop3/logs -Dyarn.log.file=hadoop-lht-namenode-hadoop1.log -Dyarn.home.dir=/home/lht/app/hadoop3 -Dyarn.root.logger=INFO,console -Djava.library.path=/home/lht/app/hadoop3/lib/native -Dhadoop.log.dir=/home/lht/app/hadoop3/logs -Dhadoop.log.file=hadoop-lht-namenode-hadoop1.log -Dhadoop.home.dir=/home/lht/app/hadoop3 -Dhadoop.id.str=lht -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.namenode.NameNode
lht 9497 1 0 17:46 ? 00:00:04 /usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64/bin/java -Dproc_datanode -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=ERROR,RFAS -Dyarn.log.dir=/home/lht/app/hadoop3/logs -Dyarn.log.file=hadoop-lht-datanode-hadoop1.log -Dyarn.home.dir=/home/lht/app/hadoop3 -Dyarn.root.logger=INFO,console -Djava.library.path=/home/lht/app/hadoop3/lib/native -Dhadoop.log.dir=/home/lht/app/hadoop3/logs -Dhadoop.log.file=hadoop-lht-datanode-hadoop1.log -Dhadoop.home.dir=/home/lht/app/hadoop3 -Dhadoop.id.str=lht -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.datanode.DataNode
lht 9920 2275 0 18:05 pts/0 00:00:00 grep --color=auto hadoop
Two processes are running: NameNode and DataNode. Everything looks normal.
Processes on hadoop2:
[lht@hadoop2 ~]$ ps -ef|grep hadoop
lht 4240 1 0 17:46 ? 00:00:05 /usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64/bin/java -Dproc_datanode -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=ERROR,RFAS -Dyarn.log.dir=/home/lht/app/hadoop3/logs -Dyarn.log.file=hadoop-lht-datanode-hadoop2.log -Dyarn.home.dir=/home/lht/app/hadoop3 -Dyarn.root.logger=INFO,console -Djava.library.path=/home/lht/app/hadoop3/lib/native -Dhadoop.log.dir=/home/lht/app/hadoop3/logs -Dhadoop.log.file=hadoop-lht-datanode-hadoop2.log -Dhadoop.home.dir=/home/lht/app/hadoop3 -Dhadoop.id.str=lht -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.datanode.DataNode
lht 4341 1 0 17:46 ? 00:00:03 /usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64/bin/java -Dproc_secondarynamenode -Djava.net.preferIPv4Stack=true -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dyarn.log.dir=/home/lht/app/hadoop3/logs -Dyarn.log.file=hadoop-lht-secondarynamenode-hadoop2.log -Dyarn.home.dir=/home/lht/app/hadoo3 -Dyarn.root.logger=INFO,console -Djava.library.path=/home/lht/app/hadoop3/lib/native -Dhadoop.log.dir=/home/lht/app/hadoop3/logs -Dhadoop.log.file=hadoop-lht-secondarynamenode-hadoop2.log -Dhadoop.home.dir=/home/lht/app/hadoop3 -Dhadoop.id.str=lht -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
lht 4395 1357 0 18:06 pts/0 00:00:00 grep --color=auto hadoop
Two processes are running: DataNode and SecondaryNameNode. Everything looks normal.
Processes on hadoop3:
[lht@hadoop3 ~]$ ps -ef|grep hadoop
lht 3232 1 0 17:46 ? 00:00:05 /usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64/bin/java -Dproc_datanode -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=ERROR,RFAS -Dyarn.log.dir=/home/lht/app/hadoop3/logs -Dyarn.log.file=hadoop-lht-datanode-hadoop3.log -Dyarn.home.dir=/home/lht/app/hadoop3 -Dyarn.root.logger=INFO,console -Djava.library.path=/home/lht/app/hadoop3/lib/native -Dhadoop.log.dir=/home/lht/app/hadoop3/logs -Dhadoop.log.file=hadoop-lht-datanode-hadoop3.log -Dhadoop.home.dir=/home/lht/app/hadoop3 -Dhadoop.id.str=lht -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.datanode.DataNode
lht 3300 1409 0 18:03 pts/0 00:00:00 grep --color=auto hadoop
One process is running: DataNode. Everything looks normal.
10. Create a directory in HDFS
[lht@hadoop1 ~]$ hdfs dfs -ls /
[lht@hadoop1 ~]$ hdfs dfs -mkdir /lht
[lht@hadoop1 ~]$ hdfs dfs -ls /
Found 1 items
drwxr-xr-x - lht supergroup 0 2021-11-24 18:09 /lht
The directory was created; the Hadoop installation is complete.
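As an optional extra smoke test, upload a small file and read it back:
hdfs dfs -put /etc/hosts /lht/
hdfs dfs -cat /lht/hosts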
11. Start YARN
The earlier configuration assigned the ResourceManager to hadoop2, so YARN has to be started on hadoop2:
start-yarn.sh
It fails:
[lht@hadoop2 ~]$ start-yarn.sh
Starting resourcemanager
Starting nodemanagers
hadoop1: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
hadoop3: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
hadoop2: Warning: Permanently added 'hadoop2' (ECDSA) to the list of known hosts.
hadoop2: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
This is because passwordless login from hadoop2 to the other two hosts (and to itself) has not been configured. After configuring it, restart YARN; since the first attempt did not start successfully, run stop-yarn.sh first and then start-yarn.sh.
This is the output after a successful start:
[lht@hadoop2 .ssh]$ start-yarn.sh
Starting resourcemanager
Starting nodemanagers
Verify that the services are running:
ps -ef|grep java
Output:
[lht@hadoop2 .ssh]$ ps -ef|grep java
lht 1518 1 0 10:32 ? 00:00:31 /usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64/bin/java -Dproc_datanode -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=ERROR,RFAS -Dyarn.log.dir=/home/lht/app/hadoop3/logs -Dyarn.log.file=hadoop-lht-datanode-hadoop2.log -Dyarn.home.dir=/home/lht/app/hadoop3 -Dyarn.root.logger=INFO,console -Djava.library.path=/home/lht/app/hadoop3/lib/native -Dhadoop.log.dir=/home/lht/app/hadoop3/logs -Dhadoop.log.file=hadoop-lht-datanode-hadoop2.log -Dhadoop.home.dir=/home/lht/app/hadoop3 -Dhadoop.id.str=lht -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.datanode.DataNode
lht 1622 1 0 10:32 ? 00:00:18 /usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64/bin/java -Dproc_secondarynamenode -Djava.net.preferIPv4Stack=true -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dyarn.log.dir=/home/lht/app/hadoop3/logs -Dyarn.log.file=hadoop-lht-secondarynamenode-hadoop2.log -Dyarn.home.dir=/home/lht/app/hadoop3 -Dyarn.root.logger=INFO,console -Djava.library.path=/home/lht/app/hadoop3/lib/native -Dhadoop.log.dir=/home/lht/app/hadoop3/logs -Dhadoop.log.file=hadoop-lht-secondarynamenode-hadoop2.log -Dhadoop.home.dir=/home/lht/app/hadoop3 -Dhadoop.id.str=lht -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
lht 3317 1 2 15:55 pts/0 00:00:08 /usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64/bin/java -Dproc_resourcemanager -Djava.net.preferIPv4Stack=true -Dservice.libdir=/home/lht/app/hadoop/share/hadoop/yarn,/home/lht/app/hadoop/share/hadoop/yarn/lib,/home/lht/app/hadoop/share/hadoop/hdfs,/home/lht/app/hadoop/share/hadoop/hdfs/lib,/home/lht/app/hadoop/share/hadoop/common,/home/lht/app/hadoop/share/hadoop/common/lib -Dyarn.log.dir=/home/lht/app/hadoop/logs -Dyarn.log.file=hadoop-lht-resourcemanager-hadoop2.log -Dyarn.home.dir=/home/lht/app/hadoop -Dyarn.root.logger=INFO,console -Djava.library.path=/home/lht/app/hadoop/lib/native -Dhadoop.log.dir=/home/lht/app/hadoop/logs -Dhadoop.log.file=hadoop-lht-resourcemanager-hadoop2.log -Dhadoop.home.dir=/home/lht/app/hadoop -Dhadoop.id.str=lht -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
lht 3442 1 1 15:56 ? 00:00:05 /usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.312.b07-1.el7_9.x86_64/bin/java -Dproc_nodemanager -Djava.net.preferIPv4Stack=true -Dyarn.log.dir=/home/lht/app/hadoop3/logs -Dyarn.log.file=hadoop-lht-nodemanager-hadoop2.log -Dyarn.home.dir=/home/lht/app/hadoop3 -Dyarn.root.logger=INFO,console -Djava.library.path=/home/lht/app/hadoop3/lib/native -Dhadoop.log.dir=/home/lht/app/hadoop3/logs -Dhadoop.log.file=hadoop-lht-nodemanager-hadoop2.log -Dhadoop.home.dir=/home/lht/app/hadoop3 -Dhadoop.id.str=lht -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.yarn.server.nodemanager.NodeManager
lht 3793 1435 0 16:01 pts/0 00:00:00 grep --color=auto java
The process list now includes a ResourceManager (and a NodeManager), which means YARN started correctly.
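Another way to check, assuming the cluster is up, is to ask the ResourceManager which NodeManagers have registered:
yarn node -list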
One disappointment: after starting, I could not manage Hadoop and YARN through their web pages; the configured addresses do not open in a browser. If anyone knows why, please leave a pointer, thanks!