Overview
Node | IP | Process | User |
---|---|---|---|
master | 192.168.1.115 | NameNode | root |
slave1 | 192.168.1.116 | DataNode | root |
kdcserver | 192.168.1.118 | kdc,kadmin | root |
Configuration on the kdcserver node
Installing Kerberos
See: http://www.fanlegefan.com/archives/kerberosinstall
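For reference, on CentOS/RHEL (which the /var/kerberos paths below suggest) the installation is roughly the following; treat the package split as an assumption for other distributions:
# On the KDC node (kdcserver): server plus client packages
yum install -y krb5-server krb5-libs krb5-workstation
# On master and slave1 only the client packages are needed
yum install -y krb5-libs krb5-workstation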
Configure kdc.conf
vi /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
FAN.HADOOP = {
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
database_name = /var/kerberos/principal
max_renewable_life = 7d
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
Configure krb5.conf
vi /etc/krb5.conf
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = FAN.HADOOP
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
[realms]
FAN.HADOOP = {
kdc = kdcserver
admin_server = kdcserver
}
[domain_realm]
.fan.hadoop = FAN.HADOOP
fan.hadoop = FAN.HADOOP
Initialize the database
kdb5_util create -s -r FAN.HADOOP
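kdb5_util prompts for a master key password; the -s flag writes a stash file so the KDC can start unattended. Once the database exists, start the KDC and admin daemons (commands assume a SysV-init CentOS; on systemd hosts use systemctl instead):
service krb5kdc start
service kadmin start
chkconfig krb5kdc on
chkconfig kadmin on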
Add a database administrator
kadmin.local -q "addprinc admin/admin"
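For this admin principal to actually have full privileges, the acl_file configured above (/var/kerberos/krb5kdc/kadm5.acl) typically grants everything to admin instances with a wildcard rule:
*/admin@FAN.HADOOP *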
Add principals and generate keytabs
kadmin.local -q "addprinc -randkey hdfs/kdcserver@FAN.HADOOP"
kadmin.local -q "addprinc -randkey hdfs/master@FAN.HADOOP"
kadmin.local -q "addprinc -randkey hdfs/slave1@FAN.HADOOP"
kadmin.local -q "addprinc -randkey HTTP/kdcserver@FAN.HADOOP"
kadmin.local -q "addprinc -randkey HTTP/master@FAN.HADOOP"
kadmin.local -q "addprinc -randkey HTTP/slave1@FAN.HADOOP"
kadmin.local -q "addprinc -randkey qun/kdcserver@FAN.HADOOP"
kadmin.local -q "addprinc -randkey qun/master@FAN.HADOOP"
kadmin.local -q "addprinc -randkey qun/slave1@FAN.HADOOP"
kadmin.local -q "xst
-k hdfs-unmerged.keytab
hdfs/kdcserver@FAN.HADOOP"
kadmin.local -q "xst
-k hdfs-unmerged.keytab
hdfs/master@FAN.HADOOP"
kadmin.local -q "xst
-k hdfs-unmerged.keytab
hdfs/slave1@FAN.HADOOP"
kadmin.local -q "xst
-k HTTP.keytab
HTTP/kdcserver@FAN.HADOOP"
kadmin.local -q "xst
-k HTTP.keytab
HTTP/master@FAN.HADOOP"
kadmin.local -q "xst
-k HTTP.keytab
HTTP/slave1@FAN.HADOOP"
kadmin.local -q "xst
-k qun.keytab
qun/kdcserver@FAN.HADOOP"
kadmin.local -q "xst
-k qun.keytab
qun/master@FAN.HADOOP"
kadmin.local -q "xst
-k qun.keytab
qun/slave1@FAN.HADOOP"
Merge the keytabs
$ ktutil
ktutil: rkt hdfs-unmerged.keytab
ktutil: rkt HTTP.keytab
ktutil: rkt qun.keytab
ktutil: wkt hdfs.keytab
ktutil: exit
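You can verify the merged keytab before distributing it; klist -ket lists every entry with its timestamp and encryption type, and all nine principals created above should appear:
klist -ket hdfs.keytab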
Running the commands above produces an hdfs.keytab file. Copy it to the master and slave1 nodes, and copy /etc/krb5.conf into /etc/ on master and slave1 as well.
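A minimal sketch of the distribution, assuming root SSH access and the keytab path referenced by the Hadoop configs below:
scp hdfs.keytab root@master:/home/qun/hadoop-2.6.0/etc/hadoop/
scp hdfs.keytab root@slave1:/home/qun/hadoop-2.6.0/etc/hadoop/
scp /etc/krb5.conf root@master:/etc/
scp /etc/krb5.conf root@slave1:/etc/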
Configuration on the master and slave1 nodes
Integrating Hadoop with Kerberos
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/qun/data/hadoop/tmp</value>
</property>
<property>
<name>fs.checkpoint.period</name>
<value>3600</value>
</property>
<property>
<name>fs.checkpoint.size</name>
<value>67108864</value>
</property>
<property>
<name>fs.checkpoint.dir</name>
<value>/home/qun/data/hadoop/namesecondary</value>
</property>
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value>
</property>
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/qun/data/hadoop/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/qun/data/hadoop/data</value>
</property>
<property>
<name>dfs.block.access.token.enable</name>
<value>true</value>
</property>
<property>
<name>dfs.datanode.data.dir.perm</name>
<value>700</value>
</property>
<property>
<name>dfs.namenode.keytab.file</name>
<value>/home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab</value>
</property>
<property>
<name>dfs.namenode.kerberos.principal</name>
<value>hdfs/_HOST@FAN.HADOOP</value>
</property>
<property>
<name>dfs.namenode.kerberos.https.principal</name>
<value>HTTP/_HOST@FAN.HADOOP</value>
</property>
<property>
<name>dfs.datanode.address</name>
<value>0.0.0.0:1004</value>
</property>
<property>
<name>dfs.datanode.http.address</name>
<value>0.0.0.0:1006</value>
</property>
<property>
<name>dfs.datanode.keytab.file</name>
<value>/home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab</value>
</property>
<property>
<name>dfs.datanode.kerberos.principal</name>
<value>hdfs/_HOST@FAN.HADOOP</value>
</property>
<property>
<name>dfs.datanode.kerberos.https.principal</name>
<value>HTTP/_HOST@FAN.HADOOP</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.web.authentication.kerberos.principal</name>
<value>HTTP/_HOST@FAN.HADOOP</value>
</property>
<property>
<name>dfs.web.authentication.kerberos.keytab</name>
<value>/home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab</value>
</property>
<property>
<name>dfs.namenode.kerberos.internal.spnego.principal</name>
<value>HTTP/_HOST@FAN.HADOOP</value>
</property>
<property>
<name>dfs.secondary.https.address</name>
<value>master:50495</value>
</property>
<property>
<name>dfs.secondary.https.port</name>
<value>50495</value>
</property>
<property>
<name>dfs.secondary.namenode.keytab.file</name>
<value>/home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab</value>
</property>
<property>
<name>dfs.secondary.namenode.kerberos.principal</name>
<value>hdfs/_HOST@FAN.HADOOP</value>
</property>
<property>
<name>dfs.secondary.namenode.kerberos.https.principal</name>
<value>HTTP/_HOST@FAN.HADOOP</value>
</property>
</configuration>
Installing JSVC
Download
http://commons.apache.org/proper/commons-daemon/jsvc.html
Build
cd /home/qun/commons-daemon-1.1.0-src/src/native/unix
./configure --with-java=$JAVA_HOME
make
This produces a 64-bit jsvc executable. Because a secure DataNode binds privileged ports (1004 and 1006 above), it must be launched as root through jsvc. Copy the binary to $HADOOP_HOME/libexec and point JSVC_HOME at that path in hadoop-env.sh; otherwise startup fails with "It looks like you're trying to start a secure DN, but $JSVC_HOME isn't set. Falling back to starting insecure DN."
Package
cd /home/qun/commons-daemon-1.1.0-src
mvn package
This builds commons-daemon-1.1.0.jar. Copy it to $HADOOP_HOME/share/hadoop/hdfs/lib and delete the commons-daemon jar bundled with Hadoop.
Configure JCE
Download:
http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html
Replace US_export_policy.jar and local_policy.jar under $JAVA_HOME/jre/lib/security/ with the jars from the downloaded archive.
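To confirm the unlimited-strength policy is active, query the maximum allowed AES key length (jrunscript ships with the JDK); it should print 2147483647 rather than 128:
jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'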
Configure hadoop-env.sh
export JAVA_HOME=/home/qun/soft/jdk1.8.0_91
export HADOOP_SECURE_DN_USER=qun
export HADOOP_SECURE_DN_PID_DIR=/home/qun/hadoop-2.6.0/pids
export HADOOP_SECURE_DN_LOG_DIR=/home/qun/hadoop-2.6.0/logs
export JSVC_HOME=/home/qun/hadoop-2.6.0/libexec
Starting the NameNode and DataNode
Start the NameNode and SecondaryNameNode
./hadoop-daemon.sh start namenode
./hadoop-daemon.sh start secondarynamenode
Start the DataNode (start-secure-dns.sh must be run as root)
start-secure-dns.sh
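To verify that authentication is actually enforced, try HDFS access with and without a ticket; principal and keytab path are the ones created above:
kdestroy
hdfs dfs -ls /    # should fail with a GSSException: No valid credentials provided
kinit -kt /home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab hdfs/master@FAN.HADOOP
hdfs dfs -ls /    # should succeed now that a TGT is in the ticket cache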
Configure YARN
vi yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.web-proxy.address</name>
<value>master:8888</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.keytab</name>
<value>/home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab</value>
</property>
<property>
<name>yarn.resourcemanager.principal</name>
<value>hdfs/_HOST@FAN.HADOOP</value>
</property>
<property>
<name>yarn.nodemanager.keytab</name>
<value>/home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab</value>
</property>
<property>
<name>yarn.nodemanager.principal</name>
<value>hdfs/_HOST@FAN.HADOOP</value>
</property>
<property>
<name>yarn.nodemanager.container-executor.class</name>
<value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
<name>yarn.nodemanager.linux-container-executor.group</name>
<value>yarn</value>
</property>
</configuration>
Edit container-executor.cfg
yarn.nodemanager.linux-container-executor.group=yarn
banned.users=root,nobody,impala,hive,hdfs,yarn,bin,qun
min.user.id=1000
allowed.system.users=root,nobody,impala,hive,hdfs,yarn,qun
Start YARN
start-yarn.sh
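As a smoke test, submit the bundled example job after obtaining a ticket with kinit (jar path assumes the hadoop-2.6.0 layout used throughout this post):
hadoop jar /home/qun/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 10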
Problems encountered during YARN setup
container-executor.cfg must be owned by root, but is owned by 500
Solution: download the Hadoop source and rebuild container-executor. The binary is generated under target/usr/local/bin/; copy it into $HADOOP_HOME/bin/.
cd hadoop-2.6.0-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/
cmake src -DHADOOP_CONF_DIR=/etc/hadoop
make
Note: container-executor.cfg must also be copied to /etc/hadoop, the HADOOP_CONF_DIR baked in at build time above.
See also: http://blog.csdn.net/lipeng_bigdata/article/details/52687821
Caused by: ExitCodeException exitCode=24: Can't get group information for yarn - Success.
Exit code from container executor initialization is : 22
ExitCodeException exitCode=22: Invalid permissions on container-executor binary
Solution: fix the ownership and the setuid/setgid bits on the binary:
chown root:yarn container-executor
chmod 6050 container-executor
After the change it looks like this:
---Sr-s---. 1 root yarn 108276 Dec 30 19:05 container-executor
Login failure for hdfs/master@FAN.HADOOP from keytab /home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user
Besides a problem with the Kerberos principal itself, this error can also be caused by the permissions on the hdfs.keytab file.
Since hadoop-env.sh sets HADOOP_SECURE_DN_USER=qun, hdfs.keytab must be readable by the qun user. The simplest check is to switch to the qun account and run kinit -k -t hdfs.keytab hdfs/master@FAN.HADOOP to see whether it reports a permission error.
Key points and pitfalls
Integrating Hadoop with Kerberos is quite involved. A few points deserve special attention:
- JCE配置
- keytab文件权限
- JSVC_HOME
- container-executor
References
- http://blog.javachen.com/2014/11/04/config-kerberos-in-cdh-hdfs.html
- http://blog.chinaunix.net/uid-1838361-id-3243243.html
- http://blog.csdn.net/lalaguozhe/article/details/11570009
- http://blog.csdn.net/liliwei0213/article/details/40656455
- http://blog.csdn.net/lipeng_bigdata/article/details/52687821
- http://www.cloudera.com/documentation/cdh/5-0-x/CDH5-Security-Guide/cdh5sg_yarn_container_exec_errors.html