Overview
This tutorial installs on CentOS 7 as the root user.
Prepare a CentOS 7 machine, set the root password, and remember it.
Look up the machine's IP so you can connect with Xshell:
a) vi /etc/sysconfig/network-scripts/ifcfg-ens33
Change the last line to ONBOOT=yes
b) service network restart
c) ip addr
d) Connect with Xshell
If you run into problems, see my other post, "Solutions to problems encountered using Linkis and scripts".
Chapter 1: Basic environment setup
Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
Passwordless SSH login:
cd ~
ssh-keygen -t rsa    # press Enter at every prompt
cd ~/.ssh/
cat ./id_rsa.pub >> ./authorized_keys
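If ssh localhost still prompts for a password after this, the usual culprit is file permissions: sshd silently ignores authorized_keys unless ~/.ssh and the file itself are locked down. A small hedged fix-up:

```shell
# sshd refuses loose permissions on key files; tighten them to the
# values it requires (700 on the directory, 600 on authorized_keys)
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```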
Install the JDK:
Upload jdk-8u201-linux-x64.tar.gz.
Unpack it, then add the following to ~/.bashrc (vi ~/.bashrc):
#java environment
export JAVA_HOME=/usr/java/jdk1.8.0_201
export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
export PATH=$PATH:${JAVA_HOME}/bin
When done:
source .bashrc
java -version to verify the installation
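Before sourcing, you can sanity-check the CLASSPATH the snippet above assembles. A hedged sketch; JAVA_HOME falls back to this tutorial's install path if it isn't set yet:

```shell
# print the classpath exactly as the .bashrc lines will build it;
# the default JAVA_HOME here is the tutorial's path, adjust if yours differs
JAVA_HOME="${JAVA_HOME:-/usr/java/jdk1.8.0_201}"
CLASSPATH=".:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar"
echo "$CLASSPATH"
```

If any of the three jars are missing from the printed paths, the unpack location and JAVA_HOME disagree.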
Install Python:
1. Install build tools
yum -y groupinstall "Development tools"
yum -y install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel
yum install libffi-devel -y
2. Download and unpack the source
cd    # back to the home directory
wget https://www.python.org/ftp/python/3.7.0/Python-3.7.0.tar.xz
tar -xvJf Python-3.7.0.tar.xz
3. Build and install
mkdir /usr/local/python3    # build/install prefix
cd Python-3.7.0
./configure --prefix=/usr/local/python3
make && make install
4. Create symlinks
ln -s /usr/local/python3/bin/python3 /usr/local/bin/python3
ln -s /usr/local/python3/bin/pip3 /usr/local/bin/pip3
5. Verify the install
python3 -V
pip3 -V
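Eyeballing the banner works, but a scriptable check is handy when this runs inside provisioning. A hedged sketch that asks the interpreter on PATH for its major.minor (3.7 is what this tutorial builds; the assertion below only insists on some Python 3):

```shell
# ask the python3 now on PATH which series it is; a stale symlink or a
# distro python2 shadowing it will show up immediately here
ver="$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')"
echo "python3 is $ver"
```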
Install MySQL (Docker version):
Install Docker:
yum update
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce
Install MySQL:
systemctl start docker
docker pull mysql:5.6
docker run -p 3306:3306 --name mymysql -v $PWD/conf:/etc/mysql/conf.d -v $PWD/logs:/logs -v $PWD/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -d mysql:5.6
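The container takes a few seconds before it accepts connections, and the Hive and Linkis schema steps later will fail if run too early. A hedged helper (wait_for is my own name, not a Docker command) that retries any health check:

```shell
# retry a command until it succeeds or the retry budget is spent
wait_for() {  # usage: wait_for <retries> <delay_seconds> <command...>
  retries=$1; delay=$2; shift 2
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$retries" ] && return 1
    sleep "$delay"
  done
}
# e.g. wait for the container started above to answer ping:
#   wait_for 30 2 docker exec mymysql mysqladmin ping -uroot -p123456
```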
Install a MySQL client:
yum install -y mariadb.x86_64 mariadb-libs.x86_64
Install Hadoop:
Unpack the tarball.
Add environment variables to ~/.bashrc:
#hadoop home
export HADOOP_HOME=/root/app/hadoop-2.7.2
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
Edit the config files under $HADOOP_HOME/etc/hadoop:
core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.http.address</name>
    <value>0.0.0.0:50070</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>file:///usr/local/hadoop/hadoopdata/namenode</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>file:///usr/local/hadoop/hadoopdata/datanode</value>
  </property>
</configuration>
yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
    <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
  </property>
</configuration>
mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
hadoop-env.sh (use the same JDK path as in ~/.bashrc):
export JAVA_HOME=/usr/java/jdk1.8.0_201
Format HDFS:
hadoop namenode -format
Start everything with $HADOOP_HOME/sbin/start-all.sh
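After start-all.sh, jps should list all five HDFS/YARN daemons. A hedged helper (check_daemons is an illustrative name, not a Hadoop tool) that scans a jps listing for them:

```shell
# scan a "pid Name" jps listing for the daemons a pseudo-distributed
# single-node setup needs; report the first one that is missing
check_daemons() {  # usage: check_daemons "$(jps)"
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    case "$1" in
      *" $d"*) ;;  # match " Name" so SecondaryNameNode can't stand in for NameNode
      *) echo "missing: $d"; return 1 ;;
    esac
  done
  echo "all daemons running"
}
# e.g.: check_daemons "$(jps)"
```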
Install Hive:
This guide uses apache-hive-1.2.2-bin.tar.gz as the example.
Unpack it.
Add the unpack path to ~/.bashrc:
#hive home
export HIVE_HOME=/root/app/apache-hive-1.2.2-bin
export HIVE_CONF_DIR=$HIVE_HOME/conf
export PATH=$PATH:$HIVE_HOME/bin
Create hive-site.xml under $HIVE_HOME/conf:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive</value>
    <!-- MySQL runs on localhost; the hive database must already exist -->
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <!-- the MySQL driver class -->
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>
</configuration>
Copy the MySQL JDBC driver into $HIVE_HOME/lib:
something like mysql-connector-java-8.0.18.jar
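Since schematool fails obscurely when the driver jar is absent, it is worth confirming it actually landed. A small hedged helper (have_mysql_driver is an illustrative name):

```shell
# succeed iff a MySQL connector jar sits in the given lib directory
have_mysql_driver() {  # usage: have_mysql_driver <hive-lib-dir>
  ls "$1"/mysql-connector-java-*.jar >/dev/null 2>&1
}
# e.g.: have_mysql_driver "$HIVE_HOME/lib" || echo "driver jar missing"
```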
Edit hive-env.sh under $HIVE_HOME/conf and add:
export HADOOP_HOME=/root/app/hadoop-2.7.2
export HIVE_CONF_DIR=/root/app/apache-hive-1.2.2-bin/conf
In $HIVE_HOME/bin, run:
schematool -dbType mysql -initSchema
This initializes the metastore tables in MySQL, so make sure the database is reachable first.
Install Spark:
Unpack the tarball.
Add the unpack path to ~/.bashrc:
#spark environment
export SPARK_HOME=/root/app/spark-2.3.4-bin-hadoop2.7
export SPARK_CONF_DIR=/root/app/spark-2.3.4-bin-hadoop2.7/conf
export PYSPARK_ALLOW_INSECURE_GATEWAY=1
export PATH=$PATH:$SPARK_HOME/bin
Edit spark-env.sh under $SPARK_HOME/conf and add:
export SPARK_DIST_CLASSPATH=$(/root/app/hadoop-2.7.2/bin/hadoop classpath)
Install Nginx:
Add the Nginx YUM repository
sudo rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
Install Nginx
sudo yum install -y nginx
vim /etc/nginx/nginx.conf
Change the user directive from nginx to root
Start Nginx
sudo systemctl start nginx.service
Chapter 2: Installing Linkis and scripts
Install Linkis
Upload and unpack wedatasphere-linkis-0.9.0-dist-spark2.2-2.4.tar.gz
(1) Edit the basic configuration
vim conf/config.sh
deployUser=root    # deployment user
LINKIS_INSTALL_HOME=/root/Linkis    # install directory
WORKSPACE_USER_ROOT_PATH=file:///tmp/root    # user workspace root, typically holding each user's script files and logs
RESULT_SET_ROOT_PATH=hdfs:///tmp/linkis    # result-set path, where job result files are stored
(2) Edit the database configuration
vim conf/db.sh
# Database connection settings:
# IP address, database name, user name, port.
# This database stores user-defined variables, configuration parameters,
# UDFs and small functions, and backs JobHistory.
MYSQL_HOST=your_ip
MYSQL_PORT=3306
MYSQL_DB=linkis
MYSQL_USER=root
MYSQL_PASSWORD=123456
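A typo in db.sh only surfaces halfway through the installer. A hedged sketch (print_jdbc_url is my own helper, not part of Linkis) that reads the file back and prints the JDBC URL its values imply, so you can eyeball it before running install.sh:

```shell
# source a db.sh-style variables file and print the JDBC URL it implies
print_jdbc_url() {  # usage: print_jdbc_url <path-to-db.sh>
  . "$1"
  echo "jdbc:mysql://${MYSQL_HOST}:${MYSQL_PORT}/${MYSQL_DB}"
}
# e.g.: print_jdbc_url conf/db.sh
```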
(3) Run the installer
Run sh bin/install.sh
Choose 3 (standard mode)
Choose 2 (import initial data). If the SQL import fails, run the two SQL files against MySQL by hand, then choose 1 instead (skipping the initialization that 2 would have done).
Start the services: sh bin/start-all.sh
The service is then reachable at http://your_ip:20303
Install scripts
Download the scripts package and unpack it wherever you like, e.g. /root/app
Edit the config file: sudo vi /etc/nginx/conf.d/scriptis.conf and add the following:
server {
listen 8080; # port to serve on
server_name localhost;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
root /root/app/scriptis/dist; # directory the frontend package was unpacked into
index index.html index.htm;
}
location /ws { # WebSocket support
proxy_pass http://192.168.xxx.xxx:9001; # IP and port of the linkis-gateway service
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location /api {
proxy_pass http://192.168.xxx.xxx:9001; # IP and port of the linkis-gateway service
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header x_real_ipP $remote_addr;
proxy_set_header remote_addr $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_connect_timeout 4s;
proxy_read_timeout 600s;
proxy_send_timeout 12s;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
Start the service: sudo systemctl restart nginx
You can then open http://nginx_ip:nginx_port in Chrome
and start using Linkis through scripts.
Finally
That wraps up 专注小鸽子's guide to installing the standard Linkis distribution: Chapter 1 (basic environment) and Chapter 2 (Linkis and scripts). I hope it helps.