Spark Cluster Deployment (Master HA)

Overview

I. Prerequisites


A Spark Standalone cluster uses a Master-Slaves architecture and, like most Master-Slaves clusters, has a Master single point of failure. Spark provides two ways to solve this problem:

Single-Node Recovery with Local File System
Standby Masters with ZooKeeper
ZooKeeper provides a Leader Election mechanism, which guarantees that even though the cluster has multiple Masters, only one is Active while the rest are Standby. When the Active Master fails, one of the Standby Masters is elected to take over. Because the cluster state, including Worker, Driver, and Application information, has already been persisted (to ZooKeeper in this mode), failover only affects the submission of new jobs; jobs that are already running are not affected at all. (The original post included a diagram of the overall cluster architecture with ZooKeeper added; it is not reproduced here.)

The ZooKeeper cluster is up and running.
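A quick way to verify this (a minimal sketch, assuming the ZooKeeper nodes from this post's topology and that nc is installed):

echo ruok | nc res-spark-0001 2181    # a healthy ZooKeeper node replies "imok"
echo stat | nc res-spark-0001 2181    # shows whether this node is the leader or a follower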

II. Deployment Steps

  1. Download the Spark distribution
    wget http://mirrors.shu.edu.cn/apache/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz
  2. Extract and rename
    tar -zxvf spark-2.4.0-bin-hadoop2.7.tgz -C /opt
    mv /opt/spark-2.4.0-bin-hadoop2.7 /opt/spark-2.4.0
  3. Configure environment variables
    Append the following to /etc/profile:
    export JAVA_HOME=/usr/lib/jdk1.8.0_172
    export CLASSPATH=${JAVA_HOME}/jre/lib:${JAVA_HOME}/lib
    export HADOOP_HOME=/opt/hadoop-2.7.6
    export SPARK_HOME=/opt/spark-2.4.0
    export PATH=${JAVA_HOME}/bin:$HADOOP_HOME/bin:$SPARK_HOME/bin:$PATH

    Set the hostname:

    hostnamectl set-hostname res-spark-0001

    Run the following so the environment variables take effect:

    source /etc/profile
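    A quick sanity check that the variables were picked up (these commands only print versions):

    java -version
    hadoop version
    spark-submit --version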
  4. Edit the configuration files
cd /opt/spark-2.4.0/conf
cp log4j.properties.template log4j.properties
cp slaves.template slaves
cp spark-env.sh.template spark-env.sh
cp spark-defaults.conf.template spark-defaults.conf

4.1 slaves

res-spark-0003
res-spark-0004
res-spark-0005
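
Note that start-all.sh (used in step 7 below) launches a Worker over SSH on every host listed in slaves, so the master node needs passwordless SSH to each of them. A minimal sketch, assuming the same user account exists on all nodes:

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in res-spark-0002 res-spark-0003 res-spark-0004 res-spark-0005; do
  ssh-copy-id "$h"
done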

4.2 spark-defaults.conf

spark.deploy.recoveryMode       ZOOKEEPER
spark.deploy.zookeeper.url      res-spark-0001:2181,res-spark-0002:2181,res-spark-0003:2181
spark.master                    spark://res-spark-0001:7077
spark.eventLog.enabled          true
spark.eventLog.dir              hdfs://cluster1/spark/eventLog
spark.shuffle.service.enabled   true
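
Spark does not create the event log directory for you, and applications fail to start if it is missing. With spark.eventLog.dir pointing at HDFS as above (assuming the cluster1 nameservice from this config), create it up front:

hdfs dfs -mkdir -p hdfs://cluster1/spark/eventLog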

4.3 spark-env.sh

export JAVA_HOME=/usr/lib/jdk1.8.0_172
export HADOOP_HOME=/opt/hadoop-2.7.6
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_HOME=/opt/spark-2.4.0
export SPARK_WORKER_CORES=6
export SPARK_WORKER_MEMORY=24g
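
As an aside, the official Spark documentation configures ZooKeeper recovery through SPARK_DAEMON_JAVA_OPTS in spark-env.sh rather than spark-defaults.conf. The spark-defaults.conf approach above should also work, since the master daemon loads that file at startup, but the documented form looks like this (spark.deploy.zookeeper.dir is optional and defaults to /spark):

export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=res-spark-0001:2181,res-spark-0002:2181,res-spark-0003:2181 \
  -Dspark.deploy.zookeeper.dir=/spark"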

4.4 log4j.properties

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Set the default spark-shell log level to WARN. When running the spark-shell, the
# log level for this class is used to overwrite the root logger's log level, so that
# the user can have different defaults for the shell and regular Spark apps.
log4j.logger.org.apache.spark.repl.Main=WARN
# Settings to quiet third party logs that are too verbose
log4j.logger.org.spark_project.jetty=WARN
log4j.logger.org.spark_project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR
# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR
  5. Distribute the Spark directory and its configuration to the other nodes

    scp -r /opt/spark-2.4.0 res-spark-0002:/opt
    scp -r /opt/spark-2.4.0 res-spark-0003:/opt
    scp -r /opt/spark-2.4.0 res-spark-0004:/opt
    scp -r /opt/spark-2.4.0 res-spark-0005:/opt
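
    The environment variables from step 3 live in /etc/profile and are not carried along by the scp above, so set them on each node as well. A sketch, assuming root access on the target hosts:

    for h in res-spark-0002 res-spark-0003 res-spark-0004 res-spark-0005; do
      scp /etc/profile "$h":/etc/profile
    done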
  6. Modify the configuration on the res-spark-0002 node
    6.1 spark-defaults.conf

    spark.master    spark://res-spark-0002:7077
  7. Start the cluster
    cd /opt/spark-2.4.0/sbin
    ./start-all.sh

    On res-spark-0002, additionally start the standby master:

    cd /opt/spark-2.4.0/sbin
    ./start-master.sh
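
    To confirm which master is active, the master web UI exposes a JSON endpoint whose "status" field reports the master's state (assuming the default web UI port 8080; Spark picks the next free port if it is taken):

    curl -s http://res-spark-0001:8080/json | grep status   # expect "ALIVE"
    curl -s http://res-spark-0002:8080/json | grep status   # expect "STANDBY"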
  8. Test failover
    On res-spark-0001, stop the active master:
    ./stop-master.sh

    The result:
    [Screenshots from the original post: the master web UIs after failover]
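
    So that clients survive a failover without config changes, Spark accepts a master URL listing every master; the driver registers with whichever one is currently active. For example:

    spark-shell --master spark://res-spark-0001:7077,res-spark-0002:7077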

  9. Submit an application
    spark-submit --master spark://res-spark-0001:7077 \
      --driver-cores 4 --driver-memory 6g \
      --conf spark.dynamicAllocation.enabled=true \
      --conf spark.shuffle.service.enabled=true \
      --class com.cloud.RuleEngine rule-engine-1.0-SNAPSHOT-jar-with-dependencies.jar

    Error message:

    18/12/30 08:47:41 ERROR TaskSchedulerImpl: Lost executor 3 on 172.16.0.24: Unable to create executor due to Unable to register with external shuffle server due to : Failed to connect to /172.16.0.24:7337

    From the official documentation:

    In standalone mode, simply start your workers with spark.shuffle.service.enabled set to true.
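
    In other words, the workers must be (re)started after spark.shuffle.service.enabled=true is in place, so that each worker brings up the external shuffle service. A sketch of the fix plus a check (7337 is the default spark.shuffle.service.port):

    /opt/spark-2.4.0/sbin/stop-slaves.sh
    /opt/spark-2.4.0/sbin/start-slaves.sh
    ss -lntp | grep 7337    # run on a worker: the shuffle service should be listening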

Reposted from: https://blog.51cto.com/1196740/2336758
