Problems Encountered Installing Shark

Overview

Versions used:

hadoop: Apache 2.2.0

spark: 0.9.1

shark: 0.9.1

hive: 0.11.0


Shark website: http://shark.cs.berkeley.edu/

Shark-on-a-cluster documentation: https://github.com/amplab/shark/wiki/Running-Shark-on-a-Cluster

After configuring everything per the documentation, starting Shark failed with the following exception:

Exception in thread "main" org.apache.spark.SparkException: YARN mode not available ?
        at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:1275)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:201)
        at shark.SharkContext.<init>(SharkContext.scala:42)
        at shark.SharkContext.<init>(SharkContext.scala:61)
        at shark.SharkEnv$.initWithSharkContext(SharkEnv.scala:78)
        at shark.SharkEnv$.init(SharkEnv.scala:38)
        at shark.SharkCliDriver.<init>(SharkCliDriver.scala:278)
        at shark.SharkCliDriver$.main(SharkCliDriver.scala:162)
        at shark.SharkCliDriver.main(SharkCliDriver.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.scheduler.cluster.YarnClientClusterScheduler
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:190)
        at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:1269)
        ... 8 more

My guess was that the Spark assembly jar (SPARK_ASSEMBLY_JAR) was not being loaded, and tracing through the code confirmed it.

Add the following to the $SHARK_HOME/run script:

# If the Spark assembly jar exists, append it to the classpath so that the
# YARN scheduler classes (e.g. YarnClientClusterScheduler) can be found.
if [ -f "$SPARK_JAR" ] ; then
    SPARK_CLASSPATH+=":$SPARK_JAR"
    echo "SPARK CLASSPATH : "$SPARK_CLASSPATH
fi
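
Note that this assumes SPARK_JAR already points at the Spark assembly jar; for Spark 0.9.x on YARN it is normally exported beforehand, e.g. in shark-env.sh. The path below is illustrative and should be adjusted to wherever your assembly was actually built:

# Illustrative path; adjust to your own Spark build.
export SPARK_JAR=$SPARK_HOME/assembly/target/scala-2.10/spark-assembly-0.9.1-hadoop2.2.0.jar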

But then the following problem appeared:

org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.yarn.factories.impl.pb.RpcClientFactoryPBImpl.getClient(RpcClientFactoryPBImpl.java:79)
        at org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getProxy(HadoopYarnProtoRPC.java:48)
        at org.apache.hadoop.yarn.client.RMProxy$1.run(RMProxy.java:134)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:356)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528)
        at org.apache.hadoop.yarn.client.RMProxy.getProxy(RMProxy.java:130)
        at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:93)
        at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:70)
        at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:114)
        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
        at org.apache.spark.deploy.yarn.Client.runApp(Client.scala:76)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:78)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:126)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:202)
        at shark.SharkContext.<init>(SharkContext.scala:42)
        at shark.SharkContext.<init>(SharkContext.scala:61)
        at shark.SharkEnv$.initWithSharkContext(SharkEnv.scala:78)
        at shark.SharkEnv$.init(SharkEnv.scala:38)
        at shark.SharkCliDriver.<init>(SharkCliDriver.scala:278)
        at shark.SharkCliDriver$.main(SharkCliDriver.scala:162)
        at shark.SharkCliDriver.main(SharkCliDriver.scala)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.yarn.factories.impl.pb.RpcClientFactoryPBImpl.getClient(RpcClientFactoryPBImpl.java:76)
        ... 21 more
Caused by: java.lang.VerifyError: class org.apache.hadoop.security.proto.SecurityProtos$GetDelegationTokenRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:792)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.lang.Class.getDeclaredMethods0(Native Method)
        at java.lang.Class.privateGetDeclaredMethods(Class.java:2521)
        at java.lang.Class.privateGetPublicMethods(Class.java:2641)
        at java.lang.Class.privateGetPublicMethods(Class.java:2651)
        at java.lang.Class.getMethods(Class.java:1457)
        at sun.misc.ProxyGenerator.generateClassFile(ProxyGenerator.java:426)
        at sun.misc.ProxyGenerator.generateProxyClass(ProxyGenerator.java:323)
        at java.lang.reflect.Proxy.getProxyClass0(Proxy.java:636)
        at java.lang.reflect.Proxy.newProxyInstance(Proxy.java:722)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getProxy(ProtobufRpcEngine.java:92)
        at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:537)
        at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:482)
        at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:447)
        at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:600)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:557)
        at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.<init>(ApplicationClientProtocolPBClientImpl.java:111)
        ... 26 more

This is a protobuf version conflict. I first checked which protobuf jars live under the shark directory:

find . -name "proto*.jar"

Only one turned up: ./lib_managed/bundles/com.google.protobuf/protobuf-java/protobuf-java-2.5.0.jar

I am using shark-0.9.1; the 0.9.0 release I used before shipped protobuf-java-2.4.1-shaded.jar.

After much searching, I finally found the fix:

Locate the jar ./lib_managed/jars/edu.berkeley.cs.shark/hive-exec/hive-exec-0.11.0-shark-0.9.1.jar

Extract it with jar xf hive-exec-0.11.0-shark-0.9.1.jar (note: jar tf only lists the contents, xf actually extracts),

then delete every class file under the com/google/protobuf directory and repack the jar.
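
Alternatively, zip -d can delete the entries from the jar in place, skipping the unpack/repack cycle entirely; a minimal sketch, assuming the standard zip tool is installed:

cd $SHARK_HOME/lib_managed/jars/edu.berkeley.cs.shark/hive-exec
# Keep a backup before modifying the jar.
cp hive-exec-0.11.0-shark-0.9.1.jar hive-exec-0.11.0-shark-0.9.1.jar.bak
# Drop the protobuf classes bundled inside hive-exec so that the
# protobuf-java-2.5.0 jar on the classpath is loaded instead.
zip -d hive-exec-0.11.0-shark-0.9.1.jar 'com/google/protobuf/*'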

OK, continue by running bin/shark-withinfo.

Another problem appeared:

14/04/16 16:01:44 INFO yarn.Client: Setting up the launch environment
Exception in thread "main" java.lang.NullPointerException
        at scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:114)
        at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:114)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at org.apache.spark.deploy.yarn.Client$.populateHadoopClasspath(Client.scala:498)
        at org.apache.spark.deploy.yarn.Client$.populateClasspath(Client.scala:519)
        at org.apache.spark.deploy.yarn.Client.setupLaunchEnv(Client.scala:333)
        at org.apache.spark.deploy.yarn.Client.runApp(Client.scala:94)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:78)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:126)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:202)
        at shark.SharkContext.<init>(SharkContext.scala:42)
        at shark.SharkContext.<init>(SharkContext.scala:61)
        at shark.SharkEnv$.initWithSharkContext(SharkEnv.scala:78)
        at shark.SharkEnv$.init(SharkEnv.scala:38)
        at shark.SharkCliDriver.<init>(SharkCliDriver.scala:278)
        at shark.SharkCliDriver$.main(SharkCliDriver.scala:162)
        at shark.SharkCliDriver.main(SharkCliDriver.scala)

This turns out to be a small bug in Shark: http://mail-archives.apache.org/mod_mbox/spark-user/201404.mbox/%3CCAM9h1cfrSmwczCMobHZxvVPLoP-syrvVCAsF9ohokRdwhUwrBQ@mail.gmail.com%3E

The workaround is to set yarn.application.classpath explicitly to its default value in yarn-site.xml.
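
A sketch of what that looks like in yarn-site.xml, copying the default from Hadoop 2.2.0's yarn-default.xml (verify the value against the yarn-default.xml that ships with your Hadoop):

<property>
  <name>yarn.application.classpath</name>
  <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
</property>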

That took care of this problem.

But then the next one appeared:

Exception in thread "main" org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1072)
        at shark.memstore2.TableRecovery$.reloadRdds(TableRecovery.scala:49)
        at shark.SharkCliDriver.<init>(SharkCliDriver.scala:283)
        at shark.SharkCliDriver$.main(SharkCliDriver.scala:162)
        at shark.SharkCliDriver.main(SharkCliDriver.scala)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1139)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:51)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:61)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2288)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2299)
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1070)
        ... 4 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1137)
        ... 9 more
Caused by: javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
NestedThrowables:
java.lang.reflect.InvocationTargetException
        at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:587)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:781)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:326)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:195)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
        at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
        at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
        at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
        at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:270)
        at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:299)
        at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:229)
        at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:204)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
        at org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:413)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:401)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:439)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:325)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:285)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4102)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:121)
        ... 14 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
        at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:325)
        at org.datanucleus.store.AbstractStoreManager.registerConnectionFactory(AbstractStoreManager.java:281)
        at org.datanucleus.store.AbstractStoreManager.<init>(AbstractStoreManager.java:239)
        at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:292)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
        at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
        at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1069)
        at org.datanucleus.NucleusContext.initialise(NucleusContext.java:359)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:768)
        ... 43 more
Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "DBCP" plugin to create a ConnectionPool gave an error : The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:237)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSources(ConnectionFactoryImpl.java:110)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl.<init>(ConnectionFactoryImpl.java:82)
        ... 61 more
Caused by: org.datanucleus.store.rdbms.datasource.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
        at org.datanucleus.store.rdbms.datasource.AbstractDataSourceFactory.loadDriver(AbstractDataSourceFactory.java:58)
        at org.datanucleus.store.rdbms.datasource.DBCPDataSourceFactory.makePooledDataSource(DBCPDataSourceFactory.java:55)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:217)
        ... 63 more

Copying the mysql-connector jar into the $SHARK_HOME/lib_managed/jars/edu.berkeley.cs.shark/hive-jdbc directory fixes this.
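
For example (the connector version here is illustrative; use whichever mysql-connector-java jar you have):

cp mysql-connector-java-5.1.30-bin.jar $SHARK_HOME/lib_managed/jars/edu.berkeley.cs.shark/hive-jdbc/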

You may also run into the following problem:

14/04/16 17:03:44 ERROR DataNucleus.Datastore: Error thrown executing CREATE TABLE `SERDE_PARAMS`
(
    `SERDE_ID` BIGINT NOT NULL,
    `PARAM_KEY` VARCHAR(256) BINARY NOT NULL,
    `PARAM_VALUE` VARCHAR(4000) BINARY NULL,
    CONSTRAINT `SERDE_PARAMS_PK` PRIMARY KEY (`SERDE_ID`,`PARAM_KEY`)
) ENGINE=INNODB : Specified key was too long; max key length is 767 bytes
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was too long; max key length is 767 bytes
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

This is caused by the character set of the Hive metastore database; setting the metastore database's character set to latin1 fixes it (see the sketch below).

Reference: http://hao3721.iteye.com/blog/1522392
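
A sketch in MySQL, assuming the metastore database is named hive (substitute your own database name). Note this only changes the default character set for tables created afterwards:

ALTER DATABASE hive CHARACTER SET latin1;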


Hive, Spark, and Shark must be installed and configured on every slave node; otherwise you will see errors like this:

org.apache.spark.SparkException: Job aborted: Task 1.0:0 failed 4 times (most recent failure: Exception failure: java.lang.RuntimeException: readObject can't find class org.apache.hadoop.hive.conf.HiveConf)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1028)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1026)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1026)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:619)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:207)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

This is exactly what happens when Hive is missing on a slave node.


A quick test (note: in Shark, a table whose name ends in _cached is stored in memory):

>CREATE TABLE src(key INT, value STRING);

>LOAD DATA LOCAL INPATH '${env:HIVE_HOME}/examples/files/kv1.txt' INTO TABLE src;

>SELECT COUNT(1) FROM src;

>CREATE TABLE src_cached AS SELECT * FROM src;

>SELECT COUNT(1) FROM src_cached;

Shark user guide: https://github.com/amplab/shark/wiki/Shark-User-Guide


When Shark starts up, you will notice that the application it submits to YARN carries settings much like the parameters you would pass when submitting a Spark job by hand, such as --worker-memory. This matters because we want to control how many resources Spark on YARN consumes; so how do we set these parameters?

After some searching and code tracing, I found the answer: http://spark.apache.org/docs/latest/configuration.html#environment-variables

Put those settings into shark-env.sh and you are done; a sketch follows.
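
A sketch of the relevant lines in shark-env.sh, based on the YARN-mode environment variables documented for Spark 0.9 (the values are illustrative; size them for your cluster):

# Number of YARN workers (executors) and the resources each one gets.
export SPARK_WORKER_INSTANCES=2
export SPARK_WORKER_CORES=2
export SPARK_WORKER_MEMORY=2g
# Memory for the YARN application master.
export SPARK_MASTER_MEMORY=1g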




