While testing this way, the application does not read the cluster's 
hive-site.xml or spark-env.sh (those settings had to be passed in via 
SparkSession.builder().config()).

Is there a way to make it read the Spark configuration already present on the cluster?
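
One possible workaround, a sketch only (it assumes the standard HDP client conf 
path /usr/hdp/current/spark2-client/conf is visible to the process, and must 
run inside a method that declares IOException), is to load spark-defaults.conf 
manually and apply every entry to the builder:

    import java.io.FileInputStream;
    import java.util.Properties;
    import org.apache.spark.sql.SparkSession;

    // spark-defaults.conf holds "key value" pairs, which java.util.Properties
    // parses (whitespace separates key and value; '#' lines are comments).
    Properties defaults = new Properties();
    try (FileInputStream in = new FileInputStream(
            "/usr/hdp/current/spark2-client/conf/spark-defaults.conf")) {
        defaults.load(in);
    }
    SparkSession.Builder builder = SparkSession.builder().enableHiveSupport();
    for (String key : defaults.stringPropertyNames()) {
        builder.config(key, defaults.getProperty(key).trim());
    }
    SparkSession spark = builder.getOrCreate();

The caveat: when the UDTF runs inside an HS2/YARN container, that client path 
may not exist on the worker node, so this only helps on hosts where the Spark 
client is installed.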


From: Sudha KS
Sent: Wednesday, July 5, 2017 6:45 PM
To: user@spark.apache.org
Subject: RE: SparkSession via HS2 - Error: Yarn application has already ended

For now, the config is passed in via the SparkSession builder:

    SparkSession spark = SparkSession
            .builder()
            .enableHiveSupport()
            .master("yarn-client")
            .appName("SampleSparkUDTF_yarnV1")
            // Point Spark at the pre-staged jars on HDFS and pin the HDP version.
            .config("spark.yarn.jars", "hdfs:///hdp/apps/2.6.1.0-129/spark2")
            .config("spark.yarn.am.extraJavaOptions", "-Dhdp.version=2.6.1.0-129")
            .config("spark.driver.extraJavaOptions", "-Dhdp.version=2.6.1.0-129")
            .config("spark.executor.memory", "4g")
            .getOrCreate();
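
(A side note: in Spark 2.x the "yarn-client" master string is deprecated; the 
supported equivalent is

    .master("yarn")
    .config("spark.submit.deployMode", "client")

which may be worth trying in case the deprecated alias behaves differently 
under HS2.)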


While testing via HS2, this is the error:
beeline -u jdbc:hive2://localhost:10000 -d org.apache.hive.jdbc.HiveDriver
0: jdbc:hive2://localhost:10000>
……
Caused by: org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:156)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2320)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
        at SparkHiveUDTF.sparkJob(SparkHiveUDTF.java:102)
        at SparkHiveUDTF.process(SparkHiveUDTF.java:78)
        at org.apache.hadoop.hive.ql.exec.UDTFOperator.process(UDTFOperator.java:109)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:841)
        at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:841)
        at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:133)
        at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:170)
        at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:555)
        ... 18 more

Is there a way to resolve this error?
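
A standard first step for "Yarn application has already ended" (general YARN 
debugging practice, nothing specific to this setup) is to pull the application 
master logs for the failed attempt:

    yarn logs -applicationId <application id from the YARN ResourceManager UI>

The AM log usually states why the application exited before the driver could 
connect.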



On Wed, Jul 5, 2017 at 2:01 PM, Sudha KS <sudha...@fuzzylogix.com> wrote:
The property “spark.yarn.jars” is available via 
/usr/hdp/current/spark2-client/conf/spark-defaults.conf:

spark.yarn.jars hdfs://ambari03.fuzzyl.com:8020/hdp/apps/2.6.1.0-129/spark2


Is there any other way to set/read/pass this property “spark.yarn.jars”?
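
One quick sanity check, a sketch using the standard RuntimeConfig API (the 
"<not set>" fallback is just for illustration), is to read the property back 
from the session the UDTF actually created:

    // After getOrCreate(): print the value Spark resolved, if any.
    String jars = spark.conf().get("spark.yarn.jars", "<not set>");
    System.err.println("spark.yarn.jars = " + jars);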

From: Sudha KS [mailto:sudha...@fuzzylogix.com]
Sent: Wednesday, July 5, 2017 1:51 PM
To: user@spark.apache.org
Subject: SparkSession via HS2 - Error -spark.yarn.jars not read

Why is the “spark.yarn.jars” property not read in this HDP 2.6, Spark 2.1.1 
cluster?
0: jdbc:hive2://localhost:10000/db> set spark.yarn.jars;
+------------------------------------------------------------------------------+--+
|                                     set                                      |
+------------------------------------------------------------------------------+--+
| spark.yarn.jars=hdfs://ambari03.fuzzyl.com:8020/hdp/apps/2.6.1.0-129/spark2  |
+------------------------------------------------------------------------------+--+
1 row selected (0.101 seconds)
0: jdbc:hive2://localhost:10000/db>



Error during launch of a SparkSession via HS2:
Caused by: java.lang.IllegalStateException: Library directory '/hadoop/yarn/local/usercache/hive/appcache/application_1499235958765_0042/container_e04_1499235958765_0042_01_000005/assembly/target/scala-2.11/jars' does not exist; make sure Spark is built.
        at org.apache.spark.launcher.CommandBuilderUtils.checkState(CommandBuilderUtils.java:260)
        at org.apache.spark.launcher.CommandBuilderUtils.findJarsDir(CommandBuilderUtils.java:380)
        at org.apache.spark.launcher.YarnCommandBuilderUtils$.findJarsDir(YarnCommandBuilderUtils.scala:38)
        at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:570)
        at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:895)
        at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:171)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:156)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2320)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
        at SparkHiveUDTF.sparkJob(SparkHiveUDTF.java:97)
        at SparkHiveUDTF.process(SparkHiveUDTF.java:78)
        at org.apache.hadoop.hive.ql.exec.UDTFOperator.process(UDTFOperator.java:109)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:841)
        at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:841)
        at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:133)
        at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:170)
        at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:555)
        ... 18 more


--
  Regards
  Sandeep Nemuri
