STS will refer to spark-thrift-sparkconf.conf. Can you check if
spark.yarn.jars exists in this file?
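
For example, a quick check (the conf path below is an assumption based on a
typical Ambari-managed HDP 2.6 layout; adjust to your install):

  grep spark.yarn.jars /etc/spark2/conf/spark-thrift-sparkconf.conf

If the property is missing there, that would explain why it is not picked up
even though spark-default.conf carries it.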



On Wed, Jul 5, 2017 at 2:01 PM, Sudha KS <sudha...@fuzzylogix.com> wrote:

> The property “spark.yarn.jars” is available via
> /usr/hdp/current/spark2-client/conf/spark-default.conf:
>
> spark.yarn.jars hdfs://ambari03.fuzzyl.com:8020/hdp/apps/2.6.1.0-129/spark2
>
> Is there any other way to set/read/pass this property “spark.yarn.jars”?
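>
> For what it's worth, a minimal sketch of passing the property
> programmatically on the builder (assuming the SparkSession is built inside
> the UDTF, as the trace below suggests; the HDFS path is the one from
> spark-default.conf, and the trailing "/*" glob is an assumption, since
> spark.yarn.jars takes jar paths/globs rather than a bare directory):
>
> import org.apache.spark.sql.SparkSession;
>
> public class SparkSessionSketch {
>     // Sketch only: set spark.yarn.jars on the builder so the YARN client
>     // sees it even if no conf file is read in the HS2 context.
>     public static SparkSession build() {
>         return SparkSession.builder()
>                 .appName("SparkHiveUDTF")  // illustrative app name
>                 .master("yarn")
>                 // Path from spark-default.conf; "/*" glob is an assumption.
>                 .config("spark.yarn.jars",
>                         "hdfs://ambari03.fuzzyl.com:8020/hdp/apps/2.6.1.0-129/spark2/*")
>                 .getOrCreate();
>     }
> }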
>
> *From:* Sudha KS [mailto:sudha...@fuzzylogix.com]
> *Sent:* Wednesday, July 5, 2017 1:51 PM
> *To:* user@spark.apache.org
> *Subject:* SparkSession via HS2 - Error -spark.yarn.jars not read
>
> Why is the “spark.yarn.jars” property not read in this HDP 2.6, Spark 2.1.1
> cluster:
>
> 0: jdbc:hive2://localhost:10000/db> set spark.yarn.jars;
>
> +------------------------------------------------------------------------------+--+
> |                                     set                                      |
> +------------------------------------------------------------------------------+--+
> | spark.yarn.jars=hdfs://ambari03.fuzzyl.com:8020/hdp/apps/2.6.1.0-129/spark2  |
> +------------------------------------------------------------------------------+--+
>
> 1 row selected (0.101 seconds)
>
> 0: jdbc:hive2://localhost:10000/db>
>
> Error during launch of a SparkSession via HS2:
>
> Caused by: java.lang.IllegalStateException: Library directory '/hadoop/yarn/local/usercache/hive/appcache/application_1499235958765_0042/container_e04_1499235958765_0042_01_000005/assembly/target/scala-2.11/jars' does not exist; make sure Spark is built.
>
>         at org.apache.spark.launcher.CommandBuilderUtils.checkState(CommandBuilderUtils.java:260)
>         at org.apache.spark.launcher.CommandBuilderUtils.findJarsDir(CommandBuilderUtils.java:380)
>         at org.apache.spark.launcher.YarnCommandBuilderUtils$.findJarsDir(YarnCommandBuilderUtils.scala:38)
>         at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:570)
>         at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:895)
>         at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:171)
>         at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
>         at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:156)
>         at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
>         at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2320)
>         at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
>         at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
>         at scala.Option.getOrElse(Option.scala:121)
>         at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
>         at SparkHiveUDTF.sparkJob(SparkHiveUDTF.java:97)
>         at SparkHiveUDTF.process(SparkHiveUDTF.java:78)
>         at org.apache.hadoop.hive.ql.exec.UDTFOperator.process(UDTFOperator.java:109)
>         at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:841)
>         at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
>         at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:841)
>         at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:133)
>         at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:170)
>         at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:555)
>         ... 18 more
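>
> If I read the trace right, Client.prepareLocalResources falls back to
> locating a local jars directory via findJarsDir (hence the
> 'assembly/target/scala-2.11/jars' path) only when neither spark.yarn.jars
> nor spark.yarn.archive is set in the launched context, so the value from
> spark-default.conf does not seem to reach the SparkSession created inside
> HS2.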
>



-- 
*  Regards*
*  Sandeep Nemuri*
