Thanks!
I solved the problem.
spark-submit changed HADOOP_CONF_DIR to spark/conf, which was correct,
but launching with `java ...` didn't change HADOOP_CONF_DIR; it still used
hadoop/etc/hadoop.
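The fix described above can be sketched as follows. This is a minimal sketch, assuming spark-submit picks up HADOOP_CONF_DIR from conf/spark-env.sh while a bare `java` launch only inherits the shell environment; the path /opt/spark/conf is an assumption for illustration, not from the thread.

```shell
# Assumption: spark-submit sources conf/spark-env.sh, which points
# HADOOP_CONF_DIR at Spark's own conf directory. A plain `java ...` launch
# sets nothing, so the default hadoop/etc/hadoop wins unless you export
# the variable yourself before launching (path below is a placeholder):
export HADOOP_CONF_DIR=/opt/spark/conf
echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
# then launch as before, e.g.: java -cp <your classpath> <your main class>
```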
At 2016-05-10 16:39:47, "Saisai Shao" wrote:
The code is in Client.scala under yarn sub-module (see the below link).
Maybe you need to check the vendor version about their changes to the
Apache Spark code.
https://github.com/apache/spark/blob/branch-1.3/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
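One hedged way to check for such a vendor change is to search the distribution for the "__hadoop_conf__" string mentioned in this thread. The sketch below runs against a mock directory so it is self-contained; in practice you would point it at the real FusionInsight Spark install instead (that path is not known from the thread).

```shell
# Mock tree standing in for the vendor's Spark sources/distribution;
# replace $TREE with the actual install directory in real use.
TREE=$(mktemp -d)
echo 'distCacheEntry("__hadoop_conf__")' > "$TREE/Client.scala"  # mock hit

# List files containing the distributed-cache entry name.
grep -rln "__hadoop_conf__" "$TREE"
```

For compiled class files inside an assembly jar, plain grep on sources won't help; you would need to scan the jar contents (e.g. unzip and search) instead.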
Thanks
Saisai
It was a product sold by Huawei named FusionInsight. It says Spark was 1.3
with Hadoop 2.7.1.
Where can I find the code or config file that defines the files to be uploaded?
At 2016-05-10 16:06:05, "Saisai Shao" wrote:
What version of Spark are you using? From my understanding, there's
no code in yarn#client that will upload "__hadoop_conf__" into the distributed cache.
On Tue, May 10, 2016 at 3:51 PM, 朱旻 wrote:
Hi all:
I found a problem using Spark.
When I use spark-submit to launch a task, it works:
spark-submit --num-executors 8 --executor-memory 8G --class
com.icbc.nss.spark.PfsjnlSplit --master yarn-cluster
/home/nssbatch/nss_schedual/jar/SparkBigtableJoinSqlJava.jar