Hi,

I've just tested Spark in YARN mode, but something confused me.

When I *delete* the "yarn.application.classpath" configuration from
yarn-site.xml, the following command works fine:
*bin/spark-class org.apache.spark.deploy.yarn.Client --jar
examples/target/scala-2.10/spark-examples_2.10-assembly-0.9.0-incubating.jar
--class org.apache.spark.examples.SparkPi --args yarn-standalone
--num-workers 3*

However, when I configure it as follows, yarnAppState always stays in
the *ACCEPTED* state and the application never finishes.
<property>
    <name>yarn.application.classpath</name>
    <value>
        $HADOOP_HOME/etc/hadoop/conf,
        $HADOOP_HOME/share/hadoop/common/*,$HADOOP_HOME/share/hadoop/common/lib/*,
        $HADOOP_HOME/share/hadoop/hdfs/*,$HADOOP_HOME/share/hadoop/hdfs/lib/*,
        $HADOOP_HOME/share/hadoop/mapreduce/*,$HADOOP_HOME/share/hadoop/mapreduce/lib/*,
        $HADOOP_HOME/share/hadoop/yarn/*,$HADOOP_HOME/share/hadoop/yarn/lib/*
    </value>
</property>
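In case the issue is that the NodeManager environment doesn't expand
$HADOOP_HOME, I'm considering also trying a variant with literal paths.
This is just a guess on my part; /opt/hadoop below is a placeholder for
the actual install prefix, not my real path.

```xml
<!-- Hypothetical variant: literal paths instead of $HADOOP_HOME,
     in case the variable is undefined on the NodeManagers.
     /opt/hadoop is a placeholder for the actual install prefix. -->
<property>
    <name>yarn.application.classpath</name>
    <value>
        /opt/hadoop/etc/hadoop,
        /opt/hadoop/share/hadoop/common/*,/opt/hadoop/share/hadoop/common/lib/*,
        /opt/hadoop/share/hadoop/hdfs/*,/opt/hadoop/share/hadoop/hdfs/lib/*,
        /opt/hadoop/share/hadoop/mapreduce/*,/opt/hadoop/share/hadoop/mapreduce/lib/*,
        /opt/hadoop/share/hadoop/yarn/*,/opt/hadoop/share/hadoop/yarn/lib/*
    </value>
</property>
```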

The Hadoop version is 2.2.0, and the cluster has one master and three workers.

Does anyone have ideas about this problem?

Thanks,
Dan




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/yarn-application-classpath-in-yarn-site-xml-tp3512.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.