I executed the following commands to launch a Spark application in yarn-client
mode. I am running Hadoop 2.3.0, Spark 0.8.1 and Scala 2.9.3.

SPARK_HADOOP_VERSION=2.3.0 SPARK_YARN=true sbt/sbt assembly

SPARK_YARN_MODE=true \
SPARK_JAR=./assembly/target/scala-2.9.3/spark-assembly-0.8.1-incubating-hadoop2.3.0.jar \
SPARK_YARN_APP_JAR=examples/target/scala-2.9.3/spark-examples-assembly-0.8.1-incubating.jar \
MASTER=yarn-client ./spark-shell

The Spark context in the interactive shell is set up properly, but when I
then submit a job, it reports that the initial job has not accepted any
resources.
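
For reference, the job I submit is a trivial one along these lines (the HDFS
path below is a stand-in for my actual input file):

scala> val lines = sc.textFile("hdfs:///tmp/input.txt") // hypothetical path
scala> lines.count() // any action hangs here waiting for resources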

LOGS:
DAGScheduler: Submitting 4 missing tasks from Stage 0 (MappedRDD[1] at
textFile at <console>:12)
YarnClientClusterScheduler: Adding task set 0.0 with 4 tasks
WARN YarnClientClusterScheduler: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient memory

What have I missed? I did start the Spark master and worker, and I have
configured SPARK_MEM.

Any help would be great!
