hi,

I'm running a Spark standalone cluster with 5 slaves, each with 4 cores. When I
run a job with the following configuration:

/root/spark/bin/spark-submit -v \
  --total-executor-cores 20 \
  --executor-memory 22g \
  --executor-cores 4 \
  --class com.windward.spark.apps.MyApp \
  --name dev-app \
  --properties-file /mnt/spark-apps/apps/dev/my-app/app.properties \
  /mnt/spark-apps/apps/dev/my-app/my-app-1.0-SNAPSHOT-jar-with-dependencies.jar \
  201401010000 201401030000

everything runs fine (I set total-executor-cores=20, which is all the cores I
have: 4 cores x 5 slaves).

If I run it with the following configuration:

/root/spark/bin/spark-submit -v \
  --total-executor-cores 4 \
  --executor-memory 22g \
  --executor-cores 4 \
  --class com.windward.spark.apps.MyApp \
  --name dev-app \
  --properties-file /mnt/spark-apps/apps/dev/my-app/app.properties \
  /mnt/spark-apps/apps/dev/my-app/my-app-1.0-SNAPSHOT-jar-with-dependencies.jar \
  201401010000 201401030000

(I set total-executor-cores=4 because I want to use only a small part of my
cluster for that task), I get the following message:

 org.apache.spark.scheduler.TaskSchedulerImpl- Initial job has not accepted
any resources; check your cluster UI to ensure that workers are registered
and have sufficient resources

Can't I tell Spark to use only part of my cores for a specific task? I need
this if I want to run many tasks in parallel.
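
In case it matters, I understand these flags map onto Spark configuration
keys, so the same limits could also go into the properties file I already pass
with --properties-file (spark.cores.max should correspond to
--total-executor-cores in standalone mode; the values below just mirror the
flags from my second command):

    # in /mnt/spark-apps/apps/dev/my-app/app.properties
    # same as --total-executor-cores 4 / --executor-cores 4 / --executor-memory 22g
    spark.cores.max=4
    spark.executor.cores=4
    spark.executor.memory=22g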

thanks, nizan



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-standalone-cluster-resource-management-tp23444.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
