I am running Spark 1.1.0 on AWS EMR in yarn-client mode, with a batch job
that should be highly parallelizable. However, Spark stops spawning
executors after the first 6, even though the YARN cluster has 15 healthy
m1.large nodes. I even tried passing '--num-executors 60' to spark-submit,
but that doesn't help either. A quick look at the Spark admin UI shows
active stages whose tasks have not started yet, and even then Spark doesn't
start more executors. I am not sure why. Any help on this would be greatly
appreciated.
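For reference, the submit command looks roughly like this (the jar, class,
and application arguments are placeholders, not my actual values):

```shell
# Sketch of the spark-submit invocation described above.
# my-batch-job.jar and com.example.BatchJob are hypothetical names.
spark-submit \
  --master yarn-client \
  --num-executors 60 \
  --class com.example.BatchJob \
  my-batch-job.jar
```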

Here is link to screen shots that I took of spark admin and yarn admin -
http://imgur.com/a/ztjr7

Thanks,
Aniket
