Running on Amazon EMR with YARN and Spark 1.1.1, I'm having trouble getting
YARN to use the number of executors that I specify in spark-submit:

--num-executors 2

In a cluster with two core nodes, this typically results in only one
executor running at a time.  I can play with the memory settings and
--executor-cores, and sometimes I can get 2 executors running at once,
but I'm not sure what the secret formula is to make this happen
consistently.
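For reference, here is a sketch of the kind of invocation I've been trying
(the memory value and the application class/jar names are illustrative
assumptions, not known-good settings):

```shell
# Sketch of a spark-submit invocation on YARN. The memory value and the
# app class/jar below are placeholders, not verified settings.
# YARN only grants a container if --executor-memory plus the YARN memory
# overhead fits under the NodeManager limit (yarn.nodemanager.resource.memory-mb),
# so two executors need two such containers' worth of free memory.
spark-submit \
  --master yarn-cluster \
  --num-executors 2 \
  --executor-cores 1 \
  --executor-memory 2g \
  --class com.example.MyApp \
  my-app.jar
```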
