How many cores / memory do you have available per NodeManager, and how
many cores / memory are you requesting for your job?

Remember that in YARN mode, Spark launches "num executors + 1"
containers. The extra container (the YARN application master), by
default, reserves 1 core and about 1g of memory (more if running in
cluster mode and specifying "--driver-memory").

On Fri, Dec 19, 2014 at 12:57 PM, Jon Chase <jon.ch...@gmail.com> wrote:
> Running on Amazon EMR w/Yarn and Spark 1.1.1, I have trouble getting Yarn to
> use the number of executors that I specify in spark-submit:
>
> --num-executors 2
>
> In a cluster with two core nodes, this typically results in only one
> executor running at a time.  I can play with the memory settings and
> num-cores-per-executor, and sometimes I can get 2 executors running at
> once, but I'm not sure what the secret formula is to make this happen
> consistently.



-- 
Marcelo
