If there isn't enough room in your cluster for all the executors you asked 
for, Spark will get only those which can be allocated, and it will start work 
without waiting for the others to arrive.

Make sure you ask for enough memory: YARN is a lot more unforgiving about 
memory use than it is about CPU.
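On Spark 1.x with YARN, the container you get is the executor heap plus an 
off-heap overhead, and executors that exceed their container limit get killed 
by YARN rather than throttled. A minimal sketch (the 4g heap and 512 MB 
overhead are illustrative values only, and your-app.jar is a placeholder):

  spark-submit \
    --executor-memory 4g \
    --conf spark.yarn.executor.memoryOverhead=512 \
    your-app.jar

Here each executor container asks YARN for roughly 4 GB + 512 MB; if YARN 
keeps killing executors for running over the container limit, raising 
spark.yarn.executor.memoryOverhead is the usual first fix. (The property was 
renamed spark.executor.memoryOverhead in later releases.)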

> On 20 Apr 2016, at 16:21, Shushant Arora <shushantaror...@gmail.com> wrote:
> 
> I am running a spark application on yarn cluster.
> 
> Say I have 100 vcores available in the cluster, and I start the Spark 
> application with --num-executors 200 --executor-cores 2 (so I need 
> 200*2 = 400 vcores in total), but only 100 are available in my cluster.
> 
> What will happen? Will the job abort, or will it be submitted successfully, 
> with 100 vcores allocated to 50 executors and the remaining executors 
> started as soon as vcores become available?
> 
> Please note that dynamic allocation is not enabled in the cluster. I am on 
> an old version, 1.2.
> 
> Thanks
> 

