I am running a Spark application on a YARN cluster.

Say the cluster has 100 vcores available, and I start a Spark application
with --num-executors 200 --executor-cores 2 (so I need 200*2 = 400 vcores
in total), but only 100 are available in my cluster.
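For reference, the full submit command looks roughly like this (the class
name and jar are just placeholders for my actual app):

    # --class and the jar below are placeholders for my actual application
    spark-submit \
      --master yarn-cluster \
      --num-executors 200 \
      --executor-cores 2 \
      --class com.example.MyApp \
      my-app.jar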

What will happen? Will the job abort, or will it be submitted successfully
with 100 vcores allocated to 50 executors, and the remaining executors
started as soon as vcores become available?

Please note that dynamic allocation is not enabled in the cluster. I am on
the old Spark version 1.2.
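In case it is relevant: I can check how many executors were actually
granted from the driver (e.g. in spark-shell, where sc is predefined),
using the Spark 1.x getExecutorStorageStatus API:

    // Scala, run on the driver: count executors YARN actually granted.
    // getExecutorStorageStatus includes the driver, so subtract 1.
    val grantedExecutors = sc.getExecutorStorageStatus.length - 1
    println(s"Executors currently granted: $grantedExecutors")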

Thanks
