It's probably because your YARN cluster has only 40 vCores available. With --num-executors 10 and --executor-cores 5 you are asking for 50 vCores (plus at least one more for the application master), which exceeds that total.

Go to your ResourceManager web UI and check whether "VCores Total" and "Memory Total" 
exceed what you have requested (they currently appear to be 40 cores and 5120 MB).
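
If it is easier to check from a shell, something like the following should show the same numbers (a sketch; it assumes the ResourceManager web UI is on the default port 8088, and <rm-host> / <node-id> are placeholders for your own hosts):

    # cluster-wide totals (look for totalVirtualCores and totalMB)
    curl http://<rm-host>:8088/ws/v1/cluster/metrics

    # per-node capacity: list the nodes, then check one of them
    # (the node report should include Memory-Capacity and CPU-Capacity)
    yarn node -list
    yarn node -status <node-id>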

If that looks fine, go to the "Scheduler" page, find the queue your jobs run on, 
and check the resources allocated to that queue.
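
The per-queue numbers are also exposed through the ResourceManager REST API, in case you want to script the check (same assumptions as above):

    # capacity, used capacity and maximum capacity for each queue
    curl http://<rm-host>:8088/ws/v1/cluster/scheduler

If the cluster total turns out to be smaller than your hardware, the per-node figure comes from yarn.nodemanager.resource.cpu-vcores in yarn-site.xml (it defaults to 8); 4 vCores on each of your 10 nodes would explain a 40-vCore total.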

Hope this helps.

Jong Wook


> On Jul 15, 2015, at 01:57, Shushant Arora <shushantaror...@gmail.com> wrote:
> 
> I am running a Spark application on a YARN-managed cluster.
> 
> When I specify --executor-cores > 4, it fails to start the application.
> I am starting the app as:
> 
> spark-submit --class classname --num-executors 10 --executor-cores 5 --master masteradd jarname
> 
> Exception in thread "main" org.apache.spark.SparkException: Yarn application 
> has already ended! It might have been killed or unable to launch application 
> master.
> 
> When I give --executor-cores as 4, it works fine.
> 
> My cluster has 10 nodes.
> Why am I not able to specify more than 4 concurrent tasks? Is there any max 
> limit on the YARN side or Spark side which I can override to make use of more tasks?

