[ 
https://issues.apache.org/jira/browse/SPARK-10644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14906030#comment-14906030
 ] 

Sean Owen commented on SPARK-10644:
-----------------------------------

How many cores per executor? I'm assuming you mean 1 and have configured 
accordingly. I assume you do see 63 executors run successfully. What about 
memory? It could have enough cores but not enough memory.

On a side note, why have 3 executors per worker instead of 1 with 3 cores? I 
get the concern about over-allocating cores, although I wonder out loud whether 
Spark would just let a worker use "10" cores on a 4-core machine if you set it 
that way.
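For reference, the two worker layouts being compared here are controlled per 
machine in spark-env.sh in standalone mode. A minimal sketch (the values are 
illustrative, not taken from the reporter's actual configuration):

```shell
# spark-env.sh on each worker machine (standalone mode).
# Layout as reported: multiple single-core workers per machine,
# so each application can get several 1-core executors there.
SPARK_WORKER_INSTANCES=3
SPARK_WORKER_CORES=1

# Alternative suggested above: one worker offering 3 cores,
# so each application gets a single 3-core executor per machine.
# SPARK_WORKER_INSTANCES=1
# SPARK_WORKER_CORES=3
```

With the second layout, cores per executor would then be set via 
spark.executor.cores on the application side.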

> Applications wait even if free executors are available
> ------------------------------------------------------
>
>                 Key: SPARK-10644
>                 URL: https://issues.apache.org/jira/browse/SPARK-10644
>             Project: Spark
>          Issue Type: Bug
>          Components: Scheduler
>    Affects Versions: 1.5.0
>         Environment: RHEL 6.5 64 bit
>            Reporter: Balagopal Nair
>            Priority: Minor
>
> Number of workers: 21
> Number of executors: 63
> Steps to reproduce:
> 1. Run 4 jobs each with max cores set to 10
> 2. The first 3 jobs run with 10 each. (30 executors consumed so far)
> 3. The 4th job waits even though there are 33 idle executors.
> The reason is that a job will not get executors unless 
> the total number of EXECUTORS in use < the number of WORKERS
> If there are executors available, resources should be allocated to the 
> pending job.
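The per-job cores cap in the steps above corresponds to the standard standalone 
setting spark.cores.max; a minimal sketch of how the reported jobs would be 
capped (values illustrative):

```shell
# spark-defaults.conf (or passed via --conf on spark-submit):
# caps each application at 10 cores total across the cluster,
# matching step 1 of the reproduction above.
spark.cores.max    10
```

Equivalently, per submission: spark-submit --conf spark.cores.max=10 ...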



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
