[ https://issues.apache.org/jira/browse/SPARK-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074189#comment-15074189 ]

Andrew Or commented on SPARK-12554:
-----------------------------------

I've downgraded the issue.

I also think the behavior is by design. The semantics of `spark.executor.cores` 
are that each executor has exactly N cores. Your cluster "hanging" means it 
doesn't have enough resources to launch the expected number of executors, so 
you should provision more resources.
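
To make the arithmetic concrete, here is a minimal Scala sketch of that
fixed-size semantics (illustrative only; the object and value names are made
up, and this is not the Master.scala code):

    // Illustrative only: models the "each executor gets exactly N cores"
    // semantics of spark.executor.cores; not the actual Master.scala code.
    object ExecutorMath {
      def main(args: Array[String]): Unit = {
        val coresMax = 10          // spark.cores.max
        val coresPerExecutor = 4   // spark.executor.cores
        // Executors are granted only in whole units of coresPerExecutor,
        // so the leftover cores can never be assigned to this app.
        val granted  = coresMax / coresPerExecutor   // 2 executors
        val stranded = coresMax % coresPerExecutor   // 2 cores
        println(s"executors: $granted, stranded cores: $stranded")
      }
    }

With 10 requested cores and 4 cores per executor, only 2 executors fit and
2 cores are never assigned, which is exactly the situation in the report
quoted below.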

> Standalone app scheduler will hang when app.coreToAssign < minCoresPerExecutor
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-12554
>                 URL: https://issues.apache.org/jira/browse/SPARK-12554
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy, Scheduler
>    Affects Versions: 1.5.2
>            Reporter: Lijie Xu
>
> In scheduleExecutorsOnWorker() in Master.scala,
> *val keepScheduling = coresToAssign >= minCoresPerExecutor* should be changed 
> to *val keepScheduling = coresToAssign > 0*.
> Suppose an app requests 10 cores (i.e., spark.cores.max = 10) and 
> app.coresPerExecutor is 4 (i.e., spark.executor.cores = 4). 
> After allocating two executors (4 cores each) to this app, 
> *app.coresToAssign = 2* and *minCoresPerExecutor = coresPerExecutor = 4*, so 
> *keepScheduling = false* and no further executor will be allocated to this app. 
> If *spark.scheduler.minRegisteredResourcesRatio* is set to a large value 
> (e.g., > 0.8 in this case), the app will hang and never finish.
> Another case: if a small app's coresPerExecutor is larger than its requested 
> cores (e.g., spark.cores.max = 10, spark.executor.cores = 16), *val 
> keepScheduling = coresToAssign >= minCoresPerExecutor* is always false. As a 
> result, this app will never get an executor to run.
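
The reported condition and the proposed one, as a self-contained Scala sketch
(simplified; the real scheduleExecutorsOnWorker() in Master.scala also does
per-worker bookkeeping that is omitted here):

    // Simplified model of the keepScheduling check quoted above;
    // the real scheduleExecutorsOnWorker() tracks more state.
    object KeepScheduling {
      def current(coresToAssign: Int, minCoresPerExecutor: Int): Boolean =
        coresToAssign >= minCoresPerExecutor   // today's check
      def proposed(coresToAssign: Int): Boolean =
        coresToAssign > 0                      // suggested change

      def main(args: Array[String]): Unit = {
        // Case 1: cores.max = 10, executor.cores = 4. After two executors
        // coresToAssign = 2, so the current check stops scheduling even
        // though requested cores remain.
        assert(!current(2, 4) && proposed(2))
        // Case 2: cores.max = 10, executor.cores = 16. The current check
        // is false from the start, so no executor is ever granted.
        assert(!current(10, 16))
        println("both reported cases reproduced")
      }
    }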


