[ https://issues.apache.org/jira/browse/SPARK-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15074588#comment-15074588 ]

Saisai Shao commented on SPARK-12554:
-------------------------------------

For case 2, I think it is really a misconfiguration problem that is better 
handled by the user, not by Spark.
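
For illustration, the case 2 misconfiguration amounts to settings like these 
(a minimal sketch; the app name and master URL are placeholders):

{code:scala}
import org.apache.spark.SparkConf

// spark.executor.cores exceeds spark.cores.max, so no single executor can
// ever satisfy the per-executor minimum and the app never gets resources.
val conf = new SparkConf()
  .setAppName("case2-misconfiguration")   // placeholder
  .setMaster("spark://master:7077")       // placeholder standalone master URL
  .set("spark.cores.max", "10")           // app-wide core cap
  .set("spark.executor.cores", "16")      // > cores.max: never satisfiable
{code}

The user-side fix is simply to keep {{spark.executor.cores}} at or below 
{{spark.cores.max}}.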

For case 1, I agree with [~andrewor14] that a better choice is to provision 
more resources rather than to start undersized executors. If you increase the 
minimum registered resources ratio, you have to bear in mind the tradeoff 
between resource availability and performance.

As for your proposed solution, it would introduce different semantics: today 
we assume that all of an application's executors have the same amount of 
resources under every cluster manager (standalone, YARN, and Mesos), and your 
change would break that assumption. Also, CPU is just one resource; what about 
memory? If a node has less memory left, should we create an executor with less 
memory?

More broadly, whether we can accept executors with different resource amounts 
is a question that needs careful thought.

> Standalone app scheduler will hang when app.coreToAssign < minCoresPerExecutor
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-12554
>                 URL: https://issues.apache.org/jira/browse/SPARK-12554
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy, Scheduler
>    Affects Versions: 1.5.2
>            Reporter: Lijie Xu
>
> In scheduleExecutorsOnWorker() in Master.scala,
> {{val keepScheduling = coresToAssign >= minCoresPerExecutor}} should be 
> changed to {{val keepScheduling = coresToAssign > 0}}.
> Case 1:
> Suppose an app requests 10 cores (i.e., {{spark.cores.max = 10}}) and its 
> app.coresPerExecutor is 4 (i.e., {{spark.executor.cores = 4}}).
> After two executors (4 cores each) have been allocated to this app, 
> {{app.coresToAssign = 2}} and {{minCoresPerExecutor = coresPerExecutor = 4}}, 
> so {{keepScheduling = false}} and no further executor will be allocated to 
> this app. If {{spark.scheduler.minRegisteredResourcesRatio}} is set to a 
> large value (e.g., > 0.8 in this case), the app will hang and never finish.
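>
> As a sketch, the arithmetic behind the hang (assuming no other constraints 
> on the workers):
> {code:scala}
> val coresMax          = 10                                   // spark.cores.max
> val coresPerExecutor  = 4                                    // spark.executor.cores
> val executorsLaunched = coresMax / coresPerExecutor          // 2 executors
> val coresRegistered = executorsLaunched * coresPerExecutor   // 8 cores
> val registeredRatio = coresRegistered.toDouble / coresMax    // 0.8
> // Any minRegisteredResourcesRatio above 0.8 now waits for cores that the
> // scheduler will never assign, so the app hangs.
> {code}
>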
> Case 2:
> If a small app's coresPerExecutor is larger than its total requested cores 
> (e.g., {{spark.cores.max = 10}}, {{spark.executor.cores = 16}}), then 
> {{val keepScheduling = coresToAssign >= minCoresPerExecutor}} is always 
> false. As a result, this app will never get an executor to run.
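
To make both failure modes concrete, here is a self-contained toy model of 
the core-assignment predicate. This is a deliberate simplification of 
scheduleExecutorsOnWorker() (memory, worker capacity, and executor placement 
are ignored; the function below is illustrative, not the real source). It 
compares the current condition with the proposed one:

{code:scala}
// Toy model: hand out cores in executor-sized grants while the predicate
// holds. `strict = true` mirrors the current code
// (coresToAssign >= minCoresPerExecutor); `strict = false` mirrors the
// proposed change (coresToAssign > 0).
def coresAssigned(coresMax: Int, coresPerExecutor: Int, strict: Boolean): Int = {
  var coresToAssign = coresMax
  var assigned = 0
  def keepScheduling =
    if (strict) coresToAssign >= coresPerExecutor else coresToAssign > 0
  while (keepScheduling) {
    val grant = math.min(coresPerExecutor, coresToAssign)
    assigned += grant
    coresToAssign -= grant
  }
  assigned
}

// Case 1: 2 of 10 cores are stranded under the current predicate.
assert(coresAssigned(10, 4, strict = true) == 8)
assert(coresAssigned(10, 4, strict = false) == 10)

// Case 2: no executor is ever launched under the current predicate.
assert(coresAssigned(10, 16, strict = true) == 0)
assert(coresAssigned(10, 16, strict = false) == 10)
{code}

Note that the relaxed predicate also produces a final 2-core executor in 
case 1, i.e., executors of unequal size, which is exactly the 
heterogeneous-resource semantics questioned in the comment above.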


