[ https://issues.apache.org/jira/browse/SPARK-22683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16280555#comment-16280555 ]

Sean Owen commented on SPARK-22683:
-----------------------------------

When there are more tasks than executor slots, you're saying you'd prefer _not_ 
to launch another executor, which inherently means some tasks wait for a free 
slot. That adds task latency. It also means that more tasks would have to build 
up before an executor launches.

I don't doubt the setting makes sense for you and your use case, but as you 
observe, there's already a knob you can turn to get a similar effect. That knob 
can't be removed, and having two knobs for mostly the same thing just isn't 
going to be worthwhile IMHO. Moreover, I suspect you can construct a case that 
gives the opposite result; there's nothing inherently more efficient about the 
scheduling policy you're proposing.
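
To make the latency tradeoff concrete, here is a rough back-of-the-envelope 
sketch, not Spark code and not the PR's logic: with a capped executor pool 
(whether via an existing limit such as spark.dynamicAllocation.maxExecutors, 
which the comment may be alluding to but does not name, or via the proposed 
tasksPerExecutorSlot), pending tasks run in waves, and each extra wave adds 
roughly one task duration of latency for the last tasks of the stage. All 
names and numbers below are illustrative assumptions.

{code:scala}
// Illustrative arithmetic only: estimates how many scheduling "waves" a stage
// needs, and the rough latency its last tasks see, when executors are capped.
// None of these names are Spark APIs; the numbers are assumptions.
object AllocationTradeoff {
  def waves(pendingTasks: Int, executors: Int, slotsPerExecutor: Int): Int =
    math.ceil(pendingTasks.toDouble / (executors * slotsPerExecutor)).toInt

  def main(args: Array[String]): Unit = {
    val pendingTasks     = 1000
    val slotsPerExecutor = 4     // spark.executor.cores / spark.task.cpus
    val avgTaskSeconds   = 5.0   // assumed average task duration

    // One-task-per-slot policy: enough executors for every task to start at once.
    val fullExecutors = math.ceil(pendingTasks.toDouble / slotsPerExecutor).toInt
    // A capped allocation: fewer executors, so the stage runs in several waves.
    val cappedExecutors = fullExecutors / 6

    for (execs <- Seq(fullExecutors, cappedExecutors)) {
      val w = waves(pendingTasks, execs, slotsPerExecutor)
      println(f"$execs%4d executors -> $w%2d wave(s), ~${w * avgTaskSeconds}%.0f s until the last task finishes")
    }
  }
}
{code}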

> Allow tuning the number of dynamically allocated executors wrt task number
> --------------------------------------------------------------------------
>
>                 Key: SPARK-22683
>                 URL: https://issues.apache.org/jira/browse/SPARK-22683
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.1.0, 2.2.0
>            Reporter: Julien Cuquemelle
>              Labels: pull-request-available
>
> Let's say an executor has spark.executor.cores / spark.task.cpus task slots (taskSlots).
> The current dynamic allocation policy allocates enough executors for each
> task slot to execute a single task, which minimizes latency but wastes
> resources when tasks are small relative to the executor allocation overhead.
> Adding a tasksPerExecutorSlot parameter makes it possible to specify how many
> tasks a single slot should ideally execute, to mitigate the overhead of
> executor allocation.
> PR: https://github.com/apache/spark/pull/19881
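
As a rough sketch of the arithmetic described in the issue above (this is not 
the code from the PR; the function and parameter names are illustrative only): 
the current policy targets one running task per slot, while the proposed 
parameter would divide that target by tasksPerExecutorSlot.

{code:scala}
// Sketch of the target-executor arithmetic described in the issue.
// NOT the PR's implementation; names and numbers are illustrative assumptions.
object TargetExecutors {
  // Current policy: one running task per task slot.
  def currentTarget(pendingTasks: Int, slotsPerExecutor: Int): Int =
    math.ceil(pendingTasks.toDouble / slotsPerExecutor).toInt

  // Proposed policy: each slot is expected to run tasksPerExecutorSlot tasks
  // over the stage, so fewer executors are requested up front.
  def proposedTarget(pendingTasks: Int, slotsPerExecutor: Int, tasksPerExecutorSlot: Int): Int =
    math.ceil(pendingTasks.toDouble / (slotsPerExecutor * tasksPerExecutorSlot)).toInt

  def main(args: Array[String]): Unit = {
    val slots = 4                             // spark.executor.cores / spark.task.cpus
    println(currentTarget(1000, slots))       // 250 executors
    println(proposedTarget(1000, slots, 6))   // 42 executors
  }
}
{code}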



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
