[ 
https://issues.apache.org/jira/browse/SPARK-22765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16289676#comment-16289676
 ] 

Xuefu Zhang commented on SPARK-22765:
-------------------------------------

Hi [~tgraves], Thanks for your input.

In our busy, heavily loaded cluster environment, we have found that any idle 
timeout under 60s is a problem. 30s works for small jobs but starts causing 
problems for bigger ones: the symptom is that newly allocated executors are 
idled out before completing even a single task! I suspect this is caused by a 
busy scheduler. As a result, we have to keep 60s as the minimum.
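For reference, the idle timeout in question is Spark's dynamic allocation setting; a minimal sketch of the configuration with the 60s floor described above (the job class and jar are placeholders, not from this discussion):

```shell
# Dynamic allocation with the 60s idle timeout discussed in this thread.
# The external shuffle service is required for dynamic allocation on YARN.
# com.example.MyJob / my-job.jar are placeholders.
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.executorIdleTimeout=60s \
  --class com.example.MyJob \
  my-job.jar
```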

Having said that, however, I'm not against container reuse. Also, I used the 
word "enhanced" to mean improving on MR scheduling. Reuse is good, but in my 
opinion the speculative aspect of dynamic allocation works against efficiency. 
That is, you set an idle timeout just in case a new task arrives within that 
window. When one doesn't, you waste the executor for a full minute. (Keeping 
the executor warm is good for performance, but bad for efficiency.) Please 
note that this happens a lot at the end of each stage, because no tasks from 
the next stage will be scheduled until the current stage finishes.
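To make the waste concrete, here is a back-of-the-envelope sketch of the core-seconds lost when executors wait out the idle timeout at a stage boundary. The executor count and core count are illustrative assumptions, not measurements from our cluster:

```python
# Rough sketch: core-seconds burned while idle executors wait out the
# full idle timeout at a stage boundary before being released.
# All numbers are illustrative assumptions, not measurements.

IDLE_TIMEOUT_S = 60   # spark.dynamicAllocation.executorIdleTimeout
EXECUTOR_CORES = 4    # cores per executor (assumed)

def wasted_core_seconds(num_idle_executors,
                        idle_timeout_s=IDLE_TIMEOUT_S,
                        cores=EXECUTOR_CORES):
    """Core-seconds paid while idle executors sit out the timeout."""
    return num_idle_executors * idle_timeout_s * cores

# e.g. 100 executors draining at the end of a stage:
print(wasted_core_seconds(100))  # 100 * 60 * 4 = 24000 core-seconds
```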

If we can remove the speculative aspect of the scheduling, efficiency should 
improve significantly, with some compromise on performance. That would be a 
good starting point, and it is the main purpose of my proposal for an enhanced 
MR-style scheduling scheme, which is open to many other possible improvements.
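The trade-off between the two policies can be sketched as a toy model. "Speculative" is the current dynamic-allocation behavior (hold an idle executor for the timeout hoping a task arrives); "mr_style" releases it as soon as no tasks are pending. This is a hypothetical illustration of the proposal, not actual Spark scheduler code:

```python
# Toy model of the two executor-release policies discussed above.
# Hypothetical sketch, not Spark scheduler code.

def idle_cost(policy, gap_s, timeout_s=60):
    """Idle seconds paid for one executor across one gap between tasks.

    policy:    "speculative" (hold for the timeout) or "mr_style"
               (release immediately when nothing is pending).
    gap_s:     seconds until the next task actually arrives.
    timeout_s: the dynamic-allocation idle timeout.
    """
    if policy == "mr_style":
        # Release immediately: no idle cost, but the next task pays
        # executor-startup latency instead (the performance compromise).
        return 0
    # Speculative: wait for a new task, up to the timeout.
    return min(gap_s, timeout_s)

# A 45s gap: speculative keeps the executor warm (45 idle seconds);
# MR-style releases it (0 idle seconds, startup cost deferred).
print(idle_cost("speculative", 45), idle_cost("mr_style", 45))
```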


> Create a new executor allocation scheme based on that of MR
> -----------------------------------------------------------
>
>                 Key: SPARK-22765
>                 URL: https://issues.apache.org/jira/browse/SPARK-22765
>             Project: Spark
>          Issue Type: Improvement
>          Components: Scheduler
>    Affects Versions: 1.6.0
>            Reporter: Xuefu Zhang
>
> Many users migrating their workloads from MR to Spark find a significant 
> resource consumption hike (see SPARK-22683). While this might not be a 
> concern for users who are more performance-centric, for others conscious 
> about cost, such a hike creates a migration obstacle. This situation can get 
> worse as more users move to the cloud.
> Dynamic allocation makes it possible for Spark to be deployed in a 
> multi-tenant environment. With its performance-centric design, its 
> inefficiency has unfortunately also shown, especially when compared with MR. 
> Thus, we believe an MR-style scheduler still has its merit. Based on our 
> research, the inefficiency associated with dynamic allocation comes from 
> many aspects, such as executors idling out, bigger executors, and many 
> stages in a Spark job (rather than only 2 stages in MR).
> Rather than fine-tuning dynamic allocation for efficiency, the proposal here 
> is to add a new, efficiency-centric scheduling scheme based on that of MR. 
> Such an MR-based scheme can be further enhanced and better adapted to 
> Spark's execution model. This alternative is expected to offer a good 
> performance improvement over MR while achieving similar or even better 
> efficiency.
> Inputs are greatly welcome!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
