[ https://issues.apache.org/jira/browse/SPARK-16158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15421803#comment-15421803 ]

Thomas Graves commented on SPARK-16158:
---------------------------------------

Seems like an OK idea to me, but you have to make sure to do it right. For 
instance, I would expect different policies to possibly have different configs. 
We need to think about how those work, how they are named, how they are 
documented, etc. Make sure nothing relies on implicit behavior, and make sure 
the communication with the resource managers (Mesos/YARN) is well defined.
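To make the config question concrete, here is a rough sketch (in Scala) of what 
a pluggable policy interface with per-policy config namespacing might look like. 
Nothing like this exists in Spark today; the trait, its methods, and the 
spark.dynamicAllocation.<policy>.* keys are all hypothetical.

    import org.apache.spark.SparkConf

    // Hypothetical plug-in point; not a real Spark API.
    trait ExecutorAllocationPolicy {
      // Name used to scope this policy's configs, e.g. spark.dynamicAllocation.<name>.*
      def name: String

      // Read this policy's own settings from the application's SparkConf.
      def init(conf: SparkConf): Unit

      // Desired total number of executors given the current scheduler state.
      def targetExecutors(pendingTasks: Int, runningTasks: Int, currentExecutors: Int): Int
    }

    // Example policy: aim for roughly one executor per N outstanding tasks.
    class TasksPerExecutorPolicy extends ExecutorAllocationPolicy {
      override val name = "tasksPerExecutor"
      private var tasksPerExecutor = 2

      override def init(conf: SparkConf): Unit = {
        // Hypothetical, policy-scoped config key.
        tasksPerExecutor = conf.getInt(s"spark.dynamicAllocation.$name.ratio", 2)
      }

      override def targetExecutors(pending: Int, running: Int, current: Int): Int =
        math.max(math.ceil((pending + running).toDouble / tasksPerExecutor).toInt, 1)
    }

Scoping each policy's settings under its own name would keep the keys 
discoverable and documentable, which is the main concern above.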

I do also think the current policy could be improved a lot. Is there a reason 
not to just improve that? Obviously, having the user define the policy for 
different jobs makes things more complex for the user, so it would be nice to 
have at least one generic policy that works reasonably well in many situations 
while still allowing highly optimized ones.

One thing you mention is keeping executors between stages. The min executor 
setting should help with that, but obviously you could have a largely varying 
number of tasks between different stages, which makes it not ideal. I could see 
some other policy having a different config to handle this.
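For reference, the min executor setting is part of the existing dynamic 
allocation configs. A typical setup looks roughly like the following (the 
values are arbitrary examples, not recommendations):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.dynamicAllocation.enabled", "true")
      // External shuffle service, required for dynamic allocation on YARN.
      .set("spark.shuffle.service.enabled", "true")
      // Floor that is kept even between stages.
      .set("spark.dynamicAllocation.minExecutors", "10")
      .set("spark.dynamicAllocation.maxExecutors", "200")
      // How long an idle executor is kept before being released.
      .set("spark.dynamicAllocation.executorIdleTimeout", "60s")

A stage-aware policy could presumably expose its own knob instead of relying on 
a single global floor like minExecutors.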

> Support pluggable dynamic allocation heuristics
> -----------------------------------------------
>
>                 Key: SPARK-16158
>                 URL: https://issues.apache.org/jira/browse/SPARK-16158
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Nezih Yigitbasi
>
> It would be nice if Spark supported plugging in custom dynamic allocation 
> heuristics. This feature would be useful for experimenting with new 
> heuristics and also for plugging in different heuristics per job, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
