[ 
https://issues.apache.org/jira/browse/SPARK-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14127188#comment-14127188
 ] 

Thomas Graves edited comment on SPARK-3174 at 9/9/14 4:51 PM:
--------------------------------------------------------------

Since you mention that graceful decommission is large enough to be a feature of 
its own, the only way we would give executors back is if they are not being used 
and have no data in the cache, correct?

Perhaps this needs an umbrella jira if we are splitting those apart.
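
For what it's worth, the removal condition being discussed here reduces to a 
simple predicate. A minimal sketch (hypothetical names, not Spark's actual 
internal API):

```python
# Hypothetical sketch of the safety check discussed above: an executor is a
# candidate to give back only if it is running no tasks and caches no blocks.
from dataclasses import dataclass

@dataclass
class ExecutorInfo:
    executor_id: str
    active_tasks: int    # tasks currently running on this executor
    cached_blocks: int   # RDD blocks cached on this executor

def safe_to_release(e: ExecutorInfo) -> bool:
    return e.active_tasks == 0 and e.cached_blocks == 0
```

Anything stronger than this (releasing executors that do hold cached data) 
would need the graceful-decommission / block-migration work.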


was (Author: tgraves):
Since you mention that graceful decommission is large enough to be a feature of 
its own, the only way we would give executors back is if they are not being used 
and have no data in the cache, correct?

> Under YARN, add and remove executors based on load
> --------------------------------------------------
>
>                 Key: SPARK-3174
>                 URL: https://issues.apache.org/jira/browse/SPARK-3174
>             Project: Spark
>          Issue Type: Improvement
>          Components: YARN
>    Affects Versions: 1.0.2
>            Reporter: Sandy Ryza
>            Assignee: Andrew Or
>         Attachments: SPARK-3174design.pdf
>
>
> A common complaint with Spark in a multi-tenant environment is that 
> applications have a fixed allocation that doesn't grow and shrink with their 
> resource needs.  We're blocked on YARN-1197 for dynamically changing the 
> resources within executors, but we can still allocate and discard whole 
> executors.
> I think it would be useful to have some heuristics that
> * Request more executors when many pending tasks are building up
> * Request more executors when RDDs can't fit in memory
> * Discard executors when few tasks are running / pending and there's not much 
> in memory
> Bonus points: migrate blocks from executors we're about to discard to 
> executors with free space.
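
The heuristics above could be sketched roughly as follows (names and 
thresholds are illustrative assumptions, not Spark's actual API):

```python
# Hypothetical sketch of the scaling heuristics in the issue description:
# request executors when pending tasks back up, and discard idle executors
# only when they hold nothing in memory.
import math
from dataclasses import dataclass

@dataclass
class ClusterState:
    pending_tasks: int         # tasks waiting for a free slot
    idle_executors: int        # executors running no tasks
    cached_bytes_on_idle: int  # bytes cached across those idle executors

def executor_delta(state: ClusterState, tasks_per_executor: int = 4) -> int:
    """Positive: request that many more executors; negative: release them."""
    if state.pending_tasks > 0:
        # Request enough additional executors to cover the backlog.
        return math.ceil(state.pending_tasks / tasks_per_executor)
    if state.cached_bytes_on_idle == 0:
        # Nothing cached: idle executors can be given back safely.
        return -state.idle_executors
    # Idle but holding cached data: keep them until blocks can be migrated.
    return 0
```

The "bonus points" migration step would relax the last branch by moving blocks 
off an executor before releasing it.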



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
