[ https://issues.apache.org/jira/browse/SPARK-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159250#comment-14159250 ]

Patrick Wendell commented on SPARK-3174:
----------------------------------------

Hey all - I'm going to restructure this JIRA a bit, dividing it into the 
components required to make this happen so that individual design docs can 
be posted for each. This would also be nice to have for other deployment 
modes in addition to YARN, if the design can facilitate that in the longer 
term. [~sandyr], let me know if you see any issues with the reorganization, 
since you are the original reporter.

> Under YARN, add and remove executors based on load
> --------------------------------------------------
>
>                 Key: SPARK-3174
>                 URL: https://issues.apache.org/jira/browse/SPARK-3174
>             Project: Spark
>          Issue Type: Improvement
>          Components: YARN
>    Affects Versions: 1.0.2
>            Reporter: Sandy Ryza
>            Assignee: Andrew Or
>         Attachments: SPARK-3174design.pdf
>
>
> A common complaint with Spark in a multi-tenant environment is that 
> applications have a fixed allocation that doesn't grow or shrink with 
> their resource needs. We're blocked on YARN-1197 for dynamically changing 
> the resources within executors, but we can still allocate and discard 
> whole executors.
> I think it would be useful to have some heuristics (sketched in code 
> below) that
> * Request more executors when many pending tasks are building up
> * Request more executors when RDDs can't fit in memory
> * Discard executors when few tasks are running / pending and there's not much 
> in memory
> Bonus points: migrate blocks from executors we're about to discard to 
> executors with free space.
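
To make the quoted heuristics concrete, here is a rough Scala sketch of 
what such a scaling policy could look like. Everything in it 
(ClusterSnapshot, ScalingDecision, LoadHeuristic, tasksPerExecutor) is 
hypothetical and illustrative rather than an existing Spark API, and it 
assumes the scheduler can report pending and running task counts:

// Hypothetical sketch of the pending-task heuristic; none of these
// names are Spark APIs.
case class ClusterSnapshot(
    pendingTasks: Int,     // tasks queued but not yet assigned a slot
    runningTasks: Int,     // tasks currently executing
    executors: Int,        // executors currently allocated
    tasksPerExecutor: Int) // task slots each executor provides

sealed trait ScalingDecision
case class RequestExecutors(count: Int) extends ScalingDecision
case class ReleaseExecutors(count: Int) extends ScalingDecision
case object NoChange extends ScalingDecision

object LoadHeuristic {
  // Request enough executors to cover the task backlog; release idle
  // ones when little work remains, always keeping at least one.
  def decide(s: ClusterSnapshot): ScalingDecision = {
    val needed = math.max(1,
      math.ceil((s.pendingTasks + s.runningTasks).toDouble /
        s.tasksPerExecutor).toInt)
    if (needed > s.executors) RequestExecutors(needed - s.executors)
    else if (needed < s.executors) ReleaseExecutors(s.executors - needed)
    else NoChange
  }
}

For example, LoadHeuristic.decide(ClusterSnapshot(40, 8, 4, 4)) yields 
RequestExecutors(8), since ceil(48 / 4) = 12 executors are needed but only 
4 are allocated. The memory-based heuristic and the bonus-point block 
migration would additionally need storage information from the executors, 
which this sketch deliberately leaves out.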


