[ https://issues.apache.org/jira/browse/SPARK-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14172387#comment-14172387 ]

Praveen Seluka commented on SPARK-3174:
---------------------------------------

[~andrewor] - Can you also comment on the API exposed from SparkContext to 
add and delete executors? I believe it will be something like:

  sc.addExecutors(count: Int)
  sc.deleteExecutors(List[String])
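
For illustration, here is a rough Scala sketch of the shape such an API 
could take. The method names follow the proposal above; the trait name, 
the parameter name executorIds, and everything else are hypothetical, not 
an agreed Spark API:

  // Hypothetical sketch of the proposed SparkContext additions.
  // Nothing here is committed Spark API; it only illustrates the shape.
  trait ElasticSparkContext {
    /** Request `count` additional executors from the cluster manager. */
    def addExecutors(count: Int): Unit

    /** Release the executors with the given IDs back to the cluster. */
    def deleteExecutors(executorIds: List[String]): Unit
  }

Usage would then be sc.addExecutors(4) to grow the application and 
sc.deleteExecutors(List("executor-7")) to shrink it.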

[~sandyr] [~tgraves] [~andrewor] [~vanzin] - Can you please take a look at the 
design doc I have proposed? There are some advantages to doing it this way, 
which I have described in detail in the doc. Since it does not change Spark 
Core itself, the dynamic scaling algorithm could easily be replaced with 
another pluggable one. I know that Andrew already has a PR based on his 
design doc, but I would surely love to get some feedback.

> Provide elastic scaling within a Spark application
> --------------------------------------------------
>
>                 Key: SPARK-3174
>                 URL: https://issues.apache.org/jira/browse/SPARK-3174
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, YARN
>    Affects Versions: 1.0.2
>            Reporter: Sandy Ryza
>            Assignee: Andrew Or
>         Attachments: SPARK-3174design.pdf, SparkElasticScalingDesignB.pdf, 
> dynamic-scaling-executors-10-6-14.pdf
>
>
> A common complaint with Spark in a multi-tenant environment is that 
> applications have a fixed allocation that doesn't grow and shrink with their 
> resource needs.  We're blocked on YARN-1197 for dynamically changing the 
> resources within executors, but we can still allocate and discard whole 
> executors.
> It would be useful to have some heuristics that
> * Request more executors when many pending tasks are building up
> * Discard executors when they are idle
> See the latest design doc for more information.
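
To make the two quoted heuristics concrete, here is a minimal Scala sketch 
of one scaling step, building on the hypothetical API sketched above. All 
names and the threshold are placeholders, not values from the design docs:

  // Hypothetical polling step: grow on task backlog, shrink on idleness.
  object ElasticScaling {
    def scalingTick(ctx: ElasticSparkContext,
                    pendingTasks: Int,
                    idleExecutorIds: List[String]): Unit = {
      val backlogPerExecutor = 100  // placeholder tuning knob
      if (pendingTasks > backlogPerExecutor) {
        // Heuristic 1: request more executors when pending tasks build up.
        ctx.addExecutors(pendingTasks / backlogPerExecutor)
      }
      if (idleExecutorIds.nonEmpty) {
        // Heuristic 2: discard executors when they are idle.
        ctx.deleteExecutors(idleExecutorIds)
      }
    }
  }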


