[ https://issues.apache.org/jira/browse/SPARK-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170998#comment-14170998 ]

Praveen Seluka edited comment on SPARK-3174 at 10/14/14 2:41 PM:
-----------------------------------------------------------------

Posted a detailed autoscaling design doc => 
https://issues.apache.org/jira/secure/attachment/12674773/SparkElasticScalingDesignB.pdf
 - Created a PR for https://issues.apache.org/jira/browse/SPARK-3822 (hooks to 
add/delete executors from SparkContext); a rough usage sketch is below.
 - Posted the detailed design of the autoscaling criteria, and also have a 
patch ready for the same.
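
To make the hooks concrete, here is a minimal sketch of how an application 
might drive them from SparkContext. The method names below are illustrative 
placeholders; the exact API is defined in the SPARK-3822 PR.

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("elastic-scaling-demo"))

// Ask the cluster manager for two additional executors
// (illustrative method name; see the SPARK-3822 PR for the actual hook).
sc.requestExecutors(2)

// ... run some stages ...

// Give back executors that are no longer needed, identified by executor ID
// (again illustrative; the real hook is defined in the PR).
sc.killExecutors(Seq("1", "2"))
{code}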

Just to give some context, I have been looking at elastic autoscaling for quite 
some time. I mailed the spark-users list a few weeks back about the idea of 
having hooks for adding and deleting executors, and was ready to submit a 
patch. Last week, I saw the initial detailed design doc posted here on the 
SPARK-3174 JIRA. After looking briefly at the proposed design, a few things 
were evident to me:
- The design largely overlaps with what I have built so far, and I already 
have a working patch for it.
- I am sharing this design doc, which lays out the idea in slightly more 
detail. It would be great if we could collaborate on this, as I have the basic 
pieces implemented already.
- It contrasts some implementation-level details with the design already 
proposed (hence calling this design B), so that we can take the best course of 
action.

Looking forward to hearing your views and collaborating on this.



> Provide elastic scaling within a Spark application
> --------------------------------------------------
>
>                 Key: SPARK-3174
>                 URL: https://issues.apache.org/jira/browse/SPARK-3174
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, YARN
>    Affects Versions: 1.0.2
>            Reporter: Sandy Ryza
>            Assignee: Andrew Or
>         Attachments: SPARK-3174design.pdf, SparkElasticScalingDesignB.pdf, 
> dynamic-scaling-executors-10-6-14.pdf
>
>
> A common complaint with Spark in a multi-tenant environment is that 
> applications have a fixed allocation that doesn't grow and shrink with their 
> resource needs.  We're blocked on YARN-1197 for dynamically changing the 
> resources within executors, but we can still allocate and discard whole 
> executors.
> It would be useful to have some heuristics that
> * Request more executors when many pending tasks are building up
> * Discard executors when they are idle
> See the latest design doc for more information.
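
As a rough illustration of those two heuristics, a scaling loop along the 
following lines could drive the executor hooks. All names and thresholds here 
are hypothetical placeholders, not the actual Spark implementation.

{code:scala}
import org.apache.spark.SparkContext

// Hypothetical scaling step illustrating the two heuristics above.
// `pendingTasks`, `idleExecutors`, and both constants are placeholders
// supplied by the caller, not real Spark APIs.
def rescale(sc: SparkContext, pendingTasks: Int, idleExecutors: Seq[String]): Unit = {
  val pendingThreshold = 100 // backlog size that triggers a request for more executors
  val executorsPerStep = 2   // how many executors to add per scaling step

  if (pendingTasks > pendingThreshold) {
    // Many tasks are queued up: ask the cluster manager for more executors.
    sc.requestExecutors(executorsPerStep)
  } else if (idleExecutors.nonEmpty) {
    // No backlog and some executors are idle: release them.
    sc.killExecutors(idleExecutors)
  }
}
{code}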


