[https://issues.apache.org/jira/browse/SPARK-20624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17140094#comment-17140094]

Holden Karau commented on SPARK-20624:
--------------------------------------

[~hyukjin.kwon] the design doc is at 
[https://docs.google.com/document/d/1xVO1b6KAwdUhjEJBolVPl9C6sLj7oOveErwDSYdT-pE/edit?usp=sharing]
 . The previous design doc, from when I first worked on this back in 2017, is at 
[https://docs.google.com/document/d/1bC2sxHoF3XbAvUHQebpylAktH6B3PSTVAGIOCYj0Mbg/edit]
 . I brought this feature to the dev@ list on February 4th. The design docs 
have previously been shared with committers who expressed interest and were 
linked in the PR. The original design predates the SPIP process, which is why it 
doesn't follow that pattern, but I believe the relevant design discussions have 
occurred (you can look at the folks involved in the document and let me know if 
you disagree). Of course, if you're interested in collaborating and helping out 
with the decommissioning work, I'd love to have more people to collaborate with.

> Add better handling for node shutdown
> -------------------------------------
>
>                 Key: SPARK-20624
>                 URL: https://issues.apache.org/jira/browse/SPARK-20624
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.0.0
>            Reporter: Holden Karau
>            Priority: Major
>
> While we've done some good work with better handling when Spark is choosing 
> to decommission nodes (SPARK-7955), it might make sense in environments where 
> we get preempted without our own choice (e.g. YARN over-commit, EC2 spot 
> instances, GCE Preemptible instances, etc.) to do something for the data on 
> the node (or at least not schedule any new tasks).
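As an illustration of the kind of signal such handling could react to: EC2 publishes an interruption notice for spot instances at the instance-metadata path `/latest/meta-data/spot/instance-action` shortly before reclaiming the node. The sketch below is a hypothetical, standalone parser for that notice payload (it does not talk to Spark or to the metadata endpoint itself); the function name and the sample payload are assumptions for illustration only.

```python
import json
from datetime import datetime, timezone

def parse_spot_notice(payload: str):
    """Parse an EC2 spot instance-action notice and return the scheduled
    termination time as an aware datetime, or None if the notice is not a
    terminate/stop action (e.g. a hibernate action)."""
    notice = json.loads(payload)
    if notice.get("action") in ("terminate", "stop"):
        # Times in the notice are UTC, e.g. "2020-06-19T12:00:00Z"
        return datetime.strptime(
            notice["time"], "%Y-%m-%dT%H:%M:%SZ"
        ).replace(tzinfo=timezone.utc)
    return None

# Hypothetical sample payload of the shape the metadata endpoint returns.
sample = '{"action": "terminate", "time": "2020-06-19T12:00:00Z"}'
deadline = parse_spot_notice(sample)
print(deadline.isoformat())  # 2020-06-19T12:00:00+00:00
```

A scheduler-side poller could check this endpoint periodically and, once a deadline is returned, stop assigning new tasks to the executor and start migrating its data, which is the behavior the issue asks for.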



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
