[ https://issues.apache.org/jira/browse/SPARK-19753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030270#comment-16030270 ]

Apache Spark commented on SPARK-19753:
--------------------------------------

User 'sitalkedia' has created a pull request for this issue:
https://github.com/apache/spark/pull/18150

> Remove all shuffle files on a host in case of slave lost or fetch failure
> -------------------------------------------------------------------------
>
>                 Key: SPARK-19753
>                 URL: https://issues.apache.org/jira/browse/SPARK-19753
>             Project: Spark
>          Issue Type: Bug
>          Components: Scheduler
>    Affects Versions: 2.0.1
>            Reporter: Sital Kedia
>
> Currently, when we detect a fetch failure, we remove only the shuffle files 
> produced by the failing executor, even though the host itself may be down, 
> leaving all shuffle files on it inaccessible. When we are running multiple 
> executors on a host, a host going down therefore triggers multiple fetch 
> failures and multiple retries of the stage, which is very inefficient. If we 
> instead remove all the shuffle files on that host on the first fetch failure, 
> we can rerun all the tasks from that host in a single stage retry. 
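The difference between the two cleanup policies can be sketched as follows. This is a minimal illustration, not Spark's actual scheduler code: the `OutputTracker` class, `MapStatus` fields, and method names here are hypothetical stand-ins for the map-output bookkeeping the issue describes.

```scala
// Hypothetical sketch: map outputs keyed by (host, executorId), contrasting
// executor-level cleanup (current behavior) with host-level cleanup (proposed).
case class MapStatus(host: String, execId: String, mapId: Int)

class OutputTracker {
  private var outputs = List.empty[MapStatus]

  def register(s: MapStatus): Unit = outputs = s :: outputs

  // Current behavior: a fetch failure only unregisters the outputs of the
  // one executor that served the failed fetch. Outputs of other executors
  // on the same dead host stay registered and fail later, one retry each.
  def removeOutputsOnExecutor(host: String, execId: String): Unit =
    outputs = outputs.filterNot(s => s.host == host && s.execId == execId)

  // Proposed behavior: the first fetch failure unregisters every output on
  // the host, so a single stage retry regenerates all of them at once.
  def removeOutputsOnHost(host: String): Unit =
    outputs = outputs.filterNot(_.host == host)

  def registeredCount: Int = outputs.size
}
```

With two executors on a lost host, executor-level cleanup leaves the second executor's outputs registered (forcing a second fetch failure), while host-level cleanup clears both in one pass.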



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
