[ 
https://issues.apache.org/jira/browse/SPARK-2666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14726339#comment-14726339
 ] 

Kay Ousterhout commented on SPARK-2666:
---------------------------------------

[~irashid] totally agree, and IIRC there's a TODO somewhere in the 
scheduler code suggesting we kill all remaining running tasks once a 
stage becomes a zombie.  
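The behavior being discussed can be sketched as follows. This is a hypothetical, heavily simplified model (the names `SimpleTaskSet`, `launch`, and `handleFetchFailure` are illustrative, not Spark's actual `TaskSetManager` API): once a fetch failure marks the task set as a zombie, the still-running tasks are handed back so the scheduler can kill them rather than let them run to completion.

```scala
// Illustrative sketch only -- not Spark's real TaskSetManager.
// Models the idea: on FetchFailed, mark the stage's task set as a
// zombie and collect its still-running tasks for cancellation.
class SimpleTaskSet(val stageId: Int) {
  var isZombie = false
  private var running = Set.empty[Long]

  // Record that a task from this set has been launched.
  def launch(taskId: Long): Unit = running += taskId

  // On a fetch failure: become a zombie and return the task IDs
  // that should be killed instead of left running.
  def handleFetchFailure(): Set[Long] = {
    isZombie = true
    val toKill = running
    running = Set.empty
    toKill
  }

  def runningTasks: Set[Long] = running
}
```

Under this model, the TODO Kay mentions amounts to acting on the returned set of IDs (sending kill messages to the executors) at the moment the task set turns zombie, instead of only ignoring their eventual results.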

> when task is FetchFailed cancel running tasks of failedStage
> ------------------------------------------------------------
>
>                 Key: SPARK-2666
>                 URL: https://issues.apache.org/jira/browse/SPARK-2666
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: Lianhui Wang
>
> in DAGScheduler's handleTaskCompletion, when the reason for a failed task is 
> FetchFailed, cancel the running tasks of failedStage before adding failedStage 
> to the failedStages queue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
