[ https://issues.apache.org/jira/browse/SPARK-10370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14725043#comment-14725043 ]

Apache Spark commented on SPARK-10370:
--------------------------------------

User 'suyanNone' has created a pull request for this issue:
https://github.com/apache/spark/pull/8550

> After a stage's map outputs are registered, all running attempts should be 
> marked as zombies
> -------------------------------------------------------------------------------------------
>
>                 Key: SPARK-10370
>                 URL: https://issues.apache.org/jira/browse/SPARK-10370
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.5.0
>            Reporter: Imran Rashid
>
> Follow up to SPARK-5259.  During stage retry, it's possible for a stage to 
> "complete" by registering all of its map output and starting the downstream 
> stages before the latest task set has completed.  This results in the 
> earlier task set continuing to submit tasks that are unnecessary and that 
> increase the chance of hitting SPARK-8029.
> Spark should mark all task sets for a stage as zombies as soon as its map 
> output is registered.  Note that this involves coordination between the 
> various scheduler components ({{DAGScheduler}} and {{TaskSetManager}} at 
> least), which isn't easily testable with the current setup.
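
A minimal, self-contained sketch of the coordination described above. The
{{TaskSetManager}}, {{TaskScheduler}} and {{markAllAttemptsAsZombie}} names
below are simplified stand-ins, not the actual Spark internals: the point is
only that once a stage's map outputs are all registered, every running
attempt of that stage is flagged as a zombie so it stops launching new tasks.

{code:scala}
// Simplified, standalone model of the proposed fix -- not real Spark code.
import scala.collection.mutable

// One running attempt of a stage. A "zombie" attempt stops launching new
// tasks but lets already-running tasks finish.
class TaskSetManager(val stageId: Int, val attempt: Int) {
  @volatile var isZombie: Boolean = false

  def resourceOffer(): Option[String] =
    if (isZombie) None                       // zombies never submit new tasks
    else Some(s"task for stage $stageId, attempt $attempt")
}

// Tracks every attempt (possibly several, during stage retries) per stage.
class TaskScheduler {
  private val attemptsByStage =
    mutable.Map.empty[Int, mutable.Buffer[TaskSetManager]]

  def submitTaskSet(tsm: TaskSetManager): Unit =
    attemptsByStage.getOrElseUpdate(tsm.stageId, mutable.Buffer.empty) += tsm

  // The coordination the issue asks for: once all map outputs for a stage
  // are registered, every running attempt of that stage becomes a zombie.
  def markAllAttemptsAsZombie(stageId: Int): Unit =
    attemptsByStage.getOrElse(stageId, mutable.Buffer.empty[TaskSetManager])
      .foreach(_.isZombie = true)
}

object ZombieSketch {
  def main(args: Array[String]): Unit = {
    val scheduler    = new TaskScheduler
    val firstAttempt = new TaskSetManager(stageId = 1, attempt = 0)
    val retryAttempt = new TaskSetManager(stageId = 1, attempt = 1)
    scheduler.submitTaskSet(firstAttempt)
    scheduler.submitTaskSet(retryAttempt)

    // Suppose the stage "completes" because all of its map outputs were
    // registered (e.g. via late successes from attempt 0) before attempt 1
    // finished its own task set.
    scheduler.markAllAttemptsAsZombie(stageId = 1)

    // Neither attempt submits any further tasks.
    println(firstAttempt.resourceOffer())  // None
    println(retryAttempt.resourceOffer())  // None
  }
}
{code}

In Spark itself a {{TaskSetManager}} already carries an isZombie flag; the
proposal amounts to having the {{DAGScheduler}} side flip that flag for every
attempt of the stage as soon as the stage's map outputs are registered,
rather than only when an individual attempt finishes.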



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
