[ https://issues.apache.org/jira/browse/SPARK-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or updated SPARK-8029:
-----------------------------
    Description: 
When stages get retried, a task may have more than one attempt running at the 
same time, on the same executor.  Currently this causes problems for 
ShuffleMapTasks, since all attempts try to write to the same output files.

This is resolved through 

  was:When stages get retried, a task may have more than one attempt running at 
the same time, on the same executor.  Currently this causes problems for 
ShuffleMapTasks, since all attempts try to write to the same output files.


> ShuffleMapTasks must be robust to concurrent attempts on the same executor
> --------------------------------------------------------------------------
>
>                 Key: SPARK-8029
>                 URL: https://issues.apache.org/jira/browse/SPARK-8029
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.4.0
>            Reporter: Imran Rashid
>            Assignee: Davies Liu
>            Priority: Critical
>             Fix For: 1.5.3, 1.6.0
>
>         Attachments: 
> AlternativesforMakingShuffleMapTasksRobusttoMultipleAttempts.pdf
>
>
> When stages get retried, a task may have more than one attempt running at the 
> same time, on the same executor.  Currently this causes problems for 
> ShuffleMapTasks, since all attempts try to write to the same output files.
> This is resolved through 

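The collision described above can be illustrated with a minimal sketch of the usual fix for this class of bug: each attempt writes to an attempt-private temporary file and then atomically renames it into place, so concurrent attempts never corrupt each other's in-progress output. This is a hedged illustration only; the names (`ShuffleWriteSketch`, `writeShuffleOutput`) are hypothetical and this is not Spark's actual shuffle-write code path.

```scala
import java.nio.file.{Files, Paths, StandardCopyOption}

// Hypothetical sketch, NOT Spark's implementation: two attempts of the same
// map task may run concurrently on one executor. Writing to an
// attempt-specific temp file and atomically renaming it to the final path
// means the last rename to complete publishes a whole, uncorrupted file.
object ShuffleWriteSketch {
  def writeShuffleOutput(dir: String, mapId: Int, attemptId: Int,
                         data: Array[Byte]): String = {
    // Attempt-private temp file: concurrent attempts never share this path.
    val tmp  = Paths.get(dir, s"shuffle_${mapId}_attempt_${attemptId}.tmp")
    // Final path shared by all attempts of this map task.
    val dest = Paths.get(dir, s"shuffle_${mapId}.data")
    Files.write(tmp, data)
    // Atomic publish: readers see either the old complete file or the new one.
    Files.move(tmp, dest,
               StandardCopyOption.ATOMIC_MOVE,
               StandardCopyOption.REPLACE_EXISTING)
    dest.toString
  }
}
```

With this pattern, a retried attempt that finishes after the original simply replaces the file with an identical, complete copy instead of interleaving partial writes into it.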


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
