[ https://issues.apache.org/jira/browse/SPARK-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968437#comment-14968437 ]
Apache Spark commented on SPARK-8029:
-------------------------------------

User 'squito' has created a pull request for this issue:
https://github.com/apache/spark/pull/9214

> ShuffleMapTasks must be robust to concurrent attempts on the same executor
> --------------------------------------------------------------------------
>
>                 Key: SPARK-8029
>                 URL: https://issues.apache.org/jira/browse/SPARK-8029
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.4.0
>            Reporter: Imran Rashid
>            Assignee: Imran Rashid
>            Priority: Critical
>         Attachments: AlternativesforMakingShuffleMapTasksRobusttoMultipleAttempts.pdf
>
> When stages get retried, a task may have more than one attempt running at the
> same time, on the same executor. Currently this causes problems for
> ShuffleMapTasks, since all attempts try to write to the same output files.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
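The issue description above can be illustrated with a minimal sketch. This is not Spark's actual fix (the linked PR and the attached design doc describe the real alternatives considered); it is a hypothetical `writeShuffleFile` helper showing one common pattern for tolerating concurrent attempts: each attempt writes to an attempt-unique temporary file and then renames it to the final shuffle output path, so attempts never write the shared file directly. All names and the file-naming scheme here are illustrative assumptions.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ShuffleOutputSketch {

    // Hypothetical helper (not Spark's API): write shuffle output for one
    // map task attempt. Each attempt writes to its own temp file, then
    // renames it onto the final path. Rename replaces the existing file,
    // so two concurrent attempts never interleave writes in the same file;
    // whichever attempt renames last wins, and the output stays consistent.
    static void writeShuffleFile(Path dir, int mapId, int attemptId,
                                 byte[] data) throws IOException {
        Path finalPath = dir.resolve("shuffle_0_" + mapId + "_0.data");
        Path tmpPath = dir.resolve(
            "shuffle_0_" + mapId + "_0.data." + attemptId + ".tmp");
        Files.write(tmpPath, data);                       // attempt-private write
        Files.move(tmpPath, finalPath,
                   StandardCopyOption.REPLACE_EXISTING);  // publish atomically-ish
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("shuffle-demo");
        // Two attempts of the same ShuffleMapTask running on one executor:
        writeShuffleFile(dir, 1, 0, "attempt-0".getBytes());
        writeShuffleFile(dir, 1, 1, "attempt-1".getBytes());
        // The final file holds one attempt's complete output, not a mix.
        System.out.println(new String(
            Files.readAllBytes(dir.resolve("shuffle_0_1_0.data"))));
    }
}
```

If both attempts wrote `finalPath` directly, as the description says current ShuffleMapTasks do, their bytes could interleave and corrupt the shuffle file; the temp-file-then-rename pattern is one way to make the last complete write win instead.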