Github user devaraj-kavali commented on a diff in the pull request: https://github.com/apache/spark/pull/11916#discussion_r57349258

--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---

    @@ -620,6 +620,14 @@ private[spark] class TaskSetManager(
           // Note: "result.value()" only deserializes the value when it's called at the first time, so
           // here "result.value()" just returns the value and won't block other threads.
           sched.dagScheduler.taskEnded(tasks(index), Success, result.value(), result.accumUpdates, info)
    +      // Kill other task attempts if any as the one attempt succeeded
    +      for (attemptInfo <- taskAttempts(index) if attemptInfo.attemptNumber != info.attemptNumber

--- End diff ---

I can see that during the map phase (which doesn't write to Hadoop) there is a chance of two attempts succeeding, as you explained. But for tasks in the final stage (which do write to Hadoop), if two attempts try to rename the taskAttemptPath to the committedTaskPath during commitTask(), only one rename will succeed and the other attempt will fail with a rename failure.
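The rename race described above can be sketched outside of Spark. This is a minimal, self-contained illustration of "first rename wins" semantics, not Spark's actual commit protocol: the object name `CommitRaceSketch`, the method `raceToCommit`, and the path names are hypothetical, and `java.nio.file.Files.move` stands in for the Hadoop output committer's rename. By default `Files.move` refuses to overwrite an existing target, so when two attempts race to the committed path, exactly one succeeds and the other fails.

```scala
import java.nio.file.{Files, FileAlreadyExistsException}

object CommitRaceSketch {
  // Simulate two task attempts committing their output by renaming an
  // attempt-specific path to a shared committed path. Returns, per attempt,
  // whether its rename (commit) succeeded.
  def raceToCommit(): Seq[Boolean] = {
    val dir = Files.createTempDirectory("commit-race")
    val committed = dir.resolve("committedTaskPath") // shared commit target

    (0 to 1).map { attempt =>
      // Each attempt first writes to its own attempt path.
      val attemptPath = dir.resolve(s"taskAttemptPath-$attempt")
      Files.write(attemptPath, s"output of attempt $attempt".getBytes)
      try {
        // Rename into the committed location. Without REPLACE_EXISTING,
        // this throws if another attempt already committed.
        Files.move(attemptPath, committed)
        true
      } catch {
        case _: FileAlreadyExistsException => false
      }
    }
  }

  def main(args: Array[String]): Unit =
    // Exactly one attempt commits; the other observes a rename failure.
    println(raceToCommit())
}
```

Run sequentially as here, the first attempt always wins; under a true concurrent race the winner is arbitrary, but the invariant that exactly one rename succeeds still holds on filesystems with atomic rename.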