viirya commented on code in PR #36564:
URL: https://github.com/apache/spark/pull/36564#discussion_r1598660630
##########
core/src/main/scala/org/apache/spark/scheduler/OutputCommitCoordinator.scala:
##########

```diff
@@ -155,9 +158,9 @@ private[spark] class OutputCommitCoordinator(conf: SparkConf, isDriver: Boolean)
     val taskId = TaskIdentifier(stageAttempt, attemptNumber)
     stageState.failures.getOrElseUpdate(partition, mutable.Set()) += taskId
     if (stageState.authorizedCommitters(partition) == taskId) {
-      logDebug(s"Authorized committer (attemptNumber=$attemptNumber, stage=$stage, " +
-        s"partition=$partition) failed; clearing lock")
-      stageState.authorizedCommitters(partition) = null
+      sc.foreach(_.dagScheduler.stageFailed(stage, s"Authorized committer " +
+        s"(attemptNumber=$attemptNumber, stage=$stage, partition=$partition) failed; " +
+        s"but task commit success, data duplication may happen."))
     }
```

Review Comment:

@cloud-fan I think the reason string here is not quite clear or correct. `stageState.authorizedCommitters` only records that a commit was *allowed*; it does not mean the commit actually succeeded. So, as you said, the driver never knows whether the task commit succeeded or not. Maybe we should update the message to reduce confusion.
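For context, the ambiguity being discussed can be sketched with a minimal, self-contained model of the coordinator's per-partition state (hypothetical class and method names, not the actual Spark implementation): `canCommit` hands out the commit lock, and `taskFailed` can only tell the driver that an *authorized* attempt failed, never whether its commit reached the output.

```scala
import scala.collection.mutable

// Hypothetical simplified model of OutputCommitCoordinator's stage state.
case class TaskIdentifier(stageAttempt: Int, attemptNumber: Int)

class StageStateSketch(numPartitions: Int) {
  // null means no attempt currently holds the commit lock for the partition.
  val authorizedCommitters: Array[TaskIdentifier] =
    Array.fill(numPartitions)(null)
  val failures = mutable.Map[Int, mutable.Set[TaskIdentifier]]()

  // Grant the commit lock only if no other attempt already holds it.
  def canCommit(partition: Int, id: TaskIdentifier): Boolean = {
    if (authorizedCommitters(partition) == null) {
      authorizedCommitters(partition) = id
      true
    } else {
      authorizedCommitters(partition) == id
    }
  }

  // On task failure the driver only learns that the attempt failed AFTER
  // being authorized; whether the commit itself completed is unknowable
  // here, which is why "task commit success" in the message is misleading.
  def taskFailed(partition: Int, id: TaskIdentifier): Boolean = {
    failures.getOrElseUpdate(partition, mutable.Set()) += id
    authorizedCommitters(partition) == id // true => ambiguous commit outcome
  }
}
```

Usage: after `canCommit` returns true for attempt 0, a competing attempt is denied; if the authorized attempt then fails, `taskFailed` returns true, signalling the ambiguous case the PR now escalates to a stage failure.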