venkata91 commented on a change in pull request #33896: URL: https://github.com/apache/spark/pull/33896#discussion_r742415317
##########
File path: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
##########

@@ -1716,7 +1739,31 @@ private[spark] class DAGScheduler(
       if (runningStages.contains(shuffleStage) && shuffleStage.pendingPartitions.isEmpty) {
         if (!shuffleStage.shuffleDep.shuffleMergeFinalized &&
           shuffleStage.shuffleDep.getMergerLocs.nonEmpty) {
-          scheduleShuffleMergeFinalize(shuffleStage)
+          // Check if a finalize task has already been scheduled. This is to prevent the
+          // following scenario: Stage A attempt 0 fails and gets retried. Stage A attempt 1
+          // succeeds, triggering the scheduling of shuffle merge finalization. However,
+          // tasks from Stage A attempt 0 might still be running and sending task completion
+          // events to the DAGScheduler. This check prevents multiple attempts to schedule
+          // merge finalization from being triggered as a result.
+          if (shuffleStage.shuffleDep.getFinalizeTask.isEmpty) {
+            // If the total shuffle size is smaller than the threshold, attempt to immediately
+            // schedule shuffle merge finalization and process map stage completion.
+            val totalSize = Try(mapOutputTracker
+              .getStatistics(shuffleStage.shuffleDep).bytesByPartitionId.sum).getOrElse(0L)

Review comment:
   Other usages of `getStatistics` are within a `mapStage.isAvailable` check, so they are guaranteed not to throw an exception. Unfortunately, that is not the case here.

   @mridulm Do you mean we can avoid finalizing the shuffle for a non-deterministic stage in this case?

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
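The `Try(...).getOrElse(0L)` guard discussed above can be illustrated in isolation. This is a minimal sketch, not Spark's actual API: `Tracker` and `totalShuffleSize` are hypothetical stand-ins for `MapOutputTracker.getStatistics`, which can throw when the stage's map outputs are not all registered (the case the review comment points out).

```scala
import scala.util.Try

object TotalSizeSketch {
  // Hypothetical tracker: throws for an unregistered shuffle, mirroring how
  // getStatistics is only safe inside a mapStage.isAvailable check.
  class Tracker(registered: Map[Int, Array[Long]]) {
    def bytesByPartitionId(shuffleId: Int): Array[Long] =
      registered.getOrElse(shuffleId,
        throw new IllegalStateException(s"shuffle $shuffleId not registered"))
  }

  // The guard from the diff: sum the per-partition sizes if available,
  // otherwise fall back to 0L instead of propagating the exception.
  def totalShuffleSize(tracker: Tracker, shuffleId: Int): Long =
    Try(tracker.bytesByPartitionId(shuffleId).sum).getOrElse(0L)

  def main(args: Array[String]): Unit = {
    val tracker = new Tracker(Map(1 -> Array(10L, 20L, 30L)))
    assert(totalShuffleSize(tracker, 1) == 60L) // registered: real total
    assert(totalShuffleSize(tracker, 2) == 0L)  // unregistered: falls back to 0L
    println("ok")
  }
}
```

Falling back to `0L` means an unavailable stage is treated as "small", which is why the surrounding diff only uses the value for the immediate-finalization heuristic rather than for correctness.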