cloud-fan commented on code in PR #43954:
URL: https://github.com/apache/spark/pull/43954#discussion_r1402889739


##########
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala:
##########
@@ -2179,12 +2164,12 @@ private[spark] class DAGScheduler(
           val message = s"Stage failed because barrier task $task finished 
unsuccessfully.\n" +
             failure.toErrorString
           try {
-            // killAllTaskAttempts will fail if a SchedulerBackend does not 
implement killTask.
+            // cancelTasks will fail if a SchedulerBackend does not implement 
killTask.
             val reason = s"Task $task from barrier stage $failedStage 
(${failedStage.name}) " +
               "failed."
             val job = jobIdToActiveJob.get(failedStage.firstJobId)
             val shouldInterrupt = job.exists(j => shouldInterruptTaskThread(j))
-            taskScheduler.killAllTaskAttempts(stageId, shouldInterrupt, reason)

Review Comment:
   I agree. In the worst case, we can manually trigger "abort stage" in the existing callers of `cancelTasks` to keep the old behavior.
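
   For illustration only, a minimal, self-contained sketch of the "worst case" described above: an existing caller of `cancelTasks` explicitly aborts the stage afterwards to preserve the old `killAllTaskAttempts` behavior. The types, the `cancelTasks` signature with a `reason` argument, and the `abortStage` helper below are stand-ins, not the real `DAGScheduler`/`TaskScheduler` API.

   ```scala
   object CancelTasksSketch {
     // Stand-in for org.apache.spark.scheduler.TaskScheduler; the reason
     // argument mirrors the removed killAllTaskAttempts call above, but the
     // final signature in this PR may differ.
     trait TaskScheduler {
       def cancelTasks(stageId: Int, interruptThread: Boolean, reason: String): Unit
     }

     final case class Stage(id: Int, name: String)

     // Stand-in for DAGScheduler.abortStage: fails the stage and its jobs.
     def abortStage(stage: Stage, reason: String): Unit =
       println(s"Aborting stage ${stage.id} (${stage.name}): $reason")

     def failBarrierStage(
         taskScheduler: TaskScheduler,
         failedStage: Stage,
         shouldInterrupt: Boolean,
         reason: String): Unit = {
       try {
         // cancelTasks may throw if a SchedulerBackend does not implement killTask.
         taskScheduler.cancelTasks(failedStage.id, shouldInterrupt, reason)
       } catch {
         case _: UnsupportedOperationException =>
           // Swallow and fall through: the stage is aborted either way.
           ()
       }
       // Manually trigger "abort stage" so the caller keeps the old behavior.
       abortStage(failedStage, reason)
     }
   }
   ```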



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

