Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22771#discussion_r230730694
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala ---
    @@ -1364,6 +1385,21 @@ private[spark] class DAGScheduler(
                       if (job.numFinished == job.numPartitions) {
                         markStageAsFinished(resultStage)
                         cleanupStateForJobAndIndependentStages(job)
    +                    try {
    +                      // killAllTaskAttempts will fail if a SchedulerBackend does not implement
    +                      // killTask.
    +                      logInfo(s"Job ${job.jobId} is finished. Cancelling potential speculative " +
    +                        "or zombie tasks for this job")
    +                      // ResultStage is only used by this job. It's safe to kill speculative or
    +                      // zombie tasks in this stage.
    +                      taskScheduler.killAllTaskAttempts(
    --- End diff --
    
    cc @jiangxb1987 IIRC we have some similar code in barrier execution. Shall we create a util method to safely kill tasks?
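    
    A minimal sketch of what such a util method might look like (the name `killAllTaskAttemptsSafely` and its placement inside DAGScheduler are hypothetical, not from the PR); it assumes `killAllTaskAttempts` throws `UnsupportedOperationException` when the SchedulerBackend does not implement `killTask`, as the diff's comment describes:
    
    ```scala
    // Hypothetical helper, sketched against the Spark 2.4-era TaskScheduler API.
    // Assumes it lives in DAGScheduler, which has `taskScheduler` in scope and
    // mixes in the Logging trait (for logWarning).
    private def killAllTaskAttemptsSafely(stageId: Int, reason: String): Unit = {
      try {
        taskScheduler.killAllTaskAttempts(stageId, interruptThread = false, reason)
      } catch {
        case e: UnsupportedOperationException =>
          // The backend cannot kill individual tasks; log instead of failing the job.
          logWarning(s"Could not kill all task attempts for stage $stageId", e)
      }
    }
    ```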


---
