Github user markhamstra commented on a diff in the pull request:

    https://github.com/apache/spark/pull/686#discussion_r12409225
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala ---
    @@ -1055,10 +1055,16 @@ class DAGScheduler(
               // This is the only job that uses this stage, so fail the stage if it is running.
               val stage = stageIdToStage(stageId)
               if (runningStages.contains(stage)) {
    -            taskScheduler.cancelTasks(stageId, shouldInterruptThread)
    -            val stageInfo = stageToInfos(stage)
    -            stageInfo.stageFailed(failureReason)
    -            listenerBus.post(SparkListenerStageCompleted(stageToInfos(stage)))
    +            try { // cancelTasks will fail if a SchedulerBackend does not implement killTask
    +              taskScheduler.cancelTasks(stageId, shouldInterruptThread)
    +            } catch {
    +              case e: UnsupportedOperationException =>
    +                logInfo(s"Could not cancel tasks for stage $stageId", e)
    +            } finally {
    +              val stageInfo = stageToInfos(stage)
    +              stageInfo.stageFailed(failureReason)
    --- End diff ---
    
    Good question. I'm mostly just trying to keep the DAGScheduler in a consistent state even when the backend doesn't support killing tasks, and I'll admit to working quickly to get this significant bug fix into 1.0.0 without having fully thought this part through. If you don't see any use for the finally block except when taskScheduler.cancelTasks succeeds, then we can drop the finally.
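
    For what it's worth, here is a minimal, runnable sketch of the trade-off (the scheduler pieces are hypothetical stand-ins, not the real DAGScheduler/TaskScheduler APIs): with the finally block, the stage-failure bookkeeping runs even when cancelTasks throws UnsupportedOperationException; moving it into the try body and dropping the finally would run it only when the backend actually supports killTask.

        // Minimal sketch of the control flow under discussion; these are
        // hypothetical stand-ins, not the real scheduler APIs.
        object CancelTasksSketch {

          // Stand-in for taskScheduler.cancelTasks on a backend without killTask support.
          def cancelTasks(stageId: Int, interruptThread: Boolean): Unit =
            throw new UnsupportedOperationException(s"backend cannot kill tasks for stage $stageId")

          // Stand-in for the stageFailed / listenerBus.post bookkeeping.
          def markStageFailed(stageId: Int, failureReason: String): Unit =
            println(s"stage $stageId marked failed: $failureReason")

          def main(args: Array[String]): Unit = {
            val stageId = 0
            try { // cancelTasks will fail if the backend does not implement killTask
              cancelTasks(stageId, interruptThread = false)
            } catch {
              case e: UnsupportedOperationException =>
                println(s"Could not cancel tasks for stage $stageId: ${e.getMessage}")
            } finally {
              // With the finally, this runs whether or not cancelTasks succeeded,
              // keeping the scheduler's view of the stage consistent. Putting it
              // inside the try instead would run it only on success.
              markStageFailed(stageId, "job cancelled")
            }
          }
        }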

