Github user markhamstra commented on a diff in the pull request:

    https://github.com/apache/spark/pull/686#discussion_r14211667
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala ---
    @@ -1062,10 +1062,15 @@ class DAGScheduler(
               // This is the only job that uses this stage, so fail the stage if it is running.
               val stage = stageIdToStage(stageId)
               if (runningStages.contains(stage)) {
    -            taskScheduler.cancelTasks(stageId, shouldInterruptThread)
    -            val stageInfo = stageToInfos(stage)
    -            stageInfo.stageFailed(failureReason)
    -            listenerBus.post(SparkListenerStageCompleted(stageToInfos(stage)))
    +            try { // cancelTasks will fail if a SchedulerBackend does not implement killTask
    +              taskScheduler.cancelTasks(stageId, shouldInterruptThread)
    +              val stageInfo = stageToInfos(stage)
    +              stageInfo.stageFailed(failureReason)
    +              listenerBus.post(SparkListenerStageCompleted(stageToInfos(stage)))
    +            } catch {
    +              case e: UnsupportedOperationException =>
    +                logInfo(s"Could not cancel tasks for stage $stageId", e)
    +            }
    --- End diff --
    
    Ok, either tonight or tomorrow I can update this PR to reflect that strategy, or you can go ahead and make the change @pwendell. Outside the immediate scope of this PR, what prevents Mesos from being able to kill tasks, and when do we expect that to change?
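    
    For readers following along, here is a minimal, self-contained Scala sketch of the pattern in the diff above. The SketchBackend and CancelSketch names are hypothetical and are not the actual Spark classes; the point is only that a backend which cannot kill tasks signals this with UnsupportedOperationException, and the caller catches that one exception and logs instead of letting the whole stage-cancellation path fail.
    
    // Sketch only: illustrates the try/catch guard around task cancellation.
    trait SketchBackend {
      // Backends that cannot kill tasks leave this default in place.
      def killTask(taskId: Long, interruptThread: Boolean): Unit =
        throw new UnsupportedOperationException("Cannot kill tasks.")
    }
    
    object CancelSketch {
      def cancelTasks(backend: SketchBackend, taskIds: Seq[Long], interrupt: Boolean): Unit =
        taskIds.foreach(id => backend.killTask(id, interrupt))
    
      def main(args: Array[String]): Unit = {
        val backend = new SketchBackend {}  // uses the default, non-killing implementation
        try {
          cancelTasks(backend, Seq(1L, 2L), interrupt = false)
        } catch {
          case e: UnsupportedOperationException =>
            // Mirrors the diff: log and continue rather than failing the cancel path.
            println(s"Could not cancel tasks: ${e.getMessage}")
        }
      }
    }
    
    Catching only UnsupportedOperationException keeps genuine cancellation failures visible while letting backends without killTask support degrade gracefully.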

