[ https://issues.apache.org/jira/browse/SPARK-1202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13954500#comment-13954500 ]

ASF GitHub Bot commented on SPARK-1202:
---------------------------------------

Github user sundeepn commented on a diff in the pull request:

    https://github.com/apache/spark/pull/246#discussion_r11095628
  
    --- Diff: core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
    @@ -116,6 +118,16 @@ private[ui] class JobProgressListener(conf: SparkConf) extends SparkListener {
     
         val stages = poolToActiveStages.getOrElseUpdate(poolName, new HashMap[Int, StageInfo]())
         stages(stage.stageId) = stage
    +    
    +    // Extract Job ID and double check if we have the details
    +    val jobId = Option(stageSubmitted.properties).flatMap {
    +      p => Option(p.getProperty("spark.job.id"))
    +    }.getOrElse("-1").toInt
    --- End diff --
    
    Well, this is only to ensure we can handle scenarios where onJobStart does not arrive before stageSubmitted. I am not familiar enough with the scheduling code to rule that out. If you are sure that cannot happen, I can take this out.
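    For reference, the pattern in the diff can be illustrated standalone. The sketch below (method name `extractJobId` and the sentinel -1 are assumptions for illustration) shows how the nested `Option` wrapping handles both a null `Properties` object and a missing `spark.job.id` key:

```scala
import java.util.Properties

// Hypothetical helper mirroring the diff's defensive extraction:
// Option(null) is None, so both a null Properties object and an
// absent "spark.job.id" key fall through to the "-1" sentinel.
def extractJobId(properties: Properties): Int =
  Option(properties)
    .flatMap(p => Option(p.getProperty("spark.job.id")))
    .getOrElse("-1")
    .toInt

val withId = new Properties()
withId.setProperty("spark.job.id", "42")
println(extractJobId(withId))            // 42
println(extractJobId(null))              // -1
println(extractJobId(new Properties()))  // -1
```

    The point of the double `Option` is that `stageSubmitted.properties` itself may be null, not just the individual property.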


> Add a "cancel" button in the UI for stages
> ------------------------------------------
>
>                 Key: SPARK-1202
>                 URL: https://issues.apache.org/jira/browse/SPARK-1202
>             Project: Apache Spark
>          Issue Type: New Feature
>          Components: Web UI
>            Reporter: Patrick Cogan
>            Assignee: Sundeep Narravula
>            Priority: Critical
>             Fix For: 1.0.0
>
>
> Seems like this would be really useful for people. It's not that hard: we 
> just need to look up the jobs associated with the stage and kill them. It might 
> involve exposing some additional APIs in SparkContext.



--
This message was sent by Atlassian JIRA
(v6.2#6252)