Github user markhamstra commented on a diff in the pull request:

    https://github.com/apache/spark/pull/3009#discussion_r20174342
  
    --- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
    @@ -0,0 +1,144 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.ui.jobs
    +
    +import scala.xml.{Node, NodeSeq}
    +
    +import javax.servlet.http.HttpServletRequest
    +
    +import org.apache.spark.ui.{WebUIPage, UIUtils}
    +import org.apache.spark.ui.jobs.UIData.JobUIData
    +
    +
    +/** Page showing list of all ongoing and recently finished jobs */
    +private[ui] class AllJobsPage(parent: JobsTab) extends WebUIPage("") {
    +  private val sc = parent.sc
    +  private val listener = parent.listener
    +
    +  private def getSubmissionTime(job: JobUIData): Option[Long] = {
    +    for (
    +      firstStageId <- job.stageIds.headOption;
    +      firstStageInfo <- listener.stageIdToInfo.get(firstStageId);
    +      submitTime <- firstStageInfo.submissionTime
    +    ) yield submitTime
    +  }
    +
    +  private def jobsTable(jobs: Seq[JobUIData]): Seq[Node] = {
    +    val columns: Seq[Node] = {
    +      <th>Job Id (Job Group)</th>
    --- End diff --
    
    I'm not sure how Job Group is being used in all cases now, or whether it 
even works particularly well, but the concept of a Job Group could be useful 
when the "job" from the user's point of view is actually composed of multiple 
Spark jobs.  That can be the case when you want to do something like sorting 
an RDD without falling into the nastiness of embedded, eager RDD actions to 
generate a RangePartitioner.  Instead, you'd queue up multiple jobs in a Job 
Group, with later jobs depending on the results of earlier jobs in the group.  
If the user decides that the "job" should be killed, then all of the jobs in 
the Job Group should be canceled.
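
    For reference, the SparkContext job-group API already supports that 
cancel-the-whole-group pattern.  Here's a minimal sketch (the group id, RDD, 
and app name are made up for illustration); the sort really does span 
multiple Spark jobs, because sortByKey eagerly runs a sampling job to build 
its RangePartitioner:

        import org.apache.spark.{SparkConf, SparkContext}
        import org.apache.spark.SparkContext._  // pair-RDD implicits (pre-1.3)

        val sc = new SparkContext(
          new SparkConf().setAppName("job-group-sketch").setMaster("local[*]"))
        val rdd = sc.parallelize(1 to 1000000).map(i => (i % 100, i))

        // Tag everything submitted from this thread as one logical "job".
        // interruptOnCancel = true asks executors to interrupt running tasks
        // if the group is canceled.
        sc.setJobGroup("user-sort", "sort composed of multiple Spark jobs",
          interruptOnCancel = true)

        // sortByKey() triggers a sampling job to build its RangePartitioner,
        // and count() runs the sort itself, so this single user-level "sort"
        // is already at least two Spark jobs in the same group.
        val sorted = rdd.sortByKey()
        sorted.count()

        // From another thread, one call cancels every job in the group:
        // sc.cancelJobGroup("user-sort")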

