Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16473#discussion_r96459828

    --- Diff: core/src/main/scala/org/apache/spark/ui/jobs/UIData.scala ---
    @@ -127,6 +127,14 @@ private[spark] object UIData {
         def updateTaskMetrics(metrics: Option[TaskMetrics]): Unit = {
           _metrics = TaskUIData.toTaskMetricsUIData(metrics)
         }
    +
    +    def taskDuration: Long = {
    +      if (taskInfo.status == "RUNNING") {
    +        _taskInfo.timeRunning(System.currentTimeMillis)
    +      } else {
    +        _metrics.map(_.executorRunTime).getOrElse(1L)
    --- End diff --

    `executorRunTime` will not get set in some failure-handling scenarios: https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/executor/Executor.scala#L403 -- hence the option. I don't think it makes sense to return `1L` in those cases in the REST API -- it should probably stay an option. (In the current code, the UI uses `1L` just for sorting, but displays an empty string -- that seems more consistent with keeping it as an option.)
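    A minimal sketch of what the suggested change might look like, assuming the same names as the diff above (`taskInfo`, `_taskInfo`, `_metrics`, `timeRunning`); this is an illustration of the reviewer's point, not the actual patch:

    ```scala
    // Hedged sketch: propagate the Option so callers can distinguish
    // "no run time recorded" from a real duration, instead of baking
    // in a 1L sentinel here.
    def taskDuration: Option[Long] = {
      if (taskInfo.status == "RUNNING") {
        Some(_taskInfo.timeRunning(System.currentTimeMillis))
      } else {
        // executorRunTime may never be set in some failure-handling
        // paths (see Executor.scala link above), so keep it optional.
        _metrics.map(_.executorRunTime)
      }
    }
    ```

    The UI sort could then use `taskDuration.getOrElse(1L)` locally for ordering while still rendering `None` as an empty string, matching the current display behavior.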