[ https://issues.apache.org/jira/browse/SPARK-32898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Dongjoon Hyun updated SPARK-32898:
----------------------------------
    Affects Version/s: 2.4.7

> totalExecutorRunTimeMs is too big
> ---------------------------------
>
>                 Key: SPARK-32898
>                 URL: https://issues.apache.org/jira/browse/SPARK-32898
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.4.7, 3.0.1
>            Reporter: Linhong Liu
>            Assignee: wuyi
>            Priority: Major
>             Fix For: 3.0.2, 3.1.0
>
>
> This is most likely caused by an incorrect calculation of executorRunTimeMs in
> Executor.scala.
> The function collectAccumulatorsAndResetStatusOnFailure(taskStartTimeNs) can
> be called before taskStartTimeNs has been set (it is still 0).
> As of now on the master branch, here is the problematic code:
> [https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/executor/Executor.scala#L470]
>
> An exception can be thrown before this line, and the catch branch still
> updates the metric.
> However, the query shows as SUCCESSful; the task may be speculative, but this
> is not confirmed.
>
> submissionTime in LiveExecutionData may have a similar problem:
> [https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLAppStatusListener.scala#L449]
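>
> Below is a minimal, self-contained sketch (not the actual Executor.scala code;
> RunTimeSketch, executorRunTimeMs, and guardedExecutorRunTimeMs are hypothetical
> names used only for illustration) showing how computing the run time from an
> unset taskStartTimeNs of 0 produces a meaningless, typically huge value, and how
> a simple guard on the start timestamp avoids it:
>
> {code:scala}
> import java.util.concurrent.TimeUnit
>
> // Hypothetical sketch of the failure mode, not the real Spark executor code.
> object RunTimeSketch {
>   // Like the executor's field, the start timestamp defaults to 0 and is only
>   // assigned once the task body actually begins running.
>   var taskStartTimeNs: Long = 0L
>
>   // If a failure happens before taskStartTimeNs is assigned, this subtracts 0
>   // and returns the raw System.nanoTime() reading converted to milliseconds,
>   // which is typically a huge, meaningless run time.
>   def executorRunTimeMs(taskFinishNs: Long): Long =
>     TimeUnit.NANOSECONDS.toMillis(taskFinishNs - taskStartTimeNs)
>
>   // Guarded variant: only report the metric when the start timestamp was
>   // actually recorded.
>   def guardedExecutorRunTimeMs(taskFinishNs: Long): Long =
>     if (taskStartTimeNs > 0) TimeUnit.NANOSECONDS.toMillis(taskFinishNs - taskStartTimeNs)
>     else 0L
>
>   def main(args: Array[String]): Unit = {
>     val finish = System.nanoTime()
>     // taskStartTimeNs was never set, simulating a failure before the task started.
>     println(s"unguarded: ${executorRunTimeMs(finish)} ms")      // huge bogus value
>     println(s"guarded:   ${guardedExecutorRunTimeMs(finish)} ms") // 0
>   }
> }
> {code}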