[ https://issues.apache.org/jira/browse/SPARK-34898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Mridul Muralidharan resolved SPARK-34898.
-----------------------------------------
    Fix Version/s: 3.2.0
       Resolution: Fixed

Issue resolved by pull request 31992
[https://github.com/apache/spark/pull/31992]

> Send ExecutorMetricsUpdate EventLog appropriately
> -------------------------------------------------
>
>                 Key: SPARK-34898
>                 URL: https://issues.apache.org/jira/browse/SPARK-34898
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Spark Core
>    Affects Versions: 3.2.0
>            Reporter: angerszhu
>            Assignee: Apache Spark
>            Priority: Major
>             Fix For: 3.2.0
>
>
> In the current EventLoggingListener, we never write the
> SparkListenerExecutorMetricsUpdate message to the event log; the handler
> only tracks peak values in memory:
> {code:java}
> override def onExecutorMetricsUpdate(event: SparkListenerExecutorMetricsUpdate): Unit = {
>   if (shouldLogStageExecutorMetrics) {
>     event.executorUpdates.foreach { case (stageKey1, newPeaks) =>
>       liveStageExecutorMetrics.foreach { case (stageKey2, metricsPerExecutor) =>
>         // If the update came from the driver, stageKey1 will be the dummy key (-1, -1),
>         // so record those peaks for all active stages.
>         // Otherwise, record the peaks for the matching stage.
>         if (stageKey1 == DRIVER_STAGE_KEY || stageKey1 == stageKey2) {
>           val metrics = metricsPerExecutor.getOrElseUpdate(
>             event.execId, new ExecutorMetrics())
>           metrics.compareAndUpdatePeakValues(newPeaks)
>         }
>       }
>     }
>   }
> }
> {code}
> As a result, the driver's peakMemoryMetrics are not available in the Spark
> History Server (SHS). Executor peaks are still available, since they are
> written out with TaskEnd events.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
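To make the in-memory bookkeeping above concrete, here is a minimal, self-contained sketch of the peak-tracking behavior. It is not Spark's real implementation: the `ExecutorMetrics` class is simplified to a bare array of longs, `compareAndUpdatePeakValues` takes an array rather than another `ExecutorMetrics`, and the stage keys and executor IDs are illustrative. It only demonstrates the two rules visible in the quoted code: peaks are kept as an element-wise maximum, and a driver update (carrying the dummy key `(-1, -1)`) is applied to every active stage.

```scala
import scala.collection.mutable

object PeakTrackingSketch {
  // Simplified stand-in for Spark's ExecutorMetrics: one peak value per metric.
  final class ExecutorMetrics(numMetrics: Int) {
    val peaks: Array[Long] = Array.fill(numMetrics)(0L)

    // Keep the element-wise maximum seen so far; return true if any peak rose.
    def compareAndUpdatePeakValues(newValues: Array[Long]): Boolean = {
      var updated = false
      for (i <- newValues.indices if newValues(i) > peaks(i)) {
        peaks(i) = newValues(i)
        updated = true
      }
      updated
    }
  }

  type StageKey = (Int, Int) // (stageId, stageAttemptId)
  val DRIVER_STAGE_KEY: StageKey = (-1, -1)

  // Two "active stages", each mapping executor id -> its tracked peaks.
  val liveStageExecutorMetrics: mutable.Map[StageKey, mutable.Map[String, ExecutorMetrics]] =
    mutable.Map((1, 0) -> mutable.Map.empty, (2, 0) -> mutable.Map.empty)

  // Mirrors the quoted handler: driver updates fan out to all stages,
  // executor updates only touch the matching stage.
  def onExecutorMetricsUpdate(execId: String, stageKey1: StageKey, newPeaks: Array[Long]): Unit =
    liveStageExecutorMetrics.foreach { case (stageKey2, metricsPerExecutor) =>
      if (stageKey1 == DRIVER_STAGE_KEY || stageKey1 == stageKey2) {
        metricsPerExecutor
          .getOrElseUpdate(execId, new ExecutorMetrics(2))
          .compareAndUpdatePeakValues(newPeaks)
      }
    }

  def main(args: Array[String]): Unit = {
    onExecutorMetricsUpdate("driver", DRIVER_STAGE_KEY, Array(100L, 5L)) // all stages
    onExecutorMetricsUpdate("1", (1, 0), Array(40L, 9L))                 // stage (1, 0) only
    onExecutorMetricsUpdate("driver", DRIVER_STAGE_KEY, Array(80L, 7L))  // raises metric 1 only
    println(liveStageExecutorMetrics((1, 0))("driver").peaks.mkString(",")) // 100,7
    println(liveStageExecutorMetrics((2, 0)).contains("1"))                 // false
  }
}
```

Note that nothing here ever emits an event-log line, which is exactly the gap the issue describes: the peaks exist only in the live listener's maps, so the driver's values are lost to the History Server unless an event carrying them is written out.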