GitHub user edwinalu commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21221#discussion_r200826235
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/SparkListener.scala ---
    @@ -160,11 +160,29 @@ case class SparkListenerBlockUpdated(blockUpdatedInfo: BlockUpdatedInfo) extends
      * Periodic updates from executors.
      * @param execId executor id
     * @param accumUpdates sequence of (taskId, stageId, stageAttemptId, accumUpdates)
    + * @param executorUpdates executor level metrics updates
      */
     @DeveloperApi
     case class SparkListenerExecutorMetricsUpdate(
         execId: String,
    -    accumUpdates: Seq[(Long, Int, Int, Seq[AccumulableInfo])])
    +    accumUpdates: Seq[(Long, Int, Int, Seq[AccumulableInfo])],
    +    executorUpdates: Option[Array[Long]] = None)
    +  extends SparkListenerEvent
    +
    +/**
    + * Peak metric values for the executor for the stage, written to the history log at stage
    + * completion.
    + * @param execId executor id
    + * @param stageId stage id
    + * @param stageAttemptId stage attempt
    + * @param executorMetrics executor level metrics, indexed by MetricGetter.values
    + */
    +@DeveloperApi
    +case class SparkListenerStageExecutorMetrics(
    +    execId: String,
    +    stageId: Int,
    +    stageAttemptId: Int,
    +    executorMetrics: Array[Long])
    --- End diff --
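
    For context, a minimal sketch of how a listener could pick up the new
    stage-level event through the existing onOtherEvent hook (illustrative
    only; it assumes SparkListenerStageExecutorMetrics extends
    SparkListenerEvent like the other events in this file):

    import org.apache.spark.scheduler._

    class PeakMetricsListener extends SparkListener {
      override def onOtherEvent(event: SparkListenerEvent): Unit = event match {
        case SparkListenerStageExecutorMetrics(execId, stageId, attemptId, metrics) =>
          // metrics(i) holds the peak value of MetricGetter.values(i) for
          // this executor over the stage attempt.
          println(s"executor $execId, stage $stageId (attempt $attemptId): " +
            metrics.mkString(", "))
        case _ => // ignore other events
      }
    }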
    
    We can change back to using an ExecutorMetrics class in this case.
    
    The plan was to only ever add new metrics at the end, so the ordering would never change, and executorMetrics could then become an immutable Seq[Long]. Even so, there would still be the issue that callers have to reference MetricGetter to find out how the metrics are indexed.
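
    For illustration, a minimal sketch of what such an ExecutorMetrics
    wrapper could look like (the class shape and method names are my
    assumptions, not a final API); it keeps the Array[Long] internal and
    resolves positions through MetricGetter.values:

    // Hypothetical sketch only: wraps the raw Array[Long] so callers
    // don't need to know the index layout.
    class ExecutorMetrics(private val metrics: Array[Long]) extends Serializable {

      // Look up a peak value by its MetricGetter, using MetricGetter.values
      // (the ordering the array is indexed by) to resolve the position.
      def getMetricValue(getter: MetricGetter): Long =
        metrics(MetricGetter.values.indexOf(getter))

      // Immutable view for consumers that want the whole sequence.
      def toSeq: Seq[Long] = metrics.toSeq
    }

    Since new metrics would only ever be appended to MetricGetter.values,
    existing positions stay stable, which is what makes the append-only
    plan work.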

