Github user advancedxy commented on the issue:

    https://github.com/apache/spark/pull/21165
  
    > We should not do these 2 things together, and to me the second one is way 
simpler to get in and we should do it first.
    
    Agreed. For the scope of this PR, let's get killed tasks' accumulators into metrics first. After that, we can discuss exposing this ability to users if they request it.
    
    > but please make sure this patch only touches internal accumulators that 
are used for metrics reporting.
    
    On a second look, this part is already handled by Task's `collectAccumulatorUpdates`:
    ```scala
      def collectAccumulatorUpdates(taskFailed: Boolean = false): Seq[AccumulatorV2[_, _]] = {
        if (context != null) {
          // Note: internal accumulators representing task metrics always count failed values
          context.taskMetrics.nonZeroInternalAccums() ++
            // zero value external accumulators may still be useful, e.g. SQLMetrics, we should not
            // filter them out.
            context.taskMetrics.externalAccums.filter(a => !taskFailed || a.countFailedValues)
        } else {
          Seq.empty
        }
      }
    ```
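
    To make the effect of that filter concrete, here is a minimal, self-contained sketch of the same reporting rule. The `Acc` case class and the accumulator names are hypothetical stand-ins for illustration, not Spark's `AccumulatorV2` hierarchy:
    ```scala
    // Simplified model: each accumulator knows whether it is an internal task
    // metric, whether its value is still zero, and whether it opted in to
    // counting failed values.
    case class Acc(name: String, isInternal: Boolean, isZero: Boolean, countFailedValues: Boolean)

    // Mirrors collectAccumulatorUpdates: non-zero internal metric accumulators
    // are always reported; external accumulators are dropped for a failed or
    // killed task unless they count failed values.
    def reportedAccums(accums: Seq[Acc], taskFailed: Boolean = false): Seq[Acc] = {
      accums.filter(a => a.isInternal && !a.isZero) ++
        accums.filter(a => !a.isInternal && (!taskFailed || a.countFailedValues))
    }

    val accums = Seq(
      Acc("internalRunTimeMetric", isInternal = true,  isZero = false, countFailedValues = true),
      Acc("internalZeroMetric",    isInternal = true,  isZero = true,  countFailedValues = true),
      Acc("externalOptInMetric",   isInternal = false, isZero = true,  countFailedValues = true),
      Acc("userCounter",           isInternal = false, isZero = false, countFailedValues = false)
    )

    // For taskFailed = true (e.g. a killed task), only the non-zero internal
    // metric and the opted-in external accumulator are reported.
    println(reportedAccums(accums, taskFailed = true).map(_.name))
    // List(internalRunTimeMetric, externalOptInMetric)
    ```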

