Github user cloud-fan commented on the issue:

    https://github.com/apache/spark/pull/21165
  
    > For example user may want to record CPU time for every task and get the 
total CPU time for the application.
    
    The problem is: shall we allow end users to collect metrics via accumulators? Currently only Spark can do that, through internal accumulators that also count values from failed tasks. We need a careful API design for how to expose this ability to end users.
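
    To make the use case concrete, here is a minimal sketch (hypothetical names, using only the public `LongAccumulator` API) of how a user might total per-task CPU time today. Note that, unlike Spark's internal accumulators, a user accumulator like this does not count values from failed or killed tasks, which is exactly the gap under discussion:

    ```scala
    import java.lang.management.ManagementFactory

    import org.apache.spark.sql.SparkSession

    object CpuTimeAccumulatorExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("cpu-time-accumulator").getOrCreate()
        val sc = spark.sparkContext

        // User-facing accumulator: sums per-task CPU time (nanoseconds) across the app.
        val totalCpuTime = sc.longAccumulator("totalTaskCpuTimeNs")

        sc.parallelize(1 to 1000, 8).foreachPartition { iter =>
          val bean = ManagementFactory.getThreadMXBean
          val start = bean.getCurrentThreadCpuTime
          // ... the actual work for this partition ...
          iter.foreach(_ => ())
          // Record this task's CPU time once the partition is processed.
          totalCpuTime.add(bean.getCurrentThreadCpuTime - start)
        }

        // Driver-side total; this only reflects tasks that completed successfully --
        // user accumulators do not count values from failed or killed tasks.
        println(s"Total task CPU time: ${totalCpuTime.value} ns")
        spark.stop()
      }
    }
    ```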
    
    In the meantime, since we already count failed tasks, it makes sense to also count killed tasks for internal metrics collection.
    
    We should not do these two things together; to me, the second one is far simpler to get in, so we should do it first.

