Github user aarondav commented on the pull request:

    https://github.com/apache/incubator-spark/pull/597#issuecomment-35226013
  
    What is the use case you have in mind here? Just some sort of final status of all executors right before terminating a job/shell?
    
    If you're just interested in the HDFS stats, you might take a look at our [MetricsSystem](https://github.com/apache/incubator-spark/blob/master/core/src/main/scala/org/apache/spark/metrics/MetricsSystem.scala#L32), where we register [hdfs bytes_read](https://github.com/apache/incubator-spark/blob/master/core/src/main/scala/org/apache/spark/executor/ExecutorSource.scala#L66) (sorry, this part of the code is a little hairy to follow). If you can attach to our metrics sink, you may be able to get the info you're looking for without modifying Spark.
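
    For concreteness, attaching a sink is just a `conf/metrics.properties` entry; a minimal sketch (the sink choice, polling period, and output directory below are assumptions for illustration, not part of this PR):

```properties
# conf/metrics.properties -- a sketch: attach a CsvSink to the executor
# instance so executor-level gauges (including the HDFS filesystem
# read/write byte counts registered by ExecutorSource) are dumped on
# each poll. Period/directory values here are placeholders.
executor.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
executor.sink.csv.period=10
executor.sink.csv.unit=seconds
executor.sink.csv.directory=/tmp/spark-executor-metrics
```

    Each executor would then write one CSV file per metric into that directory every ten seconds, which you could scrape for a final snapshot when the job or shell terminates.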

