[ https://issues.apache.org/jira/browse/SPARK-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15638836#comment-15638836 ]
Yongjia Wang commented on SPARK-10912:
--------------------------------------

s3a and hdfs are different "schemes" in Hadoop's FileSystem.Statistics, and I think it is Spark's responsibility to choose which of them to report; currently only "hdfs" and "file" are reported. I have been building Spark with the attached s3a_metrics.patch in order to get the s3a metrics reported. I am not sure whether there is a way to report s3a metrics through configuration alone, without changing the Spark source as the attached patch file does. Now I need to add the GoogleHadoopFileSystem "gs" metrics as well; please advise on the best approach. Thank you. (A sketch of the relevant change appears below the quoted issue details.)

> Improve Spark metrics executor.filesystem
> -----------------------------------------
>
>                 Key: SPARK-10912
>                 URL: https://issues.apache.org/jira/browse/SPARK-10912
>             Project: Spark
>          Issue Type: Improvement
>          Components: Deploy
>    Affects Versions: 1.5.0
>            Reporter: Yongjia Wang
>            Priority: Minor
>         Attachments: s3a_metrics.patch
>
> org.apache.spark.executor.ExecutorSource has two filesystem metric schemes: "hdfs" and "file". I started using S3 as persistent storage with a Spark standalone cluster in EC2, and S3 read/write metrics do not appear anywhere. The "file" metrics appear to cover only the driver reading local files; it would also be nice to report shuffle read/write metrics, to help with optimization.
> I think these two things (S3 and shuffle) are very useful and would cover all of the missing information about Spark IO, especially for an S3 setup.
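For context, here is a minimal sketch of the kind of change such a patch makes, modeled on the scheme loop in org.apache.spark.executor.ExecutorSource as of Spark 1.5. The FsMetricsSketch object and its local MetricRegistry are hypothetical stand-ins so the fragment compiles on its own; in Spark itself the registry comes from the Source trait, and the attached s3a_metrics.patch remains the authoritative version of the change.

{code:scala}
import scala.collection.JavaConverters._

import com.codahale.metrics.{Gauge, MetricRegistry}
import org.apache.hadoop.fs.FileSystem

// Hypothetical stand-in for ExecutorSource, only so this sketch is
// self-contained; in Spark the registry is provided by the Source trait.
object FsMetricsSketch {
  val metricRegistry = new MetricRegistry()

  // Look up the Hadoop-side counters for one filesystem scheme.
  private def fileStats(scheme: String): Option[FileSystem.Statistics] =
    FileSystem.getAllStatistics.asScala.find(_.getScheme == scheme)

  // Expose one Statistics field as a gauge named
  // filesystem.<scheme>.<name>, falling back to a default
  // when no filesystem of that scheme has been used yet.
  private def registerFileSystemStat[T](
      scheme: String,
      name: String,
      f: FileSystem.Statistics => T,
      defaultValue: T): Unit = {
    metricRegistry.register(MetricRegistry.name("filesystem", scheme, name),
      new Gauge[T] {
        override def getValue: T = fileStats(scheme).map(f).getOrElse(defaultValue)
      })
  }

  // The substantive change: widen the hardcoded scheme list beyond
  // "hdfs" and "file" so that s3a (and gs) statistics are surfaced too.
  for (scheme <- Array("hdfs", "file", "s3a", "gs")) {
    registerFileSystemStat(scheme, "read_bytes", _.getBytesRead(), 0L)
    registerFileSystemStat(scheme, "write_bytes", _.getBytesWritten(), 0L)
    registerFileSystemStat(scheme, "read_ops", _.getReadOps(), 0)
    registerFileSystemStat(scheme, "write_ops", _.getWriteOps(), 0)
    registerFileSystemStat(scheme, "largeRead_ops", _.getLargeReadOps(), 0)
  }
}
{code}

As far as I can tell the scheme list is hardcoded at this spot, which is presumably why a source change (or a patch like the attached one) is needed rather than configuration alone.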