[ https://issues.apache.org/jira/browse/SPARK-24174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16463575#comment-16463575 ]

Saisai Shao commented on SPARK-24174:
-------------------------------------

I believe the Hadoop web UI already exposes such configurations. It seems 
neither proper nor necessary to expose them again on the Spark side; doing 
so could potentially mix things up.

> Expose Hadoop config as part of /environment API
> ------------------------------------------------
>
>                 Key: SPARK-24174
>                 URL: https://issues.apache.org/jira/browse/SPARK-24174
>             Project: Spark
>          Issue Type: Wish
>          Components: Spark Core
>    Affects Versions: 2.1.0
>            Reporter: Nikolay Sokolov
>            Priority: Minor
>              Labels: features, usability
>
> Currently, the /environment API call exposes only system properties and 
> SparkConf. However, in some cases when Spark is used in conjunction with 
> Hadoop, it is useful to know the Hadoop configuration properties as well: 
> HDFS or GS buffer sizes, Hive metastore settings, and so on.
> It would therefore be good to have Hadoop properties exposed in the 
> /environment API, for example:
> {code:none}
> GET .../application_1525395994996_5/environment
> {
>    "runtime": {"javaVersion": "1.8.0_131 (Oracle Corporation)", ...}
>    "sparkProperties": ["java.io.tmpdir","/tmp", ...],
>    "systemProperties": [["spark.yarn.jars", "local:/usr/lib/spark/jars/*"], 
> ...],
>    "classpathEntries": [["/usr/lib/hadoop/hadoop-annotations.jar","System 
> Classpath"], ...],
>    "hadoopProperties": [["dfs.stream-buffer-size": 4096], ...],
> }
> {code}
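> 
> For illustration only, a minimal sketch of how the driver could collect 
> these pairs, assuming access to the active SparkContext {{sc}} (the 
> {{hadoopProperties}} field name and this snippet are hypothetical, not an 
> actual patch):
> {code:scala}
> import scala.collection.JavaConverters._
> 
> // Hadoop's Configuration is an Iterable of java.util.Map.Entry[String, String],
> // so it can be flattened into sorted (key, value) pairs for the JSON payload.
> val hadoopProperties: Seq[(String, String)] =
>   sc.hadoopConfiguration.asScala
>     .map(entry => (entry.getKey, entry.getValue))
>     .toSeq
>     .sortBy(_._1)
> {code}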
