[ https://issues.apache.org/jira/browse/SPARK-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15966782#comment-15966782 ]

Andrew Ash edited comment on SPARK-1809 at 4/12/17 11:00 PM:
-------------------------------------------------------------

I'm not using Mesos anymore, so closing


was (Author: aash):
Not using Mesos anymore, so closing

> Mesos backend doesn't respect HADOOP_CONF_DIR
> ---------------------------------------------
>
>                 Key: SPARK-1809
>                 URL: https://issues.apache.org/jira/browse/SPARK-1809
>             Project: Spark
>          Issue Type: Bug
>          Components: Mesos
>    Affects Versions: 1.0.0
>            Reporter: Andrew Ash
>
> To support HDFS paths without the server component, standalone mode reads 
> spark-env.sh and scans HADOOP_CONF_DIR for core-site.xml to get the 
> fs.default.name parameter.
> This lets you use HDFS paths like:
> - hdfs:///tmp/myfile.txt
> instead of
> - hdfs://myserver.mydomain.com:8020/tmp/myfile.txt
> However, as of a recent 1.0.0 pre-release (hash 756c96), I had to specify 
> HDFS paths with the full server component even though HADOOP_CONF_DIR is 
> still set in spark-env.sh. The HDFS, Spark, and Mesos nodes are all 
> co-located, and non-domain HDFS paths work fine in standalone mode.
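
The resolution the description relies on can be sketched outside Spark. The following is a hypothetical illustration, not Spark's actual code: it assumes a core-site.xml (as found under HADOOP_CONF_DIR) that sets fs.default.name, and shows how a scheme-only path like hdfs:///tmp/myfile.txt would be qualified against it.

```python
# Hedged sketch: qualify a server-less HDFS path using fs.default.name
# from a core-site.xml document. Names `default_fs` and `qualify` are
# illustrative only; they do not exist in Spark or Hadoop.
import xml.etree.ElementTree as ET

CORE_SITE = """<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://myserver.mydomain.com:8020</value>
  </property>
</configuration>"""

def default_fs(core_site_xml: str) -> str:
    """Extract the fs.default.name value from a core-site.xml document."""
    root = ET.fromstring(core_site_xml)
    for prop in root.findall("property"):
        if prop.findtext("name") == "fs.default.name":
            return prop.findtext("value")
    raise KeyError("fs.default.name not set")

def qualify(path: str, fs: str) -> str:
    """Turn hdfs:///tmp/x into hdfs://host:port/tmp/x; leave full paths alone."""
    if path.startswith("hdfs:///"):
        return fs.rstrip("/") + path[len("hdfs://"):]
    return path

print(qualify("hdfs:///tmp/myfile.txt", default_fs(CORE_SITE)))
# hdfs://myserver.mydomain.com:8020/tmp/myfile.txt
```

The bug report amounts to the Mesos backend skipping this qualification step that standalone mode performs.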



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
