I'm running Spark 1.2 on YARN.  My executor logs show that the executors are
failing to connect to the driver because they are resolving the wrong
hostname for it.

Since I'm running on YARN, I can't set spark.driver.host, as explained in
SPARK-4253, so the driver hostname must be coming from my Hadoop
configuration.  Does anyone know which piece of the Hadoop configuration
determines the driver hostname?
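In case it helps, here is the minimal sketch I would use to dump what the
JVM actually resolves for the candidate keys (assuming the standard Hadoop
YarnConfiguration API, run with the cluster config files on the classpath):

    import org.apache.hadoop.yarn.conf.YarnConfiguration

    object DumpDriverHostCandidates {
      def main(args: Array[String]): Unit = {
        // YarnConfiguration picks up core-site.xml and yarn-site.xml
        // from the classpath in addition to the Hadoop defaults
        val conf = new YarnConfiguration()
        Seq("yarn.resourcemanager.hostname",
            "fs.default.name",
            "fs.defaultFS")
          .foreach(key => println(key + " = " + conf.get(key)))
      }
    }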

It's definitely not using the hostname I have in
yarn-site.xml:yarn.resourcemanager.hostname or
core-site.xml:fs.default.name.
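For reference, those entries look roughly like this (the hostnames below are
placeholders, not my real ones):

    <!-- yarn-site.xml -->
    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>rm.example.com</value>
    </property>

    <!-- core-site.xml -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://nn.example.com:8020</value>
    </property>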


