[GitHub] spark pull request #19663: [SPARK-21888][YARN][SQL][Hive]add hadoop/hive/hba...

2017-11-06 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/19663#discussion_r149066829
  
--- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -705,6 +705,19 @@ private[spark] class Client(
   }
 }
 
+    val confDir =
+      sys.env.getOrElse("SPARK_CONF_DIR", sys.env("SPARK_HOME") + File.separator + "conf")
+    val dir = new File(confDir)
+    if (dir.isDirectory) {
+      val files = dir.listFiles(new FileFilter {
+        override def accept(pathname: File): Boolean = {
+          pathname.isFile && pathname.getName.endsWith("xml")
+        }
+      })
+      files.foreach { f => hadoopConfFiles(f.getName) = f }
--- End diff --

This indicates that files in `SPARK_CONF_DIR` have higher priority than those in `HADOOP_CONF_DIR` or `YARN_CONF_DIR`; is that expected?
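The priority question above comes down to map insertion order. A minimal, hypothetical sketch (not the actual Spark code): `hadoopConfFiles` is keyed by file name, so a directory scanned later silently replaces an earlier entry with the same name.

```scala
import scala.collection.mutable

// Hypothetical sketch of why scan order sets priority: the map is keyed
// by file name, so a directory scanned later overwrites an earlier entry
// with the same name. `resolve` and its parameter are illustrative names.
object ConfPriority {
  // Each element of `dirsInOrder` is one conf directory's (fileName -> path)
  // listing, in the order the directories are scanned.
  def resolve(dirsInOrder: Seq[Map[String, String]]): Map[String, String] = {
    val merged = mutable.HashMap.empty[String, String]
    dirsInOrder.foreach(_.foreach { case (name, path) => merged(name) = path })
    merged.toMap
  }
}
```

With `HADOOP_CONF_DIR` scanned first and `SPARK_CONF_DIR` second, the `SPARK_CONF_DIR` copy of a same-named file such as core-site.xml is the one that survives.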


---

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org




2017-11-06 Thread yaooqinn
Github user yaooqinn commented on a diff in the pull request:

https://github.com/apache/spark/pull/19663#discussion_r149017871
  
--- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -705,6 +705,19 @@ private[spark] class Client(
   }
 }
 
+    val confDir =
+      sys.env.getOrElse("SPARK_CONF_DIR", sys.env("SPARK_HOME") + File.separator + "conf")
+    val dir = new File(confDir)
+    if (dir.isDirectory) {
+      val files = dir.listFiles(new FileFilter {
+        override def accept(pathname: File): Boolean = {
+          pathname.isFile && pathname.getName.endsWith("xml")
--- End diff --

I guess we do not check `$HADOOP_CONF_DIR`/`$YARN_CONF_DIR` either.


---





2017-11-06 Thread jerryshao
Github user jerryshao commented on a diff in the pull request:

https://github.com/apache/spark/pull/19663#discussion_r149017279
  
--- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -705,6 +705,19 @@ private[spark] class Client(
   }
 }
 
+    val confDir =
+      sys.env.getOrElse("SPARK_CONF_DIR", sys.env("SPARK_HOME") + File.separator + "conf")
+    val dir = new File(confDir)
+    if (dir.isDirectory) {
+      val files = dir.listFiles(new FileFilter {
+        override def accept(pathname: File): Boolean = {
+          pathname.isFile && pathname.getName.endsWith("xml")
--- End diff --

Yes, I understand. My question is whether we need to explicitly check for the expected file names, rather than blindly matching any xml file?
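The explicit check being suggested might look like the following hedged sketch. This is not code from the PR; the object name and the file-name whitelist are assumptions based on the configs the PR title mentions (hadoop/hive/hbase).

```scala
import java.io.{File, FileFilter}

// Hypothetical alternative to the blanket "*.xml" suffix match in the diff
// above: accept only well-known Hadoop/Hive/HBase client config files.
// The set of names is an assumption, not taken from the PR.
object KnownConfFileFilter extends FileFilter {
  private val expected = Set(
    "core-site.xml", "hdfs-site.xml", "yarn-site.xml",
    "mapred-site.xml", "hive-site.xml", "hbase-site.xml")

  override def accept(pathname: File): Boolean =
    pathname.isFile && expected.contains(pathname.getName)
}
```

This would keep an unrelated foo.xml dropped into the conf directory from being shipped to the cluster.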


---





2017-11-06 Thread yaooqinn
Github user yaooqinn commented on a diff in the pull request:

https://github.com/apache/spark/pull/19663#discussion_r149016913
  
--- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -705,6 +705,19 @@ private[spark] class Client(
   }
 }
 
+    val confDir =
+      sys.env.getOrElse("SPARK_CONF_DIR", sys.env("SPARK_HOME") + File.separator + "conf")
+    val dir = new File(confDir)
+    if (dir.isDirectory) {
+      val files = dir.listFiles(new FileFilter {
+        override def accept(pathname: File): Boolean = {
+          pathname.isFile && pathname.getName.endsWith("xml")
--- End diff --

According to the doc,
> Configuration of Hive is done by placing your hive-site.xml, core-site.xml (for security configuration), and hdfs-site.xml (for HDFS configuration) file in conf/.

here we may pick up more than just hive-site.xml.
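To illustrate the point with a tiny sketch (the predicate is extracted by hand from the diff above, not from the PR itself): the suffix check accepts every *.xml placed in conf/, not only hive-site.xml.

```scala
// Hypothetical stand-alone version of the diff's suffix check: it matches
// any file name ending in "xml", so core-site.xml and hdfs-site.xml in
// conf/ are picked up alongside hive-site.xml.
def matchesDiffFilter(name: String): Boolean = name.endsWith("xml")
```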


---
