Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22289#discussion_r214414143
  
    --- Diff: 
launcher/src/main/java/org/apache/spark/launcher/AbstractCommandBuilder.java ---
    @@ -200,6 +200,7 @@ void addOptionString(List<String> cmd, String options) {
     
         addToClassPath(cp, getenv("HADOOP_CONF_DIR"));
         addToClassPath(cp, getenv("YARN_CONF_DIR"));
    +    addToClassPath(cp, getEffectiveConfig().get("spark.yarn.conf.dir"));
    --- End diff ---
    
    Saisai's question about the classpath configuration is actually the most complicated part of this feature. I haven't fully thought through how the two mechanisms would interact, but I really don't think it's as simple as appending this new config to the classpath.
    
    E.g., what is the expectation if you run `spark-shell` with this option? Do you end up using the config from the env variable or from the new config? If you have both, and you reference a file in `--files` that is on an HDFS namespace declared in the `hdfs-site.xml` from the config directory, what will happen? (Answer: it will be ignored, since it is masked by the `hdfs-site.xml` from the env variable's directory, which comes first on the classpath.)
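    To make the masking concrete, here is a minimal, self-contained sketch (not Spark's actual launcher code; the directory names are illustrative). Two directories each contain an `hdfs-site.xml`, mimicking `HADOOP_CONF_DIR` and the proposed config dir; because resource lookup scans the classpath in order, the copy from the earlier entry wins and the later one is silently ignored:

    ```java
    import java.io.IOException;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class ClasspathMasking {
        public static void main(String[] args) throws IOException {
            // Two conf dirs, each with its own hdfs-site.xml. "env-conf" stands in
            // for HADOOP_CONF_DIR; "spark-conf" for the hypothetical spark.yarn.conf.dir.
            Path envConfDir = Files.createTempDirectory("env-conf");
            Path sparkConfDir = Files.createTempDirectory("spark-conf");
            Files.writeString(envConfDir.resolve("hdfs-site.xml"), "<!-- from env -->");
            Files.writeString(sparkConfDir.resolve("hdfs-site.xml"), "<!-- from config -->");

            // The launcher appends entries in order, so the env dir comes first.
            URL[] classpath = { envConfDir.toUri().toURL(), sparkConfDir.toUri().toURL() };
            try (URLClassLoader loader = new URLClassLoader(classpath, null)) {
                // getResource returns the first match in classpath order:
                // the env copy wins, and the config-dir copy is masked.
                URL found = loader.getResource("hdfs-site.xml");
                System.out.println(found);
            }
        }
    }
    ```

    This is the same mechanism Hadoop's `Configuration` relies on when it loads `hdfs-site.xml` as a classpath resource, which is why any namespace declared only in the second directory's file would not be visible.
    
    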

