Github user tgravescs commented on the pull request:

    https://github.com/apache/spark/pull/3409#issuecomment-65149255
  
    @vanzin  I'm not really following where you're going with these questions. 
Spark started out having many configs be the same across everything, but there 
are many cases where they could and should be different, so they were split 
apart again. Having them separate allows for more flexibility. Yes, you may 
have to duplicate a setting, but it shouldn't be a big deal to put it in two 
places in the configs. If other files (hdfs-site, core-site) support them, 
those settings can currently be set via executor.extraJavaOptions and 
driver.extraJavaOptions. The only thing they can't be set on is the 
ApplicationMaster in client mode.
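
    As a minimal sketch of what that duplication looks like in 
spark-defaults.conf (the property name below is a hypothetical placeholder, 
not from this PR), the same JVM option is simply listed under both keys:

        # spark-defaults.conf
        # hypothetical property duplicated for the driver and the executors,
        # since the two settings are configured separately
        spark.driver.extraJavaOptions    -Dsome.hypothetical.property=value
        spark.executor.extraJavaOptions  -Dsome.hypothetical.property=value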
    
    I agree that it's a bit weird that the user has to do two different things 
between client and cluster mode, but the fact is they run differently. Until 
we can fix that, there will be things that are different.

