Github user tgravescs commented on the pull request:

    https://github.com/apache/spark/pull/3409#issuecomment-65131243
  
    So I think we should do all 3 of these configs: 
spark.yarn.am.extraJavaOptions, spark.yarn.am.extraClassPath, and 
spark.yarn.am.extraLibraryPath, and make them behave similarly. We should change 
the driver.extraClassPath to only apply in cluster mode so it matches the 
behavior of the others. I also think it might be easiest to have the am.extra* 
configs only apply in client mode. Otherwise we have a conflict between 
driver.extra* and am.extra*, and we would either have to error out or define 
which one takes precedence. That said, I would be OK with them applying in both 
client and cluster mode if someone has specific concerns, as long as we define 
the behavior.
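
    To make the intended client/cluster split concrete, here is a rough sketch 
of the resolution rule described above. This is only an illustration of the 
proposed behavior, not existing Client code: a plain Map stands in for 
SparkConf, and the names and fallback rule are my reading of the proposal.

    // Sketch only: proposed resolution of AM java options per deploy mode.
    // A plain Map stands in for SparkConf; this is not actual Spark code.
    object AmOptsResolution {
      // Client mode: the AM is not the driver, so only spark.yarn.am.* applies.
      // Cluster mode: the AM is the driver, so spark.driver.* applies and
      // spark.yarn.am.* is ignored, avoiding any precedence conflict.
      def amJavaOpts(conf: Map[String, String], deployMode: String): Option[String] =
        deployMode match {
          case "client"  => conf.get("spark.yarn.am.extraJavaOptions")
          case "cluster" => conf.get("spark.driver.extraJavaOptions")
          case other     => sys.error(s"Unknown deploy mode: $other")
        }

      def main(args: Array[String]): Unit = {
        val conf = Map(
          "spark.yarn.am.extraJavaOptions" -> "-Dlog4j.debug=true",
          "spark.driver.extraJavaOptions"  -> "-XX:+UseG1GC")
        println(amJavaOpts(conf, "client"))   // Some(-Dlog4j.debug=true)
        println(amJavaOpts(conf, "cluster"))  // Some(-XX:+UseG1GC)
      }
    }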
    
    There is also code in SparkConf that validates that the user isn't trying to 
set Spark configs or the memory size through the java options; we should 
probably run these new configs through similar validations.
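
    For reference, the existing SparkConf check rejects java options that set 
Spark configs (-Dspark.*) or the heap size (-Xmx). A minimal sketch of the same 
kind of check applied to spark.yarn.am.extraJavaOptions might look like the 
following; the error messages and the spark.yarn.am.memory reference are my 
assumptions, not quoted from SparkConf.validateSettings.

    // Sketch of the validation idea only, not the real SparkConf code:
    // reject java options that try to set Spark configs or the heap size.
    object AmOptsValidation {
      def validateAmJavaOpts(opts: String): Unit = {
        require(!opts.contains("-Dspark"),
          "spark.yarn.am.extraJavaOptions is not allowed to set Spark options; " +
            "set them on a SparkConf or in spark-defaults.conf instead")
        require(!opts.contains("-Xmx"),
          "spark.yarn.am.extraJavaOptions is not allowed to alter memory " +
            "settings; use the AM memory config (e.g. spark.yarn.am.memory) instead")
      }

      def main(args: Array[String]): Unit = {
        validateAmJavaOpts("-verbose:gc -XX:+PrintGCDetails")   // passes
        try validateAmJavaOpts("-Xmx2g")                        // rejected
        catch { case e: IllegalArgumentException => println(e.getMessage) }
      }
    }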
    
    I know that would expand this JIRA's scope, but I think it would be best to 
do it all together.
    Thoughts or concerns?

