Github user KevinGrealish commented on the issue:
https://github.com/apache/spark/pull/13824
@Vanzin, I think we are good to go now. Agree?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user KevinGrealish commented on the issue:
https://github.com/apache/spark/pull/13824
How about just this:
` // propagate PYSPARK_DRIVER_PYTHON and PYSPARK_PYTHON to driver in cluster mode`
` if (!env.contains("PYSPARK_DRIVER_PYTHON")) ...`
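The suggested snippet above is truncated in the archive; a minimal, hedged sketch of what such propagation logic could look like follows (the object and method names here are hypothetical, not Spark's actual code; it assumes the launcher builds up a mutable `env` map for the driver):

```scala
import scala.collection.mutable

object EnvPropagation {
  // Copy PYSPARK_DRIVER_PYTHON and PYSPARK_PYTHON from the submitter's
  // environment (sysEnv) into the driver's env map, but only when the
  // key is not already set, so explicit settings win over inherited ones.
  def propagatePythonEnv(
      env: mutable.Map[String, String],
      sysEnv: Map[String, String]): Unit = {
    Seq("PYSPARK_DRIVER_PYTHON", "PYSPARK_PYTHON").foreach { key =>
      if (!env.contains(key)) {
        sysEnv.get(key).foreach(value => env(key) = value)
      }
    }
  }
}
```

Passing the submitter environment in as a plain `Map` (rather than reading `sys.env` inside the method) also keeps the logic testable, which becomes relevant in the unit-testing discussion later in this thread.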
Github user KevinGrealish commented on the issue:
https://github.com/apache/spark/pull/13824
Created https://issues.apache.org/jira/browse/SPARK-16744 for the
override/append issue and linked it to SPARK-16110. This fix remains just about
being able to run Python 3.
Github user KevinGrealish commented on the issue:
https://github.com/apache/spark/pull/13824
@tgravescs do you have a preference for this bug fix to use a hard-coded
list of PATH, LD_LIBRARY_PATH, CLASSPATH, APP_CLASSPATH (an append list), or a
list of PYSPARK_PYTHON
Github user KevinGrealish commented on the issue:
https://github.com/apache/spark/pull/13824
Even when working with env var values that will be interpreted as lists of
paths, the user intent could still be append or override. There could be a
spark.yarn.appMasterAppendEnv.* config
Github user KevinGrealish commented on the issue:
https://github.com/apache/spark/pull/13824
@Vanzin I looked at refactoring the client to be more amenable to more
granular unit testing, including using the mockable env var on conf you
mentioned, but concluded it would be too
Github user KevinGrealish commented on the issue:
https://github.com/apache/spark/pull/13824
@vanzin the function setupLaunchEnv reads env vars directly using
sys.env.get, so I don't understand how SparkConf helps you set up values for
this function to read. See my earlier link
Github user KevinGrealish commented on the issue:
https://github.com/apache/spark/pull/13824
Was on vacation. Looking again today.
Github user KevinGrealish commented on the issue:
https://github.com/apache/spark/pull/13824
Unit tests involving env vars can get ugly:
http://stackoverflow.com/questions/318239/how-do-i-set-environment-variables-from-java
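Since the JVM offers no portable way to mutate `System.getenv` for a test, one common alternative (a hedged sketch, not the approach this PR actually took; the class and method names are hypothetical) is to inject the environment lookup as a parameter, so production code uses `sys.env.get` while tests pass a plain `Map`:

```scala
// Takes the env lookup function as a constructor argument; the default
// reads the real process environment, tests can substitute a Map's `get`.
class PythonExecResolver(getenv: String => Option[String] = sys.env.get) {
  // Resolve the Python executable with the usual PySpark precedence:
  // PYSPARK_DRIVER_PYTHON, then PYSPARK_PYTHON, then plain "python".
  def resolve(): String =
    getenv("PYSPARK_DRIVER_PYTHON")
      .orElse(getenv("PYSPARK_PYTHON"))
      .getOrElse("python")
}
```

This sidesteps the reflection hacks from the StackOverflow thread entirely, at the cost of threading the lookup function through the call sites.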
Github user KevinGrealish commented on the issue:
https://github.com/apache/spark/pull/13824
That is possible. It was done as-is to be a narrower fix, less likely to
cause other behavior to change. It does not look like the env hashmap is read
by the rest of the function (only written
GitHub user KevinGrealish reopened a pull request:
https://github.com/apache/spark/pull/13824
[SPARK-16110][YARN][PYSPARK] Fix allowing python version to be specified
per submit for cluster mode.
## What changes were proposed in this pull request?
This fix allows submit
Github user KevinGrealish closed the pull request at:
https://github.com/apache/spark/pull/13824
GitHub user KevinGrealish opened a pull request:
https://github.com/apache/spark/pull/13824
[SPARK-16110][YARN][PYSPARK] Fix allowing python version to be specified
per submit for cluster mode.
## What changes were proposed in this pull request?
This fix allows submit
Github user KevinGrealish commented on a diff in the pull request:
https://github.com/apache/spark/pull/13146#discussion_r67953263
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -510,6 +521,9 @@ private[deploy] class SparkSubmitArguments