Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1969#issuecomment-53689928
Actually you can just set `spark.home` in `spark-defaults.conf` for this
use case.
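For illustration, a minimal sketch of that setting (`/opt/spark-on-executors` is a hypothetical executor-side install path):
```
# Hedged sketch: record spark.home in spark-defaults.conf on the driver node,
# pointing at where Spark is installed on the executor nodes.
echo "spark.home /opt/spark-on-executors" >> "$SPARK_HOME/conf/spark-defaults.conf"
```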
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/1969#issuecomment-53764382
Hi @iven, `spark-shell` actually goes through `spark-submit`. As @liancheng
mentioned, you can set `spark.home` to control the executor-side Spark
location.
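A hedged sketch of the same idea on the command line (the master URL, paths, and jar name are hypothetical, and this assumes a `spark-submit` version that supports `--conf`):
```
# Pass spark.home per job instead of editing spark-defaults.conf.
spark-submit \
  --master mesos://mesos-master:5050 \
  --conf spark.home=/opt/spark-on-executors \
  my-app.jar
```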
Github user iven commented on the pull request:
https://github.com/apache/spark/pull/1969#issuecomment-53830594
@liancheng @andrewor14 Thanks, it works! I'm closing this.
Github user iven closed the pull request at:
https://github.com/apache/spark/pull/1969
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1969#issuecomment-53633009
@iven I'm a little confused here. Are you referring to a use case like
this:
1. Spark is installed in directory `A` on the driver node, but in directory
`B` on the executor nodes?
Github user iven commented on the pull request:
https://github.com/apache/spark/pull/1969#issuecomment-53665867
@liancheng Yes. Although I'm using `spark-submit`, not `spark-shell`.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/1969#issuecomment-53359507
@liancheng
Github user iven commented on the pull request:
https://github.com/apache/spark/pull/1969#issuecomment-52388625
@JoshRosen I'm using Spark 1.0.1 with Mesos. If I don't specify SPARK_HOME
on the driver, Mesos executors are LOST with this error:
```
sh:
```
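For context, a hedged sketch of a driver-side setup that avoids this failure mode on Mesos (the paths and distribution URI are hypothetical):
```
# Tell the driver where its own Spark install lives.
export SPARK_HOME=/opt/spark
# Let Mesos executors fetch their own Spark build instead of depending on
# the driver's filesystem layout.
export SPARK_EXECUTOR_URI=hdfs://namenode/dist/spark-1.0.1.tgz
"$SPARK_HOME/bin/spark-submit" --master mesos://mesos-master:5050 my-app.jar
```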
Github user iven commented on the pull request:
https://github.com/apache/spark/pull/1969#issuecomment-52388869
@andrewor14 OK. I'll update the patch once we confirm this PR is necessary.
GitHub user iven opened a pull request:
https://github.com/apache/spark/pull/1969
Use user defined $SPARK_HOME in spark-submit if possible
You can merge this pull request into a Git repository by running:
    $ git pull https://github.com/iven/spark spark-home
Alternatively you can review and apply these changes as the patch at:
    https://github.com/apache/spark/pull/1969.patch
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1969#issuecomment-52291226
Can one of the admins verify this patch?
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1969#issuecomment-52309962
I once submitted a similar patch, but the latest solution (merged?) is that
we don't send the local SPARK_HOME to the remote end at all. @andrewor14?
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1969#issuecomment-52342115
There was a bunch of prior discussion about this in an old pull request for
[SPARK-1110](http://issues.apache.org/jira/browse/SPARK-1110).
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/1969#issuecomment-52348177
Here is the updated JIRA for the same issue:
[SPARK-2290](https://issues.apache.org/jira/browse/SPARK-2290). We established
that, for standalone mode, we don't need to send the driver's Spark home to
the executors.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1969#issuecomment-52348600
In PySpark, it looks like we only use `SPARK_HOME` on the driver, where
it's used to find the path to `spark-submit` and to locate test support files.
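As a hedged illustration of that driver-side usage (the install path is hypothetical):
```
# PySpark reads SPARK_HOME from the driver's environment to locate
# bin/spark-submit and its test support files.
export SPARK_HOME=/opt/spark
"$SPARK_HOME/bin/pyspark"
```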