Github user holdenk commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143359655
Ah, that makes sense. I guess I forgot the Client was a public API.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user holdenk commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143356542
@vanzin so in my own code (where I do try to switch between YARN and non-YARN
mode) I clear SPARK_YARN_MODE, as is done in the test.
I could update
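A minimal sketch of the test-side pattern described above: temporarily set the `SPARK_YARN_MODE` system property around a test body, then restore the prior state so a later non-YARN test in the same JVM is unaffected. The helper name here is invented for illustration; it is not part of Spark's API, and Spark also consults the environment variable of the same name, which cannot be unset from within the JVM.

```scala
// Hypothetical helper, not Spark code: scope SPARK_YARN_MODE to a test body.
object YarnModeFixture {
  def withYarnMode[T](body: => T): T = {
    // Remember whatever was set before, so we can restore it afterwards.
    val previous = sys.props.get("SPARK_YARN_MODE")
    sys.props("SPARK_YARN_MODE") = "true"
    try body
    finally previous match {
      case Some(v) => sys.props("SPARK_YARN_MODE") = v // restore prior value
      case None    => sys.props -= "SPARK_YARN_MODE"   // was unset; clear it
    }
  }
}
```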
Github user holdenk commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143356986
Makes sense. Do you think I should put that change in SparkContext (on
startup of a non-YARN client, or on stop of any client) or in the YARN client
stop code?
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143356828
Yeah, having the Spark code clean up after itself is easier because it
means people don't have to remember to do it, and it doesn't need to be
documented.
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143359407
I don't think so. `Client.scala`, for better or for worse, is still a
public API. So you can submit a `yarn-cluster` job by calling `Client.scala`
directly, and that
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143349924
There's code in different places that sets `SPARK_YARN_MODE`, but there's no
code to unset it. So, to follow your example, if you start a context with
yarn-client and
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143358255
I did a cursory search for where it is set, and I think the places that
need to be changed are `SparkContext.stop()` and YARN's `Client.scala`.
Doing it in
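A sketch of the cleanup step being discussed: something that `SparkContext.stop()` (and the YARN client's shutdown path) could run so a stale `SPARK_YARN_MODE` system property does not leak into a later context created in the same JVM. The object and method names are illustrative, not Spark's actual internals, and only the system property can be cleared; the environment variable is out of the JVM's control.

```scala
// Hypothetical sketch, not Spark code: clear SPARK_YARN_MODE on shutdown.
object SparkYarnModeReset {
  private val YarnModeKey = "SPARK_YARN_MODE"

  // Returns true if a stale value was actually cleared.
  def resetOnStop(): Boolean = {
    val wasSet = sys.props.contains(YarnModeKey)
    // Removes only the JVM system property; an env var of the same
    // name cannot be unset from inside the process.
    System.clearProperty(YarnModeKey)
    wasSet
  }
}
```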
Github user holdenk commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143359153
@vanzin so it _seems_ like doing it in SparkContext shutdown should be
sufficient for all cases?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143325990
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143325969
Merged build triggered.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143328680
[Test build #43031 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/43031/consoleFull)
for PR 8911 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8911#discussion_r40404934
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -385,20 +385,13 @@ class SparkHadoopUtil extends Logging {
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8911#discussion_r40404998
--- Diff:
yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtilSuite.scala
---
@@ -233,4 +233,17 @@ class YarnSparkHadoopUtilSuite extends
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143084906
Merged build started.
GitHub user holdenk opened a pull request:
https://github.com/apache/spark/pull/8911
[SPARK-10812][YARN][WIP] Spark hadoop util support switching to yarn
While this is likely not a huge issue for real production systems, for test
systems which may set up a SparkContext and tear it
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143084890
Merged build triggered.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143085986
[Test build #42993 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/42993/consoleFull)
for PR 8911 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143086344
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143086342
Merged build finished. Test FAILed.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8911#issuecomment-143086340
[Test build #42993 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/42993/console)
for PR 8911 at commit