Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2401
---
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57847794
Ok, LGTM. I'm merging this into master and 1.1. Thanks @brndnmtthws.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57842699
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21258/consoleFull)
for PR 2401 at commit
[`4abaa5d`](https://github.com/a
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57842713
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/2
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57833806
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21258/consoleFull)
for PR 2401 at commit
[`4abaa5d`](https://github.com/ap
Github user brndnmtthws commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57833282
Done as per @andrewor14's suggestions.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57831156
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/2
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57830054
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/21
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57826312
Hey @brndnmtthws I left a few more minor comments but I think this is
basically ready for merge. Thanks for your changes.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57822909
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21254/consoleFull)
for PR 2401 at commit
[`54a4a06`](https://github.com/ap
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57822162
retest this please
---
Github user brndnmtthws commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57814884
That test failure appears to be unrelated.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57814712
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21249/consoleFull)
for PR 2401 at commit
[`54a4a06`](https://github.com/a
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57814725
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/21
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57807681
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21249/consoleFull)
for PR 2401 at commit
[`54a4a06`](https://github.com/ap
Github user brndnmtthws commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57806924
Fixed typos (also switched from `Math.max` to `math.max` because that seems
to be the Scala way).
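(For context: a quick illustration of the idiom referenced above, with made-up values; this is not code from the PR. `math.max` resolves through the `scala.math` package object, while `Math.max` calls `java.lang.Math` directly, and both give the same result.)

    // Both compile without imports; `math.max` is the more idiomatic Scala spelling.
    val viaScala = math.max(384, 573)  // 573, via the scala.math package object
    val viaJava  = Math.max(384, 573)  // 573, delegating to java.lang.Math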
---
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57760828
This looks very reasonable. Counting the executor's bookkeeping core
against the resources also seems much more correct than pretending it doesn't
exist like before.
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2401#discussion_r18382851
--- Diff: docs/configuration.md ---
@@ -253,6 +253,17 @@ Apart from these, the following properties are also
available, and may be useful
spark.execut
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57717093
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/21
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57717079
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21221/consoleFull)
for PR 2401 at commit
[`747b490`](https://github.com/a
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57703942
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21221/consoleFull)
for PR 2401 at commit
[`747b490`](https://github.com/ap
Github user brndnmtthws commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57703146
Updated to match YARN.
---
Github user brndnmtthws commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57700183
It's counterintuitive to the policy used elsewhere, but if it will appease
the Spark folks, I will make the change.
---
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57699365
The Yarn patch sets a default value based on the JVM heap. That works well
for a set of workloads (basically, Java/Scala apps). But the setting itself is
an absolute value.
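(To make the two models concrete, a rough sketch in Scala; the names, defaults, and numbers below are illustrative assumptions, not the actual code of either patch.)

    // Hypothetical 8192 MB executor with no user-provided override.
    val executorMemoryMb = 8192
    val userOverheadMb: Option[Int] = None  // absolute override, if the user sets one

    // YARN-style: one absolute setting whose default is derived from the heap
    // (the floor and fraction here are assumed values).
    val yarnStyle = userOverheadMb.getOrElse(math.max(384, (0.07 * executorMemoryMb).toInt))  // 573

    // Fraction-style, as originally proposed here: the fraction itself is the knob.
    val overheadFraction = 0.15
    val fractionStyle = math.max(384, (overheadFraction * executorMemoryMb).toInt)  // 1228

    // Either way, the resource requested from the cluster manager is
    // executorMemoryMb plus the overhead.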
Github user brndnmtthws commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57698498
The definition is indeed the same, but I don't see how the YARN patch
solves that better than this one.
---
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57697590
I guess it depends on what "--executor-memory" means on Mesos. On Yarn, it
means the "-Xmx" value for the JVM. So it doesn't include the memory used by
any python processes.
Github user brndnmtthws commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57697287
Mesos will indeed kill your containers as well (provided cgroup limits are
enabled). I also don't see how this would necessarily apply differently for
Python, since it
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57697092
A fraction wouldn't work well for pyspark on Yarn, because the memory used
by the python processes is not a fraction of the JVM heap. I don't know about
Mesos, but Yarn wi
Github user brndnmtthws commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57696587
As expressed elsewhere, I think the model that #2485 has doesn't make
sense. The most important knob is the overhead fraction, rather than the
minimum number of
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57694715
@brndnmtthws Thanks for adapting this to what Yarn does. One concern I have
now though is that `spark.yarn.executor.memoryOverhead` and
`spark.mesos.executor.memoryOverhead`
Github user brndnmtthws commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57676227
This code has been tested & verified on a cluster of this size:
Github user brndnmtthws commented on a diff in the pull request:
https://github.com/apache/spark/pull/2401#discussion_r18122508
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MemoryUtils.scala
---
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Soft
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2401#discussion_r18122496
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MemoryUtils.scala
---
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57032218
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/20
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57032214
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20883/consoleFull)
for PR 2401 at commit
[`b391649`](https://github.com/a
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57026573
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20883/consoleFull)
for PR 2401 at commit
[`b391649`](https://github.com/ap
Github user brndnmtthws commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57023278
Build error appears to be unrelated to my patch.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57015171
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/20
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57015165
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20869/consoleFull)
for PR 2401 at commit
[`908d90a`](https://github.com/a
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57008686
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20869/consoleFull)
for PR 2401 at commit
[`908d90a`](https://github.com/ap
Github user brndnmtthws commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57008399
I've updated the PR to more closely match what #2485 does.
I'd like to keep the fractional param. Having successfully operated
various services on Mesos in prod
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57007834
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/20
Github user brndnmtthws commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57001557
That code is still YARN-specific. Shouldn't we have common code for this?
Also, I disagree on the 7% overhead. I think 15% is a better default.
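(For scale, taking an 8192 MB executor purely as an example: a 7% overhead works out to roughly 573 MB, while 15% gives roughly 1228 MB, and the gap grows with larger executors.)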
---
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57001147
Hey @brndnmtthws have you looked at the latest changes in #2485? I think
that one's basically ready to be merged, and the discussion there is that we
want to avoid exp
Github user brndnmtthws commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-56216904
I thought there was some desire to have the same thing in #1391 as well?
Furthermore, from my experience writing frameworks, I think a much better
model is the frac
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-56216747
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20574/consoleFull)
for PR 2401 at commit
[`56988e3`](https://github.com/a
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-56216271
So, I'm a little disappointed that this doesn't at least follow the Yarn
model of "one setting that defines the overhead". Instead, it has two settings,
one for a fraction
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-56207061
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20574/consoleFull)
for PR 2401 at commit
[`56988e3`](https://github.com/ap