Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76769047
[Test build #28177 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28177/consoleFull)
for PR 4818 at commit
[`991c803`](https://githu
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76764668
I'll just remove the option then, since it doesn't seem very useful to have
an option to enable more strict matching given the discussion.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76659139
@vanzin okay so maybe set this to true then? I don't have any opinion, but
would love to get this in as it's one of the only release blockers.
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76556828
I think having the default true would be better so that it's backwards
compatible. As @sryza mentioned, YARN shouldn't really be giving you
containers smaller than you
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76520133
AFAIK this is not documented or part of the YARN interfaces/public contract:
I would prefer that Spark depend on defined interfaces which are reasonably
stable.
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76516622
I wouldn't really agree that this is a YARN implementation detail. This is
of course somewhat subjective given that YARN doesn't really document this
behavior, but, speaki
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76515153
@sryza When CPU scheduling is enabled (ref @tgravescs's comment here and in
JIRA), it must be validated.
Just as we validate memory and while prioritizing based on locali
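The validation @mridulm describes could be sketched roughly as follows. This is a minimal illustration with made-up names (`Resource`, `matchesRequest`), not the actual YarnAllocator code: memory is always checked, while vcores are only checked when the cluster actually schedules on CPU, since otherwise YARN may report 1 vcore regardless of what was requested.

```scala
// Sketch only, not the real YarnAllocator logic; Resource and the
// helper name are hypothetical.
case class Resource(memoryMb: Int, vcores: Int)

// Validate an allocated container against the request. Memory is always
// checked; vcores only when CPU scheduling is enabled.
def matchesRequest(
    allocated: Resource,
    requested: Resource,
    cpuSchedulingEnabled: Boolean): Boolean = {
  val memoryOk = allocated.memoryMb >= requested.memoryMb
  val vcoresOk = !cpuSchedulingEnabled || allocated.vcores >= requested.vcores
  memoryOk && vcoresOk
}
```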
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76514683
My opinion is that we should make the default true, as the vanilla YARN
default of `FIFOScheduler` will run into this issue (though most vendor
distributions have a better
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/4818#discussion_r2091
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
@@ -290,8 +290,19 @@ private[yarn] class YarnAllocator(
location:
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/4818#discussion_r2088
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
@@ -290,8 +290,19 @@ private[yarn] class YarnAllocator(
location:
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76502508
It seems reasonable to me to have the default of "false" and make a comment
in the release notes. No strong feelings here though.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76492244
[Test build #28095 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28095/consoleFull)
for PR 4818 at commit
[`8c9c346`](https://gith
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76492252
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/28
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76487231
No strong opinion from me on the default. Whatever people prefer.
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76487019
Looks good to me - pending addressing Tom's comment about what the default
should be.
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/4818#discussion_r25545356
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
@@ -290,8 +290,18 @@ private[yarn] class YarnAllocator(
locati
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76486649
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/28
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76486637
[Test build #28090 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28090/consoleFull)
for PR 4818 at commit
[`3359692`](https://gith
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4818#discussion_r25544599
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
@@ -290,8 +290,18 @@ private[yarn] class YarnAllocator(
location:
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/4818#discussion_r25544439
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
@@ -290,8 +290,18 @@ private[yarn] class YarnAllocator(
locati
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76481190
[Test build #28095 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28095/consoleFull)
for PR 4818 at commit
[`8c9c346`](https://githu
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4818#discussion_r25540656
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
@@ -290,8 +290,18 @@ private[yarn] class YarnAllocator(
location:
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4818#discussion_r25540573
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
@@ -290,8 +290,18 @@ private[yarn] class YarnAllocator(
location:
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/4818#discussion_r25540529
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
@@ -290,8 +290,18 @@ private[yarn] class YarnAllocator(
locati
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/4818#discussion_r25540303
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
@@ -290,8 +290,18 @@ private[yarn] class YarnAllocator(
locati
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76474129
[Test build #28090 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28090/consoleFull)
for PR 4818 at commit
[`3359692`](https://githu
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76473277
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/28
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76473405
Jenkins, retest this please.
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76471641
We don't really lose anything, as far as I can tell. That information is
only used to make sure that the allocated containers match those that were
requested, not to do an
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76471465
This is specific to vcores and not memory, IIRC.
A solution might be to check vcores returned and modify it to what we
requested if found to be 1 when flag is set (we l
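The workaround @mridulm sketches (rewrite the reported vcore count to the requested one when the flag is set) could look like this. All names here are hypothetical, for illustration only:

```scala
// Sketch only: if the flag is set and the container reports the
// placeholder value of 1 vcore while more were requested, assume it
// actually carries the requested count.
case class Resource(memoryMb: Int, vcores: Int)

def normalizeVcores(
    allocated: Resource,
    requested: Resource,
    laxMatching: Boolean): Resource = {
  if (laxMatching && allocated.vcores == 1 && requested.vcores > 1) {
    allocated.copy(vcores = requested.vcores)
  } else {
    allocated
  }
}
```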
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/4818#issuecomment-76470593
@tgravescs @mridulm
Tested:
- --executor-cores 1, no conf = passed
- --executor-cores 2, no conf = cannot allocate resources, job waits forever
- --exec
GitHub user vanzin opened a pull request:
https://github.com/apache/spark/pull/4818
[SPARK-6050] [yarn] Add config option to do lax resource matching.
Some YARN configurations return a resource structure for allocated
containers that does not match the requested resource. That me
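In spirit, the option the PR describes boils down to something like the following. The names and the memory-only comparison are illustrative, not the final API in this PR: under lax matching only memory is compared, so a container whose reported vcores differ from the request still counts as a match.

```scala
// Illustrative sketch of lax vs. strict container matching; names are
// hypothetical.
case class Resource(memoryMb: Int, vcores: Int)

def containerMatches(
    allocated: Resource,
    requested: Resource,
    laxMatching: Boolean): Boolean = {
  if (laxMatching) {
    // Ignore vcores, which some schedulers report incorrectly.
    allocated.memoryMb == requested.memoryMb
  } else {
    allocated == requested
  }
}
```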