Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/1180#discussion_r15289308
--- Diff:
yarn/common/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientSchedulerBackend.scala
---
@@ -115,7 +117,30 @@ private[spark] class
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/1180#discussion_r15289735
--- Diff:
yarn/common/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientSchedulerBackend.scala
---
@@ -115,7 +117,30 @@ private[spark] class
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1180#issuecomment-49881766
QA tests have started for PR 1180. This patch merges cleanly.
View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17038/consoleFull
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1180#issuecomment-49881861
I'll repair the little error right away.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1180#issuecomment-49882258
QA results for PR 1180:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds no public classes
For more information see test
Github user lianhuiwang commented on the pull request:
https://github.com/apache/spark/pull/1180#issuecomment-49883664
I think in a long-running application, maxNumExecutorFailures can sometimes be reached for YARN-side reasons, but YARN quickly provides Spark with new containers. Although
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1180#issuecomment-49884701
QA tests have started for PR 1180. This patch DID NOT merge cleanly!
View progress:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1180#issuecomment-49885520
QA tests have started for PR 1180. This patch merges cleanly.
View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17040/consoleFull
---
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/1180#issuecomment-49884681
@lianhuiwang I'm not sure I follow. Why would we want to kill it if you have minNumExecutors? YARN will just give you more. I understand that long running
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1180#issuecomment-49886325
QA tests have started for PR 1180. This patch merges cleanly.
View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17041/consoleFull
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1180#issuecomment-49886617
Done
---
Github user lianhuiwang commented on the pull request:
https://github.com/apache/spark/pull/1180#issuecomment-49886521
@tgravescs I think if YARN will give the application more executors, the application will continue working and it doesn't need maxNumExecutorFailures. I think
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/1180#issuecomment-49887688
The reason for maxNumExecutorFailures isn't that YARN can't give it more executors; it's because something has happened to enough of your executors that you think there is a
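To make the point above concrete, here is a minimal, hypothetical Scala sketch (not the actual Spark code from this PR) of the idea behind maxNumExecutorFailures: the application counts executor failures itself and gives up once the count exceeds the limit, regardless of whether YARN would keep supplying replacement containers. The class and method names here are illustrative assumptions.

```scala
// Hypothetical sketch of executor-failure accounting, not Spark's implementation.
class ExecutorFailureTracker(maxNumExecutorFailures: Int) {
  private var numExecutorsFailed = 0

  // Called when YARN reports that an executor's container exited abnormally.
  def recordFailure(): Unit = {
    numExecutorsFailed += 1
  }

  // True once enough executors have died that the application should abort,
  // even though YARN would happily hand out more containers: repeated failures
  // usually point at a problem with the app itself, not a transient YARN issue.
  def shouldAbort: Boolean = numExecutorsFailed > maxNumExecutorFailures
}

val tracker = new ExecutorFailureTracker(maxNumExecutorFailures = 3)
(1 to 4).foreach(_ => tracker.recordFailure())
println(tracker.shouldAbort)
```

This is why the limit exists even when YARN can re-provision: it distinguishes "YARN hiccup, containers come back" from "the job itself keeps killing its executors".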