Github user devaraj-kavali closed the pull request at:
https://github.com/apache/spark/pull/12571
---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-215654328
Thank you @devaraj-kavali for implementing this to show its impact, but I
think you can close this PR now.
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-215530798
Thanks for pointing that out. I made similar comments on the JIRA; in case others don't agree with me, we can discuss more.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-215521904
I'm OK with not doing this; I think the contributor was just following up on an old idea from Matei.
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-215521675
I understand the argument that we want the best user experience, and I'm not against the settings themselves; I just think the benefit isn't worth the cost here.
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-215150523
re: perm gen, what Sean said. Spark just fails with the default value.
I don't feel strongly about adding this option or not. I have never seen
anyone explicitly
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-215125053
It's debatable, yeah. This one was suggested by @mateiz a long time ago.
Max perm size was set because Spark jobs would generally fail with the default
JVM settings. (No
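The MaxPermSize default mentioned here can already be overridden per application. A minimal sketch, assuming a Java 7 era JVM (permanent generation and `-XX:MaxPermSize` were removed in Java 8 in favor of Metaspace); the class name and jar are placeholders:

```shell
# Override the permanent-generation cap for driver and executors,
# instead of relying on Spark's built-in default.
spark-submit \
  --class com.example.MyApp \
  --conf "spark.driver.extraJavaOptions=-XX:MaxPermSize=256m" \
  --conf "spark.executor.extraJavaOptions=-XX:MaxPermSize=256m" \
  my-app.jar
```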
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-215089176
I understand that it can be overridden but I'm not sure we should be in the
business of setting the GC flags for people. You set them to one value now,
someone comes
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/12571#discussion_r61209327
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/SparkDeploySchedulerBackend.scala ---
@@ -66,12 +66,20 @@ private[spark] class Spark
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-214982150
Thanks @tgravescs for the comment. Users can still specify these GC params as part of the Java opts. If the user doesn't specify these GC params, only then we are
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-214878609
Why are we adding this option at all here? Users can specify it, or any other options, in the Java options. Every application could be different and want to set this
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/12571#discussion_r61133560
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/SparkDeploySchedulerBackend.scala ---
@@ -66,12 +66,20 @@ private[spark] class SparkDeploySc
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/12571#discussion_r60826861
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/SparkDeploySchedulerBackend.scala ---
@@ -66,12 +66,20 @@ private[spark] class SparkDeploySc
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-213522339
@srowen I have made the changes; please have a look at this. Thanks
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-212841070
Thanks @srowen for checking this immediately. I will make the changes as per your explanation.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-212832752
(PS I think it would be useful to perhaps set these JVM flags, if not
already set, to some more conservative value. I think that's the idea in the
JIRA.)
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-212832596
You can already specify this option, like a bunch of other potentially
relevant JVM flags, via the JVM flags option. I don't think we need to plumb
through a special val
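The existing mechanism Sean refers to is the generic extra-Java-options configs: arbitrary JVM flags, GC-related or otherwise, already flow through them with no dedicated Spark setting. A sketch, with a placeholder jar name:

```shell
# Any HotSpot flag can be passed straight through to executor JVMs,
# so no special per-flag config needs to be plumbed through Spark.
spark-submit \
  --conf "spark.executor.extraJavaOptions=-XX:+UseG1GC -verbose:gc" \
  my-app.jar
```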
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-212827647
Can one of the admins verify this patch?
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/12571
[SPARK-1989] [CORE] Exit executors faster if they get into a cycle of heavy
GC
## What changes were proposed in this pull request?
Added spark.executor.gcTimeLimit config for gettin
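The PR description is truncated above, so the exact wiring of the proposed config is not visible here; but the HotSpot mechanism it builds on is the GC overhead limit, where the JVM throws `OutOfMemoryError: GC overhead limit exceeded` once more than `GCTimeLimit` percent of total time goes to GC while less than `GCHeapFreeLimit` percent of the heap is recovered. A sketch of achieving a similar "exit faster under heavy GC" effect with the stock flags; the values and jar name are illustrative only:

```shell
# Make executor JVMs give up sooner when nearly all time is spent in GC,
# using HotSpot's built-in GC overhead limit (defaults: 98% time / 2% heap).
spark-submit \
  --conf "spark.executor.extraJavaOptions=-XX:+UseGCOverheadLimit -XX:GCTimeLimit=90 -XX:GCHeapFreeLimit=5" \
  my-app.jar
```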