Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/13130#issuecomment-222502337
The size of a JobConf is about 124k and the memory of my driver is 10g (> 124k * 1 = 1.18g), so it works. @rxin
---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/13130#issuecomment-221161008
Did it work for you when you changed it to 1 rather than 1000?
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/13130#issuecomment-221146422
I'm confused about it too
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/13130#issuecomment-220497728
Do you know why it OOMs when it is softValues?
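For context on the softValues question: Guava's softValues() holds map values through java.lang.ref.SoftReference, and the JVM is required to clear soft references before throwing OutOfMemoryError, so an OOM with soft values usually means strong references elsewhere are pinning the cached JobConfs. A minimal standard-library sketch of the mechanism (illustrative names only, not Spark's actual code):

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Sketch of a soft-value map like the one MapMaker().softValues() builds:
// values are reachable only through SoftReference, so the GC may discard
// them under memory pressure (and must discard them before an OOM).
public class SoftValueMap {
    static final Map<String, SoftReference<byte[]>> cache = new HashMap<>();

    static void put(String key, byte[] value) {
        cache.put(key, new SoftReference<>(value));
    }

    static byte[] get(String key) {
        SoftReference<byte[]> ref = cache.get(key);
        return ref == null ? null : ref.get(); // null once the GC has cleared it
    }

    public static void main(String[] args) {
        put("jobConf-1", new byte[128 * 1024]); // stand-in for a ~124 KB JobConf
        // While heap is plentiful the soft reference stays intact.
        System.out.println(get("jobConf-1") != null);
    }
}
```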
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/13130#issuecomment-220497354
cc @yhuai @srowen
---
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/13130#issuecomment-220273234
I was following other code that uses the maximumSize method @akohli
```
/** A cache of Spark SQL data source tables that have been accessed. */
protected[hive]
```
---
Github user akohli commented on the pull request:
https://github.com/apache/spark/pull/13130#issuecomment-219998908
Sure (I get that), but 1000 is an arbitrary number that may or may not be sufficient. Why not 10? Why not 1?
---
If your project is set up for it, you can reply to t
Github user DoingDone9 commented on the pull request:
https://github.com/apache/spark/pull/13130#issuecomment-219905068
It's OK. If the size exceeds 1000, new values will replace the old values.
@akohli
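The replace-on-overflow behavior described here can be sketched with a plain LRU map from the standard library, standing in for Guava's CacheBuilder.maximumSize (a small bound of 3 is used so eviction is easy to observe; the names are illustrative, not Spark's):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A LinkedHashMap in access-order mode that drops the eldest entry once
// the bound is exceeded, mimicking maximumSize(1000) eviction semantics.
public class BoundedConfCache {
    static final int MAX_ENTRIES = 3; // 1000 in the actual proposal

    static final Map<String, String> cache =
        new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > MAX_ENTRIES;
            }
        };

    public static void main(String[] args) {
        cache.put("conf1", "a");
        cache.put("conf2", "b");
        cache.put("conf3", "c");
        cache.put("conf4", "d"); // evicts conf1, the least recently used entry
        System.out.println(cache.containsKey("conf1")); // false
        System.out.println(cache.size());               // 3
    }
}
```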
---
Github user akohli commented on the pull request:
https://github.com/apache/spark/pull/13130#issuecomment-219750370
Curious about a couple of things. Firstly, you are using CacheBuilder with a cache size of 1000; is that sufficient? Wouldn't it be better to catch an exception or detect
Github user DoingDone9 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13130#discussion_r63453073
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -363,7 +363,7 @@ private[spark] object HadoopRDD extends Logging {
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13130#discussion_r63334846
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -363,7 +363,7 @@ private[spark] object HadoopRDD extends Logging {
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/13130#issuecomment-219383243
Can one of the admins verify this patch?
---
GitHub user DoingDone9 opened a pull request:
https://github.com/apache/spark/pull/13130
[SPARK-15340][SQL] Limit the size of the map used to cache JobConfs to avoid OOM
# What changes were proposed in this pull request?
Limit the size of the map used to cache JobConfs in SparkEnv.
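As a back-of-envelope check of the proposed bound, using the ~124 KB per-JobConf size reported in the thread (both numbers are rough estimates from the discussion, not measurements):

```java
// Worst-case footprint of a size-bounded JobConf cache: with 1000 entries
// of ~124 KiB each, the cache tops out around 121 MB, comfortably below
// a multi-gigabyte driver heap.
public class CacheFootprint {
    static long footprintBytes(long entryBytes, long maxEntries) {
        return entryBytes * maxEntries;
    }

    public static void main(String[] args) {
        long jobConfBytes = 124L * 1024;                     // ~124 KiB per JobConf
        long worstCase = footprintBytes(jobConfBytes, 1000); // maximumSize(1000)
        System.out.println(worstCase / (1024 * 1024) + " MB"); // prints "121 MB"
    }
}
```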