[ https://issues.apache.org/jira/browse/SPARK-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838681#comment-15838681 ]

Sven Krasser commented on SPARK-4049:
-------------------------------------
[~srowen], see my comment from before:
{quote}
As a user, when
[ https://issues.apache.org/jira/browse/SPARK-12675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15224914#comment-15224914 ]

Sven Krasser commented on SPARK-12675:
--------------------------------------
This problem also occurs in 1.6.1.
{noformat}
16/04/04
[ https://issues.apache.org/jira/browse/SPARK-14138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15214787#comment-15214787 ]

Sven Krasser commented on SPARK-14138:
--------------------------------------
Thanks for the speedy fix! Will this make it into 1.6.2?
Sven Krasser created SPARK-14138:
---------------------------------
Summary: Generated SpecificColumnarIterator code can exceed JVM size limit for cached DataFrames
Key: SPARK-14138
URL: https://issues.apache.org/jira/browse/SPARK-14138
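The failure mode named in this summary comes from a hard JVM constraint: a single method may hold at most 65535 bytes of bytecode, while the code Spark generates for a cached DataFrame's columnar iterator grows roughly linearly with the number of columns. The sketch below is only a toy model of that scaling; the per-column and overhead byte costs are made-up illustration values, not measured Spark constants.

```python
# Toy model: why very wide cached DataFrames can break code generation.
# The JVM caps a single method at 65535 bytes of bytecode; generated
# iterator code grows roughly linearly with the column count.

JVM_METHOD_BYTECODE_LIMIT = 65535  # JVM spec: max bytecode bytes per method
BYTES_PER_COLUMN = 40              # hypothetical cost per column accessor
FIXED_OVERHEAD = 500               # hypothetical boilerplate cost

def estimated_method_size(num_columns: int) -> int:
    """Rough size estimate of the generated iterator method."""
    return FIXED_OVERHEAD + num_columns * BYTES_PER_COLUMN

def fits_in_one_method(num_columns: int) -> bool:
    """True if the estimated generated code stays under the JVM limit."""
    return estimated_method_size(num_columns) <= JVM_METHOD_BYTECODE_LIMIT

print(fits_in_one_method(100))    # narrow schema: fits
print(fits_in_one_method(5000))   # very wide schema: exceeds the limit
```

Under this model a schema only a few thousand columns wide already overruns the limit, which matches the shape of the report: the width of the cached DataFrame, not its row count, is what drives the failure.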
[ https://issues.apache.org/jira/browse/SPARK-12675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15150872#comment-15150872 ]

Sven Krasser commented on SPARK-12675:
--------------------------------------
More findings (Spark 1.6.0): For our initial 200 partition
[ https://issues.apache.org/jira/browse/SPARK-12675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15142120#comment-15142120 ]

Sven Krasser commented on SPARK-12675:
--------------------------------------
I'm running into the same issue (same exception) running
[ https://issues.apache.org/jira/browse/SPARK-5395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14295409#comment-14295409 ]

Sven Krasser commented on SPARK-5395:
-------------------------------------
Thanks Davies!
Large number of Python workers
[ https://issues.apache.org/jira/browse/SPARK-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296037#comment-14296037 ]

Sven Krasser commented on SPARK-4049:
-------------------------------------
I'm also seeing this for a 2x replicated RDD
[ https://issues.apache.org/jira/browse/SPARK-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296037#comment-14296037 ]

Sven Krasser edited comment on SPARK-4049 at 1/29/15 12:07 AM:
---------------------------------------------------------------
[ https://issues.apache.org/jira/browse/SPARK-5051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293916#comment-14293916 ]

Sven Krasser commented on SPARK-5051:
-------------------------------------
I assume this is related to this thread:
[ https://issues.apache.org/jira/browse/SPARK-5395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14294570#comment-14294570 ]

Sven Krasser commented on SPARK-5395:
-------------------------------------
Some new findings: I can trigger the problem
[ https://issues.apache.org/jira/browse/SPARK-5395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14293843#comment-14293843 ]

Sven Krasser commented on SPARK-5395:
-------------------------------------
I've definitely seen this behavior when adding
[ https://issues.apache.org/jira/browse/SPARK-5395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14292913#comment-14292913 ]

Sven Krasser commented on SPARK-5395:
-------------------------------------
Some additional findings from my side: I've
[ https://issues.apache.org/jira/browse/SPARK-5395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14292630#comment-14292630 ]

Sven Krasser commented on SPARK-5395:
-------------------------------------
[~mkman84], do you also see this for both
[ https://issues.apache.org/jira/browse/SPARK-5395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sven Krasser updated SPARK-5395:
--------------------------------
Description: During job execution a large number of Python workers accumulate, eventually causing
Sven Krasser created SPARK-5395:
--------------------------------
Summary: Large number of Python workers causing resource depletion
Key: SPARK-5395
URL: https://issues.apache.org/jira/browse/SPARK-5395
Project: Spark
Issue
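For context on the resource-depletion pattern this ticket describes, Spark exposes two documented knobs that govern how PySpark worker processes are spawned and sized. Whether these settings mitigate the specific accumulation reported here is an assumption, not a confirmed fix; they are shown only as the relevant configuration surface, in spark-defaults.conf form:

```
# Reuse Python worker processes across tasks instead of forking new ones
# (documented Spark setting; default is true in releases of this era).
spark.python.worker.reuse    true

# Cap the memory each Python worker may use during aggregation before
# spilling to disk (documented Spark setting; default 512m).
spark.python.worker.memory   512m
```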
[ https://issues.apache.org/jira/browse/SPARK-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14290224#comment-14290224 ]

Sven Krasser commented on SPARK-5209:
-------------------------------------
Thanks Amo! Assuming this is the root cause,
[ https://issues.apache.org/jira/browse/SPARK-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14290298#comment-14290298 ]

Sven Krasser commented on SPARK-4779:
-------------------------------------
Here's a potentially related issue occurring on
Sven Krasser created SPARK-5392:
--------------------------------
Summary: Shuffle spill size is shown as negative
Key: SPARK-5392
URL: https://issues.apache.org/jira/browse/SPARK-5392
Project: Spark
Issue Type: Bug
[ https://issues.apache.org/jira/browse/SPARK-5392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sven Krasser updated SPARK-5392:
--------------------------------
Attachment: Screen Shot 2015-01-23 at 5.13.55 PM.png
Shuffle spill size is shown as negative
[ https://issues.apache.org/jira/browse/SPARK-5051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14290271#comment-14290271 ]

Sven Krasser commented on SPARK-5051:
-------------------------------------
Do you see
[ https://issues.apache.org/jira/browse/SPARK-5392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sven Krasser updated SPARK-5392:
--------------------------------
Description: The Shuffle Spill (Memory) metric on the Stage Detail Web UI shows as negative for
Sven Krasser created SPARK-5209:
--------------------------------
Summary: Jobs fail with unexpected value exception in certain environments
Key: SPARK-5209
URL: https://issues.apache.org/jira/browse/SPARK-5209
Project: Spark