
Apache Spark reassigned SPARK-1989:
-----------------------------------

    Assignee: Apache Spark

> Exit executors faster if they get into a cycle of heavy GC
> ----------------------------------------------------------
>
>                 Key: SPARK-1989
>                 URL: https://issues.apache.org/jira/browse/SPARK-1989
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>            Reporter: Matei Zaharia
>            Assignee: Apache Spark
>
> I've seen situations where an application is allocating too much memory 
> across its tasks + cache to make progress, but the JVM gets into a cycle 
> where it repeatedly runs full GCs, frees only a small fraction of the heap, 
> and keeps going instead of giving up. This then leads to timeouts and 
> confusing error messages. It would be better to crash with an 
> OutOfMemoryError sooner. The JVM has options to support this: 
> http://java.dzone.com/articles/tracking-excessive-garbage.
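> For reference, the relevant HotSpot flags (defaults shown; exact values 
> may vary by JVM version) are:
>     -XX:+UseGCOverheadLimit    (on by default with the parallel collector)
>     -XX:GCTimeLimit=98         (OOM if more than 98% of time goes to GC...)
>     -XX:GCHeapFreeLimit=2      (...while less than 2% of the heap is freed)
> When both limits are exceeded, the JVM throws 
> "java.lang.OutOfMemoryError: GC overhead limit exceeded" instead of 
> limping along.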
> The right solution would probably be:
> - Add some config options, used by spark-submit, to set -XX:GCTimeLimit and 
> -XX:GCHeapFreeLimit with more conservative values than the defaults (e.g. a 
> 90% time limit and a 5% free limit)
> - Make sure we pass these into the Java options for executors in each 
> deployment mode (see the sketch below)
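> As a rough sketch of the wiring (this reuses the existing 
> spark.executor.extraJavaOptions setting; the dedicated config options 
> proposed above don't exist yet), an operator can already do roughly this 
> in conf/spark-defaults.conf:
>     spark.executor.extraJavaOptions  -XX:+UseGCOverheadLimit -XX:GCTimeLimit=90 -XX:GCHeapFreeLimit=5
> Each deployment mode would then need to merge the new defaults into 
> whatever extraJavaOptions the user already set, rather than overwrite them.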



