[ https://issues.apache.org/jira/browse/SPARK-1392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Pat McDonough updated SPARK-1392:
---------------------------------

    Description: 
Using the spark-0.9.0 Hadoop2 binary from the project download page, running the spark-shell locally in the out-of-the-box configuration, and attempting to cache all the attached data, Spark OOMs with:

java.lang.OutOfMemoryError: GC overhead limit exceeded

You can work around the issue by either decreasing spark.storage.memoryFraction or increasing SPARK_MEM.

  was:
Using the spark-0.9.0 Hadoop2 binary from the project download page, running the spark-shell locally in the out-of-the-box configuration, and attempting to cache all the attached data, Spark OOMs with:

java.lang.OutOfMemoryError: GC overhead limit exceeded

You can work around the issue by either decreasing {{spark.storage.memoryFraction}} or increasing {{SPARK_MEM}}.


> Local spark-shell Runs Out of Memory With Default Settings
> ----------------------------------------------------------
>
>                 Key: SPARK-1392
>                 URL: https://issues.apache.org/jira/browse/SPARK-1392
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 0.9.0
>         Environment: OS X 10.9.2, Java 1.7.0_51, Scala 2.10.3
>            Reporter: Pat McDonough
>
> Using the spark-0.9.0 Hadoop2 binary from the project download page, running
> the spark-shell locally in the out-of-the-box configuration, and attempting
> to cache all the attached data, Spark OOMs with:
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> You can work around the issue by either decreasing
> spark.storage.memoryFraction or increasing SPARK_MEM.
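For reference, a minimal sketch of how one might apply either workaround when launching the 0.9.0 shell from the distribution root. The 4g heap and the 0.5 fraction are example values, not taken from this ticket:

{code}
# Sketch only, assuming a spark-0.9.0 binary distribution; values are examples.

# Raise the shell JVM's heap (SPARK_MEM is the 0.9.x env var for local mode):
SPARK_MEM=4g ./bin/spark-shell

# ...or keep the default heap and shrink the block store's share of it
# (spark.storage.memoryFraction defaults to 0.6 in 0.9.0):
SPARK_JAVA_OPTS="-Dspark.storage.memoryFraction=0.5" ./bin/spark-shell
{code}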