[ https://issues.apache.org/jira/browse/SPARK-2650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093785#comment-14093785 ]
Apache Spark commented on SPARK-2650:
-------------------------------------

User 'liancheng' has created a pull request for this issue:

https://github.com/apache/spark/pull/1901

> Caching tables larger than memory causes OOMs
> ---------------------------------------------
>
>                 Key: SPARK-2650
>                 URL: https://issues.apache.org/jira/browse/SPARK-2650
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.0.0, 1.0.1
>            Reporter: Michael Armbrust
>            Assignee: Michael Armbrust
>            Priority: Critical
>             Fix For: 1.1.0
>
>
> The logic for setting up the initial column buffers is different for Spark
> SQL compared to Shark, and I'm seeing OOMs when caching tables that are
> larger than available memory (where Shark was okay).
> Two suspicious things: the initialSize is always set to 0, so we always go
> with the default. And the default looks like it was copied from code like
> 10 * 1024 * 1024... but in Spark SQL it's 10 * 102 * 1024.
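For illustration, a minimal Scala sketch of the constant mix-up the report suspects; the object and value names here are hypothetical and are not taken from the Spark SQL source:

// Hypothetical names; a sketch of the suspected default-buffer-size typo,
// not the actual Spark SQL ColumnBuilder code.
object ColumnBufferSizeCheck extends App {
  // Value the report suspects Spark SQL uses as the default column buffer size
  val suspectedDefault: Int = 10 * 102 * 1024   // 1,044,480 bytes (~1 MB)

  // Value it was presumably meant to be, matching the 10 * 1024 * 1024 pattern
  val intendedDefault: Int = 10 * 1024 * 1024   // 10,485,760 bytes (10 MB)

  println(s"suspected default: $suspectedDefault bytes")
  println(s"intended default:  $intendedDefault bytes")
}

Under this reading, the Spark SQL default would come out to roughly 1 MB rather than the intended 10 MB.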