[jira] [Updated] (SPARK-2650) Wrong initial sizes for in-memory column buffers

2014-08-05 Thread Cheng Lian (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-2650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cheng Lian updated SPARK-2650:
--

Target Version/s: 1.2.0  (was: 1.1.0)

> Wrong initial sizes for in-memory column buffers
> 
>
> Key: SPARK-2650
> URL: https://issues.apache.org/jira/browse/SPARK-2650
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Michael Armbrust
>Assignee: Cheng Lian
>Priority: Critical
>
> The logic for setting up the initial column buffers is different in Spark 
> SQL compared to Shark, and I'm seeing OOMs when caching tables that are 
> larger than available memory (where Shark was okay).
> Two suspicious things: the initialSize is always set to 0, so we always go 
> with the default, and the default looks like it was copied from code like 
> 10 * 1024 * 1024... but in Spark SQL it's 10 * 102 * 1024.
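
A quick back-of-the-envelope check (plain Python, not Spark's actual code; the variable names below are made up for illustration) shows how far apart the two constants are:

```python
# The constant Shark-style code reportedly uses: 10 MiB.
shark_style_default = 10 * 1024 * 1024

# The constant reportedly found in Spark SQL, with the suspected
# "102" typo in place of "1024": roughly 1 MiB.
spark_sql_default = 10 * 102 * 1024

print(shark_style_default)   # 10485760
print(spark_sql_default)     # 1044480

# The suspected typo makes the default initial buffer about 10x smaller.
print(shark_style_default / spark_sql_default)
```

So if "102" is indeed a typo for "1024", Spark SQL's default initial buffer is roughly a tenth of the intended size, which would change how often buffers need to grow when caching large tables.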



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-2650) Wrong initial sizes for in-memory column buffers

2014-07-23 Thread Michael Armbrust (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-2650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Armbrust updated SPARK-2650:


Target Version/s: 1.1.0



