[jira] [Updated] (SPARK-1379) Calling .cache() on a SchemaRDD should do something more efficient than caching the individual row objects.

2014-09-17 Thread Michael Armbrust (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Armbrust updated SPARK-1379:
------------------------------------
    Target Version/s: 1.2.0
       Fix Version/s: (was: 1.2.0)
            Assignee: Michael Armbrust

> Calling .cache() on a SchemaRDD should do something more efficient than 
> caching the individual row objects.
> ---------------------------------------------------------------------------
>
> Key: SPARK-1379
> URL: https://issues.apache.org/jira/browse/SPARK-1379
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Reporter: Michael Armbrust
>Assignee: Michael Armbrust
>
> Since rows aren't black boxes, we could use InMemoryColumnarTableScan.  This 
> would significantly reduce GC pressure on the workers.
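The quoted description above is the whole proposal, so a short usage sketch may help for context: on the 1.2-era API, columnar storage was reached through `SQLContext.cacheTable` (which plans an `InMemoryColumnarTableScan`), while calling `.cache()` on the SchemaRDD itself still stored one Row object per record. This is a hedged, REPL-style sketch, not code from the issue; it needs a live Spark runtime, and the `Person` schema and `people` table name are made up for illustration:

```scala
// Sketch against the Spark 1.2-era API; requires a running Spark context.
// The Person schema and the "people" table name are hypothetical.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

case class Person(name: String, age: Int)

val sc = new SparkContext(new SparkConf().setAppName("cache-sketch").setMaster("local[*]"))
val sqlContext = new SQLContext(sc)
import sqlContext.createSchemaRDD  // implicitly turns RDD[Person] into a SchemaRDD

val people = sc.parallelize(Seq(Person("alice", 30), Person("bob", 25)))
people.registerTempTable("people")

// Plain RDD-style caching: every Row is kept as its own JVM object,
// which is what this issue flags as a source of GC pressure.
// people.cache()

// Columnar caching via InMemoryColumnarTableScan: values are packed into
// per-column buffers, so far fewer objects reach the garbage collector.
sqlContext.cacheTable("people")
```

The issue's point is that because a SchemaRDD knows its rows' structure (they "aren't black boxes"), `.cache()` can take the second path automatically instead of the first.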



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-1379) Calling .cache() on a SchemaRDD should do something more efficient than caching the individual row objects.

2014-09-15 Thread Patrick Wendell (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell updated SPARK-1379:
-----------------------------------
    Fix Version/s: (was: 1.1.0)
                   1.2.0

> Calling .cache() on a SchemaRDD should do something more efficient than 
> caching the individual row objects.
> ---------------------------------------------------------------------------
>
> Key: SPARK-1379
> URL: https://issues.apache.org/jira/browse/SPARK-1379
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Reporter: Michael Armbrust
> Fix For: 1.2.0
>
>
> Since rows aren't black boxes, we could use InMemoryColumnarTableScan.  This 
> would significantly reduce GC pressure on the workers.


