[ https://issues.apache.org/jira/browse/SPARK-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14325388#comment-14325388 ]

Kay Ousterhout edited comment on SPARK-4912 at 2/18/15 4:23 AM:
----------------------------------------------------------------

Is it possible to backport this to 1.2?  It fixes 2 annoying issues:

(1) If you do:

cache table foo as ....;
cache table foo;

The second cache table statement creates a second, new RDD, so the first cached 
RDD is stuck in memory and can't be deleted ("uncache foo" only removes the 
second RDD; the first one is still there).

(2)

cache table foo as ...;
drop table foo;

Leaves foo in memory (and, as above, now undeletable).
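To make the first issue concrete, here is a minimal Python sketch of the leak pattern, under a deliberately simplified model: a hypothetical cache manager keyed by table name, where re-caching overwrites the name-to-RDD mapping while the first materialized RDD stays pinned in memory. The class and method names are illustrative, not Spark's actual internals.

```python
# Hypothetical, simplified model of the double-cache leak described above.
# Not Spark's real cache manager; names here are illustrative only.

class CachedRDD:
    def __init__(self, name, version):
        self.name = name
        self.version = version

class SimpleCacheManager:
    def __init__(self):
        self.cached = {}     # table name -> latest CachedRDD
        self.in_memory = []  # everything ever materialized in memory

    def cache_table(self, name):
        rdd = CachedRDD(name, version=len(self.in_memory) + 1)
        self.in_memory.append(rdd)  # materialized in executor memory
        self.cached[name] = rdd     # overwrites any previous mapping

    def uncache_table(self, name):
        rdd = self.cached.pop(name, None)
        if rdd in self.in_memory:
            self.in_memory.remove(rdd)  # only the *latest* RDD is freed

mgr = SimpleCacheManager()
mgr.cache_table("foo")    # cache table foo as ...
mgr.cache_table("foo")    # cache table foo  -> second, new RDD
mgr.uncache_table("foo")  # uncache foo -> removes only the second RDD
leaked = [r.version for r in mgr.in_memory]
print(leaked)  # the first RDD is still stuck in memory: [1]
```

The second issue (drop table foo leaving foo cached) follows the same pattern: dropping the table removes the metastore entry without going through the uncache path at all.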


was (Author: kayousterhout):
Is it possible to backport this to 1.2?  It fixes an annoying issue where if 
you do:

cache table foo as ....;
cache table foo;

The second cache table statement creates a second, new RDD, so the first cached 
RDD is stuck in memory and can't be deleted ("uncache foo" only removes the 
second RDD; the first one is still there).

> Persistent data source tables
> -----------------------------
>
>                 Key: SPARK-4912
>                 URL: https://issues.apache.org/jira/browse/SPARK-4912
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>            Reporter: Michael Armbrust
>            Assignee: Michael Armbrust
>            Priority: Blocker
>             Fix For: 1.3.0
>
>
> It would be good if tables created through the new data sources API could be 
> persisted to the Hive metastore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
