[ https://issues.apache.org/jira/browse/SPARK-9588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shenghu Yang updated SPARK-9588:
--------------------------------
    Description: 
In Spark 1.4, we can only do 'cache table <table_name>'. However, if we have a 
table that gets a new partition periodically, say every 10 minutes, we have to 
'uncache' and then 'cache' the whole table, which takes a long time.

Things would be much faster if we could do:
(1) cache table <table_name> partition <newest_partition>
(2) uncache table <table_name> partition <oldest_partition>

This way we would always have a sliding window of cached data.
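
For context, the closest workaround today is to re-cache a temporary table 
covering the desired window. Below is a minimal sketch against the Spark 1.4 
Scala API; the table 'events' and partition column 'dt' are hypothetical names, 
not part of this ticket. Note that every refresh re-scans and re-caches the 
entire window, which is exactly the cost partition-level cache/uncache would 
avoid.

// Minimal sketch of the Spark 1.4 workaround (not this ticket's proposal).
// The table name "events" and partition column "dt" are hypothetical.
import scala.util.Try
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object SlidingWindowCache {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("sliding-window-cache"))
    val sqlContext = new HiveContext(sc)

    // Call this each time a new partition lands: rebuild a temp table that
    // covers only the partitions we want resident, then cache it. The whole
    // window is re-scanned and re-cached on every refresh.
    def refreshWindow(oldest: String, newest: String): Unit = {
      Try(sqlContext.uncacheTable("events_window")) // ignore if not cached yet
      sqlContext.sql(
        s"SELECT * FROM events WHERE dt BETWEEN '$oldest' AND '$newest'"
      ).registerTempTable("events_window")
      sqlContext.cacheTable("events_window")
    }

    refreshWindow("2015-08-03-1150", "2015-08-03-1200")
  }
}

With the proposed statements, the same refresh would touch only one partition 
on each side of the window instead of rebuilding all of it.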


> Spark SQL cache: partition-level cache eviction
> -----------------------------------------------
>
>                 Key: SPARK-9588
>                 URL: https://issues.apache.org/jira/browse/SPARK-9588
>             Project: Spark
>          Issue Type: Improvement
>            Reporter: Shenghu Yang
>


