[ https://issues.apache.org/jira/browse/SPARK-21121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Oleg Danilov updated SPARK-21121:
---------------------------------
    Description:
Currently, "CACHE TABLE" always uses the default MEMORY_AND_DISK storage level. We could add the ability to specify it via a configuration variable, say spark.sql.inMemoryColumnarStorage.level. This would give users a chance to fit their data into memory by using the MEMORY_AND_DISK_SER storage level.
Going to submit a PR for this change.

  was:
Currently, "CACHE TABLE" always uses the default MEMORY_AND_DISK storage level. We could add the ability to specify it via a configuration variable, say spark.sql.inMemoryColumnarStorage.compressed. This would give users a chance to fit their data into memory by using the MEMORY_AND_DISK_SER storage level.
Going to submit a PR for this change.

> Set up StorageLevel for CACHE TABLE command
> -------------------------------------------
>
>                 Key: SPARK-21121
>                 URL: https://issues.apache.org/jira/browse/SPARK-21121
>             Project: Spark
>          Issue Type: New Feature
>          Components: SQL
>    Affects Versions: 2.1.1
>            Reporter: Oleg Danilov
>            Priority: Minor
>
> Currently, "CACHE TABLE" always uses the default MEMORY_AND_DISK storage level. We could add the ability to specify it via a configuration variable, say spark.sql.inMemoryColumnarStorage.level. This would give users a chance to fit their data into memory by using the MEMORY_AND_DISK_SER storage level.
> Going to submit a PR for this change.
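For illustration only (not part of the ticket), a minimal Scala sketch of the workaround available today and of how the proposed setting might be used. The table name my_table is made up, and the spark.conf.set call against spark.sql.inMemoryColumnarStorage.level is an assumption about how the PR would expose the setting:

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.storage.StorageLevel

  val spark = SparkSession.builder().appName("cache-table-level").getOrCreate()

  // Today: CACHE TABLE always caches with the default MEMORY_AND_DISK level.
  // spark.sql("CACHE TABLE my_table")

  // Existing workaround: persist the table's DataFrame with an explicit
  // serialized level instead, trading some CPU for a smaller memory footprint.
  spark.table("my_table").persist(StorageLevel.MEMORY_AND_DISK_SER).count()

  // Hypothetical usage once the proposed setting exists (name assumed from this ticket):
  // spark.conf.set("spark.sql.inMemoryColumnarStorage.level", "MEMORY_AND_DISK_SER")
  // spark.sql("CACHE TABLE my_table")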