Github user watermen commented on the issue:

    https://github.com/apache/carbondata/pull/1245
  
    @jackylk Agreed with @xuchuanyin. Spark's storage levels are meant to 
provide different trade-offs between memory usage and CPU efficiency, so 
different environments call for different storage levels. Here we'd better 
add a configuration named 'storage_level' whose default value is 
MEMORY_ONLY (the same as Spark's default). More details on storage levels 
are available on the Spark website: 
http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence
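
    A minimal sketch of the idea (not the actual patch): read a user-facing 
property and resolve it to a Spark StorageLevel, falling back to MEMORY_ONLY. 
The property key 'carbon.storage.level' and the helper object are hypothetical; 
only StorageLevel.fromString and RDD.persist are real Spark APIs.

```scala
import java.util.Properties

import org.apache.spark.rdd.RDD
import org.apache.spark.storage.StorageLevel

object StorageLevelConf {

  // Hypothetical configuration key; MEMORY_ONLY mirrors Spark's own default for rdd.cache().
  val STORAGE_LEVEL_KEY = "carbon.storage.level"
  val STORAGE_LEVEL_DEFAULT = "MEMORY_ONLY"

  // Resolve the configured name (e.g. MEMORY_ONLY, MEMORY_AND_DISK, DISK_ONLY)
  // to a Spark StorageLevel, using the default when the property is unset.
  def resolve(conf: Properties): StorageLevel = {
    val name = conf.getProperty(STORAGE_LEVEL_KEY, STORAGE_LEVEL_DEFAULT)
    StorageLevel.fromString(name.trim.toUpperCase)
  }

  // Persist an RDD with the user-configured level instead of a hard-coded one.
  def persistWithConfiguredLevel[T](rdd: RDD[T], conf: Properties): RDD[T] =
    rdd.persist(resolve(conf))
}
```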
 

