Repository: spark
Updated Branches:
  refs/heads/master 19a8802e7 -> f7a41a0e7


[SPARK-4916][SQL][DOCS]Update SQL programming guide about cache section

`SchemaRDD.cache()` now uses in-memory columnar storage.

Author: luogankun <luogan...@gmail.com>

Closes #3759 from luogankun/SPARK-4916 and squashes the following commits:

7b39864 [luogankun] [SPARK-4916]Update SQL programming guide
6018122 [luogankun] Merge branch 'master' of https://github.com/apache/spark into SPARK-4916
0b93785 [luogankun] [SPARK-4916]Update SQL programming guide
99b2336 [luogankun] [SPARK-4916]Update SQL programming guide


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f7a41a0e
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/f7a41a0e
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/f7a41a0e

Branch: refs/heads/master
Commit: f7a41a0e79561a722e41800257dca886732ccaad
Parents: 19a8802
Author: luogankun <luogan...@gmail.com>
Authored: Tue Dec 30 12:17:49 2014 -0800
Committer: Michael Armbrust <mich...@databricks.com>
Committed: Tue Dec 30 12:17:49 2014 -0800

----------------------------------------------------------------------
 docs/sql-programming-guide.md | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/f7a41a0e/docs/sql-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 2aea8a8..1b5fde9 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -831,13 +831,10 @@ turning on some experimental options.
 
 ## Caching Data In Memory
 
-Spark SQL can cache tables using an in-memory columnar format by calling `sqlContext.cacheTable("tableName")`.
+Spark SQL can cache tables using an in-memory columnar format by calling `sqlContext.cacheTable("tableName")` or `schemaRDD.cache()`.
 Then Spark SQL will scan only required columns and will automatically tune compression to minimize
 memory usage and GC pressure. You can call `sqlContext.uncacheTable("tableName")` to remove the table from memory.
 
-Note that if you call `schemaRDD.cache()` rather than `sqlContext.cacheTable(...)`, tables will _not_ be cached using
-the in-memory columnar format, and therefore `sqlContext.cacheTable(...)` is strongly recommended for this use case.
-
 Configuration of in-memory caching can be done using the `setConf` method on SQLContext or by running
 `SET key=value` commands using SQL.
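
For reference, a minimal sketch of the usage this change documents, written
against the Spark 1.x SQLContext API. The table name, input path, and the
compression setting shown are illustrative and not part of this commit:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext(new SparkConf().setAppName("CacheExample"))
    val sqlContext = new SQLContext(sc)

    // Hypothetical input: a JSON file of records with a `name` field.
    val people = sqlContext.jsonFile("examples/src/main/resources/people.json")
    people.registerTempTable("people")

    // Per this change, both calls cache using the in-memory columnar format.
    sqlContext.cacheTable("people")   // cache by table name
    // people.cache()                 // equivalent: SchemaRDD.cache()

    sqlContext.sql("SELECT name FROM people").collect()

    // Configure in-memory caching via setConf (or `SET key=value` in SQL).
    sqlContext.setConf("spark.sql.inMemoryColumnarStorage.compressed", "true")

    // Remove the table from memory when done.
    sqlContext.uncacheTable("people")
    sc.stop()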
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
