spark git commit: [SPARK-19585][DOC][SQL] Fix the cacheTable and uncacheTable api call in the doc

2017-02-13 Thread lixiao
Repository: spark
Updated Branches:
  refs/heads/branch-2.1 7fe3543fd -> c8113b0ee


[SPARK-19585][DOC][SQL] Fix the cacheTable and uncacheTable api call in the doc

## What changes were proposed in this pull request?

https://spark.apache.org/docs/latest/sql-programming-guide.html#caching-data-in-memory
In the doc, the calls `spark.cacheTable("tableName")` and
`spark.uncacheTable("tableName")` actually need to be
`spark.catalog.cacheTable` and `spark.catalog.uncacheTable`.

## How was this patch tested?
Built the docs and verified the change shows up fine.

Author: Sunitha Kambhampati 

Closes #16919 from skambha/docChange.

(cherry picked from commit 9b5e460a9168ab78607034434ca45ab6cb51e5a6)
Signed-off-by: Xiao Li 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/c8113b0e
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/c8113b0e
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/c8113b0e

Branch: refs/heads/branch-2.1
Commit: c8113b0ee0555efe72827a91246af2737d1d4993
Parents: 7fe3543
Author: Sunitha Kambhampati 
Authored: Mon Feb 13 22:49:29 2017 -0800
Committer: Xiao Li 
Committed: Mon Feb 13 22:49:40 2017 -0800

--
 docs/sql-programming-guide.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/c8113b0e/docs/sql-programming-guide.md
--
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 55ed913..2173aba 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -1217,9 +1217,9 @@ turning on some experimental options.
 
 ## Caching Data In Memory
 
-Spark SQL can cache tables using an in-memory columnar format by calling `spark.cacheTable("tableName")` or `dataFrame.cache()`.
+Spark SQL can cache tables using an in-memory columnar format by calling `spark.catalog.cacheTable("tableName")` or `dataFrame.cache()`.
 Then Spark SQL will scan only required columns and will automatically tune compression to minimize
-memory usage and GC pressure. You can call `spark.uncacheTable("tableName")` to remove the table from memory.
+memory usage and GC pressure. You can call `spark.catalog.uncacheTable("tableName")` to remove the table from memory.
 
 Configuration of in-memory caching can be done using the `setConf` method on `SparkSession` or by running
 `SET key=value` commands using SQL.
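
For context, a minimal sketch of the corrected calls in action (the view name and sample data are illustrative, not part of the patch):

```scala
import org.apache.spark.sql.SparkSession

object CacheTableExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("CacheTableExample")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Register a small DataFrame as a temp view so there is a table to cache
    // (the name "tableName" mirrors the doc; any registered table works).
    Seq((1, "a"), (2, "b")).toDF("id", "value").createOrReplaceTempView("tableName")

    // The corrected call: cacheTable lives on spark.catalog,
    // not on the SparkSession itself.
    spark.catalog.cacheTable("tableName")

    // Subsequent scans read only the required columns from the
    // in-memory columnar cache.
    spark.sql("SELECT id FROM tableName").show()

    // The corrected call to remove the cached table from memory.
    spark.catalog.uncacheTable("tableName")

    spark.stop()
  }
}
```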





spark git commit: [SPARK-19585][DOC][SQL] Fix the cacheTable and uncacheTable api call in the doc

2017-02-13 Thread lixiao
Repository: spark
Updated Branches:
  refs/heads/master 1ab97310e -> 9b5e460a9


[SPARK-19585][DOC][SQL] Fix the cacheTable and uncacheTable api call in the doc

## What changes were proposed in this pull request?

https://spark.apache.org/docs/latest/sql-programming-guide.html#caching-data-in-memory
In the doc, the calls `spark.cacheTable("tableName")` and
`spark.uncacheTable("tableName")` actually need to be
`spark.catalog.cacheTable` and `spark.catalog.uncacheTable`.

## How was this patch tested?
Built the docs and verified the change shows up fine.

Author: Sunitha Kambhampati 

Closes #16919 from skambha/docChange.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/9b5e460a
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/9b5e460a
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/9b5e460a

Branch: refs/heads/master
Commit: 9b5e460a9168ab78607034434ca45ab6cb51e5a6
Parents: 1ab9731
Author: Sunitha Kambhampati 
Authored: Mon Feb 13 22:49:29 2017 -0800
Committer: Xiao Li 
Committed: Mon Feb 13 22:49:29 2017 -0800

--
 docs/sql-programming-guide.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/9b5e460a/docs/sql-programming-guide.md
--
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 9cf480c..235f5ec 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -1272,9 +1272,9 @@ turning on some experimental options.
 
 ## Caching Data In Memory
 
-Spark SQL can cache tables using an in-memory columnar format by calling `spark.cacheTable("tableName")` or `dataFrame.cache()`.
+Spark SQL can cache tables using an in-memory columnar format by calling `spark.catalog.cacheTable("tableName")` or `dataFrame.cache()`.
 Then Spark SQL will scan only required columns and will automatically tune compression to minimize
-memory usage and GC pressure. You can call `spark.uncacheTable("tableName")` to remove the table from memory.
+memory usage and GC pressure. You can call `spark.catalog.uncacheTable("tableName")` to remove the table from memory.
 
 Configuration of in-memory caching can be done using the `setConf` method on `SparkSession` or by running
 `SET key=value` commands using SQL.
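
And a minimal sketch of the configuration side mentioned above, assuming the standard in-memory columnar options from the same doc section (values are illustrative; in Spark 2.x the runtime setter is `spark.conf.set`):

```scala
import org.apache.spark.sql.SparkSession

object CacheConfigExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("CacheConfigExample")
      .master("local[*]")
      .getOrCreate()

    // Programmatic route: set runtime SQL options on the session's conf.
    spark.conf.set("spark.sql.inMemoryColumnarStorage.compressed", "true")
    spark.conf.set("spark.sql.inMemoryColumnarStorage.batchSize", "10000")

    // SQL route: the equivalent SET key=value command.
    spark.sql("SET spark.sql.inMemoryColumnarStorage.compressed=true")

    spark.stop()
  }
}
```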

