Repository: spark
Updated Branches:
  refs/heads/master 6b45d7e94 -> e4d8f9a36


[MINOR][SQL] Correct DataFrame doc.

## What changes were proposed in this pull request?
Correct the DataFrame documentation: replace outdated `SQLContext` references with `SparkSession` in the PySpark `DataFrame` docstring, note that `registerTempTable` registers a DataFrame rather than an RDD, and fix the global temp database name from `_global_temp` to `global_temp` in `Dataset.scala`.

## How was this patch tested?
Doc-only change; no tests needed.

Author: Yanbo Liang <yblia...@gmail.com>

Closes #19173 from yanboliang/df-doc.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/e4d8f9a3
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/e4d8f9a3
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/e4d8f9a3

Branch: refs/heads/master
Commit: e4d8f9a36ac27b0175f310bf5592b2881b025468
Parents: 6b45d7e
Author: Yanbo Liang <yblia...@gmail.com>
Authored: Sat Sep 9 09:25:12 2017 -0700
Committer: gatorsmile <gatorsm...@gmail.com>
Committed: Sat Sep 9 09:25:12 2017 -0700

----------------------------------------------------------------------
 python/pyspark/sql/dataframe.py                       | 14 +++++++-------
 .../src/main/scala/org/apache/spark/sql/Dataset.scala |  4 ++--
 2 files changed, 9 insertions(+), 9 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/e4d8f9a3/python/pyspark/sql/dataframe.py
----------------------------------------------------------------------
diff --git a/python/pyspark/sql/dataframe.py b/python/pyspark/sql/dataframe.py
index 8f88545..88ac413 100644
--- a/python/pyspark/sql/dataframe.py
+++ b/python/pyspark/sql/dataframe.py
@@ -46,9 +46,9 @@ class DataFrame(object):
     """A distributed collection of data grouped into named columns.
 
     A :class:`DataFrame` is equivalent to a relational table in Spark SQL,
-    and can be created using various functions in :class:`SQLContext`::
+    and can be created using various functions in :class:`SparkSession`::
 
-        people = sqlContext.read.parquet("...")
+        people = spark.read.parquet("...")
 
     Once created, it can be manipulated using the various domain-specific-language
     (DSL) functions defined in: :class:`DataFrame`, :class:`Column`.
@@ -59,9 +59,9 @@ class DataFrame(object):
 
     A more concrete example::
 
-        # To create DataFrame using SQLContext
-        people = sqlContext.read.parquet("...")
-        department = sqlContext.read.parquet("...")
+        # To create DataFrame using SparkSession
+        people = spark.read.parquet("...")
+        department = spark.read.parquet("...")
 
         people.filter(people.age > 30).join(department, people.deptId == department.id) \\
           .groupBy(department.name, "gender").agg({"salary": "avg", "age": "max"})
@@ -116,9 +116,9 @@ class DataFrame(object):
 
     @since(1.3)
     def registerTempTable(self, name):
-        """Registers this RDD as a temporary table using the given name.
+        """Registers this DataFrame as a temporary table using the given name.
 
-        The lifetime of this temporary table is tied to the :class:`SQLContext`
+        The lifetime of this temporary table is tied to the :class:`SparkSession`
         that was used to create this :class:`DataFrame`.
 
         >>> df.registerTempTable("people")
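
For reference, here is a minimal PySpark sketch of the corrected usage the
docstring now describes. It assumes a local SparkSession; the app name, column
names, and sample rows are illustrative and not part of this commit (the
docstring's parquet paths are placeholders, so createDataFrame stands in for
spark.read.parquet here).

    # Minimal sketch; data and names are illustrative, not from the commit.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("df-doc-sketch").getOrCreate()

    # DataFrames are created through SparkSession.
    people = spark.createDataFrame(
        [("Alice", 34, 1), ("Bob", 28, 2)], ["name", "age", "deptId"])
    department = spark.createDataFrame(
        [(1, "Engineering"), (2, "Sales")], ["id", "name"])

    # The DSL-style manipulation from the docstring example.
    people.filter(people.age > 30) \
        .join(department, people.deptId == department.id) \
        .groupBy(department.name) \
        .agg({"age": "max"}) \
        .show()

    # registerTempTable registers the DataFrame, and the temporary table's
    # lifetime is tied to the SparkSession that created the DataFrame.
    people.registerTempTable("people")
    spark.sql("SELECT name FROM people WHERE age > 30").show()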

http://git-wip-us.apache.org/repos/asf/spark/blob/e4d8f9a3/sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala b/sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala
index 6db6aa3..ab0c412 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala
@@ -2891,8 +2891,8 @@ class Dataset[T] private[sql](
    *
    * Global temporary view is cross-session. Its lifetime is the lifetime of the Spark application,
    * i.e. it will be automatically dropped when the application terminates. It's tied to a system
-   * preserved database `_global_temp`, and we must use the qualified name to refer a global temp
-   * view, e.g. `SELECT * FROM _global_temp.view1`.
+   * preserved database `global_temp`, and we must use the qualified name to refer a global temp
+   * view, e.g. `SELECT * FROM global_temp.view1`.
    *
    * @group basic
    * @since 2.2.0
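
Similarly, a minimal sketch of the global temp view behaviour the corrected
Scala doc describes, again with illustrative names and data; the point of the
fix is that the system-preserved database is `global_temp`, not `_global_temp`.

    # Minimal sketch; view name and data are illustrative, not from the commit.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("global-temp-sketch").getOrCreate()
    df = spark.createDataFrame([("Alice", 34), ("Bob", 28)], ["name", "age"])

    # Global temp views live in the system-preserved `global_temp` database,
    # so queries must use the qualified name.
    df.createGlobalTempView("people_view")
    spark.sql("SELECT name, age FROM global_temp.people_view").show()

    # The view is cross-session within the same Spark application.
    spark.newSession().sql("SELECT count(*) FROM global_temp.people_view").show()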

