This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
     new 6027137  [SPARK-36288][DOCS][PYTHON] Update API usage on pyspark pandas documents
6027137 is described below

commit 602713792819752e1131d2ebbfce59ee5de609f6
Author: Leona <yo...@oss.nttdata.com>
AuthorDate: Tue Jul 27 12:30:52 2021 +0900

    [SPARK-36288][DOCS][PYTHON] Update API usage on pyspark pandas documents
    
    ### What changes were proposed in this pull request?
    
    Update API usage examples in the PySpark pandas API documents.
    
    ### Why are the changes needed?
    
    If users try the PySpark pandas API examples from the documents, they will see API deprecation warnings.
    Updating those documents helps users avoid confusion.
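
    For context, a minimal sketch of the now-deprecated usage (the old key is taken from the diff below; the exact warning text varies by Spark version):

    ```
    from pyspark.sql import SparkSession

    # Setting the old key still works in Spark 3.x but logs a deprecation
    # warning pointing at spark.sql.execution.arrow.pyspark.enabled.
    spark = (SparkSession.builder
             .appName("deprecation-demo")  # hypothetical app name
             .config("spark.sql.execution.arrow.enabled", "true")
             .getOrCreate())
    ```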
    
    ### Does this PR introduce _any_ user-facing change?
    
    No
    
    ### How was this patch tested?
    
    ```
    make html
    ```
    
    Closes #33519 from yoda-mon/update-pyspark-configurations.
    
    Authored-by: Leona <yo...@oss.nttdata.com>
    Signed-off-by: Hyukjin Kwon <gurwls...@apache.org>
    (cherry picked from commit 9a47483f740138d7df4d0f254a935088c78ae72c)
    Signed-off-by: Hyukjin Kwon <gurwls...@apache.org>
---
 python/docs/source/user_guide/pandas_on_spark/best_practices.rst | 2 +-
 python/docs/source/user_guide/pandas_on_spark/from_to_dbms.rst   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/python/docs/source/user_guide/pandas_on_spark/best_practices.rst b/python/docs/source/user_guide/pandas_on_spark/best_practices.rst
index 4d90ed5..a088500 100644
--- a/python/docs/source/user_guide/pandas_on_spark/best_practices.rst
+++ b/python/docs/source/user_guide/pandas_on_spark/best_practices.rst
@@ -34,7 +34,7 @@ it can be set into Spark session as below:
 
    from pyspark.sql import SparkSession
    builder = SparkSession.builder.appName("pandas-on-spark")
-   builder = builder.config("spark.sql.execution.arrow.enabled", "true")
+   builder = builder.config("spark.sql.execution.arrow.pyspark.enabled", "true")
    # Pandas API on Spark automatically uses this Spark session with the configurations set.
    builder.getOrCreate()
 
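For reference, a runnable sketch of the updated best_practices.rst snippet (assumes PySpark 3.2 is installed; the app name comes from the doc):

```
from pyspark.sql import SparkSession

# Use the current Arrow config key; "spark.sql.execution.arrow.enabled"
# is the deprecated spelling this commit removes from the docs.
builder = SparkSession.builder.appName("pandas-on-spark")
builder = builder.config("spark.sql.execution.arrow.pyspark.enabled", "true")

# Pandas API on Spark automatically uses this session and its configuration.
spark = builder.getOrCreate()
```
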
diff --git a/python/docs/source/user_guide/pandas_on_spark/from_to_dbms.rst b/python/docs/source/user_guide/pandas_on_spark/from_to_dbms.rst
index 429bdcd..2f271b9 100644
--- a/python/docs/source/user_guide/pandas_on_spark/from_to_dbms.rst
+++ b/python/docs/source/user_guide/pandas_on_spark/from_to_dbms.rst
@@ -95,7 +95,7 @@ You can also write it back to the ``stocks`` table as below:
 .. code-block:: python
 
     df.price += 1
-    df.to_spark_io(
+    df.spark.to_spark_io(
         format="jdbc", mode="append",
         dbtable="stocks", url="jdbc:sqlite:{}/example.db".format(os.getcwd()))
     ps.read_sql("stocks", con="jdbc:sqlite:{}/example.db".format(os.getcwd()))
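
Likewise, a hedged end-to-end sketch of the updated from_to_dbms.rst snippet (assumes the sqlite-jdbc driver is on the Spark classpath and that the ``stocks`` table already exists, as in the doc):

```
import os
import pyspark.pandas as ps

url = "jdbc:sqlite:{}/example.db".format(os.getcwd())

# Bump prices and append back to the stocks table over JDBC.
df = ps.read_sql("stocks", con=url)
df.price += 1

# to_spark_io now lives under the `spark` accessor; the top-level
# DataFrame.to_spark_io is deprecated.
df.spark.to_spark_io(
    format="jdbc", mode="append",
    dbtable="stocks", url=url)

# Verify the write.
print(ps.read_sql("stocks", con=url))
```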
