This is an automated email from the ASF dual-hosted git repository.

ruifengz pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 1a276bdb3d36 [SPARK-45723][PYTHON][CONNECT][FOLLOWUP] Replace `toPandas` with `_to_table` in catalog methods
1a276bdb3d36 is described below

commit 1a276bdb3d369efaa0ad806fb0a5d2f6f2920214
Author: Ruifeng Zheng <ruife...@apache.org>
AuthorDate: Mon Nov 13 17:29:37 2023 +0800

    [SPARK-45723][PYTHON][CONNECT][FOLLOWUP] Replace `toPandas` with `_to_table` in catalog methods
    
    ### What changes were proposed in this pull request?
    Follow-up of https://github.com/apache/spark/pull/43583: replace `toPandas` with `_to_table` in the catalog methods.
    
    ### Why are the changes needed?
    The conversion to pandas is unnecessary; fetching the result as an Arrow table via `_to_table` is enough to trigger eager execution.
    
    ### Does this PR introduce _any_ user-facing change?
    No.
    
    ### How was this patch tested?
    CI.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    No.
    
    Closes #43780 from zhengruifeng/py_catalog_arrow_followup.
    
    Authored-by: Ruifeng Zheng <ruife...@apache.org>
    Signed-off-by: Ruifeng Zheng <ruife...@apache.org>
---
 python/pyspark/sql/connect/catalog.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/python/pyspark/sql/connect/catalog.py b/python/pyspark/sql/connect/catalog.py
index 657aa7b6fb41..e725e381b8db 100644
--- a/python/pyspark/sql/connect/catalog.py
+++ b/python/pyspark/sql/connect/catalog.py
@@ -223,7 +223,7 @@ class Catalog:
             options=options,
         )
         df = DataFrame.withPlan(catalog, session=self._sparkSession)
-        df.toPandas()  # Eager execution.
+        df._to_table()  # Eager execution.
         return df
 
     createExternalTable.__doc__ = PySparkCatalog.createExternalTable.__doc__
@@ -246,7 +246,7 @@ class Catalog:
             options=options,
         )
         df = DataFrame.withPlan(catalog, session=self._sparkSession)
-        df.toPandas()  # Eager execution.
+        df._to_table()  # Eager execution.
         return df
 
     createTable.__doc__ = PySparkCatalog.createTable.__doc__
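
For context, the user-visible behavior of these catalog methods does not change: on Spark Connect they still execute eagerly on the server, and only the internal trigger switches from a pandas conversion to an Arrow table fetch. Below is a minimal usage sketch; the remote URL, table name, source, and schema are placeholders and are not part of this commit.

```python
# Illustrative-only sketch: the remote URL, table name, and schema below
# are placeholders, not taken from the commit.
from pyspark.sql import SparkSession
from pyspark.sql.types import LongType, StringType, StructField, StructType

spark = SparkSession.builder.remote("sc://localhost").getOrCreate()

# createTable still runs eagerly on the server; after this change the
# eager trigger fetches the result as an Arrow table (`_to_table`)
# instead of converting it to pandas (`toPandas`).
schema = StructType(
    [
        StructField("id", LongType()),
        StructField("name", StringType()),
    ]
)
df = spark.catalog.createTable("demo_table", source="parquet", schema=schema)
df.printSchema()
```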

