This is an automated email from the ASF dual-hosted git repository.
ruifengz pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new 49a3c132e1cb [SPARK-53632][PYTHON][DOCS][TESTS] Reenable doctest for `DataFrame.pandas_api`
49a3c132e1cb is described below
commit 49a3c132e1cbb939751766f5c46c94c08ea3dcc3
Author: Ruifeng Zheng <[email protected]>
AuthorDate: Thu Sep 18 19:13:33 2025 +0800
[SPARK-53632][PYTHON][DOCS][TESTS] Reenable doctest for `DataFrame.pandas_api`
### What changes were proposed in this pull request?
Reenable doctest for `DataFrame.pandas_api`
### Why are the changes needed?
For test coverage; the doctest will now be run when pandas and pyarrow are installed.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
CI.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #52383 from zhengruifeng/doc_test_pandas_api.
Authored-by: Ruifeng Zheng <[email protected]>
Signed-off-by: Ruifeng Zheng <[email protected]>
---
python/pyspark/sql/classic/dataframe.py | 1 +
python/pyspark/sql/connect/dataframe.py | 1 +
python/pyspark/sql/dataframe.py | 4 ++--
3 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/python/pyspark/sql/classic/dataframe.py b/python/pyspark/sql/classic/dataframe.py
index fe8625ce1de9..05ec586dc8f7 100644
--- a/python/pyspark/sql/classic/dataframe.py
+++ b/python/pyspark/sql/classic/dataframe.py
@@ -1949,6 +1949,7 @@ def _test() -> None:
del pyspark.sql.dataframe.DataFrame.toPandas.__doc__
del pyspark.sql.dataframe.DataFrame.mapInArrow.__doc__
del pyspark.sql.dataframe.DataFrame.mapInPandas.__doc__
+ del pyspark.sql.dataframe.DataFrame.pandas_api.__doc__
spark = (
SparkSession.builder.master("local[4]").appName("sql.classic.dataframe tests").getOrCreate()
diff --git a/python/pyspark/sql/connect/dataframe.py b/python/pyspark/sql/connect/dataframe.py
index 19dfe46fddaa..aeafd8552dd0 100644
--- a/python/pyspark/sql/connect/dataframe.py
+++ b/python/pyspark/sql/connect/dataframe.py
@@ -2321,6 +2321,7 @@ def _test() -> None:
del pyspark.sql.dataframe.DataFrame.toPandas.__doc__
del pyspark.sql.dataframe.DataFrame.mapInArrow.__doc__
del pyspark.sql.dataframe.DataFrame.mapInPandas.__doc__
+ del pyspark.sql.dataframe.DataFrame.pandas_api.__doc__
globs["spark"] = (
PySparkSession.builder.appName("sql.connect.dataframe tests")
diff --git a/python/pyspark/sql/dataframe.py b/python/pyspark/sql/dataframe.py
index 974b8e2e8357..3d8dc970ba43 100644
--- a/python/pyspark/sql/dataframe.py
+++ b/python/pyspark/sql/dataframe.py
@@ -6295,7 +6295,7 @@ class DataFrame:
>>> df = spark.createDataFrame(
... [(14, "Tom"), (23, "Alice"), (16, "Bob")], ["age", "name"])
- >>> df.pandas_api() # doctest: +SKIP
+ >>> df.pandas_api()
age name
0 14 Tom
1 23 Alice
@@ -6303,7 +6303,7 @@ class DataFrame:
We can specify the index columns.
- >>> df.pandas_api(index_col="age") # doctest: +SKIP
+ >>> df.pandas_api(index_col="age")
name
age
14 Tom
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]