This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new aa646d30500 [SPARK-45941][PS] Upgrade `pandas` to version 2.1.3
aa646d30500 is described below

commit aa646d3050028272f7333deaef52f20e6975e0ed
Author: Bjørn Jørgensen <bjornjorgen...@gmail.com>
AuthorDate: Wed Nov 15 13:20:27 2023 -0800

    [SPARK-45941][PS] Upgrade `pandas` to version 2.1.3
    
    ### What changes were proposed in this pull request?
    Upgrade pandas from 2.1.2 to 2.1.3
    
    ### Why are the changes needed?
    Fixed infinite recursion from operations that return a new object on some DataFrame subclasses ([GH 55763](https://github.com/pandas-dev/pandas/issues/55763)),
    and fixed [read_parquet()](https://pandas.pydata.org/docs/reference/api/pandas.read_parquet.html#pandas.read_parquet) and [read_feather()](https://pandas.pydata.org/docs/reference/api/pandas.read_feather.html#pandas.read_feather) for [CVE-2023-47248](https://www.cve.org/CVERecord?id=CVE-2023-47248) ([GH 55894](https://github.com/pandas-dev/pandas/issues/55894)).
    
    [Release notes for 2.1.3](https://pandas.pydata.org/docs/whatsnew/v2.1.3.html)
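    
    For illustration only (not part of this change): a minimal guard one might place in front of `read_parquet()` when the input file is untrusted, assuming the third-party `packaging` helper is available for version parsing.
    
    ```python
    # Illustrative sketch, not from this commit: refuse untrusted Parquet input on
    # pandas versions that predate the CVE-2023-47248 mitigation shipped in 2.1.3.
    from packaging.version import Version  # assumption: `packaging` is installed
    import pandas as pd
    
    def read_untrusted_parquet(path: str) -> pd.DataFrame:
        if Version(pd.__version__) < Version("2.1.3"):
            raise RuntimeError("pandas >= 2.1.3 is required to read untrusted Parquet files")
        return pd.read_parquet(path)
    ```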
    
    ### Does this PR introduce _any_ user-facing change?
    No.
    
    ### How was this patch tested?
    Pass GA
    
    ### Was this patch authored or co-authored using generative AI tooling?
    No.
    
    Closes #43822 from bjornjorgensen/pandas-2_1_3.
    
    Authored-by: Bjørn Jørgensen <bjornjorgen...@gmail.com>
    Signed-off-by: Dongjoon Hyun <dh...@apple.com>
---
 dev/infra/Dockerfile                       | 4 ++--
 python/pyspark/pandas/supported_api_gen.py | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/dev/infra/Dockerfile b/dev/infra/Dockerfile
index e6a58cc3fc7..b433faa14c8 100644
--- a/dev/infra/Dockerfile
+++ b/dev/infra/Dockerfile
@@ -84,8 +84,8 @@ RUN Rscript -e "devtools::install_version('roxygen2', version='7.2.0', repos='ht
 # See more in SPARK-39735
 ENV R_LIBS_SITE "/usr/local/lib/R/site-library:${R_LIBS_SITE}:/usr/lib/R/library"
 
-RUN pypy3 -m pip install numpy 'pandas<=2.1.2' scipy coverage matplotlib
-RUN python3.9 -m pip install numpy pyarrow 'pandas<=2.1.2' scipy unittest-xml-reporting plotly>=4.8 'mlflow>=2.3.1' coverage matplotlib openpyxl 'memory-profiler==0.60.0' 'scikit-learn==1.1.*'
+RUN pypy3 -m pip install numpy 'pandas<=2.1.3' scipy coverage matplotlib
+RUN python3.9 -m pip install numpy pyarrow 'pandas<=2.1.3' scipy unittest-xml-reporting plotly>=4.8 'mlflow>=2.3.1' coverage matplotlib openpyxl 'memory-profiler==0.60.0' 'scikit-learn==1.1.*'
 
 # Add Python deps for Spark Connect.
 RUN python3.9 -m pip install 'grpcio>=1.48,<1.57' 'grpcio-status>=1.48,<1.57' 'protobuf==3.20.3' 'googleapis-common-protos==1.56.4'
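
For context only (not part of the patch): a quick way to confirm that an environment built from the updated Dockerfile resolved the `'pandas<=2.1.3'` pin to the intended release, assuming it is run with the same `python3.9` interpreter installed above.

```python
# Illustrative check, not from this commit: verify the pandas release picked up
# by an environment built from dev/infra/Dockerfile after this change.
import pandas as pd

print(pd.__version__)                      # expected to print 2.1.3 at the time of this change
assert pd.__version__ == "2.1.3", pd.__version__
```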
diff --git a/python/pyspark/pandas/supported_api_gen.py b/python/pyspark/pandas/supported_api_gen.py
index 8d49fef2799..a83731db8fc 100644
--- a/python/pyspark/pandas/supported_api_gen.py
+++ b/python/pyspark/pandas/supported_api_gen.py
@@ -98,7 +98,7 @@ def generate_supported_api(output_rst_file_path: str) -> None:
 
     Write supported APIs documentation.
     """
-    pandas_latest_version = "2.1.2"
+    pandas_latest_version = "2.1.3"
     if LooseVersion(pd.__version__) != LooseVersion(pandas_latest_version):
         msg = (
             "Warning: Latest version of pandas (%s) is required to generate 
the documentation; "
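
For context only (not part of the patch): the hunk above updates a strict version gate in `generate_supported_api`. A minimal sketch of that gate follows; the `LooseVersion` import and the truncated warning text are assumptions, since the diff only shows the comparison and the start of the message.

```python
# Minimal sketch of the version gate in python/pyspark/pandas/supported_api_gen.py.
# Assumptions: the LooseVersion import shown here and the abbreviated warning text.
from distutils.version import LooseVersion

import pandas as pd

pandas_latest_version = "2.1.3"
if LooseVersion(pd.__version__) != LooseVersion(pandas_latest_version):
    print(
        "Warning: Latest version of pandas (%s) is required to generate the documentation"
        % pandas_latest_version
    )
```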

