Yikun commented on a change in pull request #34314:
URL: https://github.com/apache/spark/pull/34314#discussion_r754774559



##########
File path: python/pyspark/pandas/tests/data_type_ops/testing_utils.py
##########
@@ -49,8 +49,14 @@ def numeric_pdf(self):
         dtypes = [np.int32, int, np.float32, float]
         sers = [pd.Series([1, 2, 3], dtype=dtype) for dtype in dtypes]
         sers.append(pd.Series([decimal.Decimal(1), decimal.Decimal(2), decimal.Decimal(3)]))
+        sers.append(pd.Series([decimal.Decimal(1), decimal.Decimal(2), decimal.Decimal(np.nan)]))
+        sers.append(pd.Series([1, 2, np.nan], dtype=float))
         pdf = pd.concat(sers, axis=1)
-        pdf.columns = [dtype.__name__ for dtype in dtypes] + ["decimal"]
+        pdf.columns = [dtype.__name__ for dtype in dtypes] + [
+            "decimal",
+            "decimal_nan",

Review comment:
       We can skip the decimal_nan test when the pandas version is < 1.3, and add a
note here explaining why.
   ```python
           # To work around https://github.com/pandas-dev/pandas/pull/39409,
           # skip the decimal_nan column when pandas < 1.3 (the fix landed in 1.3.0).
           if LooseVersion(pd.__version__) >= LooseVersion("1.3.0"):
               sers.append(pd.Series([decimal.Decimal(1), decimal.Decimal(2), decimal.Decimal(np.nan)]))
               column_names.append("decimal_nan")  # the list later assigned to pdf.columns
   ```
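   As a self-contained aside (not the actual test file), the gated construction could look like the sketch below. The variable names are illustrative, and it uses a plain version-tuple comparison in place of `LooseVersion` so it runs without `distutils`:

   ```python
   import decimal

   import numpy as np
   import pandas as pd

   dtypes = [np.int32, int, np.float32, float]
   sers = [pd.Series([1, 2, 3], dtype=dtype) for dtype in dtypes]
   sers.append(pd.Series([decimal.Decimal(1), decimal.Decimal(2), decimal.Decimal(3)]))
   columns = [dtype.__name__ for dtype in dtypes] + ["decimal"]

   # Only add the Decimal("NaN") column on pandas >= 1.3; older pandas
   # mishandles it (see https://github.com/pandas-dev/pandas/pull/39409).
   pandas_version = tuple(int(part) for part in pd.__version__.split(".")[:2])
   if pandas_version >= (1, 3):
       sers.append(
           pd.Series([decimal.Decimal(1), decimal.Decimal(2), decimal.Decimal(np.nan)])
       )
       columns.append("decimal_nan")

   pdf = pd.concat(sers, axis=1)
   pdf.columns = columns
   ```

   Gating both the series and the column name together keeps `pdf.columns` and the concatenated frame the same width on every pandas version.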
   
   @HyukjinKwon WDYT?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


