Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21928#discussion_r206410816
  
    --- Diff: python/pyspark/serializers.py ---
    @@ -236,6 +237,11 @@ def create_array(s, t):
                 # TODO: need decode before converting to Arrow in Python 2
                 return pa.Array.from_pandas(s.apply(
                 lambda v: v.decode("utf-8") if isinstance(v, str) else v), mask=mask, type=t)
    +        elif t is not None and pa.types.is_decimal(t) and \
    +                LooseVersion("0.9.0") <= LooseVersion(pa.__version__) < LooseVersion("0.10.0"):
    +            # TODO: see ARROW-2432. Remove when the minimum PyArrow version becomes 0.10.0.
    +            return pa.Array.from_pandas(s.apply(
    +                lambda v: decimal.Decimal('NaN') if v is None else v), mask=mask, type=t)
    --- End diff ---
    
    The existing test `test_vectorized_udf_null_decimal` should cover this. It fails without the current change when PyArrow 0.9.0 is used.
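
    For reference, here is a minimal standalone sketch (not part of the PR) of the conversion the workaround above performs: a decimal Series containing None is converted with the None replaced by `Decimal('NaN')` while the null mask keeps that slot null. The `decimal128(10, 2)` type and the sample values are arbitrary placeholders, not taken from the test.

        import decimal

        import pandas as pd
        import pyarrow as pa

        # A pandas Series of decimals with a missing value, similar to what a
        # vectorized UDF sees for a nullable decimal column. The (10, 2) type
        # and the values are arbitrary examples.
        s = pd.Series([decimal.Decimal("3.14"), None])
        t = pa.decimal128(10, 2)
        mask = s.isnull()

        # Workaround path for PyArrow >= 0.9.0 and < 0.10.0 (ARROW-2432):
        # replace None with Decimal('NaN') before conversion; the mask still
        # marks that slot as null in the resulting Arrow array.
        arr = pa.Array.from_pandas(
            s.apply(lambda v: decimal.Decimal("NaN") if v is None else v),
            mask=mask, type=t)
        print(arr.null_count)  # 1 -- the None entry stays null

    Because the mask marks the slot as null, the substituted `Decimal('NaN')` never appears in the Arrow data; it only keeps the conversion from tripping over the bare None.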


---
