Github user shaneknapp commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19325#discussion_r140848562
  
    --- Diff: python/pyspark/sql/functions.py ---
    @@ -2183,14 +2183,29 @@ def pandas_udf(f=None, returnType=StringType()):
         :param f: python function if used as a standalone function
         :param returnType: a :class:`pyspark.sql.types.DataType` object
     
    -    # TODO: doctest
    +    >>> from pyspark.sql.types import IntegerType, StringType
    +    >>> slen = pandas_udf(lambda s: s.str.len(), IntegerType())
    +    >>> @pandas_udf(returnType=StringType())
    --- End diff --
    
    Adding @JoshRosen too.
    
    The doc-building node (amp-jenkins-worker-01) doesn't have Arrow installed 
for the default conda Python 2.7 environment. For the Python 3 environment, 
we're running Arrow 0.4.0.
    
    I looked at the script, and it seems to be agnostic to Python 2 vs. 3. 
Once I know which version of Python we'll be running, I can make sure the 
correct version of Arrow is installed.
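
Since the right Arrow version depends on which Python environment the doc build ends up using, a quick sanity check run inside a given environment might look like the following. This is a minimal illustrative sketch, not part of the Jenkins setup itself:

```python
import importlib


def arrow_version():
    """Return the installed pyarrow version string, or None if it is missing."""
    try:
        return importlib.import_module("pyarrow").__version__
    except ImportError:
        return None


# Prints the version if pyarrow is importable, otherwise flags its absence.
print("pyarrow:", arrow_version() or "not installed")
```

Running this under both the Python 2.7 and Python 3 conda environments would show exactly which one is missing Arrow and which version the other carries.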


