Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20137#discussion_r159506607
  
    --- Diff: python/pyspark/sql/catalog.py ---
    @@ -255,9 +255,26 @@ def registerFunction(self, name, f, returnType=StringType()):
             >>> _ = spark.udf.register("stringLengthInt", len, IntegerType())
             >>> spark.sql("SELECT stringLengthInt('test')").collect()
             [Row(stringLengthInt(test)=4)]
    +
    +        >>> import random
    +        >>> from pyspark.sql.functions import udf
    +        >>> from pyspark.sql.types import IntegerType, StringType
    +        >>> random_udf = udf(lambda: random.randint(0, 100), IntegerType()).asNondeterministic()
    +        >>> newRandom_udf = spark.catalog.registerFunction(
    +        ...     "random_udf", random_udf, StringType())  # doctest: +SKIP
    +        >>> spark.sql("SELECT random_udf()").collect()  # doctest: +SKIP
    +        [Row(random_udf()=u'82')]
    +        >>> spark.range(1).select(newRandom_udf()).collect()  # doctest: +SKIP
    +        [Row(random_udf()=u'62')]
             """
    -        udf = UserDefinedFunction(f, returnType=returnType, name=name,
    -                                  evalType=PythonEvalType.SQL_BATCHED_UDF)
    +
    +        if hasattr(f, 'asNondeterministic'):
    --- End diff --
    
    Actually, this one is what made me suggest the `wrapper._unwrapped = lambda: self` approach.
    
    So, here `f` can be either the wrapped function or a `UserDefinedFunction`, and I thought it's not quite clear what we expect here from `hasattr(f, 'asNondeterministic')` alone.
    
    Could we at least leave some comments saying that this can be both the wrapped function produced from a `UserDefinedFunction` and a `UserDefinedFunction` itself?
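    To illustrate the ambiguity (a minimal standalone sketch, not the actual PySpark implementation: the `UserDefinedFunction`, `udf`, and `_unwrapped` names here are stand-ins mirroring the pattern under discussion), both the raw UDF object and the wrapper function returned by `udf(...)` expose `asNondeterministic`, so a `hasattr` check cannot tell them apart — hence the suggestion to attach an explicit `_unwrapped` hook:
    
    ```python
    # Sketch of the duck-typing problem: `f` may be a UserDefinedFunction
    # or the wrapper function that udf(...) returns, and hasattr() accepts both.
    
    class UserDefinedFunction:
        def __init__(self):
            self.deterministic = True
    
        def asNondeterministic(self):
            self.deterministic = False
            return self
    
    def udf(func):
        u = UserDefinedFunction()
        def wrapper(*args):
            return func(*args)
        # The wrapper forwards the UDF's method, so it also "has" the attribute...
        wrapper.asNondeterministic = u.asNondeterministic
        # ...which is why an explicit way back to the underlying UDF was suggested.
        wrapper._unwrapped = lambda: u
        return wrapper
    
    wrapped = udf(lambda: 1)
    raw = UserDefinedFunction()
    
    # hasattr cannot distinguish the two cases:
    assert hasattr(wrapped, 'asNondeterministic')
    assert hasattr(raw, 'asNondeterministic')
    
    # ...but _unwrapped recovers the real UserDefinedFunction unambiguously:
    assert isinstance(wrapped._unwrapped(), UserDefinedFunction)
    ```
    
    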

