Github user BryanCutler commented on the issue:

    https://github.com/apache/spark/pull/18906
  
    I believe the equivalent API in Scala is only available in the following form, when registering a UDF:
    ```
    spark.udf.register("func", () => 1).asNonNullable()
    ```
    Would it be preferable to just stick with a similar API for Python if we 
are trying to match the behavior?
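    
    For illustration, a mirrored API in Python might look something like the sketch below. To be clear, `asNonNullable()` does not exist in PySpark today; the name and shape are assumptions carried over from the Scala `UserDefinedFunction` API, shown only to make the comparison concrete.
    ```
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical sketch only: `asNonNullable()` is borrowed from the Scala
    # UserDefinedFunction API and is not part of PySpark; the idea is that it
    # would mark the UDF's return type as non-nullable after registration.
    func = spark.udf.register("func", lambda: 1).asNonNullable()
    ```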
    
    > So I think with the performance improvements coming into Python UDFs, considering annotating results as nullable or not could make sense (although I imagine we'd need to do something different for the vectorized UDFs if that isn't already being done).
    
    Regarding performance gains with vectorized UDFs: right now the Java side is implemented to accept only nullable return types, so there wouldn't be any difference. In the future it would be possible to accept either, which would give a small performance bump.
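    
    For reference, here is a minimal vectorized UDF sketch (assuming the Arrow-based `pandas_udf` API). Even though this function can never produce nulls, the JVM side currently marks the result column nullable; skipping that for provably non-null results is the future optimization mentioned above.
    ```
    from pyspark.sql.functions import pandas_udf
    from pyspark.sql.types import LongType

    # Minimal sketch: a vectorized (pandas) UDF that adds one to each value.
    # It never returns nulls, but the Java side still treats the result
    # column as nullable, since that is the only path implemented today.
    @pandas_udf(LongType())
    def plus_one(v):
        return v + 1
    ```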

