Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22732#discussion_r225393220
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUDF.scala ---
    @@ -39,29 +40,29 @@ import org.apache.spark.sql.types.DataType
      * @param nullable  True if the UDF can return null value.
      * @param udfDeterministic  True if the UDF is deterministic. Deterministic UDF returns same result
      *                          each time it is invoked with a particular input.
    - * @param nullableTypes which of the inputTypes are nullable (i.e. not primitive)
      */
     case class ScalaUDF(
         function: AnyRef,
         dataType: DataType,
         children: Seq[Expression],
    +    handleNullForInputs: Seq[Boolean],
    --- End diff --
    
    Maybe I missed something but:
    
    1. Why don't we just merge `handleNullForInputs` and `inputTypes`?
    2. Why is `handleNullForInputs` required, whereas `inputTypes` defaults to `Nil`?
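
    To illustrate suggestion 1, a purely hypothetical sketch of what merging the two parallel sequences could look like — `SketchType` and `InputType` are invented stand-ins here, not Spark's actual `DataType` or `ScalaUDF` API:

    ```scala
    // Hypothetical sketch: carry nullability alongside each input type
    // instead of a separate parallel Seq[Boolean].
    object UdfNullabilitySketch {
      // Minimal stand-in for Spark's DataType, for illustration only.
      sealed trait SketchType
      case object IntType extends SketchType
      case object StringType extends SketchType

      // One entry per UDF argument: the expected type plus whether a null
      // input must be handled (i.e. the argument is not a primitive).
      final case class InputType(dataType: SketchType, handleNull: Boolean)

      // With the merged representation, the two parallel sequences in the
      // diff collapse into one, so they cannot get out of sync or differ
      // in length.
      def nullFlags(inputs: Seq[InputType]): Seq[Boolean] =
        inputs.map(_.handleNull)

      def main(args: Array[String]): Unit = {
        val inputs = Seq(
          InputType(IntType, handleNull = false),
          InputType(StringType, handleNull = true))
        println(nullFlags(inputs))
      }
    }
    ```

    This would also make suggestion 2 moot, since a single merged parameter could default to `Nil` just as `inputTypes` does today.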


---
