Github user BryanCutler commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19592#discussion_r147784119
  
    --- Diff: python/pyspark/worker.py ---
    @@ -105,8 +105,14 @@ def read_single_udf(pickleSer, infile, eval_type):
         elif eval_type == PythonEvalType.SQL_PANDAS_GROUPED_UDF:
             # a groupby apply udf has already been wrapped under apply()
             return arg_offsets, row_func
    -    else:
    +    elif eval_type == PythonEvalType.SQL_BATCHED_UDF:
             return arg_offsets, wrap_udf(row_func, return_type)
    +    elif eval_type == PythonEvalType.SQL_BATCHED_OPT_UDF:
    --- End diff ---
    
    Would it be possible to do this type of wrapping in `BatchEvalPython` and 
remove the need to add another eval_type? If so, you could just use the 
true/false result as is and not have to add anything in Python. I think that 
would reduce the scope of this and simplify things a bit.
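
    For illustration only, here is a minimal standalone sketch of the dispatch 
shape I have in mind if no new eval_type is added: the worker keeps the single 
`SQL_BATCHED_UDF` branch and any true/false handling stays on the JVM side in 
`BatchEvalPython`. The constants and `wrap_udf` below are simplified 
stand-ins, not the real pyspark internals.

    ```python
    # Toy stand-ins for the constants/helpers referenced in the diff above;
    # the real ones live in pyspark/worker.py. Values are arbitrary here.
    class PythonEvalType(object):
        SQL_PANDAS_GROUPED_UDF = 1
        SQL_BATCHED_UDF = 2

    def wrap_udf(f, return_type):
        # Placeholder for the real wrap_udf result-verification wrapper.
        return lambda *a: f(*a)

    def read_single_udf_dispatch(arg_offsets, row_func, return_type, eval_type):
        # Dispatch shape without an extra eval_type: the existing
        # SQL_BATCHED_UDF/else branch stays as-is.
        if eval_type == PythonEvalType.SQL_PANDAS_GROUPED_UDF:
            # a groupby apply udf has already been wrapped under apply()
            return arg_offsets, row_func
        else:
            return arg_offsets, wrap_udf(row_func, return_type)

    if __name__ == "__main__":
        offsets, udf = read_single_udf_dispatch(
            [0], lambda x: x > 0, "boolean", PythonEvalType.SQL_BATCHED_UDF)
        print(offsets, udf(5))  # [0] True
    ```

    That way the worker-side changes in this PR could be dropped entirely.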

