[ https://issues.apache.org/jira/browse/SPARK-19162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Maciej Szymkiewicz updated SPARK-19162:
---------------------------------------
    Description: 
Current state

Right now {{UserDefinedFunction}} doesn't perform any input type validation. It 
accepts non-callable objects only to fail later with a hard-to-understand traceback:


{code}

In [1]: from pyspark.sql.functions import udf

In [2]: df = spark.range(0, 1)

In [3]: f = udf(None)

In [4]: df.select(f()).first()
17/01/07 19:30:50 ERROR Executor: Exception in task 2.0 in stage 2.0 (TID 7)

...
Py4JJavaError: An error occurred while calling o51.collectToPython.
...
TypeError: 'NoneType' object is not callable
...

{code}

Proposed

Apply basic validation to the {{func}} argument:

{code}

In [7]: udf(None)

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-7-0765fbe657a9> in <module>()
----> 1 udf(None)
...
TypeError: func should be a callable object (a function or an instance of a 
class with __call__). Got <class 'NoneType'>

{code}
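
A minimal sketch of the kind of check this implies (illustrative only, not the actual patch; the helper name {{_validate_func}} is hypothetical, and a real check would presumably live in {{UserDefinedFunction.__init__}} in {{pyspark/sql/functions.py}}):

{code}
def _validate_func(func):
    # Hypothetical helper, illustrative only: reject a non-callable func up
    # front with a readable TypeError instead of a deferred executor-side
    # failure like "'NoneType' object is not callable".
    if not callable(func):
        raise TypeError(
            "func should be a callable object (a function or an instance "
            "of a class with __call__). Got {}".format(type(func)))
    return func
{code}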





  was:
Current state

Right now {{UserDefinedFunction}} doesn't perform any input type validation. It 
accepts non-callable objects only to fail later with a hard-to-understand traceback:


{code}

In [1]: from pyspark.sql.functions import udf

In [2]: df = spark.range(0, 1)

In [3]: f = udf(None)

In [4]: df.select(f()).first()
17/01/07 19:30:50 ERROR Executor: Exception in task 2.0 in stage 2.0 (TID 7)

...
Py4JJavaError: An error occurred while calling o51.collectToPython.
...
TypeError: 'NoneType' object is not callable
...

{code}

Invalid arguments to a UDF call fail fast, but with somewhat cryptic Py4J errors:


{code}
In [5]: g = udf(lambda x: x)

In [6]: df.select(g([]))

---------------------------------------------------------------------------
Py4JError                                 Traceback (most recent call last)
<ipython-input-10-5fb48a5d66d2> in <module>()
----> 1 df.select(g([]))
....
Py4JError: An error occurred while calling 
z:org.apache.spark.sql.functions.col. Trace:
py4j.Py4JException: Method col([class java.util.ArrayList]) does not exist
        at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:318)
        at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:339)
        at py4j.Gateway.invoke(Gateway.java:274)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:214)
        at java.lang.Thread.run(Thread.java:745)

{code}

Proposed

Apply basic type validation both to the constructor {{func}} argument:

{code}

In [7]: udf(None)

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-7-0765fbe657a9> in <module>()
----> 1 udf(None)
...
TypeError: func should be a callable object (a function or an instance of a 
class with __call__). Got <class 'NoneType'>

{code}

and to call arguments:


{code}
In [8]: f = udf(lambda x: x)

In [9]: f(1)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)

...
TypeError: All arguments should be Columns or strings representing column 
names. Got 1 of type <class 'int'>

{code}
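
A rough sketch of the corresponding call-side check (illustrative only; {{_validate_call_args}} is a hypothetical helper, and a real check would presumably run in {{UserDefinedFunction.__call__}} before arguments reach Py4J):

{code}
from pyspark.sql import Column

def _validate_call_args(args):
    # Hypothetical helper, illustrative only: fail fast with a readable
    # TypeError instead of a cryptic Py4J "Method col(...) does not exist".
    # str is used here for simplicity; a real patch would also need to
    # cover Python 2's basestring.
    for arg in args:
        if not isinstance(arg, (Column, str)):
            raise TypeError(
                "All arguments should be Columns or strings representing "
                "column names. Got {} of type {}".format(arg, type(arg)))
{code}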




> UserDefinedFunction constructor should verify that func is callable
> -------------------------------------------------------------------
>
>                 Key: SPARK-19162
>                 URL: https://issues.apache.org/jira/browse/SPARK-19162
>             Project: Spark
>          Issue Type: Sub-task
>          Components: PySpark, SQL
>    Affects Versions: 1.6.0, 2.0.0, 2.1.0, 2.2.0
>            Reporter: Maciej Szymkiewicz
>


