[ https://issues.apache.org/jira/browse/SPARK-12157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15751563#comment-15751563 ]

George Resor commented on SPARK-12157:
--------------------------------------

I believe the issue is that UDFs can't return numpy objects as results (although 
functions mapped to RDDs can). This gets confusing because some numpy objects 
are not obviously numpy objects: np.int64 and np.float64 look just like ints 
and floats, so if you cast them back to int or float inside your UDF, 
everything works fine. As far as I can tell, a UDF can use any numpy type 
internally, as long as the value it returns is not a numpy object. 
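
Concretely, with the argmax example from the issue below, wrapping the numpy 
result in a plain int() avoids the error. A minimal sketch against the 
Spark 1.x sqlContext API used in the report:

{code}
import pyspark.sql.types as T
import pyspark.sql.functions as F
from pyspark.sql import Row
import numpy as np

# np.argmax returns a numpy scalar (np.int64), which the JVM-side unpickler
# cannot turn into a Spark SQL IntegerType. Casting to a plain Python int
# before returning avoids the PickleException.
argmax = F.udf(lambda x: int(np.argmax(x)), T.IntegerType())

df = sqlContext.createDataFrame([Row(array=[1, 2, 3])])
df.select(argmax("array")).count()  # works: the UDF returns a Python int
{code}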

> Support numpy types as return values of Python UDFs
> ---------------------------------------------------
>
>                 Key: SPARK-12157
>                 URL: https://issues.apache.org/jira/browse/SPARK-12157
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark, SQL
>    Affects Versions: 1.5.2
>            Reporter: Justin Uang
>
> Currently, if I have a python UDF
> {code}
> import pyspark.sql.types as T
> import pyspark.sql.functions as F
> from pyspark.sql import Row
> import numpy as np
> argmax = F.udf(lambda x: np.argmax(x), T.IntegerType())
> df = sqlContext.createDataFrame([Row(array=[1,2,3])])
> df.select(argmax("array")).count()
> {code}
> I get an exception that is fairly opaque:
> {code}
> Caused by: net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for numpy.dtype)
>         at net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
>         at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:701)
>         at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:171)
>         at net.razorvine.pickle.Unpickler.load(Unpickler.java:85)
>         at net.razorvine.pickle.Unpickler.loads(Unpickler.java:98)
>         at org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1$$anonfun$apply$3.apply(python.scala:404)
>         at org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1$$anonfun$apply$3.apply(python.scala:403)
> {code}
> Numpy scalar types like np.int64 and np.float64 should automatically be 
> converted to the corresponding Spark SQL types.
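
One way to implement the requested conversion in user code today is to unwrap 
numpy scalars with .item() before returning. The to_python helper and py_udf 
wrapper below are hypothetical names for illustration, not part of Spark's API:

{code}
import pyspark.sql.types as T
import pyspark.sql.functions as F
import numpy as np

def to_python(value):
    # numpy scalars (np.int64, np.float64, ...) all derive from np.generic
    # and expose .item(), which returns the equivalent builtin Python value.
    if isinstance(value, np.generic):
        return value.item()
    return value

def py_udf(f, returnType):
    # Hypothetical wrapper: normalize the UDF's return value so the
    # JVM-side unpickler only ever sees builtin Python objects.
    return F.udf(lambda *args: to_python(f(*args)), returnType)

argmax = py_udf(lambda x: np.argmax(x), T.IntegerType())
{code}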


