[ https://issues.apache.org/jira/browse/SPARK-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15231585#comment-15231585 ]

Davies Liu commented on SPARK-8632:
-----------------------------------

[~bijay697] Python UDFs have been improved a lot recently in master; see 
https://issues.apache.org/jira/browse/SPARK-14267 and 
https://issues.apache.org/jira/browse/SPARK-14215.

Could you try master?

> Poor Python UDF performance because of RDD caching
> --------------------------------------------------
>
>                 Key: SPARK-8632
>                 URL: https://issues.apache.org/jira/browse/SPARK-8632
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, SQL
>    Affects Versions: 1.4.0
>            Reporter: Justin Uang
>            Assignee: Davies Liu
>            Priority: Blocker
>             Fix For: 1.5.1, 1.6.0
>
>
> {quote}
> We have been running into performance problems using Python UDFs with 
> DataFrames at large scale.
> From the implementation of BatchPythonEvaluation, it looks like the goal was 
> to reuse the PythonRDD code. It caches the entire child RDD so that it can do 
> two passes over the data: one to feed the PythonRDD, and one to join the 
> Python lambda results back with the original rows (which may contain Java 
> objects that should be passed through).
> In addition, it caches all the columns, even the ones that don't need to be 
> processed by the Python UDF. In one case I was working with, I had a 
> 500-column table, and I wanted to use a Python UDF for one column, and it 
> ended up caching all 500 columns.
> {quote}
> http://apache-spark-developers-list.1001551.n3.nabble.com/Python-UDF-performance-at-large-scale-td12843.html
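For illustration, here is a minimal PySpark sketch of the scenario described in the quoted report: a wide DataFrame where only one column is fed to a Python UDF. The column count, column names, and the UDF itself are made up for the example; the point is that, before the fix, evaluating the UDF caused the entire child RDD to be cached so the Python results could be joined back to the original rows.

{code:python}
# Illustrative sketch only (1.x DataFrame API); names and sizes are hypothetical.
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

sc = SparkContext(appName="python-udf-wide-table")
sqlContext = SQLContext(sc)

# Hypothetical 500-column table; only col_0 needs the Python UDF.
wide_df = sqlContext.range(0, 1000).selectExpr(
    *["id AS col_%d" % i for i in range(500)])

label = udf(lambda v: "row-%d" % v, StringType())

# With the old BatchPythonEvaluation, this plan cached the entire child RDD
# (all 500 columns) so the Python results could be joined back to the
# original rows, instead of shipping only col_0 to the Python workers.
result = wide_df.withColumn("col_0_label", label(wide_df["col_0"]))
result.explain()
{code}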


