[ https://issues.apache.org/jira/browse/SPARK-22239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16452977#comment-16452977 ]

Li Jin commented on SPARK-22239:
--------------------------------

[~hvanhovell], I have done some further research on UDFs over rolling windows 
and posted my results here:

[https://docs.google.com/document/d/14EjeY5z4-NC27-SmIP9CsMPCANeTcvxN44a7SIJtZPc/edit?usp=sharing]

TL;DR: I think we can implement this efficiently by computing the window 
indices in the JVM, passing those indices along with the window data to Python, 
and doing the rolling computation over the indices in Python. I have not 
addressed splitting a window partition into smaller batches, but I think that 
is doable as well. Would you be interested in taking a look and letting me know 
what you think?
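
For illustration, here is a rough pandas-only sketch of the "rolling over 
precomputed indices" idea. The sample data, the begin/end index arrays, and the 
ema helper are assumptions made up for this sketch, not the proposed API; in 
the actual design the begin/end indices would be computed on the JVM side and 
shipped to Python with the batch.

{code:python}
import numpy as np
import pandas as pd

# Sample partition data; in the real design this would arrive as an Arrow batch.
times = pd.Series([0, 100, 150, 300, 450])
values = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])

# Hypothetical indices the JVM would compute for rangeBetween(-200, 0):
# begin[i] = first row with time >= times[i] - 200, end[i] = i + 1 (exclusive).
begin = np.searchsorted(times.values, times.values - 200, side='left')
end = np.arange(len(times)) + 1

def ema(v):
    # Same UDF as in the issue description: EWM mean over the window, last value.
    return v.ewm(alpha=0.5).mean().iloc[-1]

# The Python side only has to slice each window by its indices and apply the UDF.
result = pd.Series([ema(values.iloc[b:e]) for b, e in zip(begin, end)])
print(result)
{code}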

> User-defined window functions with pandas udf
> ---------------------------------------------
>
>                 Key: SPARK-22239
>                 URL: https://issues.apache.org/jira/browse/SPARK-22239
>             Project: Spark
>          Issue Type: Sub-task
>          Components: PySpark
>    Affects Versions: 2.2.0
>         Environment: 
>            Reporter: Li Jin
>            Priority: Major
>
> Window functions are another place where we can benefit from vectorized udfs 
> and add another useful capability to the pandas_udf suite.
> Example usage (preliminary):
> {code:python}
> from pyspark.sql import Window
> from pyspark.sql.functions import pandas_udf
> from pyspark.sql.types import DoubleType
>
> w = Window.partitionBy('id').orderBy('time').rangeBetween(-200, 0)
>
> @pandas_udf(DoubleType())
> def ema(v1):
>     return v1.ewm(alpha=0.5).mean().iloc[-1]
>
> df.withColumn('v1_ema', ema(df.v1).over(w))
> {code}


