[ https://issues.apache.org/jira/browse/SPARK-24403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16494707#comment-16494707 ]

Felix Cheung commented on SPARK-24403:
--------------------------------------

Worker reuse (via a daemon process) is actually supported and is the default for SparkR.
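
For reference, a minimal sketch of toggling that daemon mode from SparkR. The conf name spark.sparkr.use.daemon is an internal setting (true by default, disabled on Windows); treat the exact flag name as an assumption here, not documented API.

{code:r}
library(SparkR)

# Start a session with worker reuse explicitly disabled, e.g. to compare
# per-task R startup cost against the default daemon (fork) mode.
# NOTE: spark.sparkr.use.daemon is internal/undocumented - an assumption here.
sparkR.session(
  appName = "daemon-check",
  sparkConfig = list("spark.sparkr.use.daemon" = "false")
)

# A trivial distributed job; each task now launches a fresh R worker
# instead of forking from a reused daemon process.
spark.lapply(1:4, function(i) i * 2)

sparkR.session.stop()
{code}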

The specific R UDF use case you linked might be a different issue altogether.

Please refer back to the original issue - don't open a new JIRA. Thanks.

> Reuse R worker
> --------------
>
>                 Key: SPARK-24403
>                 URL: https://issues.apache.org/jira/browse/SPARK-24403
>             Project: Spark
>          Issue Type: Improvement
>          Components: SparkR
>    Affects Versions: 2.3.0
>            Reporter: Deepansh
>            Priority: Major
>              Labels: sparkR
>
> Currently, SparkR doesn't support reuse of its workers, so broadcast variables 
> and closures are transferred to the workers on each invocation. Can the idea of 
> Python worker reuse be added to SparkR as well, to improve its performance?
> Performance issue reference: 
> [https://issues.apache.org/jira/browse/SPARK-23650]
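
To make the reported transfer overhead concrete, a minimal illustrative sketch (object names and sizes below are hypothetical, not taken from the report above): any object captured by an R UDF's closure is serialized and shipped to the workers along with the function, so without reuse that cost recurs per invocation.

{code:r}
library(SparkR)
sparkR.session()

df <- createDataFrame(data.frame(x = 1:1000))

# A large local object captured by the UDF closure below; it is serialized
# and transferred to the R workers together with the function itself.
lookup <- runif(1e6)

schema <- structType(structField("y", "double"))
out <- dapply(df, function(pdf) {
  # `lookup` travels with the closure; without worker reuse this
  # transfer is repeated for each invocation.
  data.frame(y = pdf$x * lookup[1])
}, schema)

head(collect(out))
sparkR.session.stop()
{code}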


