[ https://issues.apache.org/jira/browse/SPARK-3030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14120975#comment-14120975 ]

Davies Liu commented on SPARK-3030:
-----------------------------------

I've updated the PR to clean up idle workers after 1 minute.

This feature can be turned off with "spark.python.worker.reuse=false", but I 
think it's better to enable it by default so we can test it.

We can disable it by default for the next release (if it's not stable enough).
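For reference, the property named above can be passed like any other Spark conf; a minimal sketch (the job script name is just a placeholder):

```shell
# Disable Python worker reuse for one job; the same property can also be
# set in conf/spark-defaults.conf. (my_job.py is a hypothetical script.)
spark-submit \
  --conf spark.python.worker.reuse=false \
  my_job.py
```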

> reuse python worker
> -------------------
>
>                 Key: SPARK-3030
>                 URL: https://issues.apache.org/jira/browse/SPARK-3030
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>            Reporter: Davies Liu
>            Assignee: Davies Liu
>
> Currently, it will fork a Python worker for each task; it would be better if 
> we could reuse the worker for later tasks.
> This will be very useful for large datasets with big broadcasts, since the 
> broadcast does not need to be sent to the worker again and again. It can 
> also reduce the overhead of launching a task.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
