[ 
https://issues.apache.org/jira/browse/SPARK-44705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon updated SPARK-44705:
---------------------------------
    Fix Version/s: 4.0.0

> Make PythonRunner single-threaded
> ---------------------------------
>
>                 Key: SPARK-44705
>                 URL: https://issues.apache.org/jira/browse/SPARK-44705
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 3.5.0
>            Reporter: Utkarsh Agarwal
>            Assignee: Utkarsh Agarwal
>            Priority: Major
>             Fix For: 4.0.0
>
>
> PythonRunner, a utility that executes Python UDFs in Spark, currently uses two 
> threads in a producer-consumer model (sketched below). This multi-threading 
> model is problematic and confusing, as Spark's execution model within a task 
> is commonly understood to be single-threaded.
> More importantly, this two-threaded execution has resulted in a series of 
> customer issues involving [race 
> conditions|https://issues.apache.org/jira/browse/SPARK-33277] and 
> [deadlocks|https://issues.apache.org/jira/browse/SPARK-38677] between the 
> threads, as the code is hard to reason about. There have been multiple 
> attempts to rein in these issues, viz., [fix 
> 1|https://issues.apache.org/jira/browse/SPARK-22535], [fix 
> 2|https://github.com/apache/spark/pull/30177], and [fix 
> 3|https://github.com/apache/spark/commit/243c321db2f02f6b4d926114bd37a6e74c2be185].
> Moreover, the fixes have made the code base somewhat abstruse by introducing 
> multiple daemon [monitor 
> threads|https://github.com/apache/spark/blob/a3a32912be04d3760cb34eb4b79d6d481bbec502/core/src/main/scala/org/apache/spark/api/python/PythonRunner.scala#L579] 
> to detect deadlocks.
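>
> A minimal, hypothetical sketch of the two-thread producer-consumer pattern 
> described above (the class name, pipes, and data are made up for illustration 
> and are not Spark's actual PythonRunner code):
> {code:scala}
> import java.io.{DataInputStream, DataOutputStream, PipedInputStream, PipedOutputStream}
>
> // Hypothetical sketch only: the pipe stands in for the socket to the Python
> // worker process, and the names are invented for illustration.
> object TwoThreadRunnerSketch {
>   def main(args: Array[String]): Unit = {
>     val toWorker   = new PipedOutputStream()
>     val fromWorker = new PipedInputStream(toWorker) // "worker" simply echoes input back
>
>     // Producer: a separate daemon writer thread pushes task input to the worker.
>     val writer = new Thread("writer-thread") {
>       setDaemon(true)
>       override def run(): Unit = {
>         val out = new DataOutputStream(toWorker)
>         (1 to 5).foreach(out.writeInt)
>         out.close()
>       }
>     }
>     writer.start()
>
>     // Consumer: the task thread reads results back on a different thread.
>     val in = new DataInputStream(fromWorker)
>     try {
>       while (true) println(s"result: ${in.readInt()}")
>     } catch { case _: java.io.EOFException => () } // worker finished
>
>     // If either side blocks (e.g. on a full buffer while the other thread is
>     // itself waiting), the two threads can stall each other; a single-threaded
>     // runner that interleaves writes and reads avoids that class of bug.
>   }
> }
> {code}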



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
