dongjoon-hyun commented on code in PR #53055:
URL: https://github.com/apache/spark/pull/53055#discussion_r2529520186


##########
core/src/main/scala/org/apache/spark/internal/config/Python.scala:
##########
@@ -138,4 +138,15 @@ private[spark] object Python {
       .intConf
       .checkValue(_ > 0, "If set, the idle worker max size must be > 0.")
       .createOptional
+
+  val PYTHON_DAEMON_KILL_WORKER_ON_FLUSH_FAILURE =
+    ConfigBuilder("spark.python.daemon.killWorkerOnFlushFailure")
+      .doc("When enabled, exceptions raised during output flush operations in the Python " +
+        "worker managed under Python daemon are not caught, causing the worker to terminate " +
+        "with the exception. This allows Spark to detect the failure and retry the task. " +
+        "When disabled, flush exceptions are caught and logged but the worker continues, " +
+        "which could cause the worker to get stuck due to protocol mismatch.")
+      .version("4.1.0")

Review Comment:
   Is it targeting Apache Spark 4.1.0 as a bug fix?
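   For context, a sketch of how the proposed flag would be toggled at submit time. This assumes the config lands with the name shown in the diff above; the default value and the release it ships in are not confirmed by this thread.

   ```shell
   # Hypothetical usage: enable the proposed flag when submitting a PySpark job.
   # Config name is taken from the diff above; whether it is available depends
   # on the Spark release this PR actually lands in.
   spark-submit \
     --conf spark.python.daemon.killWorkerOnFlushFailure=true \
     my_job.py
   ```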



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

