Github user icexelloss commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19852#discussion_r154990418
  
    --- Diff: core/src/main/scala/org/apache/spark/api/python/PythonRunner.scala ---
    @@ -319,7 +319,7 @@ private[spark] abstract class BasePythonRunner[IN, OUT](
     
           case e: Exception if env.isStopped =>
             logDebug("Exception thrown after context is stopped", e)
    -        null.asInstanceOf[OUT]  // exit silently
    +        throw new SparkException("Spark session has been stopped", e)
    --- End diff ---
    
    Python tasks have the same retry mechanism as Java tasks: when a Python task
    fails, the corresponding Java task (EvalPythonExec/FlatMapGroupsInPandasExec)
    also fails, and that Java task is retried.
    
    I think the Python/Java behavior would be consistent with this patch: when the
    Spark session is stopped, the Java task will hit an exception when reading from
    the Python task's output and then throw a SparkException. The Java task will
    then be enqueued for retry, at which point it behaves the same as other Java
    tasks that are enqueued for retry during shutdown.
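    To make the point concrete, here is a minimal, self-contained Scala sketch
    (not the actual Spark source; RetrySketch, readSilently, readOrFail, and
    runWithRetry are illustrative names) of why rethrowing instead of returning
    null matters: the retry loop only kicks in when the reader surfaces a failure.

        // Minimal sketch (not Spark source; names are illustrative) of why
        // rethrowing matters: retry only runs when the reader surfaces a failure.
        object RetrySketch {
          // Old behavior: swallow the error and return null -- the task
          // "succeeds" silently with no output.
          def readSilently[OUT](read: () => OUT): OUT =
            try read() catch { case _: Exception => null.asInstanceOf[OUT] }

          // New behavior: wrap and rethrow -- the task fails and can be
          // re-enqueued for retry.
          def readOrFail[OUT](read: () => OUT): OUT =
            try read() catch {
              case e: Exception =>
                throw new RuntimeException("Spark session has been stopped", e)
            }

          // Stand-in for the scheduler: re-run a failed task up to maxAttempts.
          def runWithRetry[T](maxAttempts: Int)(task: () => T): T = {
            var attempt = 0
            var result: Option[T] = None
            while (result.isEmpty) {
              attempt += 1
              try {
                result = Some(task())
              } catch {
                case e: Exception if attempt < maxAttempts =>
                  println(s"attempt $attempt failed: ${e.getMessage}; retrying")
              }
            }
            result.get
          }

          def main(args: Array[String]): Unit = {
            // readSilently would hide this failure; readOrFail lets the retry
            // loop (and ultimately the caller) see it.
            val failingRead = () =>
              readOrFail[String](() => throw new RuntimeException("python worker gone"))
            try {
              runWithRetry(maxAttempts = 2)(failingRead)
            } catch {
              case e: Exception =>
                println(s"task failed after retries: ${e.getMessage}")
            }
          }
        }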

