[ https://issues.apache.org/jira/browse/SPARK-16925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Rosen updated SPARK-16925:
-------------------------------
    Summary: Spark tasks which cause JVM to exit with a zero exit code may 
cause app to hang in Standalone mode  (was: Spark tasks which cause JVM to exit 
with a zero exit code may cause app to hang)

> Spark tasks which cause JVM to exit with a zero exit code may cause app to 
> hang in Standalone mode
> --------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-16925
>                 URL: https://issues.apache.org/jira/browse/SPARK-16925
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy
>    Affects Versions: 1.6.0, 2.0.0
>            Reporter: Josh Rosen
>            Assignee: Josh Rosen
>            Priority: Critical
>
> If you have a Spark standalone cluster running a single application, and a 
> Spark task repeatedly fails by causing the executor JVM to exit with a 
> _zero_ exit code, then this may temporarily freeze / hang the Spark 
> application.
> For example, running
> {code}
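>         // Single-partition job whose only task exits the executor JVM with a zero exit code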
>         sc.parallelize(1 to 1, 1).foreachPartition { _ => System.exit(0) }
> {code}
> on a cluster will cause all executors to die, but those executors won't be 
> replaced unless another Spark application or worker joins or leaves the 
> cluster. This is caused by a bug in the standalone Master: {{schedule()}} is 
> only called on executor exit when the exit code is non-zero, whereas I think 
> we should always call {{schedule()}}, even on a "clean" executor shutdown, 
> since {{schedule()}} should be idempotent.



