[ https://issues.apache.org/jira/browse/SPARK-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16275066#comment-16275066 ]
Dmitriy Reshetnikov commented on SPARK-7736:
--------------------------------------------

Spark 2.2 is still facing this issue. In my case, Azkaban executes a Spark job, and the finalStatus of the job in the Resource Manager is SUCCESS in any case.

> Exception not failing Python applications (in yarn cluster mode)
> ----------------------------------------------------------------
>
>                 Key: SPARK-7736
>                 URL: https://issues.apache.org/jira/browse/SPARK-7736
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>         Environment: Spark 1.3.1, Yarn 2.7.0, Ubuntu 14.04
>            Reporter: Shay Rojansky
>            Assignee: Marcelo Vanzin
>             Fix For: 1.5.1, 1.6.0
>
> It seems that exceptions thrown in Python Spark apps after the SparkContext
> is instantiated don't cause the application to fail, at least in YARN: the
> application is marked as SUCCEEDED.
> Note that any exception raised before the SparkContext is instantiated
> correctly places the application in the FAILED state.
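A minimal reproduction sketch of the reported behavior (the script name and app name here are illustrative, not taken from the original report): on affected versions, submitting this in yarn-cluster mode reportedly ends with finalStatus=SUCCEEDED despite the raised exception.

{code}
# repro.py -- hypothetical script illustrating the report; submit with:
#   spark-submit --master yarn --deploy-mode cluster repro.py
from pyspark import SparkConf, SparkContext

# Per the report, an exception raised *before* this line correctly
# marks the YARN application as FAILED.
sc = SparkContext(conf=SparkConf().setAppName("spark-7736-repro"))

# On affected versions (1.3.1 per the report, reportedly also 2.2),
# an exception raised *after* the SparkContext exists does not fail
# the application: the Resource Manager still records SUCCEEDED.
raise RuntimeError("this should put the application in FAILED state")
{code}

The recorded outcome can be checked after the run with {{yarn application -status <applicationId>}}, which prints the final state the Resource Manager stored for the application.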