[ https://issues.apache.org/jira/browse/SPARK-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545356#comment-14545356 ]
Sean Owen commented on SPARK-6415:
----------------------------------

Sort of related to https://issues.apache.org/jira/browse/SPARK-4545

You don't really want the whole streaming system to stop if one batch fails, though, right? I can see wanting to stop it if every batch will fail, though that's harder to know.

> Spark Streaming fail-fast: Stop scheduling jobs when a batch fails, and kills the app
> -------------------------------------------------------------------------------------
>
>                 Key: SPARK-6415
>                 URL: https://issues.apache.org/jira/browse/SPARK-6415
>             Project: Spark
>          Issue Type: Bug
>          Components: Streaming
>            Reporter: Hari Shreedharan
>
> Of course, this would have to be done as a configurable param, but such a
> fail-fast is useful; otherwise it is painful to figure out what is happening when
> there are cascading failures. In some cases, the SparkContext shuts down and
> streaming keeps scheduling jobs.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
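The behavior being requested (a configurable fail-fast switch so the scheduler stops submitting batches after the first failure, instead of letting failures cascade) can be illustrated with a toy model. This is a minimal Python sketch, not Spark's actual scheduler; the `FailFastScheduler` class and `fail_fast` flag are hypothetical names invented for illustration.

```python
class FailFastScheduler:
    """Toy model of the requested behavior: with fail_fast enabled, stop
    scheduling further batches after the first failure instead of letting
    later batches fail in a cascade. (Illustrative only; not Spark code.)"""

    def __init__(self, fail_fast=True):
        self.fail_fast = fail_fast  # the "configurable param" from the ticket
        self.stopped = False
        self.results = []

    def run(self, batches, run_batch):
        for batch in batches:
            if self.stopped:
                break  # fail-fast: no further batches are scheduled
            try:
                self.results.append((batch, run_batch(batch)))
            except Exception as exc:
                self.results.append((batch, exc))
                if self.fail_fast:
                    self.stopped = True


def flaky(batch):
    """Hypothetical batch job that fails on batch 2."""
    if batch == 2:
        raise RuntimeError("batch 2 failed")
    return batch * 10


sched = FailFastScheduler(fail_fast=True)
sched.run(range(5), flaky)
# Batches 0 and 1 succeed, batch 2 fails, batches 3 and 4 are never scheduled.
print(len(sched.results))  # → 3
```

With `fail_fast=False` the loop keeps going and all five batches run, which models the current behavior the reporter finds painful to debug.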