[ https://issues.apache.org/jira/browse/SPARK-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14492393#comment-14492393 ]
Apache Spark commented on SPARK-4783:
-------------------------------------

User 'srowen' has created a pull request for this issue:
https://github.com/apache/spark/pull/5492

> System.exit() calls in SparkContext disrupt applications embedding Spark
> ------------------------------------------------------------------------
>
>                 Key: SPARK-4783
>                 URL: https://issues.apache.org/jira/browse/SPARK-4783
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: David Semeria
>
> A common architectural choice for integrating Spark within a larger
> application is to employ a gateway to handle Spark jobs. The gateway is a
> server which contains one or more long-running SparkContexts.
>
> A typical server is created with the following pseudocode:
>
> var continue = true
> while (continue) {
>   try {
>     server.run()
>   } catch (e) {
>     continue = log_and_examine_error(e)
>   }
> }
>
> The problem is that SparkContext frequently calls System.exit() when it
> encounters a problem, which means the server can only be re-spawned at the
> process level. That is much messier than the simple loop above.
>
> Therefore, I believe it makes sense to replace all System.exit() calls in
> SparkContext with the throwing of a fatal error.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
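The gateway loop from the issue's pseudocode can be sketched as a self-contained Java example. This is only an illustration of the retry pattern the reporter describes, under the assumption that the embedded engine signals failure by throwing an exception rather than calling System.exit(); the `run()` and `logAndExamineError()` methods here are hypothetical stand-ins, not Spark APIs.

```java
// Sketch of the gateway retry loop from SPARK-4783's pseudocode.
// Assumption: failures surface as exceptions (the fix the issue proposes),
// so the supervising loop can recover in-process instead of being re-spawned.
public class Gateway {

    static int runs = 0; // counts attempts, for demonstration only

    // Hypothetical stand-in for server.run(): fails twice, then succeeds.
    static void run() {
        runs++;
        if (runs < 3) {
            throw new RuntimeException("simulated driver failure " + runs);
        }
        // Third attempt returns normally.
    }

    // Hypothetical stand-in for log_and_examine_error(e): decides whether
    // the loop should keep going after a failure.
    static boolean logAndExamineError(Exception e) {
        System.out.println("caught: " + e.getMessage());
        return true; // a real gateway would inspect the error before retrying
    }

    public static void main(String[] args) {
        boolean keepGoing = true; // 'continue' is a reserved word in Java
        while (keepGoing) {
            try {
                run();
                keepGoing = false; // run() returned normally; stop retrying
            } catch (RuntimeException e) {
                keepGoing = logAndExamineError(e);
            }
        }
        System.out.println("server stopped cleanly after " + runs + " attempts");
    }
}
```

The key point of the issue is that this supervision only works if no code path inside the loop terminates the JVM: a single System.exit() call defeats the catch block entirely, which is why the reporter proposes throwing a fatal error instead.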