[ https://issues.apache.org/jira/browse/SPARK-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Thomas Sebastian reopened SPARK-11137:
--------------------------------------

Re-opening this issue based on the discussion at
https://issues.apache.org/jira/browse/SPARK-11139. Since SPARK-11139, of which
this issue was marked a duplicate, does not cover the fix in
StreamingContext, this issue is reopened.

> Make StreamingContext.stop() exception-safe
> -------------------------------------------
>
>                 Key: SPARK-11137
>                 URL: https://issues.apache.org/jira/browse/SPARK-11137
>             Project: Spark
>          Issue Type: Bug
>          Components: Streaming
>    Affects Versions: 1.5.1
>            Reporter: Felix Cheung
>            Priority: Minor
>
> In StreamingContext.stop(), when an exception is thrown, the rest of the
> stop/cleanup actions is aborted.
> Discussed in https://github.com/apache/spark/pull/9116, where srowen
> commented:
> Hm, this is getting unwieldy. There are several nested try blocks here. The
> same argument goes for many of these methods -- if one fails, should they not
> continue trying? A tidier solution would be to execute a series of () ->
> Unit code blocks that each perform some cleanup, and make sure that they each
> fire in succession, regardless of the others. The final one, removing the
> shutdown hook, could occur outside synchronization.
> I realize we're expanding the scope of the change here, but is it maybe
> worthwhile to go all the way here?
> Really, something similar could be done for SparkContext, and there's an
> existing JIRA for it somewhere.
> At least, I'd prefer to either narrowly fix the deadlock here, or fix all of
> the finally-related issues separately and all at once.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
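The pattern srowen describes -- running a series of `() -> Unit` cleanup blocks so that a failure in one does not abort the rest -- could be sketched roughly as below. This is only an illustration of the idea, not the actual Spark fix; the names `runAllSteps` and the step labels are hypothetical.

```scala
import scala.util.control.NonFatal

object SafeCleanup {
  // Hypothetical helper: run every cleanup step in order, logging any
  // non-fatal failure instead of letting it abort the remaining steps.
  def runAllSteps(steps: Seq[(String, () => Unit)]): Unit = {
    steps.foreach { case (name, step) =>
      try {
        step()
      } catch {
        case NonFatal(e) =>
          // Log and continue so later cleanup (e.g. removing the
          // shutdown hook) still fires.
          System.err.println(s"Error during cleanup step '$name': $e")
      }
    }
  }
}
```

A stop() method could then list its cleanup actions flatly, e.g. `Seq(("stop receivers", () => ...), ("stop scheduler", () => ...))`, instead of nesting try/finally blocks; the final shutdown-hook removal could run after this loop, outside any lock.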