Felix Cheung created SPARK-11137:
------------------------------------

             Summary: Make StreamingContext.stop() exception-safe
                 Key: SPARK-11137
                 URL: https://issues.apache.org/jira/browse/SPARK-11137
             Project: Spark
          Issue Type: Bug
          Components: Streaming
    Affects Versions: 1.5.1
            Reporter: Felix Cheung
            Priority: Minor


In StreamingContext.stop(), if any step throws an exception, the remaining 
stop/cleanup steps are skipped.
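
For context, a minimal Scala sketch of the failure mode (the step names are 
hypothetical, not the actual StreamingContext internals): the first step that 
throws aborts everything after it.

{code:scala}
object StopIsNotExceptionSafe {
  // Hypothetical cleanup steps standing in for the real ones in StreamingContext.stop().
  private def stopReceivers(): Unit = throw new RuntimeException("receiver stop failed")
  private def stopJobGenerator(): Unit = println("job generator stopped")
  private def removeShutdownHook(): Unit = println("shutdown hook removed")

  def main(args: Array[String]): Unit = {
    try {
      stopReceivers()        // throws...
      stopJobGenerator()     // ...so this never runs
      removeShutdownHook()   // ...and neither does this
    } catch {
      case e: Exception => println(s"stop() aborted early: ${e.getMessage}")
    }
  }
}
{code}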

Discussed in https://github.com/apache/spark/pull/9116, where srowen commented:
Hm, this is getting unwieldy. There are several nested try blocks here. The 
same argument goes for many of these methods -- if one fails should they not 
continue trying? A tidier solution would be to execute a series of () -> 
Unit code blocks that perform some cleanup and make sure that they each fire in 
succession, regardless of the others. The final one to remove the shutdown hook 
could occur outside synchronization.

I realize we're expanding the scope of the change here, but is it maybe 
worthwhile to go all the way here?

Really, something similar could be done for SparkContext and there's an 
existing JIRA for it somewhere.

At least, I'd prefer to either narrowly fix the deadlock here, or fix all of 
the finally-related issues separately and all at once.
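
For illustration, here is a minimal Scala sketch of the pattern suggested 
above: guard each cleanup step individually so that a failure in one does not 
prevent the others from running, and remove the shutdown hook outside the 
synchronized block. The helper tryLogNonFatalError and the step names are 
hypothetical, not the actual StreamingContext internals.

{code:scala}
import scala.util.control.NonFatal

object ExceptionSafeStop {
  // Hypothetical helper: run a cleanup block, log non-fatal failures, and keep going.
  private def tryLogNonFatalError(block: => Unit): Unit = {
    try block catch {
      case NonFatal(e) => println(s"Cleanup step failed, continuing: ${e.getMessage}")
    }
  }

  // Hypothetical cleanup steps standing in for the real ones.
  private def stopReceivers(): Unit = throw new RuntimeException("receiver stop failed")
  private def stopJobGenerator(): Unit = println("job generator stopped")
  private def stopCheckpointWriter(): Unit = println("checkpoint writer stopped")
  private def removeShutdownHook(): Unit = println("shutdown hook removed")

  def stop(): Unit = {
    synchronized {
      // Each () => Unit block is guarded individually, so one failure does not
      // abort the remaining cleanup steps.
      Seq[() => Unit](
        () => stopReceivers(),
        () => stopJobGenerator(),
        () => stopCheckpointWriter()
      ).foreach(f => tryLogNonFatalError(f()))
    }
    // Removing the shutdown hook happens outside the synchronized block.
    tryLogNonFatalError(removeShutdownHook())
  }

  def main(args: Array[String]): Unit = stop()
}
{code}

Running stop() in this sketch logs that the receiver step failed but still 
stops the job generator and checkpoint writer and removes the shutdown hook.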



