Github user tdas commented on the issue:

    https://github.com/apache/spark/pull/20622
  
    Unfortunately, we haven't figured out any good way to avoid that for 
MicroBatchExecution so far. The streaming thread is doing a whole lot of 
different things, and interrupting it is the only reliable way to stop it 
immediately. And since different pieces of code can react to interrupts 
differently, the interrupt can ultimately manifest as a small set of 
interrupt-related exceptions. This whitelist of exceptions has been sufficient 
for MicroBatchExecution for a while now, so I don't think it will grow much more.
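    To make the kind of whitelist concrete, here is a minimal, self-contained 
sketch of such a check. The `looksLikeInterruption` name and the exact cases are 
illustrative stand-ins only; the authoritative version is Spark's 
`StreamExecution.isInterruptionException`, which covers additional wrapped cases.

```scala
import java.io.InterruptedIOException
import java.nio.channels.ClosedByInterruptException
import java.util.concurrent.ExecutionException

object InterruptWhitelistSketch {
  // A plain Thread.interrupt() can surface directly, as an I/O-flavored
  // variant, or wrapped in another exception's cause chain, depending on
  // what the thread happened to be doing at that moment.
  def looksLikeInterruption(e: Throwable): Boolean = e match {
    case _: InterruptedException       => true // blocked in wait/sleep/join
    case _: InterruptedIOException     => true // blocking I/O interrupted
    case _: ClosedByInterruptException => true // NIO channel interrupted
    case _: ExecutionException | _: RuntimeException if e.getCause != null =>
      looksLikeInterruption(e.getCause)        // interrupt wrapped by a library
    case _ => false
  }
}
```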
    
    I am convinced that ContinuousExecution has the same set of problems (the 
thread needs to be interrupted from whatever it is doing) and therefore needs 
to be handled in a similar way. The only difference is that, besides stopping, 
there is an additional reason to interrupt the currently active query (i.e. 
while reconfiguring). And we need to catch the *same* set of exceptions as for 
stop, except that the expected state will be RECONFIGURING instead of 
TERMINATED. So we can reuse the method `StreamExecution.isInterruptionException`.
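    As an illustration of that idea, the sketch below shows how the expected 
state, recorded before the interrupt is sent, can distinguish a stop-interrupt 
from a reconfigure-interrupt. The object, method, and state names here are 
hypothetical stand-ins, not the actual ContinuousExecution code, and the simple 
`InterruptedException` guard stands in for the full whitelist check above.

```scala
import java.util.concurrent.atomic.AtomicReference

object ContinuousInterruptSketch {
  sealed trait State
  case object ACTIVE extends State
  case object RECONFIGURING extends State
  case object TERMINATED extends State

  // The expected state is set *before* the query thread is interrupted, so the
  // catch block can tell an intentional interrupt from an unexpected failure.
  private val state = new AtomicReference[State](ACTIVE)

  // Stand-in for the real processing loop, which only stops when interrupted.
  private def runLoop(): Unit = while (true) Thread.sleep(100)

  private def queryThreadBody(): Unit = {
    try {
      runLoop()
    } catch {
      // In Spark the guard would be StreamExecution.isInterruptionException(e),
      // so that all whitelisted interrupt flavors are treated the same way.
      case _: InterruptedException if state.get() == TERMINATED =>
        () // stop() interrupted us: shut down cleanly
      case _: InterruptedException if state.get() == RECONFIGURING =>
        () // reconfiguration interrupted us: caller restarts with the new config
      case e: Throwable =>
        throw e // anything else is a genuine failure
    }
  }

  def main(args: Array[String]): Unit = {
    val t = new Thread(() => queryThreadBody())
    t.start()
    Thread.sleep(200)
    state.set(TERMINATED) // record why we are about to interrupt...
    t.interrupt()         // ...then interrupt the query thread
    t.join()
    println("query thread stopped cleanly")
  }
}
```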


