Github user jose-torres commented on the issue:

    https://github.com/apache/spark/pull/20622
  
    @zsxwing pointed out that the original behavior was more subtly wrong than 
I expected.
    
    What we want to do is cancel the Spark job and then cleanly restart it
from the last checkpoint. But this wasn't actually working, since cancelling
a Spark job throws an opaque SparkException which the restart path didn't
anticipate.
    
    The reason things seemed to work is that the interrupt() call would almost
always (though it was not guaranteed to) preempt the job cancellation, so the
SparkException never surfaced. So I've updated the PR to anticipate that
SparkException, and filed SPARK-23444 to ask for a better handle for job
cancellations.
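
    To make the handling concrete, here is a minimal sketch (not the actual
ContinuousExecution change in this PR) of treating a cancellation-triggered
SparkException as an expected, clean stop; the names `runContinuousJob` and
`reconfigurationRequested` are hypothetical stand-ins for the real query loop
and reconfiguration check:

    ```scala
    import org.apache.spark.SparkException

    object ReconfigurationHandlingSketch {
      // Sketch only: run a continuous job and treat exceptions caused by an
      // intentional cancellation/reconfiguration as a clean stop.
      def runWithReconfigurationHandling(
          runContinuousJob: () => Unit,
          reconfigurationRequested: () => Boolean): Unit = {
        try {
          runContinuousJob()
        } catch {
          case _: InterruptedException if reconfigurationRequested() =>
            // interrupt() usually wins the race and surfaces here; this is
            // the path that made things seem to work before.
          case _: SparkException if reconfigurationRequested() =>
            // When interrupt() loses the race, the cancellation surfaces as
            // an opaque SparkException; swallow it so the query can restart
            // from the last checkpoint instead of failing.
        }
      }
    }
    ```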
    
    Note that the continuous processing reconfiguration tests will
deterministically fail if this exception isn't caught properly, so the
checking logic isn't really fragile despite looking a bit odd.

