Github user szhem commented on the issue:

    https://github.com/apache/spark/pull/19410
  
    Hi @mallman,
    
    I believe that `ContextCleaner` currently does not delete checkpoint data in case of unexpected failures.
    Also, since the cleanup happens at the end of the job, there is still a chance that a job processing quite a big graph over many iterations can affect other running jobs by consuming a lot of disk space during its run.
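    
    To illustrate the disk-usage concern, here is a minimal sketch (not from this PR; the object name, checkpoint path, and iteration counts are made up) of an iterative job that checkpoints on every iteration. Each `checkpoint()` call materializes a new `rdd-<id>` directory under the checkpoint dir, and `ContextCleaner` only removes those directories once the corresponding RDDs are garbage-collected on the driver (and only when `spark.cleaner.referenceTracking.cleanCheckpoints` is enabled), so the data can pile up over the run and is left behind entirely if the driver dies unexpectedly.
    
    ```scala
    import org.apache.spark.{SparkConf, SparkContext}
    
    // Hypothetical sketch of checkpoint data accumulating during an iterative job.
    object CheckpointAccumulationSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("checkpoint-accumulation-sketch")
          // opt in to checkpoint cleanup via ContextCleaner (off by default)
          .set("spark.cleaner.referenceTracking.cleanCheckpoints", "true")
        val sc = new SparkContext(conf)
        sc.setCheckpointDir("/tmp/spark-checkpoints") // hypothetical path
    
        var rdd = sc.parallelize(1 to 1000000)
        for (i <- 1 to 100) {      // many iterations, as in a large graph job
          rdd = rdd.map(_ + 1)
          rdd.checkpoint()         // writes a new rdd-<id> directory on each iteration
          rdd.count()              // action that materializes the checkpoint
        }
        // Until the older RDDs are GC'd on the driver, their checkpoint
        // directories stay on disk; after a driver crash nothing cleans them up.
        sc.stop()
      }
    }
    ```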

