[ https://issues.apache.org/jira/browse/SPARK-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15579878#comment-15579878 ]
holdenk commented on SPARK-7941:
--------------------------------

So if it's OK - since I don't see other reports of this - unless this is an issue someone (including [~cqnguyen]) is still experiencing, I'll go ahead and soft-close this at the end of next week.

> Cache Cleanup Failure when job is killed by Spark
> --------------------------------------------------
>
>                 Key: SPARK-7941
>                 URL: https://issues.apache.org/jira/browse/SPARK-7941
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, YARN
>    Affects Versions: 1.3.1
>            Reporter: Cory Nguyen
>         Attachments: screenshot-1.png
>
> Problem/Bug:
> If a job is running and Spark kills the job intentionally, the cache files remain on the local/worker nodes and are not cleaned up properly. Over time the old cache builds up and causes a "No Space Left on Device" error.
> The cache is cleaned up properly when the job succeeds. I have not verified whether the cache remains when the user intentionally kills the job.
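Until the underlying cleanup bug is resolved, one stopgap on affected clusters is a periodic sweep of stale application directories on each worker. The sketch below is illustrative only, not Spark's or YARN's own cleanup logic: the local-dir path, the YARN usercache layout (<local-dir>/usercache/<user>/appcache/<app-id>), and the one-day age threshold are all assumptions to adapt to a given cluster, and a real tool should first confirm with YARN that the application is no longer running.

{code:python}
#!/usr/bin/env python
# Hypothetical workaround sketch: purge leftover Spark cache/shuffle
# directories that a killed job left behind on a worker node.
# LOCAL_DIRS and MAX_AGE_SECONDS are assumptions, not Spark defaults.
import os
import shutil
import time

LOCAL_DIRS = ["/data/yarn/usercache"]   # assumed yarn.nodemanager.local-dirs root
MAX_AGE_SECONDS = 24 * 60 * 60          # assumed threshold: untouched for a day

now = time.time()
for root in LOCAL_DIRS:
    if not os.path.isdir(root):
        continue
    for user in os.listdir(root):
        appcache = os.path.join(root, user, "appcache")
        if not os.path.isdir(appcache):
            continue
        for app_dir in os.listdir(appcache):
            path = os.path.join(appcache, app_dir)
            # The directory mtime is only a rough proxy for "the job is
            # long gone"; check the application's status in YARN before
            # deleting anything in production.
            if now - os.path.getmtime(path) > MAX_AGE_SECONDS:
                shutil.rmtree(path, ignore_errors=True)
{code}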