[ https://issues.apache.org/jira/browse/SPARK-12430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132603#comment-15132603 ]

Fede Bar commented on SPARK-12430:
----------------------------------

Hi Jean-Baptiste and Iulian, 

I took a look at [this
comment|https://github.com/apache/spark/pull/4984#issuecomment-117351436] and
it does point out the good old directory structure that Jean-Baptiste's fix
probably restored. I am not running in dynamic allocation mode and the shuffle
service is not enabled, so I don't need any temporary blockmgr files sitting
there occupying disk space. I assume that, even if we restore the original
directory tree /mesos/../spark-ID#/blockmgr-ID#/, the entire directory would
still not be cleaned up when the shuffle service is enabled. You have more
insight into the code than I do, so please let me know if this makes sense to
you. Thank you.
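
To make my assumption concrete, here is a minimal sketch of the cleanup
behavior I have in mind (illustrative only, not the actual Spark source; the
class name and the deleteFilesOnStop flag are my own inventions):

{code}
import java.io.File
import java.util.UUID

// Illustrative sketch only -- it models the behavior I am assuming:
// blockmgr-* is removed on stop unless an external shuffle service still
// needs to serve the files it contains.
class LocalDirManager(rootDir: File, externalShuffleEnabled: Boolean) {

  // blockmgr-* created under the per-application directory
  val blockMgrDir: File = {
    val dir = new File(rootDir, "blockmgr-" + UUID.randomUUID())
    dir.mkdirs()
    dir
  }

  // If an external shuffle service owns the files, the executor must leave
  // them behind for the service to serve; otherwise it may delete them.
  private val deleteFilesOnStop: Boolean = !externalShuffleEnabled

  def stop(): Unit = {
    if (deleteFilesOnStop) deleteRecursively(blockMgrDir)
  }

  private def deleteRecursively(f: File): Unit = {
    Option(f.listFiles()).foreach(_.foreach(deleteRecursively))
    f.delete()
  }
}
{code}

If that matches reality, then in my setup (no shuffle service) the files
should always be deleted on stop, and the leak comes down to where blockmgr-*
lives relative to the directory that actually gets removed.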

> Temporary folders do not get deleted after Task completes causing problems 
> with disk space.
> -------------------------------------------------------------------------------------------
>
>                 Key: SPARK-12430
>                 URL: https://issues.apache.org/jira/browse/SPARK-12430
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.5.1, 1.5.2, 1.6.0
>         Environment: Ubuntu server
>            Reporter: Fede Bar
>
> We are experiencing an issue with automatic /tmp folder deletion after the
> framework completes. Completing an M/R job using Spark 1.5.2 (same behavior
> as Spark 1.5.1) over Mesos will not delete some temporary folders, causing
> free disk space on the server to be exhausted.
> Behavior of an M/R job using Spark 1.4.1 over a Mesos cluster:
> - Launched using spark-submit on one cluster node.
> - The following folders are created: */tmp/mesos/slaves/id#*, */tmp/spark-#/*,
>   */tmp/spark-#/blockmgr-#*
> - When the task completes, */tmp/spark-#/* gets deleted along with its
> */tmp/spark-#/blockmgr-#* sub-folder.
> Behavior of the identical M/R job using Spark 1.5.2 over a Mesos cluster:
> - Launched using spark-submit on one cluster node.
> - The following folders are created: */tmp/mesos/mesos/slaves/id** *,
> */tmp/spark-***/ *, {color:red}/tmp/blockmgr-***{color}
> - When the task completes, */tmp/spark-***/ * gets deleted but NOT the shuffle
> container folder {color:red}/tmp/blockmgr-***{color} (the layout change is
> sketched below, after this description).
> Unfortunately, {color:red}/tmp/blockmgr-***{color} can account for several
> GB depending on the job that ran. Over time this fills up the disk, with
> consequences that we all know.
> Running a cleanup script would probably work, but it is difficult to tell
> folders in use by a running M/R job from stale ones (a rough age-based
> sketch follows below). I did notice similar issues opened by other users and
> marked as "resolved", but none seems to exactly match the behavior above.
> I really hope someone has insights on how to fix it.
> Thank you very much!
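>
> To illustrate the layout change concretely, here is a rough sketch (the
> object and method names are made up for illustration; these are not Spark
> internals):
> {code}
> import java.io.File
> import java.util.UUID
>
> // Sketch of the observed layout change between Spark 1.4.1 and 1.5.2.
> object TempLayout {
>   def appDir(root: File): File =
>     new File(root, "spark-" + UUID.randomUUID())
>
>   // 1.4.1-style: blockmgr-* nested under the spark-* application dir,
>   // so removing spark-* at task completion removes blockmgr-* as well.
>   def nestedBlockMgrDir(root: File): File =
>     new File(appDir(root), "blockmgr-" + UUID.randomUUID())
>
>   // 1.5.2-style (as observed): blockmgr-* created directly under the
>   // root (/tmp), so removing spark-* leaves blockmgr-* behind.
>   def flatBlockMgrDir(root: File): File =
>     new File(root, "blockmgr-" + UUID.randomUUID())
> }
> {code}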
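>
> As a stop-gap until this is fixed, an age-based cleaner is the best I can
> think of (a rough sketch; it assumes a stale blockmgr-* folder is one that
> has not been modified recently, which is a weak proxy -- a long-running job
> could be hit by mistake):
> {code}
> import java.io.File
>
> // Workaround sketch: remove /tmp/blockmgr-* directories not modified for
> // maxAgeHours. Damage control only, not a fix for the leak itself.
> object StaleBlockMgrCleaner {
>   def clean(tmp: File = new File("/tmp"), maxAgeHours: Long = 24): Unit = {
>     val cutoff = System.currentTimeMillis() - maxAgeHours * 3600L * 1000L
>     Option(tmp.listFiles()).getOrElse(Array.empty[File])
>       .filter(d => d.isDirectory && d.getName.startsWith("blockmgr-"))
>       .filter(_.lastModified() < cutoff)
>       .foreach(deleteRecursively)
>   }
>
>   private def deleteRecursively(f: File): Unit = {
>     Option(f.listFiles()).foreach(_.foreach(deleteRecursively))
>     f.delete()
>   }
> }
> {code}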


