[ https://issues.apache.org/jira/browse/SPARK-12430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15132759#comment-15132759 ]

Fede Bar commented on SPARK-12430:
----------------------------------

Yes, I confirm that I am using version 1.6. Since that release, the effects of 
the issue are mitigated by Mesos GC (Mesos deletes the framework directory, 
including the blockmgr-ID# folder, as per 
https://issues.apache.org/jira/browse/SPARK-9708 ). In my case the blockmgr-ID# 
folder never gets deleted when the task ends; space is reclaimed only when 
Mesos GC deletes the parent folder. I see two possible permanent solutions: 
(1) fix the race condition that prevents the blockmgr-ID# folder from being 
deleted, as you indicated; or (2) move the blockmgr-ID# folder to a 
subdirectory of /mesos/../framework/mesos-ID#/spark-ID#/ as per Jean-Baptiste's 
fix. Moving the folder apparently forces deletion.
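
For reference, here is a minimal user-side sketch of the idea behind option (2): 
point Spark's scratch space at the Mesos sandbox so that the blockmgr-ID# folder 
is created inside the framework directory that Mesos GC eventually removes. This 
is not Jean-Baptiste's patch, only an illustration; it assumes the agent exposes 
the sandbox path via the MESOS_SANDBOX environment variable and that 
spark.local.dir (not overridden by SPARK_LOCAL_DIRS on the executors) is what 
decides where the block manager directory is created:

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: keep Spark scratch space (including the blockmgr-* folder)
// inside the Mesos sandbox, so Mesos GC removes it together with the
// framework directory. MESOS_SANDBOX is assumed to be set by the agent;
// falling back to /tmp reproduces the current behavior.
val scratchDir = sys.env.getOrElse("MESOS_SANDBOX", "/tmp")

val conf = new SparkConf()
  .setAppName("blockmgr-in-sandbox-workaround")
  .set("spark.local.dir", scratchDir)

val sc = new SparkContext(conf)
{code}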

What do you think? Should we move forward with Jean-Baptiste's fix?
This issue is not as blocking as it used to be, but it should still be 
addressed. Also, it is not clear why the blockmgr-ID# folder was moved out of 
the spark-ID# folder in the first place (with version 1.5.0, I suspect). It 
would be nice if someone could explain it (just for personal reference). Thank 
you very much!

> Temporary folders do not get deleted after Task completes causing problems 
> with disk space.
> -------------------------------------------------------------------------------------------
>
>                 Key: SPARK-12430
>                 URL: https://issues.apache.org/jira/browse/SPARK-12430
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.5.1, 1.5.2, 1.6.0
>         Environment: Ubuntu server
>            Reporter: Fede Bar
>
> We are experiencing an issue with automatic /tmp folder deletion after the 
> framework completes. Completing an M/R job using Spark 1.5.2 (same behavior 
> as Spark 1.5.1) over Mesos does not delete some temporary folders, causing 
> free disk space on the server to be exhausted. 
> Behavior of an M/R job using Spark 1.4.1 over a Mesos cluster:
> - Launched using spark-submit on one cluster node.
> - The following folders are created: */tmp/mesos/slaves/id#* , 
>   */tmp/spark-#/* , */tmp/spark-#/blockmgr-#*
> - When the task completes, */tmp/spark-#/* gets deleted along with its 
>   */tmp/spark-#/blockmgr-#* sub-folder.
> Behavior of the same M/R job using Spark 1.5.2 over a Mesos cluster:
> - Launched using spark-submit on one cluster node.
> - The following folders are created: */tmp/mesos/mesos/slaves/id** * , 
>   */tmp/spark-***/ * , {color:red}/tmp/blockmgr-***{color}
> - When the task completes, */tmp/spark-***/ * gets deleted but NOT the 
>   shuffle container folder {color:red}/tmp/blockmgr-***{color}.
> Unfortunately, {color:red}/tmp/blockmgr-***{color} can account for several 
> GB depending on the job that ran. Over time this fills the disk, with 
> consequences that we all know. 
> Running a cleanup shell script would probably work, but it is difficult to 
> distinguish folders in use by a running M/R job from stale ones. I did 
> notice similar issues opened by other users and marked as "resolved", but 
> none seems to exactly match the behavior described above. 
> I really hope someone has insights on how to fix it.
> Thank you very much!
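
Regarding the shell-script workaround mentioned in the quoted description: 
until the leak is fixed, one rough heuristic is to treat a /tmp/blockmgr-* 
directory as stale when nothing inside it has been modified for some time. The 
sketch below assumes an arbitrary 24-hour threshold and could still remove 
data belonging to a long-idle but live job, so use it with care:

{code:scala}
import java.io.File

// Rough heuristic sketch: delete /tmp/blockmgr-* directories whose contents
// have not been modified for more than `maxAgeHours`. The threshold is an
// arbitrary assumption; a long-idle but still-running job could be affected.
object StaleBlockMgrCleaner {
  def newestMTime(f: File): Long = {
    val children = Option(f.listFiles()).getOrElse(Array.empty[File])
    (f.lastModified() +: children.map(newestMTime)).max
  }

  def deleteRecursively(f: File): Unit = {
    Option(f.listFiles()).getOrElse(Array.empty[File]).foreach(deleteRecursively)
    f.delete()
  }

  def main(args: Array[String]): Unit = {
    val maxAgeHours = 24L
    val cutoff = System.currentTimeMillis() - maxAgeHours * 3600 * 1000
    val stale = Option(new File("/tmp").listFiles())
      .getOrElse(Array.empty[File])
      .filter(d => d.isDirectory && d.getName.startsWith("blockmgr-"))
      .filter(newestMTime(_) < cutoff)

    stale.foreach { dir =>
      println(s"Deleting stale block manager dir: ${dir.getAbsolutePath}")
      deleteRecursively(dir)
    }
  }
}
{code}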


