[ https://issues.apache.org/jira/browse/SPARK-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15128935#comment-15128935 ]
Mridul Muralidharan commented on SPARK-11293:
---------------------------------------------
Not iterating to the end has a bunch of issues IIRC, including what you
mention above. For example, memory-mapped buffers are not released, etc.
Unfortunately, I don't think there is a general clean solution for it. It would
be good to see what alternatives exist to resolve this.
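
As a concrete illustration of that failure mode, here is a minimal, self-contained Scala sketch (the {{CleanupOnExhaustionIterator}} and demo names are invented for this example and are not Spark classes): a cleanup callback that only fires once the wrapped iterator is fully drained never runs when the consumer stops early, so anything tied to the iterator's lifetime, such as memory-mapped spill buffers, stays held.

{code:scala}
// An iterator wrapper that runs a cleanup callback only when the underlying
// iterator reports exhaustion. Invented for illustration; not Spark code.
class CleanupOnExhaustionIterator[A](sub: Iterator[A], cleanup: () => Unit)
    extends Iterator[A] {
  private var cleaned = false
  override def hasNext: Boolean = {
    val more = sub.hasNext
    if (!more && !cleaned) { cleaned = true; cleanup() }
    more
  }
  override def next(): A = sub.next()
}

object PartialConsumptionLeakDemo {
  def main(args: Array[String]): Unit = {
    var released = false
    val records =
      new CleanupOnExhaustionIterator[Int](Iterator(1, 2, 3, 4, 5), () => released = true)

    // A consumer that stops early (a limit, a take, a short-circuiting join)
    // never drives the wrapper to exhaustion, so cleanup() never runs.
    records.take(2).foreach(println)
    println(s"resources released: $released")  // prints false: the resources stay held
  }
}
{code}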
> Spillable collections leak shuffle memory
> -----------------------------------------
>
> Key: SPARK-11293
> URL: https://issues.apache.org/jira/browse/SPARK-11293
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.3.1, 1.4.1, 1.5.1, 1.6.0
> Reporter: Josh Rosen
> Assignee: Josh Rosen
> Priority: Critical
>
> I discovered multiple leaks of shuffle memory while working on my memory
> manager consolidation patch, which added the ability to do strict memory leak
> detection for the bookkeeping that used to be performed by the
> ShuffleMemoryManager. This uncovered a handful of places where tasks can
> acquire execution/shuffle memory but never release it, starving themselves of
> memory.
> Problems that I found:
> * {{ExternalSorter.stop()}} should release the sorter's shuffle/execution
> memory.
> * {{BlockStoreShuffleReader}} should call {{ExternalSorter.stop()}} using a
> {{CompletionIterator}}.
> * {{ExternalAppendOnlyMap}} exposes no equivalent of {{stop()}} for freeing
> its resources.
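
The last two bullets sketch the intended fix: give the sorter a {{stop()}} that returns its execution memory, and have the reader wrap the sorter's output so {{stop()}} runs as soon as the iterator is drained. The standalone Scala sketch below only shows that wiring; {{SimpleSorter}} and {{withCompletion}} are invented stand-ins, not Spark's {{ExternalSorter}} or {{CompletionIterator}}.

{code:scala}
object CompletionStopSketch {

  // Stand-in for a sorter that holds execution memory until stop() is called.
  class SimpleSorter {
    private var heldBytes = 0L
    def insertAll(records: Iterator[Int]): Iterator[Int] = {
      val buf = records.toArray
      heldBytes = buf.length * 4L  // pretend this was acquired from the memory manager
      buf.iterator
    }
    def memoryHeld: Long = heldBytes
    def stop(): Unit = { heldBytes = 0L }  // release the sorter's shuffle/execution memory
  }

  // A CompletionIterator-style wrapper: run onCompletion once the wrapped
  // iterator reports exhaustion.
  def withCompletion[A](sub: Iterator[A])(onCompletion: => Unit): Iterator[A] =
    new Iterator[A] {
      private var fired = false
      def hasNext: Boolean = {
        val more = sub.hasNext
        if (!more && !fired) { fired = true; onCompletion }
        more
      }
      def next(): A = sub.next()
    }

  def main(args: Array[String]): Unit = {
    val sorter = new SimpleSorter
    val sorted = sorter.insertAll(Iterator.range(0, 1000))
    val output = withCompletion(sorted)(sorter.stop())
    output.foreach(_ => ())  // the consumer drains the iterator, so stop() has run
    println(s"execution memory still held: ${sorter.memoryHeld} bytes")  // prints 0
  }
}
{code}

Note that this wiring still depends on the consumer draining the iterator: as the comment above points out, a consumer that stops early never reaches the completion callback, so an explicit cleanup path (or a task-level completion hook) is still needed for that case.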