[ https://issues.apache.org/jira/browse/SPARK-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765096#comment-15765096 ]

Barry Becker commented on SPARK-11293:
--------------------------------------

Not sure if this is related, but I am running Spark 2.0.2 through spark-jobserver 
and see tons of messages like this:
{code}
[2016-12-20 11:49:28,662] WARN  he.spark.executor.Executor [] 
[akka://JobServer/user/context-supervisor/sql-context] - Managed memory leak 
detected; size = 5762976 bytes, TID = 42621
[2016-12-20 11:49:28,662] WARN  k.memory.TaskMemoryManager [] 
[akka://JobServer/user/context-supervisor/sql-context] - leak 5.5 MB memory 
from org.apache.spark.util.collection.ExternalSorter@35f81493
[2016-12-20 11:49:28,662] WARN  he.spark.executor.Executor [] 
[akka://JobServer/user/context-supervisor/sql-context] - Managed memory leak 
detected; size = 5762976 bytes, TID = 42622
[2016-12-20 11:49:28,702] WARN  k.memory.TaskMemoryManager [] 
[akka://JobServer/user/context-supervisor/sql-context] - leak 5.5 MB memory 
from org.apache.spark.util.collection.ExternalSorter@16da7c1a
[2016-12-20 11:49:28,702] WARN  he.spark.executor.Executor [] 
[akka://JobServer/user/context-supervisor/sql-context] - Managed memory leak 
detected; size = 5762976 bytes, TID = 42623
[2016-12-20 11:49:28,702] WARN  k.memory.TaskMemoryManager [] 
[akka://JobServer/user/context-supervisor/sql-context] - leak 5.5 MB memory 
from org.apache.spark.util.collection.ExternalSorter@151060cf
[2016-12-20 11:49:28,702] WARN  he.spark.executor.Executor [] 
[akka://JobServer/user/context-supervisor/sql-context] - Managed memory leak 
detected; size = 5762976 bytes, TID = 42624
[Stage 5700:=========================>                            (44 + 4) / 
92][2016-12-20 11:49:35,479] WARN  k.memory.TaskMemoryManager [] 
[akka://JobServer/user/context-supervisor/sql-context] - l
{code}
Are managed memory leaks ever expected behavior, or do they always indicate a 
real memory leak? I don't see memory usage growing much in jvisualvm.

> ExternalSorter and ExternalAppendOnlyMap should free shuffle memory in their 
> stop() methods
> -------------------------------------------------------------------------------------------
>
>                 Key: SPARK-11293
>                 URL: https://issues.apache.org/jira/browse/SPARK-11293
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.3.1, 1.4.1, 1.5.1, 1.6.0
>            Reporter: Josh Rosen
>            Assignee: Josh Rosen
>            Priority: Critical
>             Fix For: 1.6.0
>
>
> I discovered multiple leaks of shuffle memory while working on my memory 
> manager consolidation patch, which added the ability to do strict memory leak 
> detection for the bookkeeping that used to be performed by the 
> ShuffleMemoryManager. This uncovered a handful of places where tasks can 
> acquire execution/shuffle memory but never release it, starving themselves of 
> memory.
> Problems that I found:
> * {{ExternalSorter.stop()}} should release the sorter's shuffle/execution 
> memory.
> * {{BlockStoreShuffleReader}} should call {{ExternalSorter.stop()}} using a 
> {{CompletionIterator}}.
> * {{ExternalAppendOnlyMap}} exposes no equivalent of {{stop()}} for freeing 
> its resources.
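The second bullet relies on the completion-iterator pattern: wrap an iterator so a cleanup callback fires exactly once when the underlying iterator is exhausted, which lets the reader free the sorter's execution/shuffle memory as soon as the last record is consumed. Spark has an internal {{org.apache.spark.util.CompletionIterator}} for this; the standalone sketch below is an illustration of the idea only, not Spark's actual implementation, and {{Demo}} and the {{freed}} flag are hypothetical stand-ins.

{code}
// Minimal sketch of a completion iterator: runs `completion` once,
// right after the wrapped iterator reports it has no more elements.
class CompletionIterator[A](sub: Iterator[A], completion: () => Unit)
    extends Iterator[A] {
  private var completed = false

  def hasNext: Boolean = {
    val r = sub.hasNext
    if (!r && !completed) {
      completed = true
      completion() // e.g. ExternalSorter.stop() releasing its memory
    }
    r
  }

  def next(): A = sub.next()
}

object Demo {
  def main(args: Array[String]): Unit = {
    var freed = false
    // `() => freed = true` stands in for a stop() call that returns
    // the sorter's pages to the TaskMemoryManager.
    val iter = new CompletionIterator[Int](Iterator(1, 2, 3), () => freed = true)
    val out = iter.toList
    assert(out == List(1, 2, 3)) // all records still delivered
    assert(freed)                // cleanup ran once the iterator drained
    println("ok")
  }
}
{code}

Without this wrapper, a task that fully consumes the sorted output but never calls stop() keeps the acquired pages booked against itself until the task ends, which is exactly the pattern the leak detector flags.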



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
