[ https://issues.apache.org/jira/browse/SPARK-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15249679#comment-15249679 ]

Reynold Xin edited comment on SPARK-11293 at 5/3/16 4:31 AM:
-------------------------------------------------------------

I was using Apache Spark 1.6 on EMR with Spark Streaming on YARN and saw memory 
leaks in one of the containers. Here are the logs:

{code}
16/04/14 13:49:10 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 
2942916
16/04/14 13:49:10 INFO executor.Executor: Running task 22.0 in stage 35684.0 
(TID 2942915)
16/04/14 13:49:10 INFO executor.Executor: Running task 23.0 in stage 35684.0 
(TID 2942916)
16/04/14 13:49:10 INFO storage.ShuffleBlockFetcherIterator: Getting 94 
non-empty blocks out of 94 blocks
16/04/14 13:49:10 INFO storage.ShuffleBlockFetcherIterator: Getting 94 
non-empty blocks out of 94 blocks
16/04/14 13:49:10 INFO storage.ShuffleBlockFetcherIterator: Started 2 remote 
fetches in 1 ms
16/04/14 13:49:10 INFO storage.ShuffleBlockFetcherIterator: Started 2 remote 
fetches in 1 ms
16/04/14 13:49:10 INFO storage.MemoryStore: Block input-3-1460583424327 stored 
as values in memory (estimated size 244.7 KB, free 19.3 MB)
16/04/14 13:49:10 INFO receiver.BlockGenerator: Pushed block 
input-3-1460641750200
16/04/14 13:49:10 INFO storage.MemoryStore: 1 blocks selected for dropping
16/04/14 13:49:10 INFO storage.BlockManager: Dropping block 
input-1-1460615659379 from memory
16/04/14 13:49:10 INFO storage.MemoryStore: 1 blocks selected for dropping
16/04/14 13:49:10 INFO storage.BlockManager: Dropping block 
input-1-1460615659380 from memory
16/04/14 13:49:10 INFO memory.TaskMemoryManager: Memory used in task 2942915
16/04/14 13:49:10 INFO memory.TaskMemoryManager: Acquired by 
org.apache.spark.unsafe.map.BytesToBytesMap@34158d5f: 32.3 MB
16/04/14 13:49:10 INFO memory.TaskMemoryManager: 0 bytes of memory were used by 
task 2942915 but are not associated with specific consumers
16/04/14 13:49:10 INFO memory.TaskMemoryManager: 101247172 bytes of memory are 
used for execution and 3603881260 bytes of memory are used for storage
16/04/14 13:49:10 WARN memory.TaskMemoryManager: leak 32.3 MB memory from 
org.apache.spark.unsafe.map.BytesToBytesMap@34158d5f
16/04/14 13:49:10 ERROR executor.Executor: Managed memory leak detected; size = 
33816576 bytes, TID = 2942915
16/04/14 13:49:10 ERROR executor.Executor: Exception in task 22.0 in stage 
35684.0 (TID 2942915)
java.lang.OutOfMemoryError: Unable to acquire 262144 bytes of memory, got 220032
        at 
org.apache.spark.memory.MemoryConsumer.allocateArray(MemoryConsumer.java:91)
        at 
org.apache.spark.unsafe.map.BytesToBytesMap.allocate(BytesToBytesMap.java:735)
        at 
org.apache.spark.unsafe.map.BytesToBytesMap.<init>(BytesToBytesMap.java:197)
        at 
org.apache.spark.unsafe.map.BytesToBytesMap.<init>(BytesToBytesMap.java:212)
        at 
org.apache.spark.sql.execution.UnsafeFixedWidthAggregationMap.<init>(UnsafeFixedWidthAggregationMap.java:103)
        at 
org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.<init>(TungstenAggregationIterator.scala:483)
        at 
org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:95)
        at 
org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
        at 
org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
        at 
org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
16/04/14 13:49:10 INFO executor.Executor: Finished task 23.0 in stage 35684.0 
(TID 2942916). 1921 bytes result sent to driver
16/04/14 13:49:10 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 
2942927
16/04/14 13:49:10 INFO executor.Executor: Running task 34.0 in stage 35684.0 
(TID 2942927)
16/04/14 13:49:10 ERROR util.SparkUncaughtExceptionHandler: Uncaught exception 
in thread Thread[Executor task launch worker-2,5,main]
java.lang.OutOfMemoryError: Unable to acquire 262144 bytes of memory, got 220032
        at 
org.apache.spark.memory.MemoryConsumer.allocateArray(MemoryConsumer.java:91)
        at 
org.apache.spark.unsafe.map.BytesToBytesMap.allocate(BytesToBytesMap.java:735)
        at 
org.apache.spark.unsafe.map.BytesToBytesMap.<init>(BytesToBytesMap.java:197)
        at 
org.apache.spark.unsafe.map.BytesToBytesMap.<init>(BytesToBytesMap.java:212)
        at 
org.apache.spark.sql.execution.UnsafeFixedWidthAggregationMap.<init>(UnsafeFixedWidthAggregationMap.java:103)
        at 
org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.<init>(TungstenAggregationIterator.scala:483)
        at 
org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:95)
        at 
org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
        at 
org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
        at 
org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
{code}
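
The "Managed memory leak detected" error above comes from Spark's per-task bookkeeping: when a task ends, any execution memory still attributed to a consumer (here a BytesToBytesMap) is reported as a leak. A rough, self-contained illustration of that accounting idea follows; ConsumerLedger is a hypothetical stand-in for the purpose of the sketch, not Spark's actual TaskMemoryManager.

{code}
// Rough illustration of per-task memory accounting and end-of-task leak
// reporting, in the spirit of the log messages above. ConsumerLedger is a
// hypothetical class, not Spark's real TaskMemoryManager.
import scala.collection.mutable

class ConsumerLedger(taskId: Long) {
  private val acquired = mutable.Map.empty[String, Long].withDefaultValue(0L)

  def acquire(consumer: String, bytes: Long): Unit =
    acquired(consumer) += bytes

  def release(consumer: String, bytes: Long): Unit =
    acquired(consumer) -= bytes

  // Called when the task ends: anything still attributed to a consumer
  // was never released and is reported as a managed memory leak.
  def cleanUpAtTaskEnd(): Long = {
    val leaked = acquired.filter { case (_, bytes) => bytes > 0 }
    leaked.foreach { case (consumer, bytes) =>
      println(s"WARN  leak $bytes bytes from $consumer (task $taskId)")
    }
    leaked.values.sum
  }
}

object LeakCheckDemo extends App {
  val ledger = new ConsumerLedger(2942915L)
  ledger.acquire("BytesToBytesMap@34158d5f", 33816576L) // acquired but never released
  val leakedBytes = ledger.cleanUpAtTaskEnd()
  println(s"ERROR Managed memory leak detected; size = $leakedBytes bytes")
}
{code}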


> Spillable collections leak shuffle memory
> -----------------------------------------
>
>                 Key: SPARK-11293
>                 URL: https://issues.apache.org/jira/browse/SPARK-11293
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.3.1, 1.4.1, 1.5.1, 1.6.0, 1.6.1
>            Reporter: Josh Rosen
>            Assignee: Josh Rosen
>            Priority: Critical
>
> I discovered multiple leaks of shuffle memory while working on my memory 
> manager consolidation patch, which added the ability to do strict memory leak 
> detection for the bookkeeping that used to be performed by the 
> ShuffleMemoryManager. This uncovered a handful of places where tasks can 
> acquire execution/shuffle memory but never release it, starving themselves of 
> memory.
> Problems that I found:
> * {{ExternalSorter.stop()}} should release the sorter's shuffle/execution 
> memory.
> * BlockStoreShuffleReader should call {{ExternalSorter.stop()}} using a 
> {{CompletionIterator}}.
> * {{ExternalAppendOnlyMap}} exposes no equivalent of {{stop()}} for freeing 
> its resources.
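
The fix described above hinges on releasing the sorter's execution memory exactly once, when its output iterator has been fully consumed. A minimal sketch of that "release on completion" pattern, using simplified stand-ins (SimpleSorter and completeOnExhaustion here are not Spark's real ExternalSorter or CompletionIterator):

{code}
// Sketch of the "release on completion" idea from the issue description.
// The types below are simplified stand-ins, not Spark internals.
object CompletionIteratorSketch {

  // Stand-in for a memory-holding consumer such as ExternalSorter.
  class SimpleSorter(records: Seq[(Int, String)]) {
    private var acquiredBytes: Long = 64L * 1024 * 1024 // pretend we hold 64 MB

    def iterator: Iterator[(Int, String)] = records.sortBy(_._1).iterator

    // stop() must release every byte of execution memory held by the sorter.
    def stop(): Unit = {
      if (acquiredBytes > 0) {
        println(s"releasing $acquiredBytes bytes of execution memory")
        acquiredBytes = 0L
      }
    }
  }

  // Wraps an iterator so that `completion` runs exactly once, after the last element.
  def completeOnExhaustion[A](underlying: Iterator[A])(completion: => Unit): Iterator[A] =
    new Iterator[A] {
      private var completed = false
      def hasNext: Boolean = {
        val more = underlying.hasNext
        if (!more && !completed) { completed = true; completion }
        more
      }
      def next(): A = underlying.next()
    }

  def main(args: Array[String]): Unit = {
    val sorter = new SimpleSorter(Seq(3 -> "c", 1 -> "a", 2 -> "b"))
    // A shuffle reader would hand back this wrapped iterator; the sorter's
    // memory is freed when the task finishes consuming it, instead of leaking.
    val output = completeOnExhaustion(sorter.iterator)(sorter.stop())
    output.foreach(println)
  }
}
{code}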


