Interesting. After experimenting with various parameters, increasing
spark.sql.shuffle.partitions and decreasing spark.buffer.pageSize helped my
job go through. BTW, I'd be happy to help get this issue fixed.
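For anyone hitting the same thing, here's a minimal sketch of how I applied
those two settings; the concrete values below are illustrative rather than
recommendations, they just happened to let my job finish:

  import org.apache.spark.{SparkConf, SparkContext}

  // Workaround sketch; values are illustrative, tune them for your job.
  // More shuffle partitions -> smaller partitions, so each sort task needs
  // less memory; a smaller page size -> smaller chunks requested at a time
  // from the Tungsten memory manager.
  val conf = new SparkConf()
    .setAppName("shuffle-oom-workaround")
    .set("spark.sql.shuffle.partitions", "2000") // default is 200
    .set("spark.buffer.pageSize", "8m")          // unset by default; Spark derives it from executor memory
  val sc = new SparkContext(conf)

The same properties can also be passed at submit time via
--conf spark.sql.shuffle.partitions=... and --conf spark.buffer.pageSize=...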

Nezih

On Tue, Mar 22, 2016 at 1:07 AM james <yiaz...@gmail.com> wrote:

> Hi,
> I also hit the 'Unable to acquire memory' issue using Spark 1.6.1 with
> dynamic allocation on YARN. In my case it happened when setting
> spark.sql.shuffle.partitions larger than 200. The error stack differs from
> the one Nezih reported, so I'm not sure whether they share the same root
> cause.
>
> Thanks
> James
>
> 16/03/17 16:02:11 INFO spark.MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 1912805 bytes
> 16/03/17 16:02:12 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 1 to hw-node3:55062
> 16/03/17 16:02:12 INFO spark.MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 1912805 bytes
> 16/03/17 16:02:16 INFO scheduler.TaskSetManager: Starting task 280.0 in stage 153.0 (TID 9390, hw-node5, partition 280, PROCESS_LOCAL, 2432 bytes)
> 16/03/17 16:02:16 WARN scheduler.TaskSetManager: Lost task 170.0 in stage 153.0 (TID 9280, hw-node5): java.lang.OutOfMemoryError: Unable to acquire 1073741824 bytes of memory, got 1060110796
>         at org.apache.spark.memory.MemoryConsumer.allocateArray(MemoryConsumer.java:91)
>         at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.growPointerArrayIfNecessary(UnsafeExternalSorter.java:295)
>         at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.insertRecord(UnsafeExternalSorter.java:330)
>         at org.apache.spark.sql.execution.UnsafeExternalRowSorter.insertRow(UnsafeExternalRowSorter.java:91)
>         at org.apache.spark.sql.execution.UnsafeExternalRowSorter.sort(UnsafeExternalRowSorter.java:168)
>         at org.apache.spark.sql.execution.Sort$anonfun$1.apply(Sort.scala:90)
>         at org.apache.spark.sql.execution.Sort$anonfun$1.apply(Sort.scala:64)
>         at org.apache.spark.rdd.RDD$anonfun$mapPartitionsInternal$1$anonfun$apply$21.apply(RDD.scala:728)
>         at org.apache.spark.rdd.RDD$anonfun$mapPartitionsInternal$1$anonfun$apply$21.apply(RDD.scala:728)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>         at org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:88)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>         at org.apache.spark.scheduler.Task.run(Task.scala:89)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
