[ https://issues.apache.org/jira/browse/SPARK-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339861#comment-14339861 ]
SaintBacchus commented on SPARK-6056:
-------------------------------------

Hi [~adav], [~lianhuiwang], [~zzcclp], I have read your discussion at https://issues.apache.org/jira/browse/SPARK-2468, and I am hitting a similar problem again. Neither setting `preferDirectBufs` nor limiting the number of threads bounds the off-heap memory use. At line 269 of the class `AbstractNioByteChannel` in netty-4.0.23.Final, Netty allocates an off-heap buffer of the same size as the heap buffer. So for every buffer you want to transfer, the same amount of off-heap memory is allocated as well. Once the allocated off-heap size exceeds the memory-overhead capacity configured in YARN, the executor is killed.

> Unlimit offHeap memory use cause RM killing the container
> ---------------------------------------------------------
>
>                 Key: SPARK-6056
>                 URL: https://issues.apache.org/jira/browse/SPARK-6056
>             Project: Spark
>          Issue Type: Bug
>      Components: Shuffle, Spark Core
>    Affects Versions: 1.2.1
>            Reporter: SaintBacchus

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
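For illustration only (this is not the actual Netty source), a minimal Java sketch of the pattern described above: a heap buffer is mirrored into a freshly allocated direct (off-heap) buffer of the same size before the write, so the off-heap footprint grows with the amount of data in flight, and only the direct part is governed by -XX:MaxDirectMemorySize rather than -Xmx.

```java
import java.nio.ByteBuffer;

public class DirectCopySketch {
    // Sketch of the behavior the comment attributes to
    // AbstractNioByteChannel: for a heap buffer of N bytes, a direct
    // buffer of the same N bytes is allocated and the data is copied
    // into it. The name toDirect() is hypothetical, for illustration.
    static ByteBuffer toDirect(ByteBuffer heapBuf) {
        ByteBuffer direct = ByteBuffer.allocateDirect(heapBuf.remaining());
        direct.put(heapBuf.duplicate()); // copy without disturbing the source
        direct.flip();                   // make the copy readable from position 0
        return direct;
    }

    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.allocate(4 * 1024 * 1024); // 4 MiB on heap
        ByteBuffer direct = toDirect(heap);
        // The off-heap allocation mirrors the heap size, so the total
        // footprint for this payload is roughly doubled, and the direct
        // half counts against YARN's container memory overhead.
        System.out.println(direct.isDirect()
                && direct.remaining() == heap.capacity());
    }
}
```

Possible mitigations, assuming a YARN deployment like the one described: raising `spark.yarn.executor.memoryOverhead` so the container limit leaves headroom for these direct buffers, or capping JVM direct memory with `-XX:MaxDirectMemorySize` so allocation fails inside the JVM instead of the RM killing the container.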