[ https://issues.apache.org/jira/browse/CASSANDRA-11818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15295617#comment-15295617 ]
Robert Stupp commented on CASSANDRA-11818:
------------------------------------------

Hm - one thing I noted is how Java handles failed direct memory allocations in {{java.nio.Bits#reserveMemory}}: it first makes an "optimistic" {{tryReserveMemory}} attempt and returns immediately on success. If that somehow does not succeed, it triggers a {{System.gc()}}, which is bad IMO (though that is kind of how direct buffers work in Java). After that GC it sleeps and retries the reservation up to 9 times - for up to 511 ms in total - and then throws {{OutOfMemoryError("Direct buffer memory")}}. I've opened CASSANDRA-11870 to think about that.
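For illustration, here is a simplified model of that back-off loop - not the actual JDK source, just a sketch of the behaviour described above; {{tryReserve}} is a placeholder for the JDK-internal bookkeeping against {{-XX:MaxDirectMemorySize}}:

{code}
// Simplified model of the back-off behaviour in java.nio.Bits#reserveMemory (JDK 8).
// Not the actual JDK source; tryReserve() stands in for the internal accounting
// against -XX:MaxDirectMemorySize.
final class ReserveMemorySketch
{
    private static final int MAX_SLEEPS = 9;

    static void reserveMemory(long size)
    {
        // optimistic attempt - returns immediately on success
        if (tryReserve(size))
            return;

        // otherwise trigger a full GC, hoping that unreachable direct buffers get freed
        System.gc();

        // retry with exponential back-off: 1 + 2 + 4 + ... + 256 ms = 511 ms in total
        long sleepTime = 1;
        for (int sleeps = 0; sleeps < MAX_SLEEPS; sleeps++)
        {
            if (tryReserve(size))
                return;
            try
            {
                Thread.sleep(sleepTime);
            }
            catch (InterruptedException e)
            {
                Thread.currentThread().interrupt();
            }
            sleepTime <<= 1;
        }

        // still not enough direct memory available
        throw new OutOfMemoryError("Direct buffer memory");
    }

    // placeholder: the real code checks and updates the reserved direct-memory counters
    private static boolean tryReserve(long size)
    {
        return false;
    }
}
{code}

The key point being that a caller can block for roughly half a second (plus a full GC) before the {{OutOfMemoryError}} in the stack trace below is ever thrown.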
> C* does neither recover nor trigger stability inspector on direct memory OOM
> -----------------------------------------------------------------------------
>
>                 Key: CASSANDRA-11818
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11818
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Robert Stupp
>         Attachments: 11818-direct-mem-unpooled.png, 11818-direct-mem.png, oom-histo-live.txt, oom-stack.txt
>
> The following stack trace is not caught by {{JVMStabilityInspector}}. Situation was caused by a load test with a lot of parallel writes and reads against a single node.
> {code}
> ERROR [SharedPool-Worker-1] 2016-05-17 18:38:44,187 Message.java:611 - Unexpected exception during request; channel = [id: 0x1e02351b, L:/127.0.0.1:9042 - R:/127.0.0.1:51087]
> java.lang.OutOfMemoryError: Direct buffer memory
>     at java.nio.Bits.reserveMemory(Bits.java:693) ~[na:1.8.0_92]
>     at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) ~[na:1.8.0_92]
>     at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) ~[na:1.8.0_92]
>     at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:672) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:234) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at io.netty.buffer.PoolArena.allocate(PoolArena.java:218) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at io.netty.buffer.PoolArena.allocate(PoolArena.java:138) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:270) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:177) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:168) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:105) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at org.apache.cassandra.transport.Message$ProtocolEncoder.encode(Message.java:349) ~[main/:na]
>     at org.apache.cassandra.transport.Message$ProtocolEncoder.encode(Message.java:314) ~[main/:na]
>     at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:619) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:676) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:612) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at org.apache.cassandra.transport.Message$Dispatcher$Flusher.run(Message.java:445) ~[main/:na]
>     at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:374) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) ~[netty-all-4.0.36.Final.jar:4.0.36.Final]
>     at java.lang.Thread.run(Thread.java:745) [na:1.8.0_92]
> {code}
> The situation does not get better when the load driver is stopped.
> I can reproduce this scenario at will. Managed to get histogram, stack traces and heap dump. Already increased {{-XX:MaxDirectMemorySize}} to {{2g}}.
> A {{nodetool flush}} causes the daemon to exit (as that direct-memory OOM is caught by {{JVMStabilityInspector}}).

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)