I note you're on a Linux platform and epoll is enabled, so the Netty buffers are
direct (off-heap). On the acceptors, can you set useEpoll to false (it's a URL
parameter) and see what that does for you?

This should change the heap memory usage, as the buffers will then be on-heap,
but it should help in finding the leak.
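
Something like the following in broker.xml (a sketch only; the acceptor name,
host, port and protocol list below are placeholders, keep your existing
parameters and just append useEpoll=false to the acceptor URL):

    <!-- disable epoll on this acceptor so Netty falls back to NIO buffers -->
    <acceptor name="artemis">tcp://0.0.0.0:61616?useEpoll=false</acceptor>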

Sent from my iPhone

> On 6 Jun 2017, at 09:33, hwaastad <he...@waastad.org> wrote:
> 
> Hi,
> didn't help my setup.
> 
> I've done similar testing with activemq 5.14.2 and I have no issues.
> 
> Testing the latest 2.2.0-SNAPSHOT, with:
> JAVA_ARGS=" -XX:+PrintClassHistogram -XX:+AggressiveOpts
> -XX:+UseFastAccessorMethods -Xms512M -Xmx512M"
> 
> Used Heap is kept low (max ~147M)
> 
> But still:
> 10:44:56,616 WARN  [io.netty.channel.DefaultChannelPipeline] An
> exceptionCaught() event was fired, and it reached at the tail of the
> pipeline. It usually means the last handler in the pipeline did not handle
> the exception.: io.netty.util.internal.OutOfDirectMemoryError: failed to
> allocate 16777216 byte(s) of direct memory (used: 503818240, max: 514850816)
>        at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:585) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:539) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:760) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:736) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.PoolArena.allocate(PoolArena.java:214) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.PoolArena.allocate(PoolArena.java:146) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:320) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:181) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:172) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:133) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:80) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:71) [netty-all-4.1.9.Final.jar:4.1.9.Final]
> 
> 
> Right now I'll have to roll back to activemq-5.14.2 until this is resolved.
> I'll still try to debug in staging.
> 
> /hw
