[ 
https://issues.apache.org/jira/browse/QPIDJMS-543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17380488#comment-17380488
 ] 

Robbie Gemmell commented on QPIDJMS-543:
----------------------------------------

As a starting point, it seems there are likely several different routes through 
which you can tune the Netty behaviour. For example, per the referenced issue, 
as you know, there is changing the number of arenas or the chunk size:
https://github.com/netty/netty/issues/9768#issuecomment-817753394
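
To illustrate, the arena and chunk-size knobs mentioned in that comment are 
plain JVM system properties, so they can be set on the command line or 
programmatically before any Netty buffer class loads. The property names below 
are Netty's own; the chosen values are just an example, not a recommendation:

```java
// Sketch: shrinking Netty's pooled heap usage via its system properties.
// These must be set before any Netty buffer class is loaded.
public class NettyPoolTuning {
    public static void main(String[] args) {
        // Disable heap arenas entirely, so no pooled heap chunks are retained:
        System.setProperty("io.netty.allocator.numHeapArenas", "0");

        // Or keep pooling but shrink chunks: chunkSize = pageSize << maxOrder.
        // Netty 4.1.65 defaults: pageSize = 8192, maxOrder = 11 -> 16 MiB chunks,
        // matching the 16 MB PoolChunk instances reported in the heap dump.
        System.setProperty("io.netty.allocator.maxOrder", "6");

        int pageSize = 8192;
        System.out.println("default chunk = " + (pageSize << 11)); // 16777216
        System.out.println("tuned chunk   = " + (pageSize << 6));  // 524288
    }
}
```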

Or, based on your description, another option I wonder about is toggling 
Netty's default allocator:
https://github.com/netty/netty/blob/netty-4.1.65.Final/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java#L76-L91
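
As the linked code shows, `ByteBufUtil` reads the `io.netty.allocator.type` 
system property to decide between the pooled and unpooled default allocator, so 
that toggle is a one-liner (set before Netty loads; this trades pooling 
performance for lower retained heap):

```java
// Sketch: forcing Netty's default allocator to be unpooled.
// Must be set before io.netty.buffer.ByteBufUtil is first loaded.
public class AllocatorToggle {
    public static void main(String[] args) {
        System.setProperty("io.netty.allocator.type", "unpooled");
        // With Netty on the classpath, ByteBufAllocator.DEFAULT would now
        // resolve to the unpooled allocator instead of the pooled one, so no
        // 16 MiB PoolChunk arenas are retained after connections close.
        System.out.println(System.getProperty("io.netty.allocator.type"));
    }
}
```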

Also, according to the comments, this apparently won't affect people using 
Netty's OpenSSL/BoringSSL etc. based bits for TLS. Have you considered trying 
that?
http://qpid.apache.org/releases/qpid-jms-1.1.0/docs/index.html#enabling-openssl-support
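
Per those docs, the native OpenSSL path is enabled with the 
`transport.useOpenSSL` connection URI option, provided a suitable 
netty-tcnative dependency is on the classpath. A minimal sketch (the broker 
host and port are placeholders):

```java
// Sketch: enabling Netty's native OpenSSL/BoringSSL TLS in Qpid JMS via a
// connection URI option. Host/port are placeholders; a netty-tcnative
// native library must be on the classpath for this option to take effect.
public class OpenSslUriExample {
    public static void main(String[] args) {
        String uri = "amqps://broker.example.com:5671?transport.useOpenSSL=true";
        // Passing this URI to org.apache.qpid.jms.JmsConnectionFactory would
        // make the client use the native TLS implementation for the transport.
        System.out.println(uri);
    }
}
```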


I wouldn't really consider this a bug in Qpid JMS as such if it is Netty 
specifically allocating the pooled heap memory deliberately for its SSL usage. 
The client specifically tries to allocate direct memory for its IO buffers 
where possible, and defaults to Netty's pooled allocator since pooling those 
makes sense. In the main other place I can currently think of where we allocate 
buffers inside the client, it is specifically a non-pooled buffer that is 
created. Whilst I can certainly see an argument that memory surviving beyond a 
connection is 'leaking' beyond its life, it is specifically in a shared Netty 
buffer pool aimed at doing exactly that, so it is really functioning as 
intended, just not how you want.

For Qpid JMS, it seems the main option, if anything, would be config to allow 
not using the default Netty allocator, in order to more specifically govern 
what is used for the connection and allow e.g. disabling the [heap] buffer 
pooling done by Netty itself (which you can already do with the above toggles, 
it seems). So essentially a toggle for making a specific heap-memory vs 
performance trade-off, most likely. One everyone likely has a different answer 
for.

> Memory usage has increased significantly because the PooledByteBuffer(netty) 
> are not released
> ---------------------------------------------------------------------------------------------
>
>                 Key: QPIDJMS-543
>                 URL: https://issues.apache.org/jira/browse/QPIDJMS-543
>             Project: Qpid JMS
>          Issue Type: Bug
>          Components: qpid-jms-client
>    Affects Versions: 0.56.0
>         Environment: java - 11
> qpid-jms-client - 0.56.0
> netty - 4.1.65.Final
>            Reporter: Rishabh Handa
>            Priority: Major
>              Labels: memory-analysis, memory-bug, memory-dump, memory-leak, 
> memorymanager
>         Attachments: 1.png, 2.png
>
>
> * We are using JMS to send and receive messages from queue asynchronously
>  * We have observed a high heap memory usage by *io.netty.buffer.PoolChunk* - 
> (heap dump: screenshot attached 1)
>  * These PoolChunks are allocated during creation of a connection to the 
> queue - 
>  ** From stacktrace(screenshot attached 2) - when we invoke 
> *jmsConnectionFactory.createConnection()*
>  internally the call goes to *FailoverProvider* -> *AmqpProvider*, which 
> invokes *ByteBufferUtil* to create *PooledByteBuffer*
>  ** These buffers are created on the heap memory (io.netty.buffer.PoolChunk 
> of *16 MB)*
>  * These PooledByteBuffers are not released from heap memory even after the 
> connection is closed, due to which memory usage has increased.
>  
> For more details - please go through this issue: [Increased memory footprint 
> in 4.1.43.Final · Issue #9768 · netty/netty 
> (github.com)|https://github.com/netty/netty/issues/9768]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
