lhotari commented on issue #25021:
URL: https://github.com/apache/pulsar/issues/25021#issuecomment-3584736093

   > 1. When message size is large with high filtering ratio, due to Pulsar's 
default configuration of `dispatcherMaxReadSizeBytes=5MB` and 
`dispatcherMaxBatchSize=100`, a single `readMoreEntries()` batch can easily 
reach the maximum read size limit of 5MB. With the current `E:Qw:Qa = 2:2:1` 
mode, a single read operation only requests from two channel eventLoop threads. 
Since the default Netty chunk size is 4 MB (`DEFAULT_PAGE_SIZE << 
DEFAULT_MAX_ORDER`), it's easy for the existing chunks in the Netty `poolArena` 
to have insufficient space, requiring allocation of new chunks from native 
memory.
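
   (For reference, the chunk-size arithmetic behind the 4 MB figure, assuming Netty 4.1's defaults of `io.netty.allocator.pageSize=8192` and `io.netty.allocator.maxOrder=9`:)

   ```shell
   # Netty chunk size = pageSize << maxOrder
   echo $((8192 << 9))   # default: 4194304 bytes (4 MiB)
   echo $((8192 << 10))  # with -Dio.netty.allocator.maxOrder=10: 8388608 bytes (8 MiB)
   ```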
   
   Thanks for bringing this up for discussion. Makes sense. I've been wondering 
why we haven't received more feedback about this. Increasing the default chunk 
size to 8 MB (`-Dio.netty.allocator.maxOrder=10`) should be considered first, 
since that would be the minimal change to mitigate the issue. I wouldn't 
increase `maxCachedBufferCapacity` at all.
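
   As a sketch for a default Pulsar distribution, the flag could be appended to the broker JVM options in `conf/pulsar_env.sh` (the `PULSAR_EXTRA_OPTS` variable is an assumption based on the standard distribution; adjust for your deployment):

   ```shell
   # conf/pulsar_env.sh — raise Netty's chunk size from 4 MiB to 8 MiB
   PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} -Dio.netty.allocator.maxOrder=10"
   ```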
   
   > When chunks are allocated and released too frequently (maybe at a rate of 
20 times/s), I find that the OS memory cleanup speed cannot keep up with the 
allocation speed.
   
   That's true. The problem is worse on the client side, since it is amplified 
when the client JVM isn't configured in a way that lets Netty release buffers 
efficiently: 
https://pulsar.apache.org/docs/4.1.x/client-libraries-java-setup/#java-client-performance
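
   For the client, that page lists the recommended JVM options; as a hedged sketch (the linked page is authoritative, and the application jar name below is hypothetical), the commonly used flags for letting Netty release direct buffers on JDK 11+ look like this:

   ```shell
   # Sketch of client JVM options so Netty can free direct buffers efficiently
   java -Dio.netty.tryReflectionSetAccessible=true \
        --add-opens java.base/jdk.internal.misc=ALL-UNNAMED \
        --add-opens java.base/java.nio=ALL-UNNAMED \
        -jar my-pulsar-client-app.jar   # hypothetical application jar
   ```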
   
   Regarding direct memory OOM, there's also #24926 which contributes to this 
situation.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
