AuroraTwinkle opened a new issue, #4465: URL: https://github.com/apache/bookkeeper/issues/4465
**BUG REPORT**

***Describe the bug***

The `maxPendingAddRequestsPerThread` setting in `bookkeeper.conf` is inconsistent with the actual behavior: the number of queued add requests can reach twice the configured limit, resulting in a Netty direct memory OOM.

***To Reproduce***

Steps to reproduce the behavior:
1. Set `maxPendingAddRequestsPerThread=1000`.
2. Simulate a RocksDB failure so that flushes are blocked for a long time.
3. `addEntry` requests queue up in the thread pool, and the total number of queued requests grows to twice the `maxPendingAddRequestsPerThread` limit.
4. The doubled backlog holds twice the expected Netty buffers, causing a direct memory OOM.

***Expected behavior***

If `maxPendingAddRequestsPerThread` is set to 1000, the number of requests queued per thread should never exceed 1000; otherwise memory usage becomes unpredictable and can cause an OOM.

Reading the source code, the root cause is the `localTasks` logic in `SingleThreadExecutor`. Before each execution round, the worker tries to drain all tasks from the thread pool queue into `localTasks`. At that point the queue is effectively empty, even though the tasks in `localTasks` have not yet been executed, so producers can enqueue up to the limit again. In the extreme case, the number of tasks pending in each `SingleThreadExecutor` doubles. I assume the purpose of `localTasks` is to avoid lock contention and improve performance, which is fine, but we should consider solving the doubling of queued tasks.
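The doubling described above can be illustrated with a toy model. This is a minimal sketch, not the actual BookKeeper code: the class and method names are hypothetical, and the real `SingleThreadExecutor` uses a bounded `BlockingQueue` drained by a worker loop. The sketch only shows why "drain everything to a local list, then execute" lets the effective backlog reach twice the configured bound.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical, simplified model of the drain-to-local-list pattern.
public class DrainDoublingDemo {
    // Stands in for maxPendingAddRequestsPerThread.
    static final int MAX_PENDING = 1000;

    static int simulatePendingTasks() {
        Queue<Runnable> queue = new ArrayDeque<>();
        List<Runnable> localTasks = new ArrayList<>();

        // 1) Producers fill the (conceptually bounded) queue to its limit.
        for (int i = 0; i < MAX_PENDING; i++) {
            queue.add(() -> { /* addEntry request holding a Netty buffer */ });
        }

        // 2) The worker drains everything into localTasks before executing
        //    any of them (done in the real code to reduce lock contention).
        localTasks.addAll(queue);
        queue.clear();

        // 3) The queue now looks empty, so producers may enqueue another
        //    MAX_PENDING tasks while the worker is stalled, e.g. because a
        //    blocked RocksDB flush keeps the localTasks batch from finishing.
        for (int i = 0; i < MAX_PENDING; i++) {
            queue.add(() -> { });
        }

        // Total tasks actually pending: double the configured limit.
        return localTasks.size() + queue.size();
    }

    public static void main(String[] args) {
        System.out.println("configured limit = " + MAX_PENDING
                + ", pending tasks = " + simulatePendingTasks());
    }
}
```

Each pending task pins its request buffer, so doubling the pending count doubles the direct memory held, which is where the OOM comes from.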
