TakaHiR07 opened a new pull request, #4609: URL: https://github.com/apache/bookkeeper/pull/4609
Main Issue: https://github.com/apache/pulsar/issues/24355

### Motivation

There is a significant memory leak risk in the Netty recyclers used by the bookie client. The issue and its analysis are described here: https://github.com/apache/pulsar/issues/24355

### Changes

Currently, each `PerChannelBookieClient` has its own recyclers. After this modification, all `PerChannelBookieClient` instances share the same recyclers.

Note: after this modification it becomes more important to set an appropriate `io.netty.recycler.maxCapacityPerThread` value. My tests show that read and write performance decreases slightly when requests are not served from cached objects. Since all channels in the BookKeeper client now share the same recyclers, the capacity should be sized roughly as `add entry QPS * average add entry latency`, i.e. the expected number of in-flight requests. A hedged sketch of the pattern follows below.
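To illustrate the pattern described above, here is a minimal sketch of pooling a request object with Netty's `io.netty.util.Recycler`, where the shared `static` recycler corresponds to the "after" state of this PR. The class and field names (`AddRequest`, `ledgerId`, `entryId`, `create`) are hypothetical and do not mirror the actual BookKeeper classes; only the `Recycler` API usage is real.

```java
import io.netty.util.Recycler;

// Illustrative pooled request object; names are hypothetical.
// Before this change: each PerChannelBookieClient held its own Recycler,
// so every channel kept separate per-thread pools alive.
// After this change: one shared static Recycler serves all channels,
// bounded by -Dio.netty.recycler.maxCapacityPerThread.
final class AddRequest {

    private static final Recycler<AddRequest> RECYCLER = new Recycler<AddRequest>() {
        @Override
        protected AddRequest newObject(Handle<AddRequest> handle) {
            return new AddRequest(handle);
        }
    };

    private final Recycler.Handle<AddRequest> handle;
    long ledgerId;
    long entryId;

    private AddRequest(Recycler.Handle<AddRequest> handle) {
        this.handle = handle;
    }

    static AddRequest create(long ledgerId, long entryId) {
        AddRequest req = RECYCLER.get();   // reuse a pooled instance if one is available
        req.ledgerId = ledgerId;
        req.entryId = entryId;
        return req;
    }

    void recycle() {
        ledgerId = -1;
        entryId = -1;
        handle.recycle(this);              // return the instance to the shared pool
    }
}
```

As a rough sizing example under the formula above: at about 100,000 add entries per second with 2 ms average latency, roughly 200 requests are in flight at any time, which gives a lower bound for the cache capacity (split across the client's event-loop threads, since the limit is per thread). The value is set as a JVM flag, e.g. `-Dio.netty.recycler.maxCapacityPerThread=4096`.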
