Harvey-13 opened a new issue, #4614: URL: https://github.com/apache/bookkeeper/issues/4614
My enterprise uses BookKeeper for metadata synchronization and needs high performance, but during testing we observed severe latency spikes. We looked for the cause and traced it to `SkipListArena`, for two reasons:

1. Every time a Chunk is retired and the arena requests another Chunk from the JVM, a large latency gap appears.
2. In addition, when the memtable reaches its limit, `snapshot()` is called, but why does the snapshot request a new arena [at last](https://github.dev/apache/bookkeeper/blob/542ea098a6cab5b98acb5ffafcfac98722506c19/bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/EntryMemTable.java#L182-L183)? A new arena does not hold any Chunk, so the next allocation runs into reason 1 again.

So here are my questions:

1. Why is there no mechanism like chunk recycling? From version 4.5.0, which we use, up to the latest version, `SkipListArena` has not been modified. In my understanding, a chunk can be recycled once all of the entries on it have been flushed to disk, and the chunk itself already has an allocation counter. This would cut down allocations from the JVM considerably (see the sketch below for what I mean).
2. Why does `snapshot()` need to claim a new allocator/arena, regardless of how much free space the current chunk has? I don't see why this is necessary.

Here are my log results. The lines beginning with "entryId xxxx" are the requests that were more than 1.5x slower than average; as you can see, all of them are caused by "chunk retire" or "flush memtable".

I'm not very familiar with Java or the BookKeeper source code, so I haven't opened a PR. I would like to know whether this is a design shortcoming, or whether I have misunderstood the idea or misconfigured something. I use a 4MB Chunk, a 64MB memtable limit, and a 128KB arena max allocation size; my request entry size is 12KB.

Thanks for your time and reply.
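For context, here is the back-of-the-envelope math for my configuration (ignoring any per-entry overhead inside the chunk): a 4MB chunk fits roughly 4096KB / 12KB ≈ 341 entries, so the arena goes back to the JVM for a new chunk about once every ~341 entries, and the 64MB memtable limit is reached after roughly 65536KB / 12KB ≈ 5461 entries, which triggers `snapshot()` and a flush. That is consistent with the slow requests in my log all being tagged "chunk retire" or "flush memtable".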
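To make question 1 concrete, here is a rough sketch of the chunk-recycling idea I have in mind. This is purely illustrative Java, not BookKeeper's actual `SkipListArena` API: the class and method names (`RecyclingArena`, `copyInto`, `recycle`, etc.) are hypothetical, and a real implementation would need to coordinate recycling with the flush path and would presumably keep the lock-free allocation the arena uses today instead of a lock.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch only: a bump-pointer arena that reuses flushed chunks
// from a free list instead of allocating a fresh byte[] from the JVM each
// time the current chunk is retired.
public class RecyclingArena {

    /** A fixed-size slab that hands out slices by bumping an offset. */
    static final class Chunk {
        final byte[] data;
        int nextFreeOffset = 0;

        Chunk(int size) {
            this.data = new byte[size];
        }

        /** Returns the start offset of an allocation, or -1 if the chunk is full. */
        int allocate(int size) {
            if (nextFreeOffset + size > data.length) {
                return -1; // chunk is retired, caller moves to the next one
            }
            int offset = nextFreeOffset;
            nextFreeOffset += size;
            return offset;
        }

        /** Reset the chunk so it can be handed out again. */
        void reset() {
            nextFreeOffset = 0;
        }
    }

    private final int chunkSize;
    private final int maxAllocSize;
    private final ConcurrentLinkedQueue<Chunk> freeChunks = new ConcurrentLinkedQueue<>();
    private Chunk current;

    public RecyclingArena(int chunkSize, int maxAllocSize) {
        if (maxAllocSize > chunkSize) {
            throw new IllegalArgumentException("maxAllocSize must not exceed chunkSize");
        }
        this.chunkSize = chunkSize;
        this.maxAllocSize = maxAllocSize;
        this.current = new Chunk(chunkSize);
    }

    /**
     * Copy the entry into arena-managed memory and return a view of the slice.
     * Synchronized for simplicity in this sketch.
     */
    public synchronized ByteBuffer copyInto(byte[] entry) {
        if (entry.length > maxAllocSize) {
            // Oversized entries bypass the arena entirely.
            return ByteBuffer.wrap(entry.clone());
        }
        int offset = current.allocate(entry.length);
        if (offset < 0) {
            // Current chunk is retired: prefer a recycled chunk over a fresh JVM allocation.
            Chunk recycled = freeChunks.poll();
            current = (recycled != null) ? recycled : new Chunk(chunkSize);
            offset = current.allocate(entry.length);
        }
        System.arraycopy(entry, 0, current.data, offset, entry.length);
        return ByteBuffer.wrap(current.data, offset, entry.length).slice();
    }

    /** Called once every entry copied into a retired chunk has been flushed to disk. */
    public void recycle(Chunk flushedChunk) {
        flushedChunk.reset();
        freeChunks.offer(flushedChunk);
    }
}
```

The only point of the sketch is the `freeChunks` pool: once all entries copied into a retired chunk have been flushed, the chunk is reset and reused, so steady-state operation stops allocating new 4MB arrays from the JVM.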
