wenbingshen commented on code in PR #3976:
URL: https://github.com/apache/bookkeeper/pull/3976#discussion_r1211339844
##########
bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/GarbageCollectorThread.java:
##########
@@ -619,9 +622,10 @@ void doCompactEntryLogs(double threshold, long maxTimeMillis) throws EntryLogMet
meta.getEntryLogId(), meta.getUsage(),
threshold);
}
- long priorRemainingSize = meta.getRemainingSize();
+ long compactSize = meta.getTotalSize() - meta.getRemainingSize();
+ compactionReadByteRateLimiter.acquire((int) (compactSize));
compactEntryLog(meta);
Review Comment:
If I understand correctly, while compaction is in progress and
compactionRateBytes is configured, compaction is rate-limited on the disk-write
path. Since reads and writes are processed strictly serially, the next read has
to wait for the preceding write to complete. Doesn't that mean reads are
already effectively limited to compactionRateBytes as well?
I suspect the problem you are seeing is that the disk read rate is too high
while building the entryLogMeta from the entry log file; you can look at this
PR: #2963
@hangc0276 Please help take a look. Thanks.
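To illustrate the point about serial read/write throttling, here is a minimal
sketch, not BookKeeper's actual implementation: a hypothetical token-bucket
`ByteRateLimiter` (all names here are invented for illustration) shared by a
strictly serial read-then-write loop. Because each `acquire()` on the write
side must return before the next read can start, the read path ends up
throttled to the same byte rate as the write path.

```java
// Hypothetical minimal token-bucket byte-rate limiter (not BookKeeper's
// RateLimiter); permits refill continuously at bytesPerSecond.
final class ByteRateLimiter {
    private final long bytesPerSecond;
    private long availableBytes;
    private long lastRefillNanos;

    ByteRateLimiter(long bytesPerSecond) {
        this.bytesPerSecond = bytesPerSecond;
        this.availableBytes = bytesPerSecond; // start with a full bucket
        this.lastRefillNanos = System.nanoTime();
    }

    /** Blocks until {@code bytes} permits are available. */
    synchronized void acquire(long bytes) throws InterruptedException {
        while (true) {
            refill();
            if (availableBytes >= bytes) {
                availableBytes -= bytes;
                return;
            }
            Thread.sleep(10); // wait for the bucket to refill
        }
    }

    private void refill() {
        long now = System.nanoTime();
        long refilled = (now - lastRefillNanos) * bytesPerSecond / 1_000_000_000L;
        if (refilled > 0) {
            availableBytes = Math.min(bytesPerSecond, availableBytes + refilled);
            lastRefillNanos = now;
        }
    }
}

public class SerialCompactionSketch {
    public static void main(String[] args) throws InterruptedException {
        ByteRateLimiter limiter = new ByteRateLimiter(1_000_000); // 1 MB/s
        long start = System.nanoTime();
        long copied = 0;
        // Strictly serial loop: "read" a chunk, then throttle the "write".
        // The next read cannot begin until acquire() returns, so reads are
        // implicitly limited to the same rate as writes.
        for (int i = 0; i < 3; i++) {
            byte[] chunk = new byte[500_000];  // read 500 KB from the entry log
            limiter.acquire(chunk.length);     // write-side throttle
            copied += chunk.length;
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        // The third chunk exhausts the 1 MB bucket, so the loop must wait
        // roughly half a second for permits to refill.
        System.out.println("copied=" + copied + " throttled=" + (seconds >= 0.4));
    }
}
```

Running this copies 1.5 MB against a 1 MB/s limiter, so the loop observably
stalls on the third chunk, which is the behavior the comment above describes
for reads queued behind throttled writes.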
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]