apoorvmittal10 commented on code in PR #17739:
URL: https://github.com/apache/kafka/pull/17739#discussion_r1842597238
##########
core/src/main/java/kafka/server/share/SharePartition.java:
##########
@@ -1602,8 +1602,6 @@ protected void updateFetchOffsetMetadata(Optional<LogOffsetMetadata> fetchOffset
     protected Optional<LogOffsetMetadata> fetchOffsetMetadata() {
         lock.readLock().lock();
         try {
-            if (findNextFetchOffset.get())
Review Comment:
>It seems that fetchOffsetMetadata() needs to take in nextFetchOffset too.
This way, if the offset matches, we can avoid the readFromLog call.
Sure, we can have that, but I was thinking of the following case: a share fetch
request reads nextFetchOffset as the latest offset (endOffset) and waits in
tryComplete because there are not enough bytes. If, before the next tryComplete
iteration, an offset for the respective share-partition is released, then
findNextFetchOffset becomes true. In the next tryComplete iteration, the
fetchOffsetMetadata() call would then return Optional.empty, which would be
incorrect. Hence I was thinking of not having this check and instead working
with fetchOffsetMetadata's messageOffset.
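Something along these lines is what I had in mind (just a rough sketch, not the
actual implementation; the helper name `canUseCachedOffsetMetadata` is made up
for illustration, the rest are the existing methods/fields):

```java
// Rough sketch only: compare the cached metadata's messageOffset against the
// current nextFetchOffset instead of relying on the findNextFetchOffset flag.
private boolean canUseCachedOffsetMetadata(SharePartition sharePartition) {
    long nextFetchOffset = sharePartition.nextFetchOffset();
    Optional<LogOffsetMetadata> cached = sharePartition.fetchOffsetMetadata();
    // If the cached metadata still points at the offset we are about to fetch,
    // readFromLog can be skipped; otherwise the offset has moved (e.g. a record
    // was released) and the cached metadata is stale.
    return cached.isPresent() && cached.get().messageOffset == nextFetchOffset;
}
```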
>However, a user could initialize an arbitrary offset in the middle of a
batch.
Hmmm, I am just wondering whether that could ever occur. Since we always align
on batch boundaries, i.e. whether we fetch with max fetch records or without,
we acquire the full produce batch. But in the corner case you mention, where a
user sets the offset to a specific value or resets the offset by duration, it
could indeed be arbitrary, i.e. fall in the middle of a batch. In that case we
should handle it with the locally stored fetchOffsetAndMetadata copy in
DelayedShareFetch.
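Roughly, what I mean by the local copy in DelayedShareFetch (sketch only; the
field `localFetchOffsetMetadata` and the helper are illustrative, not from the
actual code):

```java
// Sketch: DelayedShareFetch keeps its own copy of the offset metadata per
// partition and only reuses it if the share-partition's offset has not moved.
private final Map<TopicIdPartition, LogOffsetMetadata> localFetchOffsetMetadata = new HashMap<>();

private boolean offsetMoved(TopicIdPartition topicIdPartition, long nextFetchOffset) {
    LogOffsetMetadata stored = localFetchOffsetMetadata.get(topicIdPartition);
    // If the user reset the offset (possibly into the middle of a batch), the
    // stored metadata no longer matches nextFetchOffset and must be refreshed.
    return stored == null || stored.messageOffset != nextFetchOffset;
}
```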