chirag-wadhwa5 commented on code in PR #18696:
URL: https://github.com/apache/kafka/pull/18696#discussion_r1935351367
##########
core/src/main/java/kafka/server/share/SharePartition.java:
##########
@@ -641,13 +669,26 @@ public ShareAcquiredRecords acquire(
             // The fetched records are already part of the in-flight records. The records might
             // be available for re-delivery hence try acquiring same. The request batches could
             // be an exact match, subset or span over multiple already fetched batches.
+            long nextBatchStartOffset = baseOffset;
             for (Map.Entry<Long, InFlightBatch> entry : subMap.entrySet()) {
                 // If the acquired count is equal to the max fetch records then break the loop.
                 if (acquiredCount >= maxFetchRecords) {
                     break;
                 }
                 InFlightBatch inFlightBatch = entry.getValue();
+
+                // If nextBatchStartOffset is less than the key of the entry, this means the fetch happened for a gap in the cachedState.
+                // Thus, a new batch needs to be acquired for the gap.
+                if (nextBatchStartOffset < entry.getKey()) {
+                    ShareAcquiredRecords shareAcquiredRecords = acquireNewBatchRecords(memberId, fetchPartitionData.records.batches(),
+                        nextBatchStartOffset, entry.getKey() - 1, batchSize, maxFetchRecords);
+                    result.addAll(shareAcquiredRecords.acquiredRecords());
+                    acquiredCount += shareAcquiredRecords.count();
+                }
+                // Set nextBatchStartOffset as the last offset of the current in-flight batch + 1
+                nextBatchStartOffset = inFlightBatch.lastOffset() + 1;
Review Comment:
Thanks for the review. I didn't quite understand this question, could you please be a little more specific? I believe the discussion was that the gaps in the initialReadGapOffset window are all considered acquirable, and we would let them be acquired.
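
For context, here is a minimal, self-contained sketch (not the actual SharePartition code; the `Batch` record, offsets, and map are hypothetical stand-ins) of the gap-detection pattern the diff uses: walk the cached in-flight batches in offset order, and any offsets between `nextBatchStartOffset` and the next cached batch's first offset are treated as a gap to be acquired as a new batch.

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class GapAcquisitionSketch {

    // Hypothetical stand-in for InFlightBatch: only the offset range matters here.
    record Batch(long firstOffset, long lastOffset) { }

    public static void main(String[] args) {
        // Cached batches keyed by their first offset, with a gap at offsets 15-19.
        NavigableMap<Long, Batch> cachedState = new TreeMap<>();
        cachedState.put(10L, new Batch(10, 14));
        cachedState.put(20L, new Batch(20, 24));

        long baseOffset = 10;      // first offset of the fetched data
        long fetchEndOffset = 24;  // last offset of the fetched data

        long nextBatchStartOffset = baseOffset;
        for (var entry : cachedState.subMap(baseOffset, true, fetchEndOffset, true).entrySet()) {
            // A gap exists if the next expected offset is before this batch's first offset.
            if (nextBatchStartOffset < entry.getKey()) {
                System.out.printf("acquire new batch for gap [%d, %d]%n",
                    nextBatchStartOffset, entry.getKey() - 1);
            }
            // Existing in-flight batch: would be re-acquired here if re-deliverable.
            System.out.printf("re-acquire in-flight batch [%d, %d]%n",
                entry.getValue().firstOffset(), entry.getValue().lastOffset());
            nextBatchStartOffset = entry.getValue().lastOffset() + 1;
        }
        // Trailing gap after the last cached batch, if any.
        if (nextBatchStartOffset <= fetchEndOffset) {
            System.out.printf("acquire new batch for gap [%d, %d]%n",
                nextBatchStartOffset, fetchEndOffset);
        }
    }
}
```

Under these assumptions the sketch prints an acquisition for the gap [15, 19] between the two cached batches, which mirrors the intent of the `nextBatchStartOffset < entry.getKey()` check in the diff.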
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]