apoorvmittal10 commented on code in PR #18804:
URL: https://github.com/apache/kafka/pull/18804#discussion_r1946804279


##########
core/src/main/java/kafka/server/share/SharePartition.java:
##########
@@ -678,6 +680,9 @@ public ShareAcquiredRecords acquire(
             for (Map.Entry<Long, InFlightBatch> entry : subMap.entrySet()) {
                 // If the acquired count is equal to the max fetch records then break the loop.
                 if (acquiredCount >= maxFetchRecords) {
+                    // If the limit to acquire records is reached then it means there exists additional
+                    // fetch batches which cannot be acquired.
+                    subsetAcquired = true;

Review Comment:
   Yeah, agree. Maintaining the subset boolean is harder than just iterating and verifying. I have updated the code.
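   For illustration only, a minimal sketch of the trade-off discussed above (the class and method names below are hypothetical, not the actual `SharePartition` API): rather than flipping a `subsetAcquired` flag inside the acquire loop, acquire greedily up to the limit and derive the flag afterwards by comparing the acquired count against the total fetched count.

   ```java
   import java.util.List;

   public class AcquireSketch {

       // Acquire at most maxFetchRecords from the fetched batch sizes,
       // returning how many records were actually acquired.
       static int acquire(List<Integer> batchSizes, int maxFetchRecords) {
           int acquired = 0;
           for (int size : batchSizes) {
               if (acquired >= maxFetchRecords) {
                   break; // remaining batches cannot be acquired
               }
               acquired += Math.min(size, maxFetchRecords - acquired);
           }
           return acquired;
       }

       // Derive "subset acquired" after the loop: no flag has to be
       // maintained while iterating, which is the simplification the
       // review comment refers to.
       static boolean subsetAcquired(List<Integer> batchSizes, int acquiredCount) {
           int total = batchSizes.stream().mapToInt(Integer::intValue).sum();
           return acquiredCount < total;
       }

       public static void main(String[] args) {
           List<Integer> batches = List.of(4, 4, 4); // 12 fetched records
           int acquired = acquire(batches, 10);      // capped at 10
           System.out.println(acquired);
           System.out.println(subsetAcquired(batches, acquired));
       }
   }
   ```

   With a limit of 10 against 12 fetched records, two records remain unacquired, so the derived flag is true; when the limit covers everything fetched, it is false without any special-casing in the loop.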



##########
core/src/test/java/kafka/server/share/ShareFetchUtilsTest.java:
##########
@@ -369,49 +379,58 @@ public void testProcessFetchResponseWithMaxFetchRecords() {
         ShareFetch shareFetch = new ShareFetch(FETCH_PARAMS, groupId, memberId.toString(),
             new CompletableFuture<>(), partitionMaxBytes, BATCH_SIZE, 10, BROKER_TOPIC_STATS);
 
-        MemoryRecords records1 = MemoryRecords.withRecords(Compression.NONE,
-            new SimpleRecord("0".getBytes(), "v".getBytes()),
-            new SimpleRecord("1".getBytes(), "v".getBytes()),
-            new SimpleRecord("2".getBytes(), "v".getBytes()),
-            new SimpleRecord(null, "value".getBytes()));
+        LinkedHashMap<Long, Integer> offsetValues = new LinkedHashMap<>();
+        offsetValues.put(0L, 1);
+        offsetValues.put(1L, 1);
+        offsetValues.put(2L, 1);
+        offsetValues.put(3L, 1);
+        Records records1 = createFileRecords(offsetValues);
+
+        offsetValues.clear();
+        offsetValues.put(100L, 4);
+        Records records2 = createFileRecords(offsetValues);
 
         FetchPartitionData fetchPartitionData1 = new FetchPartitionData(Errors.NONE, 0L, 0L,
             records1, Optional.empty(), OptionalLong.empty(), Optional.empty(),
             OptionalInt.empty(), false);
         FetchPartitionData fetchPartitionData2 = new FetchPartitionData(Errors.NONE, 0L, 0L,
-            records1, Optional.empty(), OptionalLong.empty(), Optional.empty(),
+            records2, Optional.empty(), OptionalLong.empty(), Optional.empty(),
             OptionalInt.empty(), false);
 
        when(sp0.acquire(memberId.toString(), BATCH_SIZE, 10, fetchPartitionData1)).thenReturn(
-            ShareAcquiredRecords.fromAcquiredRecords(new ShareFetchResponseData.AcquiredRecords()
-                .setFirstOffset(0).setLastOffset(1).setDeliveryCount((short) 1)));
+            createShareAcquiredRecords(new ShareFetchResponseData.AcquiredRecords()
+                .setFirstOffset(0).setLastOffset(1).setDeliveryCount((short) 1), true));
        when(sp1.acquire(memberId.toString(), BATCH_SIZE, 8, fetchPartitionData2)).thenReturn(
-            ShareAcquiredRecords.fromAcquiredRecords(new ShareFetchResponseData.AcquiredRecords()
-                .setFirstOffset(100).setLastOffset(103).setDeliveryCount((short) 1)));
+            createShareAcquiredRecords(new ShareFetchResponseData.AcquiredRecords()
+                .setFirstOffset(100).setLastOffset(103).setDeliveryCount((short) 1), true));
 
         // Send the topic partitions in order so can validate if correct mock is called, accounting
         // the offset count for the acquired records from the previous share partition acquire.
         Map<TopicIdPartition, FetchPartitionData> responseData1 = new LinkedHashMap<>();
         responseData1.put(tp0, fetchPartitionData1);
         responseData1.put(tp1, fetchPartitionData2);
 
-        Map<TopicIdPartition, ShareFetchResponseData.PartitionData> resultData1 =
+        Map<TopicIdPartition, ShareFetchResponseData.PartitionData> resultData =

Review Comment:
   Done.


