adixitconfluent commented on code in PR #17870:
URL: https://github.com/apache/kafka/pull/17870#discussion_r1912169880
##########
core/src/test/java/kafka/server/share/DelayedShareFetchTest.java:
##########
@@ -559,13 +561,18 @@ public void testCombineLogReadResponse() {
             .withSharePartitions(sharePartitions)
             .build();
-        LinkedHashMap<TopicIdPartition, FetchRequest.PartitionData> topicPartitionData = new LinkedHashMap<>();
-        topicPartitionData.put(tp0, mock(FetchRequest.PartitionData.class));
-        topicPartitionData.put(tp1, mock(FetchRequest.PartitionData.class));
+        LinkedHashMap<TopicIdPartition, Long> topicPartitionData = new LinkedHashMap<>();
+        topicPartitionData.put(tp0, 0L);
+        topicPartitionData.put(tp1, 0L);
         // Case 1 - logReadResponse contains tp0.
         LinkedHashMap<TopicIdPartition, LogReadResult> logReadResponse = new LinkedHashMap<>();
-        logReadResponse.put(tp0, mock(LogReadResult.class));
+        LogReadResult logReadResult = mock(LogReadResult.class);
+        Records records = mock(Records.class);
+        when(records.sizeInBytes()).thenReturn(2);
+        FetchDataInfo fetchDataInfo = new FetchDataInfo(mock(LogOffsetMetadata.class), records);
+        when(logReadResult.info()).thenReturn(fetchDataInfo);
+        logReadResponse.put(tp0, logReadResult);
##########
Review Comment:
> Should we add a test when partitionMaxBytesStrategy.maxBytes throws an exception?

Sure, I'll do that (a rough sketch of what such a test could look like is included below).

> Also do we have a test which validates fetch bytes are in accordance with the share fetch max bytes?

Yes, we have the test [testShareFetchRequestSuccessfulSharingBetweenMultipleConsumers](https://github.com/apache/kafka/pull/17870/files#diff-784cd94373b734f49edc71a0b70d4f2d6d11dbf8b345746db7340b9a5f4fedbdR1386) in `ShareFetchAcknowledgeRequestTest.scala`, which validates exactly that.
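
For reference, a minimal sketch of the exception test (not code from this PR; the `withPartitionMaxBytesStrategy` builder hook, the mocked `maxBytes` signature, and the reuse of the `shareFetch`/`sharePartitions`/`future` fixtures from the existing test class are assumptions):

```java
// Hedged sketch: stub the strategy so maxBytes(...) throws, then verify the
// delayed share fetch completes and the failure is propagated to the caller.
// Builder hook and assertions below are assumptions, not code from this PR.
PartitionMaxBytesStrategy partitionMaxBytesStrategy = mock(PartitionMaxBytesStrategy.class);
when(partitionMaxBytesStrategy.maxBytes(anyInt(), any(), anyInt()))
    .thenThrow(new IllegalStateException("unable to compute partition max bytes"));

DelayedShareFetch delayedShareFetch = DelayedShareFetchBuilder.builder()
    .withShareFetchData(shareFetch)                           // existing test fixture
    .withSharePartitions(sharePartitions)                     // existing test fixture
    .withPartitionMaxBytesStrategy(partitionMaxBytesStrategy) // assumed builder hook
    .build();

// Forcing completion should not leave the purgatory operation hanging; the
// error should surface through the share fetch future instead.
delayedShareFetch.forceComplete();
assertTrue(delayedShareFetch.isCompleted());
assertTrue(future.isCompletedExceptionally()); // future backing the ShareFetch under test
```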