[ https://issues.apache.org/jira/browse/KAFKA-19020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18008245#comment-18008245 ]

Jimmy Wang commented on KAFKA-19020:
------------------------------------

[~apoorvmittal10] I think the reason maxFetchRecords is not currently enforced 
as a strict limit is the handling of in-flight requests. My idea is to modify 
the maxFetchRecords argument passed to acquireNewBatchRecords(): instead of 
passing the original maxFetchRecords value, we could pass maxFetchRecords minus 
alreadyFetchedRecords. This would ensure that the total count of 
ShareAcquiredRecords never exceeds the number of records actually still needed. 
Could you confirm whether my understanding is correct? Additionally, do you 
think it is necessary to forcefully complete the DelayedShareFetch once the 
maxFetchRecords limit is satisfied (similar to the logic in 
isMinBytesSatisfied())?

These are my early thoughts; I'll run some tests to see how it works.
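To illustrate the idea, here is a minimal standalone sketch (not the actual Kafka code; StrictMaxFetch, remainingAllowance, and acquireBatches are hypothetical names) showing how capping each acquisition by maxFetchRecords minus alreadyFetchedRecords keeps the total acquired count within the strict limit:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed capping logic, assuming batches are
// acquired one at a time and each acquisition can be bounded by an allowance.
public class StrictMaxFetch {
    // Records still allowed under the strict limit, given what was already fetched.
    static int remainingAllowance(int maxFetchRecords, int alreadyFetchedRecords) {
        return Math.max(0, maxFetchRecords - alreadyFetchedRecords);
    }

    // Simulates acquiring batches until the strict limit is satisfied.
    static List<Integer> acquireBatches(int maxFetchRecords, int[] batchSizes) {
        List<Integer> acquired = new ArrayList<>();
        int fetched = 0;
        for (int size : batchSizes) {
            int allowance = remainingAllowance(maxFetchRecords, fetched);
            if (allowance == 0) {
                break; // limit satisfied; stop acquiring (and could complete the delayed fetch)
            }
            int take = Math.min(size, allowance); // never exceed the remaining allowance
            acquired.add(take);
            fetched += take;
        }
        return acquired;
    }

    public static void main(String[] args) {
        // With maxFetchRecords = 10 and candidate batches of 4, 4, 4 records,
        // the last batch is capped at 2, so the total never exceeds 10.
        System.out.println(acquireBatches(10, new int[]{4, 4, 4}));
    }
}
```

With the original code passing the unmodified maxFetchRecords to each acquisition, the same scenario could acquire 12 records; the capped version stops at exactly 10.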

> Handle strict max fetch records in share fetch
> ----------------------------------------------
>
>                 Key: KAFKA-19020
>                 URL: https://issues.apache.org/jira/browse/KAFKA-19020
>             Project: Kafka
>          Issue Type: Sub-task
>            Reporter: Apoorv Mittal
>            Assignee: Jimmy Wang
>            Priority: Major
>             Fix For: 4.2.0
>
>




