[ 
https://issues.apache.org/jira/browse/KAFKA-19020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18008245#comment-18008245
 ] 

Jimmy Wang edited comment on KAFKA-19020 at 7/20/25 8:00 AM:
-------------------------------------------------------------

[~apoorvmittal10] The current {{maxFetchRecords}} limit isn't strict because 
{{lastOffsetFromBatchWithRequestOffset()}} may return an offset beyond the 
requested record count. I think enforcing the limit strictly could require 
splitting batches; am I on the right track?

Additionally, do you think it is necessary to forcefully complete the 
{{DelayedShareFetch}} once the {{maxFetchRecords}} limit is satisfied (similar 
to the logic in {{isMinBytesSatisfied()}})?

These are my early thoughts—I’ll run some tests to see how it works.
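To illustrate the non-strict limit described above, here is a minimal standalone sketch (class and method names are hypothetical, not Kafka's actual code): when acquisition is aligned to whole record batches and batches are not split, the acquired count can overshoot {{maxFetchRecords}} because the last batch is taken in full.

```java
import java.util.List;

// Hypothetical illustration: acquiring whole batches can overshoot a record limit.
// Names (Batch, acquireWholeBatches) are illustrative, not Kafka identifiers.
public class BatchOvershootSketch {
    // Each batch spans [baseOffset, lastOffset] inclusive.
    record Batch(long baseOffset, long lastOffset) {}

    // Acquire whole batches until the record limit is met; returns records acquired.
    static long acquireWholeBatches(List<Batch> batches, long maxFetchRecords) {
        long acquired = 0;
        for (Batch b : batches) {
            if (acquired >= maxFetchRecords) break;
            acquired += b.lastOffset - b.baseOffset + 1; // whole batch, never split
        }
        return acquired;
    }

    public static void main(String[] args) {
        List<Batch> batches = List.of(new Batch(0, 99), new Batch(100, 299));
        // Limit is 150, but the second batch (200 records) is taken whole,
        // so 300 records end up acquired — the limit is exceeded.
        System.out.println(acquireWholeBatches(batches, 150));
    }
}
```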


was (Author: JIRAUSER300327):
[~apoorvmittal10] I think the reason {{maxFetchRecords}} is not currently 
enforced as a strict limit is due to in-flight requests. Hence my idea is to 
modify the {{maxFetchRecords}} argument passed to {{acquireNewBatchRecords()}}: 
instead of the original {{maxFetchRecords}} value, we could pass 
{{maxFetchRecords}} minus {{alreadyFetchedRecords}}. This would ensure that the 
total count of {{ShareAcquiredRecords}} never exceeds the remaining records 
actually needed. Could you confirm whether my understanding is correct? 
Additionally, do you think it is necessary to forcefully complete the 
{{DelayedShareFetch}} once the {{maxFetchRecords}} limit is satisfied (similar 
to the logic in {{isMinBytesSatisfied()}})?

These are my early thoughts—I’ll run some tests to see how it works.
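The arithmetic proposed above can be sketched as a trivial helper (the name {{remainingToAcquire}} is hypothetical, not a Kafka method): the argument passed down should be the records still allowed under the limit, clamped so it never goes negative.

```java
// Hypothetical sketch of the "remaining records" argument discussed above.
// remainingToAcquire is an illustrative name, not Kafka's actual API.
public class RemainingRecordsSketch {
    // Records still permitted under maxFetchRecords after prior acquisitions,
    // floored at zero so an overshoot never yields a negative request.
    static int remainingToAcquire(int maxFetchRecords, int alreadyFetchedRecords) {
        return Math.max(0, maxFetchRecords - alreadyFetchedRecords);
    }

    public static void main(String[] args) {
        // e.g. limit 500, 120 already fetched -> at most 380 more may be acquired
        System.out.println(remainingToAcquire(500, 120));
        // if prior batches already overshot the limit, request nothing further
        System.out.println(remainingToAcquire(500, 600));
    }
}
```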

> Handle strict max fetch records in share fetch
> ----------------------------------------------
>
>                 Key: KAFKA-19020
>                 URL: https://issues.apache.org/jira/browse/KAFKA-19020
>             Project: Kafka
>          Issue Type: Sub-task
>            Reporter: Apoorv Mittal
>            Assignee: Jimmy Wang
>            Priority: Major
>             Fix For: 4.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)
