junrao commented on code in PR #17322:
URL: https://github.com/apache/kafka/pull/17322#discussion_r1809328431
##########
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/modern/share/ShareGroupConfig.java:
##########
@@ -71,15 +72,21 @@ public class ShareGroupConfig {
     public static final int SHARE_FETCH_PURGATORY_PURGE_INTERVAL_REQUESTS_DEFAULT = 1000;
     public static final String SHARE_FETCH_PURGATORY_PURGE_INTERVAL_REQUESTS_DOC = "The purge interval (in number of requests) of the share fetch request purgatory";
+    // Broker temporary configuration to limit the number of records fetched by a share fetch request.
+    public static final String SHARE_FETCH_MAX_FETCH_RECORDS_CONFIG = "share.fetch.max.fetch.records";
Review Comment:
Not sure if this should be tied to the client-side max.poll.records. It seems that our goal is to bound the number of un-acked records per consumer instance, which may not always match max.poll.records on the client side. For example, if the client polls twice, receiving max.poll.records each time without acking, that doesn't mean we should allow it to hold 2 * max.poll.records un-acked records. Perhaps we could add a new group-level config like `group.max.record.locks.per.member`?
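
As a rough illustration, such a config could follow the same CONFIG/DEFAULT/DOC constant pattern that `ShareGroupConfig` already uses in the diff above. This is only a sketch: the config name comes from this comment, while the default value and doc string are assumptions, not anything agreed in the PR.

```java
// Hypothetical sketch only: the name is from the review suggestion; the
// default value (500) and the doc text are illustrative assumptions.
public static final String GROUP_MAX_RECORD_LOCKS_PER_MEMBER_CONFIG = "group.max.record.locks.per.member";
public static final int GROUP_MAX_RECORD_LOCKS_PER_MEMBER_DEFAULT = 500;
public static final String GROUP_MAX_RECORD_LOCKS_PER_MEMBER_DOC = "The maximum number of records that may be " +
    "locked (fetched but not yet acknowledged) by a single share group member at any time.";
```

Defining the bound as a group-level, per-member lock limit would decouple the broker-side guarantee from whatever max.poll.records a given client happens to configure.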