kamalcph commented on code in PR #19336:
URL: https://github.com/apache/kafka/pull/19336#discussion_r2022640703
##########
clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java:
##########
@@ -220,7 +223,9 @@ public class ConsumerConfig extends AbstractConfig {
"partition of the fetch is larger than this limit, the " +
"batch will still be returned to ensure that the consumer can make progress. The maximum record batch size " +
"accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or " +
- "<code>max.message.bytes</code> (topic config). See " + FETCH_MAX_BYTES_CONFIG + " for limiting the consumer request size.";
+ "<code>max.message.bytes</code> (topic config). See " + FETCH_MAX_BYTES_CONFIG + " for limiting the consumer request size. " +
+ "Consider increasing this limit especially in the cases of remote storage reads (KIP-405), because currently only " +
Review Comment:
> Consider increasing this limit
Could this be changed to the following?
```
Consider increasing the <code>max.partition.fetch.bytes</code> limit ...
```
If the user increases `max.partition.fetch.bytes` from 1 MB to 50 MB to match `fetch.max.bytes`, it might cause GC pressure on the broker due to holding a large contiguous byte array.
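
For context, a minimal sketch (not part of this PR) of how the two consumer settings interact when tuning for remote storage (KIP-405) reads. The bootstrap address, group id, class name, and the 16 MB per-partition value below are illustrative assumptions, not recommendations from this change:
```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TieredStorageConsumerTuning {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // illustrative
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");             // illustrative
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // fetch.max.bytes caps the size of the whole fetch response (default 50 MB).
        props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 50 * 1024 * 1024);

        // max.partition.fetch.bytes caps the data returned per partition (default 1 MB).
        // Raising it only part of the way toward fetch.max.bytes (16 MB here is an
        // arbitrary example) avoids asking the broker to hold very large contiguous
        // byte arrays, which is the GC concern raised in the review comment above.
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 16 * 1024 * 1024);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe and poll as usual
        }
    }
}
```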