[ https://issues.apache.org/jira/browse/KAFKA-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16915214#comment-16915214 ]
Lee Dongjin commented on KAFKA-8832:
------------------------------------

Hi [~LordChen], since this issue involves public API changes, you need to file a KIP and start a discussion on this feature. Please refer here: [https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals]

> We should limit the maximum size read by a fetch request on the kafka server.
> -----------------------------------------------------------------------------
>
>                 Key: KAFKA-8832
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8832
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 2.3.0, 2.2.1
>            Reporter: ChenLin
>            Priority: Major
>              Labels: needs-kip
>         Attachments: image-2019-08-25-15-31-56-707.png, image-2019-08-25-15-42-24-379.png
>
> I found that Kafka does not limit, on the server side, the amount of data read per fetch request. This can cause the Kafka broker to fail with an OutOfMemory error. If a client is misconfigured with a fetch.message.max.bytes value that is too large, such as 100 MB, and the broker receives many fetch requests at the same moment, the broker can run out of memory. So I think this is a bug.
> !image-2019-08-25-15-42-24-379.png!
> !image-2019-08-25-15-31-56-707.png!
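For illustration only (not part of the original report): a minimal sketch of the kind of client configuration the reporter describes, using the modern Java consumer, whose fetch.max.bytes and max.partition.fetch.bytes settings are the analogues of the old consumer's fetch.message.max.bytes. The bootstrap address and group id are placeholders; the 100 MB value mirrors the figure in the description.

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class LargeFetchConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "large-fetch-demo");        // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        // A 100 MB per-fetch limit, as in the report. Without a server-side cap, many
        // consumers issuing such fetches at the same time can exhaust the broker heap.
        props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 100 * 1024 * 1024);
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 100 * 1024 * 1024);

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // subscribe and poll as usual
        }
    }
}
{code}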