[ https://issues.apache.org/jira/browse/KAFKA-19519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18007996#comment-18007996 ]

Lan Ding commented on KAFKA-19519:
----------------------------------

Hi [~showuon], if you're not working on this, may I take it? Thanks.

> Introduce a new config for group coordinator max record size
> ------------------------------------------------------------
>
>                 Key: KAFKA-19519
>                 URL: https://issues.apache.org/jira/browse/KAFKA-19519
>             Project: Kafka
>          Issue Type: Improvement
>            Reporter: Luke Chen
>            Priority: Major
>
> In KAFKA-19427, there's a use case where a consumer group subscribes to a 
> huge number of topics/partitions. When this group is rebalanced and the 
> coordinator broker stores the group's assignment to __consumer_offsets, it 
> throws a RecordTooLargeException.
>  
> Currently, the only way to resolve this issue is to increase the 
> broker-level {{message.max.bytes}} config. But the side effect of this 
> change is that all topics that do not override the topic-level 
> {{max.message.bytes}} config will now allow a larger message size.
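> As a rough illustration of that workaround (a sketch only - the bootstrap 
> address and the 10485760 value are placeholders, not recommendations), the 
> broker-level config can be raised dynamically via the AdminClient:
> {code:java}
> import java.util.List;
> import java.util.Map;
> import java.util.Properties;
> import org.apache.kafka.clients.admin.Admin;
> import org.apache.kafka.clients.admin.AlterConfigOp;
> import org.apache.kafka.clients.admin.ConfigEntry;
> import org.apache.kafka.common.config.ConfigResource;
>
> public class RaiseBrokerMaxBytes {
>     public static void main(String[] args) throws Exception {
>         Properties props = new Properties();
>         props.put("bootstrap.servers", "localhost:9092"); // placeholder
>         try (Admin admin = Admin.create(props)) {
>             // An empty broker id targets the cluster-wide default config.
>             ConfigResource cluster =
>                 new ConfigResource(ConfigResource.Type.BROKER, "");
>             AlterConfigOp raise = new AlterConfigOp(
>                 new ConfigEntry("message.max.bytes", "10485760"),
>                 AlterConfigOp.OpType.SET);
>             admin.incrementalAlterConfigs(Map.of(cluster, List.of(raise)))
>                  .all().get();
>         }
>     }
> }
> {code}
> Note this raises the ceiling for every topic that does not set its own 
> {{max.message.bytes}}, which is exactly the side effect described above.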
>  
> We could introduce a new config to drive the value used by the group 
> coordinator - e.g. {{group.coordinator.append.max.bytes}} - instead of 
> relying on the broker-level {{message.max.bytes}}. This would be used to 
> set the max bytes at the topic level when the __consumer_offsets topic is 
> created.
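> As a sketch of what the proposal might look like (the default value, 
> validator, and class/constant names below are assumptions, not a committed 
> design), the new config could be declared with Kafka's {{ConfigDef}}:
> {code:java}
> import org.apache.kafka.common.config.ConfigDef;
>
> public class GroupCoordinatorAppendConfigSketch {
>     // Hypothetical name taken from the description above.
>     public static final String APPEND_MAX_BYTES_CONFIG =
>         "group.coordinator.append.max.bytes";
>
>     // Assumed default: the broker's message.max.bytes default (1048588),
>     // so existing clusters see no change unless this config is overridden.
>     public static final ConfigDef CONFIG_DEF = new ConfigDef()
>         .define(APPEND_MAX_BYTES_CONFIG,
>                 ConfigDef.Type.INT,
>                 1048588,
>                 ConfigDef.Range.atLeast(1),
>                 ConfigDef.Importance.MEDIUM,
>                 "Maximum record size the group coordinator may append to " +
>                 "__consumer_offsets; also used to set max.message.bytes " +
>                 "when the topic is created.");
> }
> {code}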



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
