[ https://issues.apache.org/jira/browse/KAFKA-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15420917#comment-15420917 ]

ASF GitHub Bot commented on KAFKA-3979:
---------------------------------------

Github user nepal closed the pull request at:

    https://github.com/apache/kafka/pull/1642


> Optimize memory used by replication process by using adaptive fetch message 
> size
> --------------------------------------------------------------------------------
>
>                 Key: KAFKA-3979
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3979
>             Project: Kafka
>          Issue Type: Improvement
>          Components: replication
>    Affects Versions: 0.10.0.0
>            Reporter: Andrey Neporada
>
> The current replication process fetches messages in replica.fetch.max.bytes-sized 
> chunks.
> Since replica.fetch.max.bytes must be larger than max.message.bytes for 
> replication to work, this can lead to high memory consumption by the replication 
> process, especially for installations with a large number of partitions.
> The proposed solution is to fetch messages in smaller chunks (say, 
> replica.fetch.base.bytes).
> If we encounter a message bigger than the current fetch chunk, we increase the 
> chunk size (e.g. twofold) and retry. After replicating this bigger message, we 
> shrink the fetch chunk size back down until it reaches replica.fetch.base.bytes.
> replica.fetch.base.bytes should be chosen large enough not to affect throughput 
> and to be larger than most messages.
> However, it can be much smaller than replica.fetch.max.bytes.
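
A minimal sketch of the adaptive fetch-size loop described in the proposal, written as a
standalone Scala program. The names (baseFetchBytes, maxFetchBytes, fetchChunk) and the
doubling/halving policy are illustrative assumptions, not the actual Kafka configuration
keys or the replica fetcher's internal API:

    object AdaptiveFetchSketch {
      def main(args: Array[String]): Unit = {
        // Hypothetical configuration values: baseFetchBytes stands in for the
        // proposed replica.fetch.base.bytes, maxFetchBytes for the existing
        // replica.fetch.max.bytes upper bound.
        val baseFetchBytes = 64 * 1024
        val maxFetchBytes  = 10 * 1024 * 1024

        // Simulated sizes of messages encountered while replicating a partition.
        val messageSizes = Seq(10 * 1024, 50 * 1024, 2 * 1024 * 1024, 20 * 1024)

        var fetchChunk = baseFetchBytes
        for (msgSize <- messageSizes) {
          // If the message does not fit, grow the chunk (here twofold) and retry,
          // never exceeding the configured maximum.
          while (msgSize > fetchChunk && fetchChunk < maxFetchBytes) {
            fetchChunk = math.min(fetchChunk * 2, maxFetchBytes)
            println(s"oversized message ($msgSize bytes): retrying fetch with $fetchChunk bytes")
          }
          println(s"replicated $msgSize-byte message using a $fetchChunk-byte fetch")
          // Once the large message is through, shrink back toward the base size.
          fetchChunk = math.max(fetchChunk / 2, baseFetchBytes)
        }
      }
    }

Under these assumptions, steady-state replication uses only baseFetchBytes per partition,
and the larger buffer is allocated only transiently while an oversized message passes through.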



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
