[jira] [Commented] (KAFKA-3979) Optimize memory used by replication process by using adaptive fetch message size

2016-08-15 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15420917#comment-15420917 ]

ASF GitHub Bot commented on KAFKA-3979:
---

GitHub user nepal closed the pull request at:

https://github.com/apache/kafka/pull/1642


> Optimize memory used by replication process by using adaptive fetch message
> size
> ---------------------------------------------------------------------------
>
> Key: KAFKA-3979
> URL: https://issues.apache.org/jira/browse/KAFKA-3979
> Project: Kafka
> Issue Type: Improvement
> Components: replication
> Affects Versions: 0.10.0.0
> Reporter: Andrey Neporada
>
> The current replication process fetches messages in
> replica.fetch.max.bytes-sized chunks. Since replica.fetch.max.bytes must be
> at least as large as max.message.bytes for replication to work, the
> replication process can consume a lot of memory, especially in installations
> with a large number of partitions.
> The proposed solution is to fetch messages in smaller chunks (say,
> replica.fetch.base.bytes). If we encounter a message bigger than the current
> fetch chunk, we increase the chunk size (e.g. twofold) and retry. After
> replicating this bigger message, we shrink the fetch chunk size back until it
> reaches replica.fetch.base.bytes.
> replica.fetch.base.bytes should be chosen large enough to be bigger than most
> messages and not to affect throughput; however, it can still be much smaller
> than replica.fetch.max.bytes.
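For illustration, here is a minimal Scala sketch of the sizing rule described in the issue. It is a sketch under assumptions, not the code from PR #1642; the function and variable names are made up for this example, while the config names (replica.fetch.base.bytes, replica.fetch.max.bytes) come from the description above. The memory argument is straightforward: with, say, 1000 partitions per broker, buffering replica.fetch.max.bytes = 1 MB per partition can require on the order of 1 GB, whereas a base size of 64 KB would need roughly 64 MB in the common case.

    // Hypothetical sketch of the adaptive fetch-size rule (not the PR #1642 code).
    object AdaptiveFetchSize {

      // Compute the fetch size to use for the next request.
      //   current   - fetch size used by the previous request
      //   oversized - true if the previous request hit a message larger than `current`
      //   base      - replica.fetch.base.bytes (proposed new config)
      //   max       - replica.fetch.max.bytes (existing upper bound)
      def next(current: Int, oversized: Boolean, base: Int, max: Int): Int =
        if (oversized) math.min(current * 2, max)  // grow (e.g. twofold) and retry
        else math.max(current / 2, base)           // shrink back toward the base size

      def main(args: Array[String]): Unit = {
        val base = 64 * 1024    // 64 KB
        val max  = 1024 * 1024  // 1 MB
        var size = base
        size = next(size, oversized = true,  base = base, max = max)  // 128 KB, retry
        size = next(size, oversized = true,  base = base, max = max)  // 256 KB, message now fits
        size = next(size, oversized = false, base = base, max = max)  // 128 KB, shrinking back
        println(size)  // prints 131072
      }
    }

Whether the shrink step should be gradual (halving, as above) or an immediate reset to the base size is exactly the kind of detail the KIP discussion requested below would need to settle.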





[jira] [Commented] (KAFKA-3979) Optimize memory used by replication process by using adaptive fetch message size

2016-07-20 Thread Ismael Juma (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385882#comment-15385882 ]

Ismael Juma commented on KAFKA-3979:


Thanks for the JIRA and PR. Since this introduces a new config, it technically 
needs a simple KIP 
(https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals).
 I suggest starting a mailing list discussion on the subject first to get input 
from the community before creating the KIP.






[jira] [Commented] (KAFKA-3979) Optimize memory used by replication process by using adaptive fetch message size

2016-07-20 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/KAFKA-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385865#comment-15385865 ]

ASF GitHub Bot commented on KAFKA-3979:
---

GitHub user nepal opened a pull request:

https://github.com/apache/kafka/pull/1642

[KAFKA-3979] Optimize memory used by replication process by using adaptive fetch message size

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/nepal/kafka adaptive-replica-fetch-size

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1642.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1642


commit 300be1adb0089df4990c0bc2f8aaf0a50556fc8b
Author: Andrey L. Neporada 
Date:   2016-07-20T13:37:15Z

[KAFKA-3979] Optimize memory used by replication process by using adaptive 
fetch message size







--
This message was sent by Atlassian JIRA
(v6.3.4#6332)