[jira] [Commented] (KAFKA-3932) Consumer fails to consume in a round robin fashion
[ https://issues.apache.org/jira/browse/KAFKA-3932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844818#comment-16844818 ] ASF GitHub Bot commented on KAFKA-3932: --- chienhsingwu commented on pull request #5838: KAFKA-3932 - Consumer fails to consume in a round robin fashion URL: https://github.com/apache/kafka/pull/5838 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
> Consumer fails to consume in a round robin fashion
> --
>
> Key: KAFKA-3932
> URL: https://issues.apache.org/jira/browse/KAFKA-3932
> Project: Kafka
> Issue Type: Bug
> Components: consumer
> Affects Versions: 0.10.0.0
> Reporter: Elias Levy
> Assignee: CHIENHSING WU
> Priority: Major
>
> The Java consumer fails to consume messages in a round-robin fashion. This can lead to unbalanced consumption.
> In our use case we have a set of consumers that can take a significant amount of time consuming messages off a topic. For this reason, we are using the pause/poll/resume pattern to ensure the consumer session does not time out. The topic being consumed has been preloaded with messages, which means there is a significant message lag when the consumer is first started. To limit how many messages are consumed at a time, the consumer has been configured with max.poll.records=1.
> The initial observation is that the client receives a large batch of messages for the first partition it decides to consume from and will consume all of those messages before moving on, rather than returning a message from a different partition on each call to poll.
> We solved this issue by configuring max.partition.fetch.bytes to be small enough that only a single message will be returned by the broker on each fetch, although this would not be feasible if message sizes were highly variable.
> The behavior of the consumer after this change is to largely consume from a small number of partitions, usually just two, iterating between them until it exhausts them, before moving on to another partition. This behavior is problematic if the messages have some rough time semantics and need to be processed roughly time-ordered across all partitions.
> It would be useful if the consumer had a pluggable API that allowed custom logic to select which partition to consume from next, thus enabling the creation of a round-robin partition consumer.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
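For reference, the reporter's setup amounts to a small consumer configuration. A minimal sketch of it (the bootstrap address, group id, and byte value here are illustrative placeholders, not taken from the report):

```java
import java.util.Properties;

public class SlowConsumerConfig {
    public static Properties build() {
        Properties props = new Properties();
        // Placeholder connection settings -- not part of the original report.
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "slow-processing-group");
        // Return at most one record per poll(), as in the report.
        props.setProperty("max.poll.records", "1");
        // The reporter's workaround: keep partition fetches small enough that the
        // broker returns roughly one message per fetch. The exact value depends
        // on message size and is a guess here.
        props.setProperty("max.partition.fetch.bytes", "1024");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}
```

As the report notes, the max.partition.fetch.bytes workaround breaks down when message sizes vary widely, since no single byte limit then corresponds to "one message".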
[jira] [Commented] (KAFKA-3932) Consumer fails to consume in a round robin fashion
[ https://issues.apache.org/jira/browse/KAFKA-3932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16675610#comment-16675610 ] Elias Levy commented on KAFKA-3932: --- [~chienhsw] Alas, I reported the issue two years ago and have not had the opportunity to revisit it, as I've moved on to other things. That said, it seems like your proposal would have addressed the fairness issue I raised.
[jira] [Commented] (KAFKA-3932) Consumer fails to consume in a round robin fashion
[ https://issues.apache.org/jira/browse/KAFKA-3932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16670366#comment-16670366 ] CHIENHSING WU commented on KAFKA-3932: -- [~elevy], from the description, what you are asking for is for the poll call to return messages from the available partitions in a fair/balanced pattern. The version you tested was fairly old. Did you get a chance to test the current version, and if so, is the problem still there? I used version 2.0 and had similar issues. Also, could you take a look at the comment I posted previously and see if the change I propose would address the issue?
[jira] [Commented] (KAFKA-3932) Consumer fails to consume in a round robin fashion
[ https://issues.apache.org/jira/browse/KAFKA-3932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662625#comment-16662625 ] ASF GitHub Bot commented on KAFKA-3932: --- chienhsingwu opened a new pull request #5838: KAFKA-3932 - Consumer fails to consume in a round robin fashion URL: https://github.com/apache/kafka/pull/5838 I think the issue is this statement in the [KIP](https://cwiki.apache.org/confluence/display/KAFKA/KIP-41%3A+KafkaConsumer+Max+Records): "As before, we'd keep track of **which partition we left off** at so that the next iteration **would begin there**." I think it should **NOT** resume from the last partition in the next iteration; **it should pick the next one instead**. The simplest way to impose an order for picking the next one is to use the order in which consumer.internals.Fetcher receives the partition messages, as determined by the **completedFetches** queue in that class. To avoid parsing the partition messages repeatedly, we can **save those parsed fetches to a list and track the next partition to take messages from there.** @hachikuji, @lindong28, @guozhangwang and others, please review.
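The proposal above amounts to rotating through the parsed fetches rather than draining the partition we left off at. A stand-alone sketch of that selection order, with plain collections standing in for Kafka's internal completedFetches queue (the names and structure here are illustrative, not Kafka's actual internals):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class RoundRobinFetchSketch {
    /**
     * Drain up to maxPollRecords per call, taking one record at a time from the
     * partition at the head of the queue, then rotating that partition to the
     * back -- instead of draining the last-used partition first.
     */
    static List<String> poll(Deque<Deque<String>> parsedFetches, int maxPollRecords) {
        List<String> out = new ArrayList<>();
        while (out.size() < maxPollRecords && !parsedFetches.isEmpty()) {
            Deque<String> fetch = parsedFetches.pollFirst(); // next partition's records
            out.add(fetch.pollFirst());
            if (!fetch.isEmpty()) {
                parsedFetches.addLast(fetch); // rotate: this partition goes to the back
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Deque<Deque<String>> fetches = new ArrayDeque<>();
        fetches.add(new ArrayDeque<>(List.of("p0-a", "p0-b")));
        fetches.add(new ArrayDeque<>(List.of("p1-a", "p1-b")));
        // With maxPollRecords = 1, successive calls alternate p0, p1, p0, p1
        // rather than exhausting p0 first.
        while (true) {
            List<String> batch = poll(fetches, 1);
            if (batch.isEmpty()) break;
            System.out.println(batch);
        }
    }
}
```

With max.poll.records=1 this yields p0-a, p1-a, p0-b, p1-b, i.e. the fair interleaving the reporter asked for, whereas resuming at the left-off partition would yield p0-a, p0-b, p1-a, p1-b.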
[jira] [Commented] (KAFKA-3932) Consumer fails to consume in a round robin fashion
[ https://issues.apache.org/jira/browse/KAFKA-3932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16662386#comment-16662386 ] CHIENHSING WU commented on KAFKA-3932: -- I am going ahead with implementing the change I proposed. I don't have the permissions to assign this ticket to myself. *Hopefully this comment alerts anyone who plans to work on it.*
[jira] [Commented] (KAFKA-3932) Consumer fails to consume in a round robin fashion
[ https://issues.apache.org/jira/browse/KAFKA-3932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655279#comment-16655279 ] CHIENHSING WU commented on KAFKA-3932: -- I encountered the same issue as well. Upon studying the source code and the [KIP|https://cwiki.apache.org/confluence/display/KAFKA/KIP-41%3A+KafkaConsumer+Max+Records], I think the issue is this statement in the KIP: "As before, we'd keep track of *which partition we left off* at so that the next iteration would *begin there*." I think it should *NOT* resume from the last partition in the next iteration; *it should pick the next one instead.* The simplest way to impose an order for picking the next one is to use the order in which consumer.internals.Fetcher receives the partition messages, as determined by the *completedFetches* queue in that class. To avoid parsing the partition messages repeatedly, we can *save those parsed fetches to a list and track the next partition to take messages from there.* Does this sound like a good approach? If this is not the right place to discuss the design, please let me know where to engage. If this is agreeable, I can contribute the implementation.