I’ll assume that by “load” you mean the data rate flowing into your Kafka 
topic(s).

One consumer instance can read from multiple partitions, so with a variable 
workload it’s a good idea to create more partitions than your average load 
will require. When the data rate is low, a smaller number of consumers can 
each handle several partitions, with the group coordinator distributing the 
partitions among them. When load spikes, more consumers join the group and 
the coordinator rebalances, reassigning partitions across the larger pool.
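To make the mechanics concrete, here’s a toy Python sketch of how partitions get spread across however many consumers are in the group. This is my own illustrative model, not the broker’s actual logic — the real assignment is done server/client-side according to the configured `partition.assignment.strategy` (e.g. range or round-robin), and `assign_partitions` below is a hypothetical helper:

```python
def assign_partitions(partitions, consumers):
    """Round-robin partitions over consumers (simplified model of a rebalance)."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        # Deal partitions out like cards, one per consumer in turn.
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# Low load: 2 consumers share a 10-partition topic, 5 partitions each.
low = assign_partitions(list(range(10)), ["c1", "c2"])

# Load spike: 5 consumers in the group after a rebalance, 2 partitions each.
high = assign_partitions(list(range(10)), ["c1", "c2", "c3", "c4", "c5"])
```

The point is that the same 10 partitions are always fully covered; only the number of partitions per consumer changes as the group grows or shrinks. Once the group size exceeds the partition count, the extra consumers sit idle — which is exactly the ceiling Ali is asking about below.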

-- Peter (from phone)

> On Feb 21, 2019, at 10:12 PM, Ali Nazemian <alinazem...@gmail.com> wrote:
> 
> Hi All,
> 
> I was wondering how an application can be auto-scalable if only a single
> instance can read from the single Kafka partition and two instances cannot
> read from the single partition at the same time with the same consumer
> group.
> 
> Suppose there is an application that has 10 instances running on Kubernetes
> in production at this moment (using the same consumer group) and we have
> got a Kafka topic with 10 partitions. Due to the increase in load,
> Kubernetes provisions more instances to take the extra load. However, since
> the maximum number of consumers with the same consumer group can be 10 in
> this example, no matter how many new instances are created they are not
> able to absorb the extra load until the partition count increases. Is there
> any out-of-the-box solution to address this situation?
> 
> Thanks,
> Ali
