There is no built-in way to automatically scale partitions, but you could
write a script that increases the partition count using the command-line
tools (or the AdminClient API) and trigger it on certain metrics.
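
For example, here is a rough sketch of what such a script could do using the
Java AdminClient. The broker address, topic name, and target partition count
are placeholders you would drive from whatever metric fires the trigger:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

public class PartitionGrower {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Grow "my-topic" to 12 partitions. Note the count can only be
            // increased, never decreased, and the change is cluster-wide.
            Map<String, NewPartitions> request =
                    Collections.singletonMap("my-topic", NewPartitions.increaseTo(12));
            admin.createPartitions(request).all().get();
        }
    }
}

The equivalent kafka-topics.sh --alter command would work just as well if you
prefer a shell script.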

One thing to consider is that any *keyed* events would need to be rewritten
into a topic after its partition count is increased. The default partitioner
maps a key to a hash of the key modulo the partition count, so changing the
count changes where each key lands; rewriting is what preserves keyed data
locality within each partition, such that all events for a single key stay
in a single partition. If you don't care about data locality, then you can
increase the partition count without concern.
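
To illustrate the locality point, here is a small sketch of how the default
partitioner places keyed records (murmur2 hash of the key, made positive,
modulo the partition count). The key and the two partition counts are just
illustrative values:

import org.apache.kafka.common.utils.Utils;

import java.nio.charset.StandardCharsets;

public class KeyPlacementDemo {
    // Mirrors the default partitioner's placement for keyed records.
    static int partitionFor(String key, int numPartitions) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    public static void main(String[] args) {
        String key = "order-42";
        System.out.println("with 6 partitions:  " + partitionFor(key, 6));
        System.out.println("with 12 partitions: " + partitionFor(key, 12));
        // If the two numbers differ, new events for this key land in a
        // different partition than the old ones, breaking per-key locality
        // unless the existing data is rewritten.
    }
}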





On Tue, Jan 21, 2020 at 11:35 AM Pushkar Deole <pdeole2...@gmail.com> wrote:

> Hello Dev community,
>
> I got no response from the user community on the query below. Could you
> please respond to this?
>
> ---------- Forwarded message ---------
> From: Pushkar Deole <pdeole2...@gmail.com>
> Date: Fri, Jan 17, 2020 at 1:46 PM
> Subject: Is there a way to auto scale topic partitions in kafka?
> To: <us...@kafka.apache.org>
>
>
> Hello,
>
> I am working on a microservice-based system that uses Kafka as its
> messaging infrastructure. The microservices are mainly Kafka consumers and
> Kafka Streams applications, and they are deployed as Docker containers on
> Kubernetes.
>
> The system should be auto-scalable, for which we are using the Horizontal
> Pod Autoscaler feature of Kubernetes, which instantiates more pods when a
> certain metric (e.g. CPU utilization) crosses a threshold and reduces the
> pod count when the metric is well below it.
> However, the problem is that the number of partitions in Kafka is fixed, so
> even if the load on the system increases and the consumer pods are
> autoscaled, they cannot scale beyond the number of partitions.
> So once the number of pods equals the number of partitions, the system
> cannot be scaled any further.
> Is there a way to autoscale the number of partitions in Kafka as well, so
> the system can be auto-scaled in the cloud?
>
