Hello Luke, and thank you for your answer.
What I was hoping for is something more automatic, something that spreads
the load when a Kafka broker goes down, without any human intervention.
The reassign script is a bit involved: you need to generate the list of
topics and partitions, then fetch the current assignment and rework it to
force a new leader, as sketched below.
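
For reference, here is a rough sketch of that manual flow. It is only
illustrative: it assumes the topic "test" from below, a ZooKeeper address
of zk:2181, and 2.4.x tooling, where kafka-reassign-partitions.sh is still
driven through --zookeeper (newer releases take --bootstrap-server):

  # topics.json: the topics whose assignment should be reworked
  {"version": 1, "topics": [{"topic": "test"}]}

  # Dump the current assignment plus a proposed one
  kafka-reassign-partitions.sh --zookeeper zk:2181 \
    --topics-to-move-json-file topics.json \
    --broker-list "1,2,3" --generate

  # reassign.json: hand-edit the assignment so the broker you want as
  # leader comes first in each replica list, e.g. for partition 3
  {"version": 1, "partitions": [
    {"topic": "test", "partition": 3, "replicas": [3, 1, 2]}
  ]}

  # Apply the edited assignment
  kafka-reassign-partitions.sh --zookeeper zk:2181 \
    --reassignment-json-file reassign.json --execute

And even then, the leader only actually changes once a preferred-replica
election runs (auto.leader.rebalance.enable=true, or a manual election),
so that is quite a few moving parts for something I hoped would be
handled automatically.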

On Tue, Nov 15, 2022 at 5:18 AM Luke Chen <show...@gmail.com> wrote:

> Hi Pierre,
>
> Try using kafka-reassign-partitions.sh to reassign partitions to the
> replicas you like.
> ref: https://kafka.apache.org/documentation/#basic_ops_automigrate
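>
> For example, once a reassignment has been submitted, its progress can be
> checked with (ZooKeeper address illustrative, reusing the same JSON file
> that was passed to --execute):
>
>   kafka-reassign-partitions.sh --zookeeper zk:2181 \
>     --reassignment-json-file reassign.json --verify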
>
> Luke
>
> On Mon, Nov 14, 2022 at 3:55 PM Pierre Coquentin <pierre.coquen...@gmail.com>
> wrote:
>
> > Hello,
> > We have a Kafka cluster (2.4.1) with a replication factor of 3. I notice
> > that when we stop a broker, a single broker takes over all the load from
> > the missing one and becomes the leader of all its partitions.
> > I would have thought that Kafka would split the load evenly among the
> > remaining brokers.
> >
> > So if I have this kind of configuration:
> > Topic: test
> > Partition 0 - Leader: 1 - Replicas: 1,2,3 - Isr: 1,2,3
> > Partition 1 - Leader: 2 - Replicas: 2,3,1 - Isr: 1,2,3
> > Partition 2 - Leader: 3 - Replicas: 3,1,2 - Isr: 1,2,3
> > Partition 3 - Leader: 1 - Replicas: 1,2,3 - Isr: 1,2,3
> > Partition 4 - Leader: 2 - Replicas: 2,3,1 - Isr: 1,2,3
> > Partition 5 - Leader: 3 - Replicas: 3,1,2 - Isr: 1,2,3
> >
> > If I stop broker 1, I want something like this (load split evenly
> > between brokers 2 and 3):
> > Topic: test
> > Partition 0 - Leader: 2 - Replicas: 1,2,3 - Isr: 2,3
> > Partition 1 - Leader: 2 - Replicas: 2,3,1 - Isr: 2,3
> > Partition 2 - Leader: 3 - Replicas: 3,1,2 - Isr: 2,3
> > Partition 3 - Leader: 3 - Replicas: 1,2,3 - Isr: 2,3
> > Partition 4 - Leader: 2 - Replicas: 2,3,1 - Isr: 2,3
> > Partition 5 - Leader: 3 - Replicas: 3,1,2 - Isr: 2,3
> >
> > What I currently observe is this (broker 2 takes over all the load from
> > broker 1):
> > Partition 0 - Leader: 2 - Replicas: 1,2,3 - Isr: 2,3
> > Partition 1 - Leader: 2 - Replicas: 2,3,1 - Isr: 2,3
> > Partition 2 - Leader: 3 - Replicas: 3,1,2 - Isr: 2,3
> > Partition 3 - Leader: 2 - Replicas: 1,2,3 - Isr: 2,3
> > Partition 4 - Leader: 2 - Replicas: 2,3,1 - Isr: 2,3
> > Partition 5 - Leader: 3 - Replicas: 3,1,2 - Isr: 2,3
> >
> > My concern is that, with this behavior, each broker must stay below 50%
> > of its network bandwidth at all times so that it can absorb the full
> > load of a failed broker, which could be a problem in my case.
> > Is there a way to change this behavior (manually by forcing a leader,
> > programmatically, or by configuration)?
> > From my understanding, the script kafka-leader-election.sh only allows
> > a preferred election (the first replica in the list becomes the leader)
> > or an unclean election (a replica that is not in sync can become the
> > leader), as illustrated below.
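> >
> > For example (bootstrap server illustrative):
> >
> >   kafka-leader-election.sh --bootstrap-server kafka:9092 \
> >     --election-type preferred --topic test --partition 0
> >
> > With broker 1 down, a preferred election cannot move leadership for the
> > partitions whose first replica is 1, so neither mode lets me pick the
> > leader myself.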
> > Regards,
> >
> > Pierre
> >
>
