Connectivity loss can happen due to many reasons…zookeeper servers getting
bounced, due to some network glitch etc…

After the brokers reconnect to zookeeper server I expect the kafka cluster to
come back in a stable state by itself without any manual intervention, but
instead a few partitions remain stuck in under replication, as pasted earlier.

I feel this is some kind of bug. I am going to file a bug.

Thanks,

Dhirendra.
From: Thomas Cooper
Sent: Friday, March 4, 2022 7:01 PM
To: Dhirendra Singh
Cc: users@kafka.apache.org
Subject: Re: Few partitions stuck in under replication
Do you roll the controller last?
I suspect this is more to do with the way you are rolling the cluster (which I
am still not clear on the need for) rather than some kind of bug in Kafka
(though that could of course be the case).
Tom
On 04/03/2022 01:59, Dhirendra Singh wrote:
Hi Tom,
During the rolling restart we check for under replicated partition count to
be zero in the readiness probe before restarting the next POD in order.
This issue never occurred before. It started after we upgraded kafka
version from 2.5.0 to 2.7.1.
So I suspect some bug was introduced in the new version.
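A check like the one described can be sketched against the Kafka admin API.
The snippet below is only a minimal illustration using the confluent-kafka
Python client (the bootstrap address is a placeholder, not the actual probe
used in this cluster); it counts partitions whose in-sync replica set is
smaller than their assigned replica set:

    # Minimal sketch of an under-replicated-partition check (illustration only).
    # Assumes the confluent-kafka Python client and a placeholder bootstrap address.
    import sys
    from confluent_kafka.admin import AdminClient

    admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # placeholder
    metadata = admin.list_topics(timeout=10)

    under_replicated = 0
    for topic in metadata.topics.values():
        for partition in topic.partitions.values():
            # A partition is under replicated when its ISR is smaller than
            # its assigned replica set.
            if len(partition.isrs) < len(partition.replicas):
                under_replicated += 1

    # A readiness probe wired to this script would fail (non-zero exit) while
    # any partition is still under replicated.
    sys.exit(0 if under_replicated == 0 else 1)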
I suspect this nightly rolling will have something to do with your issues. If
you are just rolling the stateful set in order, with no dependence on
maintaining minISR and other Kafka considerations, you are going to hit issues.
If you are running on Kubernetes I would suggest using an Operator like
[Strimzi](https://strimzi.io/).
Hi Tom,
Doing the nightly restart is the decision of the cluster admin. I have no
control over it.
We have an implementation using a stateful set. The restart is triggered by
updating an annotation in the pod.
The issue is not triggered by the kafka cluster restart but by the zookeeper
servers restart.
Thanks,
Dhirendra
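For illustration, an annotation-triggered restart of a stateful set can be
sketched with the official Kubernetes Python client. This is only a minimal
example (the StatefulSet name and namespace are placeholders, not the actual
deployment); it patches a pod-template annotation, which is the same mechanism
kubectl rollout restart uses:

    # Minimal sketch of an annotation-triggered rolling restart of a StatefulSet.
    # The StatefulSet name and namespace below are placeholders.
    from datetime import datetime, timezone
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()

    # Changing a pod-template annotation makes the StatefulSet controller roll
    # the pods one by one, waiting for each pod's readiness probe to pass.
    patch = {
        "spec": {
            "template": {
                "metadata": {
                    "annotations": {
                        "kubectl.kubernetes.io/restartedAt":
                            datetime.now(timezone.utc).isoformat()
                    }
                }
            }
        }
    }
    apps.patch_namespaced_stateful_set(name="kafka", namespace="kafka", body=patch)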
Hi Dhirendra,
Firstly, I am interested in why you are restarting the ZK and Kafka cluster
every night.
Secondly, how are you doing the restarts? For example, in
[Strimzi](https://strimzi.io/), when we roll the Kafka cluster we leave the
designated controller broker until last. For each of the other brokers…
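To roll (or restart) the controller last you first need to know which broker
currently holds the controller role. Below is a minimal sketch of that lookup
using the confluent-kafka Python client (the bootstrap address is a
placeholder, and mapping the broker id to a pod name depends on how the
StatefulSet names its pods):

    # Minimal sketch: find the broker that is currently the active controller,
    # e.g. so it can be rolled last or restarted on its own.
    # Assumes the confluent-kafka Python client and a placeholder bootstrap address.
    from confluent_kafka.admin import AdminClient

    admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # placeholder
    metadata = admin.list_topics(timeout=10)

    controller = metadata.brokers[metadata.controller_id]
    print("Active controller: broker id %d (%s:%d)"
          % (metadata.controller_id, controller.host, controller.port))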
I don't know about the root cause, but if you're trying to solve the issue,
restarting the controller broker pod should do the trick.
Fares
On Thu, Mar 3, 2022 at 8:38 AM Dhirendra Singh wrote:
Hi All,
We have a kafka cluster running in kubernetes. The kafka version we are using
is 2.7.1.
Every night the zookeeper servers and kafka brokers are restarted.
After the nightly restart of the zookeeper servers some partitions remain
stuck in under replication. This happens randomly, but not every night.