Re: Convert Standalone zookeeper to A 3 node Quorum

2023-06-20 Thread Gaurav Pande
Hi Guys,

Just to add one more thing which I forgot to ask earlier: is there some
kind of rollback concept for going back to a single zookeeper and
removing the quorum, if the quorum didn't work out?

If so, what are the steps, please?
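
My naive guess is that rolling back would mean stopping all the servers,
restoring the original standalone zoo.cfg on the first node (i.e. with no
server.N lines), and starting only that node again; something like this
(values are placeholders, not my actual config):

tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181

Please correct me if there is more to it.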

Regards,
Gaurav

On Tue, 20 Jun, 2023, 01:12 Gaurav Pande,  wrote:

> Indeed I went through this Admin guide for my version, 3.5.8, and I'm a
> bit confused by the standaloneEnabled attribute. It doesn't seem to be
> present in the pristine/out-of-the-box config, but the docs say its value
> is true by default. What exactly does it govern? More specifically,
> should it be part of my ensemble zk server config?
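>
> My reading of the docs is that it would just sit alongside the other
> server settings in zoo.cfg, something like this (hostname is a
> placeholder, and this is only my guess at the intent, not a verified
> config):
>
> standaloneEnabled=false
> server.1=zk1.example.com:2888:3888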
>
> PS thanks for the GitHub repo link
>
> Regards,
> Gaurav
>
> On Tue, 20 Jun, 2023, 01:04 Patrick Hunt,  wrote:
>
>> It really depends on your requirements. You should read the admin docs for
>> insight and examples, start here:
>> https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_designing
>>
>> That said, I have a project here which I use for config
>> generation/testing.
>> YMMV:
>> https://github.com/phunt/zkconf
>>
>> Patrick
>>
>> On Mon, Jun 19, 2023 at 11:32 AM Gaurav Pande  wrote:
>>
>> > Okay thanks, and could you share an example of the standard config
>> > for an ensemble? I mean, which parameters does it include?
>> >
>> > Also, do I need to add standaloneEnabled=false to the config file of
>> > these 2 new zk servers as part of the initial base ensemble config,
>> > given that this parameter's value is true by default? So that the 2
>> > zk's run in replicated mode from the start (starting one at a time).
>> >
>> > Regards,
>> > Gaurav
>> >
>> >
>> > On Mon, 19 Jun, 2023, 22:03 Patrick Hunt,  wrote:
>> >
>> > > On Mon, Jun 19, 2023 at 8:47 AM Gaurav Pande  wrote:
>> > >
>> > > > Hi Patrick,
>> > > >
>> > > > Thanks for the guidance here. Based on the below, I presently have
>> > > > only 1 zk node. If I provision 2 new VMs and install the same
>> > > > version of zookeeper on them, should I start them as standalone
>> > > > zookeepers first and then make changes to their server config file?
>> > > >
>> > > >
>> > > No, you specifically want to add them as part of the ensemble, one
>> > > at a time in sequence (see 2).
>> > >
>> > >
>> > > > Also, what are the valid zk configs that I would need to add to
>> > > > the zk server config file on these 2 new VMs?
>> > > >
>> > > >
>> > > You'll need a regular ensemble config, not a standalone.
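>> > >
>> > > As a rough sketch (hostnames, paths and ports are placeholders,
>> > > adjust for your environment), a minimal replicated config looks
>> > > something like:
>> > >
>> > > tickTime=2000
>> > > initLimit=10
>> > > syncLimit=5
>> > > dataDir=/var/lib/zookeeper
>> > > clientPort=2181
>> > > server.1=zk1.example.com:2888:3888
>> > > server.2=zk2.example.com:2888:3888
>> > > server.3=zk3.example.com:2888:3888
>> > >
>> > > The server.N lines are what make it an ensemble config; each member
>> > > also needs its own id in the myid file under dataDir.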
>> > >
>> > > Patrick
>> > >
>> > >
>> > > > Regards,
>> > > > Gaurav
>> > > >
>> > > > On Mon, 19 Jun, 2023, 21:06 Patrick Hunt,  wrote:
>> > > >
>> > > > > Two ways to do it come to mind, which I've used in the past:
>> > > > >
>> > > > > 1) most straightforward is to "clone" the repos for the two new
>> > > > > members of the ensemble in their respective configs/datadirs.
>> > > > > Just make sure to update the configs appropriately, including the
>> > > > > "myid" for each server. Then restart the ensemble and verify.
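>> > > > >
>> > > > > For example (hostnames and ids are placeholders, a sketch rather
>> > > > > than an exact recipe): every server's zoo.cfg lists all members,
>> > > > >
>> > > > > server.1=zk1.example.com:2888:3888
>> > > > > server.2=zk2.example.com:2888:3888
>> > > > > server.3=zk3.example.com:2888:3888
>> > > > >
>> > > > > while each server's myid file under dataDir contains only its own
>> > > > > id (on server 2 the file would contain just the line "2").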
>> > > > >
>> > > > > 2) You can't go from 1->3 servers just by adding two new servers
>> > > > > to the ensemble, as they may form a quorum on the "zero" zxid
>> > > > > rather than the zxid of the existing member. Rather, you would
>> > > > > need to go from 1->2, with quorum, and then from 2->3 with
>> > > > > quorum. This will ensure that the true state of the original
>> > > > > quorum is maintained (this is what we implemented for Cloudera
>> > > > > Manager to ensure proper functioning when increasing quorum
>> > > > > size).
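>> > > > >
>> > > > > A rough sketch of that progression (hostnames are placeholders):
>> > > > >
>> > > > > # step 1, 1->2: add server.2 to both servers' configs, restart,
>> > > > > # and wait for the two of them to form a quorum
>> > > > > server.1=zk1.example.com:2888:3888
>> > > > > server.2=zk2.example.com:2888:3888
>> > > > >
>> > > > > # step 2, 2->3: once healthy, add server.3 everywhere and restart
>> > > > > server.1=zk1.example.com:2888:3888
>> > > > > server.2=zk2.example.com:2888:3888
>> > > > > server.3=zk3.example.com:2888:3888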
>> > > > >
>> > > > > Good luck,
>> > > > >
>> > > > > Patrick
>> > > > >
>> > > > >
>> > > > > On Mon, Jun 19, 2023 at 12:02 AM Gaurav Pande
>> > > > > <gaupand...@gmail.com> wrote:
>> > > > >
>> > > > > > Hi tison,
>> > > > > >
>> > > > > > When you say stop services, you mean the existing standalone Zk
>> > > > > > service, right? If that's the case, then yes we can. But what's
>> > > > > > the process? Also, I didn't know it could be done without a
>> > > > > > restart; can you share the process/steps for both?
>> > > > > >
>> > > > > > On Mon, 19 Jun, 2023, 10:15 tison,  wrote:
>> > > > > >
>> > > > > > > Can you stop the services for the reconfig, or do you need an
>> > > > > > > online reconfig?
>> > > > > > >
>> > > > > > > Best,
>> > > > > > > tison.
>> > > > > > >
>> > > > > > >
>> > > > > > > Gaurav Pande  wrote on Mon, 19 Jun 2023 at 11:22:
>> > > > > > >
>> > > > > > > > Hi Guys,
>> > > > > > > >
>> > > > > > > > Any help on this thread please?
>> > > > > > > >
>> > > > > > > > Regards,
>> > > > > > > > Gaurav
>> > > > > > > >
>> > > > > > > > On Sun, 18 Jun, 2023, 20:14 Gaurav Pande,
>> > > > > > > > <gaupand...@gmail.com> wrote:
>> > > > > > > >
>> > > > > > > > > Hello Guys,
>> > > > > > > > >
>> > > > > > > > > I am new in this space. I wanted to know the
>> > > > > > > > > process/steps to convert a single Zk node, presently
>> > > > > > > > > standalone, to a 3 node quorum.

Re: Re: Any change from 3.6.3 -> 3.6.4 would cause hostname unresolved issue?

2023-06-20 Thread Paolo Patierno
Hi Enrico,
we are working on the Strimzi project (deploying Apache Kafka on
Kubernetes, so together with ZooKeeper).
It worked fine as long as Apache Kafka was using ZooKeeper 3.6.3 (or any
other previous version). With 3.6.4 we are facing the issue I described.

Thanks,
Paolo

On Mon, 19 Jun 2023 at 23:29, Enrico Olivelli  wrote:

> Paolo,
>
> On Mon, 19 Jun 2023 at 16:43, Paolo Patierno  wrote:
>
> > Hi all,
> > We were able to overcome the binding issue by setting
> > quorumListenOnAllIPs=true, but from there we are getting a new issue
> > that is preventing leader election from completing on first start-up.
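> >
> > (For reference, that flag is a plain server config property, i.e. a
> > quorumListenOnAllIPs=true line in zoo.cfg.)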
> >
> > Looking at the log of the current ZooKeeper leader (ID=3), we see the
> > following. (Lines starting with ** are additional logs we added to
> > org.apache.zookeeper.server.quorum.Leader#getDesignatedLeader in order
> > to get more information.)
> >
> > 2023-06-19 12:32:51,990 INFO Have quorum of supporters, sids: [[1, 3],[1,
> > 3]]; starting up and setting last processed zxid: 0x1
> > (org.apache.zookeeper.server.quorum.Leader)
> > [QuorumPeer[myid=3](plain=127.0.0.1:12181)(secure=0.0.0.0:2181)]
> > 2023-06-19 12:32:51,990 INFO **
> >
> >
> newQVAcksetPair.getQuorumVerifier().getVotingMembers().get(self.getId()).addr
> > = my-cluster-zookeeper-2.my-cluster-zookeeper-nodes.default.svc/
> > 172.17.0.6:2888 (org.apache.zookeeper.server.quorum.Leader)
> > [QuorumPeer[myid=3](plain=127.0.0.1:12181)(secure=0.0.0.0:2181)]
> > 2023-06-19 12:32:51,990 INFO ** self.getQuorumAddress() =
> >
> >
> my-cluster-zookeeper-2.my-cluster-zookeeper-nodes.default.svc/:2888
> > (org.apache.zookeeper.server.quorum.Leader)
> > [QuorumPeer[myid=3](plain=127.0.0.1:12181)(secure=0.0.0.0:2181)]
> > 2023-06-19 12:32:51,992 INFO ** qs.addr
> > my-cluster-zookeeper-2.my-cluster-zookeeper-nodes.default.svc/
> > 172.17.0.6:2888, qs.electionAddr
> > my-cluster-zookeeper-2.my-cluster-zookeeper-nodes.default.svc/
> > 172.17.0.6:3888, qs.clientAddr/127.0.0.1:12181
> > (org.apache.zookeeper.server.quorum.QuorumPeer)
> > [QuorumPeer[myid=3](plain=127.0.0.1:12181)(secure=0.0.0.0:2181)]
> > 2023-06-19 12:32:51,992 DEBUG zookeeper
> > (org.apache.zookeeper.common.PathTrie)
> > [QuorumPeer[myid=3](plain=127.0.0.1:12181)(secure=0.0.0.0:2181)]
> > 2023-06-19 12:32:51,993 WARN Restarting Leader Election
> > (org.apache.zookeeper.server.quorum.QuorumPeer)
> > [QuorumPeer[myid=3](plain=127.0.0.1:12181)(secure=0.0.0.0:2181)]
> >
> > So the leader is the ZooKeeper node with ID=3, and it was ACKed by the
> > ZooKeeper node with ID=1.
> > As you can see, we are in the Leader#startZkServer method, and because
> > reconfiguration is enabled, the designatedLeader is computed. The
> > problem is that Leader#getDesignatedLeader is not returning "self" as
> > leader but another node (ID=1), because of the difference in the quorum
> > address. From the above log, it's not an actual difference in terms of
> > addresses: self.getQuorumAddress() is returning an unresolved one (even
> > if it's still the same hostname, related to the ZooKeeper-2 instance).
> > This difference causes allowedToCommit=false; meanwhile, ZooKeeper-2 is
> > still reported as leader but is not able to commit, so it blocks all
> > requests and the ZooKeeper ensemble gets stuck.
> >
> > 2023-06-19 12:32:51,996 WARN Suggested leader: 1
> > (org.apache.zookeeper.server.quorum.QuorumPeer)
> > [QuorumPeer[myid=3](plain=127.0.0.1:12181)(secure=0.0.0.0:2181)]
> > 2023-06-19 12:32:51,996 WARN This leader is not the designated leader, it
> > will be initialized with allowedToCommit = false
> > (org.apache.zookeeper.server.quorum.Leader)
> > [QuorumPeer[myid=3](plain=127.0.0.1:12181)(secure=0.0.0.0:2181)]
> >
> > The overall issue could be related to DNS problems, with DNS records
> > not yet registered during pod initialization (ZooKeeper is running on
> > Kubernetes). But we don't understand why it's not able to recover
> > somehow.
> >
> > Instead of using quorumListenOnAllIPs=true, we also tried a different
> > approach, using the 0.0.0.0 address for the binding, so something like:
> >
> > # Zookeeper nodes configuration
> > server.1=0.0.0.0:2888:3888:participant;127.0.0.1:12181
> > server.2=my-cluster-zookeeper-1.my-cluster-zookeeper-nodes.default.svc:2888:3888:participant;127.0.0.1:12181
> > server.3=my-cluster-zookeeper-2.my-cluster-zookeeper-nodes.default.svc:2888:3888:participant;127.0.0.1:12181
> >
> > This way, self.getQuorumAddress() doesn't suffer from the same problem:
> > it never returns an unresolved address, always an actual one. No new
> > leader election is needed and everything works fine.
> >
>
> This is the release notes page for 3.6.4.
> https://zookeeper.apache.org/doc/r3.6.4/releasenotes.html
>
> As you are running on k8s, I guess you are using a StatefulSet, maybe
> with a ClusterIP service?
>
> Is the readiness probe failing? In that case the DNS name would not be
> available.
>
> What are you using to