Josef,

Consider your redundancy model carefully. With 3 voting (participant) nodes, a 
quorum requires a majority, i.e. 2 of 3. Under the 2N+1 redundancy model this 
means you can lose one node and continue to function; if a second node is 
lost, you will lose quorum.

It’s recommended to have 5 participant nodes in your ensemble, so that you can 
lose 2 nodes and continue to operate the system. This is helpful if you need 
to take a node out for service, as it still leaves you the ability to lose 
another node to failure and continue operating.
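
As a point of reference, here’s a minimal zoo.cfg sketch of a 5-participant 
ensemble (hostnames and paths are placeholders, not from your environment). 
With 2N+1 = 5 voting members, quorum is floor(5/2)+1 = 3, so any two nodes 
can be down at the same time:

  # zoo.cfg: identical server list on every node
  tickTime=2000
  initLimit=10
  syncLimit=5
  dataDir=/var/lib/zookeeper
  clientPort=2181

  # 5 participants -> quorum of 3, tolerates 2 simultaneous failures
  server.1=zk1.example.com:2888:3888
  server.2=zk2.example.com:2888:3888
  server.3=zk3.example.com:2888:3888
  server.4=zk4.example.com:2888:3888
  server.5=zk5.example.com:2888:3888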

Observer nodes are helpful because they can still be used to scale read load 
(clients can connect to them), even though they aren’t voting members.
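
In case it’s useful, here’s a sketch of how an observer is configured 
(hostnames are placeholders): the observer sets peerType in its own zoo.cfg, 
and its entry in the shared server list is tagged with :observer.

  # zoo.cfg on the observer node only
  peerType=observer

  # shared server list on every node; server.4 is the observer
  server.1=zk1.example.com:2888:3888
  server.2=zk2.example.com:2888:3888
  server.3=zk3.example.com:2888:3888
  server.4=zk4.example.com:2888:3888:observer

NiFi clients can then include the observer in their connect string, e.g. in 
state-management.xml (again just a sketch with placeholder hostnames):

  <cluster-provider>
      <id>zk-provider</id>
      <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
      <property name="Connect String">zk4.example.com:2181,zk1.example.com:2181,zk2.example.com:2181</property>
      <property name="Root Node">/nifi</property>
      <property name="Session Timeout">10 seconds</property>
      <property name="Access Control">Open</property>
  </cluster-provider>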

Also, take a look at the “dynamic reconfiguration” feature of ZK. This allows 
you to change the quorum configuration on the fly. Observer nodes can double 
as “hot failover” nodes, so if you need to remove a participant node from 
service, you can always promote an observer node to a participant node. This 
is a manual process, however.
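
As a rough sketch of that promotion (assuming ZooKeeper 3.5+ with 
reconfiguration enabled; server IDs, hosts and ports are placeholders), from 
zkCli.sh:

  # zoo.cfg on all nodes: allow runtime reconfiguration
  reconfigEnabled=true

  # promote the observer to a participant by re-adding it with the new role
  # (depending on your ACL setup, reconfig may require an authenticated
  # super user)
  reconfig -add server.4=zk4.example.com:2888:3888:participant;2181

  # then take the participant being serviced out of the voting ensemble
  reconfig -remove 3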

/Ryan

From: <[email protected]>
Reply-To: <[email protected]>
Date: Monday, July 24, 2023 at 4:18 AM
To: <[email protected]>
Subject: Re: Spread Zookeeper nodes from one datacenter to two datacenters

I wasn’t aware of the observer concept. I just read the documentation about 
it, but I’m not 100% sure whether I’m getting it right. The documentation says 
that observers are non-voting members, but do observers extend the quorum or 
not? You said you would do 1 observer per location, but what happens if the 
datacenter with 2 zookeeper nodes (and 1 observer) goes down?


From: shrikant kalani <[email protected]>
Date: Monday, 24 July 2023 at 09:04
To: [email protected] <[email protected]>
Subject: Re: Spread Zookeeper nodes from one datacenter to two datacenters
I don’t think you need a third data center. You can still go with 2 DCs with
3 and 2 ZK nodes, i.e. a cluster of 5 nodes. You can keep 1 node in each DC as
an observer node. This will make sure only 3 nodes are participating in the
leader election process, and hence a quorum of 3 will work.
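
For illustration only, a server-list sketch of that layout (hostnames are
placeholders; the unmarked servers are the 3 voting participants, the tagged
ones the 2 observers):

  # DC1: 2 participants + 1 observer
  server.1=zk1.dc1.example.com:2888:3888
  server.2=zk2.dc1.example.com:2888:3888
  server.3=zk3.dc1.example.com:2888:3888:observer
  # DC2: 1 participant + 1 observer
  server.4=zk1.dc2.example.com:2888:3888
  server.5=zk2.dc2.example.com:2888:3888:observer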



On Mon, 24 Jul 2023 at 2:58 PM, <[email protected]> wrote:

> Hi guys
>
> Today we have just one datacenter with a few NiFi clusters, so we use a
> dedicated 3-node zookeeper cluster in that datacenter. We are now planning
> to expand to another datacenter, so we would like to split the NiFi nodes
> as well as the zookeeper nodes across the two datacenters. However, 2
> zookeeper nodes is not a good quorum number, so we had the idea to do the
> following
> regarding zookeeper:
>
>    - Datacenter 1: 2 zookeeper nodes
>    - Datacenter 2: 2 zookeeper nodes
>    - Location 3 (another small DC): 1 zookeeper node -> no NiFis
>
> All locations are connected via dark fiber, however the third location is a
> bit farther away from the others (everything within 100 km). Now, since we
> are splitting the NiFi clusters over the two datacenters anyway, shall we
> limit the NiFi zookeeper client (state-management.xml) to the zookeeper
> nodes located within the same datacenter? Any comments on our design idea?
> What’s the best way to configure zookeeper clients so that local (same
> datacenter) zookeepers are preferred?
>
> Any other ideas on how we should configure this with regard to zookeeper? Shall
> we use just one zookeeper per location and distribute the load over all 3
> nodes/datacenters evenly? This would then cause load between the
> datacenters under normal circumstances…
>
> Cheers Josef
>
