Awesome, thanks guys.  Your patience and input are greatly appreciated.




On Wed, 2010-08-25 at 21:30 -0700, Henry Robinson wrote:

> Todd - 
> 
> 
> 
> No, this is not the case. There are no 'backup' or 'failover' nodes in
> ZooKeeper. All servers that can vote are working as part of the
> cluster until they fail. You need a majority of your voting servers
> alive. 
> 
> 
> If you have three servers, a majority is of size two. The number of
> nodes that can fail before a majority is no longer alive is one. 
> If you have four servers, a majority is of size three. The number of
> nodes that can fail before a majority is no longer alive is one. 
> If you have five servers, a majority is of size three. The number of
> nodes that can fail before a majority is no longer alive is two. 
> 
> 
> This is why four servers is worse than three for availability. In both
> cases, two servers have to fail before the cluster is no longer
> available. However if failures are independently distributed, this is
> more likely to happen in a cluster of four nodes than a cluster of
> three (think of it as 'more things available to go wrong'). 
> 
> 
> If you have four servers and one dies, the 'majority' that still needs
> to be alive is still three - it doesn't drop down to two. The majority
> is of all voting servers, alive or dead. 
> 
> 
> Hope this helps - 
> 
> 
> Henry
> 
> 
> On 25 August 2010 21:01, Todd Nine <t...@spidertracks.co.nz> wrote:
> 
>         Thanks Dave.  I've been using Cassandra, so I'm trying to
>         get my head around the configuration/operational differences
>         with ZK.  You state that using 4 would actually decrease my
>         reliability.  Can you explain that further?  I was under the
>         impression that a 4th node would act as a non-voting,
>         read-only node until one of the other 3 fails.  I thought
>         that this extra node would give me some breathing room by
>         allowing any node to fail and still have 3 voting nodes.  Is
>         this not the case?
>         
>         Thanks,
>         
>         Todd
>         
>         
>         
>         
>         
>         
>         On Wed, 2010-08-25 at 21:13 -0600, Ted Dunning wrote:
>         
>         > Just use 3 nodes.  Life will be better.
>         >
>         >
>         > You can configure the fourth node in the event of one of
>         > the first three failing and bring it online.  Then you can
>         > re-configure and restart each of the others one at a time.
>         > This gives you flexibility because you have 4 nodes, but
>         > doesn't decrease your reliability the way that using a
>         > four-node cluster would.  If you need to do maintenance on
>         > one node, just configure that node out as if it had failed.
>         >
>         >
>         > On Wed, Aug 25, 2010 at 4:26 PM, Dave Wright
>         <wrig...@gmail.com>
>         > wrote:
>         >
>         >         You can certainly serve more reads with a 4th
>         >         node, but I'm not sure what you mean by "it won't
>         >         have a voting role". It still participates in
>         >         voting for leaders, as do all non-observers,
>         >         regardless of whether the ensemble size is even or
>         >         odd. With ZooKeeper there is no voting on each
>         >         transaction, only on leader changes.
>         >
>         >         -Dave Wright
>         >
>         >
>         >
>         >         On Wed, Aug 25, 2010 at 6:22 PM, Todd Nine
>         >         <t...@spidertracks.co.nz> wrote:
>         >         > Do I get any read performance increase (similar
>         >         > to an observer) since the node will not have a
>         >         > voting role?
>         >         >
>         >         >
>         >
>         >
>         >
>         >
>         
> 
> 
> 
> 
> -- 
> Henry Robinson
> Software Engineer
> Cloudera
> 415-994-6679
> 
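The quorum arithmetic in Henry's reply can be sketched in a few lines of Python (the function names are mine, not part of any ZooKeeper API). It reproduces his three cases, plus his point that, with independently distributed failures, a four-node ensemble is more likely to lose its majority than a three-node one:

```python
from math import comb

def majority(n):
    """Smallest group that is more than half of n voting servers."""
    return n // 2 + 1

def tolerable_failures(n):
    """How many servers can fail while a majority stays alive."""
    return n - majority(n)  # equals (n - 1) // 2

def down_probability(n, p):
    """Chance the ensemble loses its majority, assuming each of the
    n servers fails independently with probability p."""
    f = tolerable_failures(n)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(f + 1, n + 1))

for n in (3, 4, 5):
    print(f"{n} servers: majority = {majority(n)}, "
          f"can lose {tolerable_failures(n)}")

# With independent failures, four nodes lose their majority more
# often than three ('more things available to go wrong').
p = 0.01
print(down_probability(4, p) > down_probability(3, p))  # True
```

Both 3- and 4-node ensembles tolerate only one failure, which is why the extra node buys nothing for availability.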
