DNS RR is good. I've had good experiences using it for my client configs, for exactly the reasons you list.
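On the JVM DNS caching question below: as far as I know, the JVM only caches successful lookups forever when a SecurityManager is installed; otherwise the default TTL is short (30 seconds in Sun's JVM), and it can be set explicitly through the `networkaddress.cache.ttl` security property. A minimal sketch (the class name and the chosen TTL values are mine, not anything standard):

```java
import java.security.Security;

public class DnsCacheTtlDemo {
    public static void main(String[] args) {
        // Cache successful DNS lookups for at most 60 seconds.
        // ("-1" would mean cache forever; "0" would disable caching.)
        Security.setProperty("networkaddress.cache.ttl", "60");

        // Failed lookups are controlled by a separate property.
        Security.setProperty("networkaddress.cache.negative.ttl", "10");

        System.out.println(Security.getProperty("networkaddress.cache.ttl"));
    }
}
```

Two caveats: the property has to be set before the first lookup is cached (it can also go in the JVM's java.security file), and if the client library resolves the connect string only once at startup, the DNS cache TTL doesn't help anyway and a client restart or reconnect is still needed.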
On Wed, Dec 21, 2011 at 8:43 PM, Neha Narkhede <[email protected]> wrote:
> Thanks for the responses!
>
>>> How are your clients configured to find the zks now?
>
> Our clients currently use the list of hostnames and ports that
> comprise the zookeeper cluster. For example,
> zoo1:port1,zoo2:port2,zoo3:port3
>
>> - switch DNS,
>> - wait for caches to die,
>
> This is something we thought about, however. If I understand it
> correctly, doesn't the JVM cache DNS entries forever until it is
> restarted? We haven't specifically turned DNS caching off on our
> clients, so this solution would require us to restart the clients to
> see the new list of zookeeper hosts.
>
> Another thought is to use DNS RR and have the client zk url be one
> name that resolves to a list of IPs returned to the zookeeper
> client. This has the advantage of being able to perform hardware
> migration in the future without changing the client connection url.
> Do people have thoughts about using a DNS RR?
>
> Thanks,
> Neha
>
> On Tue, Dec 20, 2011 at 1:06 PM, Ted Dunning <[email protected]> wrote:
>> In particular, aren't you using DNS names? If you are, then you can
>>
>> - expand the quorum with the new hardware on new IP addresses,
>> - switch DNS,
>> - wait for caches to die,
>> - restart applications without reconfig or otherwise force new connections,
>> - decrease quorum size again
>>
>> On Tue, Dec 20, 2011 at 12:26 PM, Camille Fournier <[email protected]> wrote:
>>
>>> How are your clients configured to find the zks now? How many clients do
>>> you have?
>>>
>>> From my phone
>>> On Dec 20, 2011 3:14 PM, "Neha Narkhede" <[email protected]> wrote:
>>>
>>> > Hi,
>>> >
>>> > As part of upgrading to Zookeeper 3.3.4, we also have to migrate our
>>> > zookeeper cluster to new hardware. I'm trying to figure out the best
>>> > strategy to achieve that with no downtime.
>>> > Here are some possible solutions I see at the moment; I could have
>>> > missed a few, though:
>>> >
>>> > 1. Swap each machine out with a new machine, but with the same host/IP.
>>> >
>>> > Pros: No client-side config needs to be changed.
>>> > Cons: Relatively tedious task for Operations.
>>> >
>>> > 2. Add new machines, with different hosts/IPs, to the existing cluster,
>>> > and remove the older machines, taking care to maintain the quorum at
>>> > all times.
>>> >
>>> > Pros: Easier for Operations.
>>> > Cons: Client-side configs need to be changed and clients need to be
>>> > restarted/bounced. Another problem is having a large quorum for
>>> > some time (potentially 9 nodes).
>>> >
>>> > 3. Hide the new cluster behind either a hardware load balancer or a
>>> > DNS server resolving to all host IPs.
>>> >
>>> > Pros: Makes it easier to move hardware around in the future.
>>> > Cons: Possible timeout issues with load balancers interfering with
>>> > zookeeper functionality or performance.
>>> >
>>> > Read this and found it helpful:
>>> > http://apache.markmail.org/message/44tbj53q2jufplru?q=load+balancer+list:org%2Eapache%2Ehadoop%2Ezookeeper-user&page=1
>>> > But I would like to hear from the authors and from users who might have
>>> > tried this in a real production setup.
>>> >
>>> > I'm very interested in finding a long-term solution for masking the
>>> > zookeeper host names. Any input here is appreciated!
>>> >
>>> > In addition, it would also be great to know what people think
>>> > about options 1 and 2 as solutions for hardware changes in
>>> > Zookeeper.
>>> >
>>> > Thanks,
>>> > Neha
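To make the DNS RR idea concrete: a single round-robin name carries one A record per zookeeper host, so a lookup returns the whole ensemble at once and the client connect string never has to change when hardware moves. A sketch of what the client-side lookup would see, assuming standard `java.net` resolution (the name `zk.example.com` is made up for illustration; the demo uses `localhost` so it runs anywhere):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsRrDemo {
    // Returns every address the name resolves to. For a round-robin
    // name with one A record per zookeeper server, this is the whole
    // ensemble; swapping hardware then only means updating DNS records.
    static InetAddress[] resolveAll(String name) throws UnknownHostException {
        return InetAddress.getAllByName(name);
    }

    public static void main(String[] args) throws Exception {
        // "localhost" stands in here for a real RR name like zk.example.com
        for (InetAddress addr : resolveAll("localhost")) {
            System.out.println(addr.getHostAddress());
        }
    }
}
```

The catch, as noted above, is caching: both the JVM's DNS cache and any one-time resolution done by the client library have to expire or be refreshed before new records take effect.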
