Thank you very much for your response. Maybe read-only mode could be an option in this case. But how does it work? Does it only allow connections that are read-only, or does it allow any connection but fail on write operations?
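For reference, here is roughly what I imagine the client side would look like if the second behavior is the right one (just a sketch of my understanding, not something I have tested; the host name and znode paths are made up, and I'm assuming the servers have to be started with readonlymode.enabled=true for this to work):

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class ReadOnlyProbe {
        public static void main(String[] args) throws Exception {
            // Last argument (canBeReadOnly = true) lets the client attach to a
            // server that has lost quorum and is only serving reads.
            // "dc2-zk1:2181" is a placeholder host for the surviving DC.
            ZooKeeper zk = new ZooKeeper("dc2-zk1:2181", 30000, event -> {}, true);

            // Reads should still succeed against a read-only server.
            byte[] data = zk.getData("/myapp/config", false, null);
            System.out.println("read ok: " + new String(data));

            // Writes are presumably rejected with NotReadOnlyException while
            // the server has no quorum.
            try {
                zk.create("/myapp/test", new byte[0],
                          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            } catch (KeeperException.NotReadOnlyException e) {
                System.out.println("write rejected: server is read-only");
            }
            zk.close();
        }
    }

Is that more or less how it behaves, or does the server refuse read-write clients entirely?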
On Thu, Dec 8, 2016 at 10:05 PM, Michael Han <[email protected]> wrote:

> If "keep running" is defined as serving both read and write requests, then
> no, there is no way to build a reliable ZooKeeper ensemble across only two
> data centers, simply because we can't guarantee that the single data center
> remaining after the other one is lost contains the majority of servers
> needed to form a quorum. In this case you need at least 3 DCs. If "keep
> running" is defined as serving only read requests, then you could enable
> read-only mode on the ZK servers, so if a DC fails and the remaining servers
> cannot form a quorum, your application can still read data out of ZK, but no
> writes would be accepted after that point.
>
> In general, cross-DC deployment design depends on your use case and the
> trade-offs you'd like to make. A similar topic was discussed in detail
> before [1], and there is a recent paper [2] which also provides a good
> reference on this topic.
>
> [1] http://zookeeper-user.578899.n2.nabble.com/zookeeper-deployment-strategy-for-multi-data-centers-td7582358.html#a7582364
> [2] https://www.usenix.org/system/files/conference/atc16/atc16_paper-lev-ari.pdf
>
> On Thu, Dec 8, 2016 at 10:16 AM, Alvaro Gareppe <[email protected]> wrote:
>
> > I have ZooKeeper running in a PRD environment with 3 instances: 2 in one
> > data center and 1 in another. But since a ZooKeeper ensemble runs on a
> > majority, if I lose the data center that runs 2 of the ZooKeeper nodes, I
> > lose the cluster.
> >
> > Is there any way to configure ZooKeeper (even adding more nodes is an
> > option) to keep running if either data center goes offline? The only
> > restriction, of course, is the number of data centers.
> >
> > --
> > Ing. Alvaro Gareppe
> > [email protected]
>
> --
> Cheers
> Michael.

--
Ing. Alvaro Gareppe
[email protected]
