Pushkar... I love this solution
thanks
I'd just go with 3 zk nodes on each side
2015-10-29 23:46 GMT+01:00 Pushkar Raste :
> How about having, let's say, 4 nodes on each side and making one node in one of
> the data centers an observer. When the data center with the majority of the
We need to bounce it, but the outage will be very short and you don't have to
take down the rest of the zookeeper instances.
On 30 October 2015 at 11:00, Daniel Collins wrote:
> Aren't you asking for dynamic ZK configuration, which isn't supported yet
> (ZOOKEEPER-107, only in
Aren't you asking for dynamic ZK configuration, which isn't supported yet
(ZOOKEEPER-107, only in 3.5.0-alpha)? How do you swap a zookeeper
instance from being an observer to a voting member?
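For reference, the 3.5.0-alpha feature Daniel mentions exposes a reconfig command in zkCli.sh; a promotion might look something like this (untested sketch, server id and hostname invented):

```
# ZooKeeper 3.5.x only (ZOOKEEPER-107): swap server 8 from observer
# to voting participant without restarting the rest of the ensemble
reconfig -remove 8
reconfig -add server.8=zk8.dc2:2888:3888:participant;2181

# pre-3.5, the only option is the bounce Pushkar describes: edit
# zoo.cfg on every node and restart the affected instance
```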
On 30 October 2015 at 09:34, Matteo Grolla wrote:
> Pushkar... I love this
I'm designing a solr cloud installation where nodes from a single cluster
are distributed on 2 datacenters which are close and very well connected.
let's say that zk nodes zk1, zk2 are on DC1 and zk3 is on DC2 and let's say
that DC1 goes down and the cluster is left with zk3.
how can I restore a working cluster?
You can't. Zookeeper needs a majority. One node is not a majority of a three
node ensemble.
There is no way to split a Solr Cloud cluster across two datacenters and have
high availability. You can do that with three datacenters.
You can probably bring up a new Zookeeper ensemble and configure Solr to use it.
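Walter's "one node is not a majority of a three node ensemble" is plain quorum arithmetic; a quick sketch:

```python
# Quorum arithmetic for a ZooKeeper ensemble of n voting members:
# a strict majority must be up for the ensemble to serve requests.
def majority(n: int) -> int:
    """Smallest number of voters that is a strict majority of n."""
    return n // 2 + 1

for n in (3, 4, 5):
    up_needed = majority(n)
    print(f"{n} voters: {up_needed} must be up, tolerates {n - up_needed} down")
# 3 voters: 2 must be up, tolerates 1 down
# 4 voters: 3 must be up, tolerates 1 down
# 5 voters: 3 must be up, tolerates 2 down
```

With only two datacenters, at most one side can hold a majority of the voters, so losing that side always loses quorum; that is why three datacenters are needed for full availability.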
Hi Walter,
it's not a problem to take down zk for a short time (1h) and
reconfigure it. Meanwhile solr would go into read-only mode.
I'd like feedback on the fastest way to do this. Would it work to just
reconfigure the cluster with two other empty zk nodes? Would they correctly
sync from the surviving node?
How about having, let's say, 4 nodes on each side and making one node in one of
the data centers an observer. When the data center with the majority of the
nodes goes down, bounce the observer by reconfiguring it as a voting member.
You will have to revert it back to being an observer once the failed data
center recovers.
There will be a short outage.
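In static-config terms, the 4+4 layout above might look roughly like this (hostnames invented for illustration):

```
# zoo.cfg server list shared by all eight nodes (illustrative hostnames)
server.1=zk1.dc1:2888:3888
server.2=zk2.dc1:2888:3888
server.3=zk3.dc1:2888:3888
server.4=zk4.dc1:2888:3888
server.5=zk5.dc2:2888:3888
server.6=zk6.dc2:2888:3888
server.7=zk7.dc2:2888:3888
server.8=zk8.dc2:2888:3888:observer

# additionally, in zk8's own zoo.cfg:
peerType=observer
```

That gives 7 voters (majority 4). One caveat: if DC1 dies and you simply promote server.8 to a participant, the ensemble then counts 8 voters and needs 5 up, so the surviving side would also have to drop the unreachable DC1 servers from the list when it reconfigures and bounces.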