-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Thursday, July 25, 2013 7:21 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr 4.3.0 - SolrCloud lost all documents when leaders got rebuilt
Picking up on what Dominique mentioned. Your ZK configuration
isn't doing you much good. Not only do you have an even number,
6 (which is actually _less_ robust than having 5, since both tolerate
only 2 failures), but by splitting them between two data centers you're
effectively requiring the data center with 4 nodes to always be up. If
that data center goes down, the 2 remaining ZooKeeper nodes can't form
a majority quorum.
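The quorum point Erick is making comes down to simple majority arithmetic. A minimal Python sketch (the ensemble sizes are taken from the thread; the helper function is just illustrative, not part of ZooKeeper's API):

```python
def quorum(n):
    # ZooKeeper stays available only while a strict majority
    # of the ensemble is running.
    return n // 2 + 1

# A 5-node and a 6-node ensemble tolerate the same number of failures:
for n in (5, 6):
    print(n, "nodes: quorum =", quorum(n), ", tolerates", n - quorum(n), "failures")
# 5 nodes: quorum = 3 , tolerates 2 failures
# 6 nodes: quorum = 4 , tolerates 2 failures

# Split 4 + 2 across two data centers: losing the 4-node side
# leaves 2 survivors, which is below the quorum of 4.
survivors = 2
print(survivors >= quorum(6))  # False: the ensemble loses quorum
```

So the sixth node buys no extra fault tolerance, and the 4/2 split makes the larger data center a single point of failure for the whole ensemble.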
(fail-over primary to backup if we lost or
otherwise needed to do maintenance on the primary side).
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 dynamic compute boxes
(cloud), where the 5 machines hosting the leaders are in datacenter1 and the
replicas are in datacenter2. We have 6 ZooKeeper instances - 4 in datacenter1
and 2 in datacenter2. The ZooKeeper instances are on the same hosts as the Solr nodes.
Solr 4.4 is already released!!!
http://lucene.apache.org/solr/
With 6 ZooKeeper instances you need at least 4 instances running at the same
time. How can you decide to stop 4 instances and leave only 2 running?
ZooKeeper can't work anymore in those conditions.
Dominique
On 25 Jul 2013, at 00:16, Joshi, Shital shital.jo...@gs.com wrote: