Agreed, Erick.  Since this setup is in our test env, we haven't invested in
adding another DC, but for Prod we will definitely go with DC3 if we adopt
this setup.
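
For when we do add DC3 in Prod: moving one of the five ZK nodes there, as
Erick suggests, would give a 5-node ensemble that looks roughly like this
in zoo.cfg (hostnames below are just placeholders):

  # 2 ZK in dc1, 2 in dc2, 1 tie-breaker in dc3; either data center can
  # then lose the other and still reach quorum (3 of 5) together with
  # the dc3 node
  server.1=zk1-dc1.example.com:2888:3888
  server.2=zk2-dc1.example.com:2888:3888
  server.3=zk1-dc2.example.com:2888:3888
  server.4=zk2-dc2.example.com:2888:3888
  server.5=zk1-dc3.example.com:2888:3888

The Solr nodes would then be started with -z pointing at all five, e.g.

  bin/solr start -cloud -z zk1-dc1.example.com:2181,zk2-dc1.example.com:2181,\
  zk1-dc2.example.com:2181,zk2-dc2.example.com:2181,zk1-dc3.example.com:2181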

On Tue, May 23, 2017 at 12:38 PM, Erick Erickson <erickerick...@gmail.com>
wrote:

> Susheel:
>
> The issue is that if, for any reason at all, the connection between
> dc1 and dc2 is broken, there will be no indexing on dc2, since the Solr
> servers there will no longer see a ZK quorum. You'll have to do something
> manual to reconfigure...
>
> That's not a flaw in your setup, just the way things work ;).
>
> Putting one of the ZKs on a third DC would change that...
>
> Best,
> Erick
>
> On Tue, May 23, 2017 at 9:12 AM, Susheel Kumar <susheel2...@gmail.com>
> wrote:
> > Hi Jan, FYI - Since last year, I have been running a Solr 6.0 cluster in
> > one of our lower envs with 6 shards/replicas in dc1 and 6 shards/replicas
> > in dc2 (each shard replicated across data centers), with 3 ZK in dc1 and
> > 2 ZK in dc2.  (I didn't have a third data center available for ZK, so I
> > went with only 2 data centers in the above configuration.)  So far there
> > have been no issues.  It's been running fine: indexing, replicating data,
> > serving queries etc.  So in my test, setting up a single cluster across
> > two zones/data centers works without any issue when there is little or
> > no latency (in my case around 30 ms one way).
> >
> > Thanks,
> > Susheel
> >
> > On Tue, May 23, 2017 at 9:20 AM, Jan Høydahl <jan....@cominvent.com>
> > wrote:
> >
> >> I.e. tell the customer that in order to have automatic failover and
> >> recovery in a 2-location setup, we require at least one ZK instance in a
> >> separate, third location. Kind of a tough requirement, but necessary to
> >> safeguard against split-brain during a network partition.
> >>
> >> If a third location is not an option, how would you set up ZK for manual
> >> reconfiguration?
> >> Two ZK in DC1 and one in DC2 would give you automatic recovery in case
> >> DC2 falls out, but if DC1 falls out, WRITE would be disabled, and to
> >> resume writes in DC2 alone, one would need to stop Solr + ZK, reconfigure
> >> ZK in DC2 as standalone (or set up two more), and then start Solr again
> >> with only one ZK.
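
For reference, that manual failover on the DC2 side would look roughly like
this - exact commands depend on how ZK and Solr are installed, so treat the
service name, hostnames and paths below as placeholders:

  # On the surviving ZK node in dc2: edit zoo.cfg and remove/comment the
  # server.N ensemble lines so it runs standalone, then restart ZooKeeper
  # (via whatever service manager is in use)
  sudo systemctl restart zookeeper

  # Restart the Solr nodes in dc2, pointing only at that single ZK
  bin/solr stop -all
  bin/solr start -cloud -z zk1-dc2.example.com:2181
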
> >>
> >> --
> >> Jan Høydahl, search solution architect
> >> Cominvent AS - www.cominvent.com
> >>
> >> > On 23 May 2017, at 11:14, Markus Jelsma <markus.jel...@openindex.io>
> >> > wrote:
> >> >
> >> > I would probably start by renting a VM at a third location to run
> >> > Zookeeper.
> >> >
> >> > Markus
> >> >
> >> > -----Original message-----
> >> >> From: Jan Høydahl <jan....@cominvent.com>
> >> >> Sent: Tuesday 23rd May 2017 11:09
> >> >> To: solr-user <solr-user@lucene.apache.org>
> >> >> Subject: Spread SolrCloud across two locations
> >> >>
> >> >> Hi,
> >> >>
> >> >> A customer has two locations (a few km apart) with super-fast
> >> >> networking in between, so for day-to-day operation they view all VMs
> >> >> in both locations as a pool of servers. They typically spin up
> >> >> redundant servers for various services in each zone, and if one zone
> >> >> should fail, the other will just continue working.
> >> >>
> >> >> How can we best support such a setup with Cloud and Zookeeper?
> >> >> They do not need (or want) CDCR, since latency and bandwidth are not a
> >> >> problem, and CDCR is active-passive only, so it would anyway require
> >> >> manual intervention to catch up if indexing is switched to the passive
> >> >> DC temporarily.
> >> >> If it were not for ZK, I would set up one Cloud cluster and make sure
> >> >> each shard was replicated across zones, and all would be fine.
> >> >> But ZK really requires a third location in order to tolerate the loss
> >> >> of an entire location/zone.
> >> >> All solutions I can think of involve manual intervention: re-configuring
> >> >> ZK followed by a restart of the surviving Solr nodes in order to point
> >> >> to the “new” ZK.
> >> >>
> >> >> How have you guys solved such setups?
> >> >>
> >> >> --
> >> >> Jan Høydahl, search solution architect
> >> >> Cominvent AS - www.cominvent.com
> >> >>
> >> >>
> >>
> >>
>
