Thanks for checking, Shawn.
So a rolling ZK restart is bad, and ZK nodes with different configs are bad.
I guess this could still work if:
* All ZK config changes are done by stopping ALL ZK nodes
* All config changes are done in a controlled, manual way so DC1 doesn't come
up by surprise with old config
PS: I
On 5/29/2017 8:57 AM, Jan Høydahl wrote:
> And if you start all three in DC1, you have 3+3 voting, what would
> then happen? Any chance of state corruption?
>
> I believe that my solution isolates manual change to two ZK nodes in
> DC2, while yours requires config change to 1 in DC2 and manual start/stop
> of 1 in DC1.
Answering my own statement here. Turns out that in order to flip the observer
bit for one ZK node, you need to touch the config of
> In my setup once DC1 comes back up make sure you start only two nodes.
And if you start all three in DC1, you have 3+3 voting, what would then happen?
Any chance of state corruption?
I believe that my solution isolates manual change to two ZK nodes in DC2,
while yours requires config change to 1 in DC2 and manual start/stop of 1 in
DC1.
In my setup, once DC1 comes back up, make sure you start only two nodes.
Now bring down the original observer and make it an observer again.
Bring back the third node.
It seems like a lot of work compared to Jan's setup, but you get 5 voting
members instead of 3 in the normal situation.
On May 26, 2017
Thanks, Pushkar, makes sense. Trying to understand the difference between
your setup vs Jan's proposed setup.
- Seems like when DC1 goes down, in your setup we have to bounce *one* node
from observer to non-observer, while in Jan's setup *two* observers to
non-observers. Anything else am I missing?
-
Damn, math is hard.
DC1: 3 non-observers
DC2: 2 non-observers
3 + 2 = 5 non-observers
Observers don't participate in voting, i.e. only non-observers vote.
5 non-observers = 5 votes
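That head count can be sketched as a quick sanity check (a minimal illustration; the node counts are the ones from this thread):

```python
# Quorum arithmetic for the proposed ensemble. Observers do not vote;
# only participants (non-observers) count toward the majority.
dc1_participants = 3  # DC1: 3 non-observers
dc2_participants = 2  # DC2: 2 non-observers
dc2_observers = 1     # DC2: 1 observer (non-voting)

voters = dc1_participants + dc2_participants  # 5 voting members
quorum = voters // 2 + 1                      # strict majority: 3

print(voters, quorum)  # 5 3

# If DC1 drops, DC2's 2 participants are below quorum (2 < 3), so the
# surviving DC cannot elect a leader without manual intervention.
print(dc2_participants >= quorum)  # False
```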
In addition to the 2 non-observers, DC2 also has an observer, which, as you
pointed out, does not
From the ZK documentation, observers do not participate in voting, so
Pushkar, when you said 5 nodes participate in voting, what exactly did you
mean?
-- Observers are non-voting members of an ensemble which only hear the
results of votes, not the agreement protocol that leads up to them.
Per ZK
Jan, Shawn, Susheel
First things first: let's do a fault-tolerant cluster, then maybe
a _geographically_ fault-tolerant cluster.
Add another server in either DC1 or DC2, in a separate rack, with
independent power etc. As Shawn says below, install the third ZK there.
You would satisfy
ZK 3.5 isn't officially released. It has been in alpha/beta for years. I
wouldn't use it in production.
The setup I proposed:
DC1: 3 nodes, all non-observers.
DC2: 3 nodes, 2 non-observers and 1 observer.
This means only 5 nodes participate in voting and 3 nodes make a quorum. If
DC1 goes
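For reference, a zoo.cfg sketch of that layout could look like the following; hostnames and ports are placeholders, not from the thread. In ZK 3.4 the observer role is declared with an `:observer` suffix in the shared server list, plus `peerType=observer` on the observer node itself:

```
# zoo.cfg fragment (hypothetical hosts) -- same server list on every node
server.1=zk1-dc1:2888:3888
server.2=zk2-dc1:2888:3888
server.3=zk3-dc1:2888:3888
server.4=zk1-dc2:2888:3888
server.5=zk2-dc2:2888:3888
server.6=zk3-dc2:2888:3888:observer

# Additionally, on zk3-dc2 only:
# peerType=observer
```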
Thanks for the tip, Pushkar.
> A setup I have used in the past was to have an observer in DC2. If DC1
I was not aware that ZK 3.4 supports observers; I thought it was a 3.5 feature.
So do you set up followers only in DC1 (3x) and observers only in DC2 (3x), and
then point each Solr node to all 6
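The "point each Solr node to all 6" part is just the ZK connection string Solr is started with; a sketch (hostnames and the /solr chroot are placeholder assumptions):

```
# solr.in.sh fragment (hypothetical hosts)
ZK_HOST="zk1-dc1:2181,zk2-dc1:2181,zk3-dc1:2181,zk1-dc2:2181,zk2-dc2:2181,zk3-dc2:2181/solr"
```

Solr only needs the client ports here; whether a given ZK node is a voter or an observer is invisible to the client.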
> Hi - Again, hiring a simple VM at a third location without a Solr cloud
> sounds like the simplest solution. It keeps the quorum tight and sound. This
> simple solution is the one i would try first.
My only worry here is latency, and perhaps complexity, security and cost.
- Latency if you
On 5/24/2017 4:14 PM, Jan Høydahl wrote:
> Sure, ZK does by design not support a two-node/two-location setup. But still,
> users may want/need to deploy that,
> and my question was if there are smart ways to make such a setup as little
> painful as possible in case of failure.
>
> Take the
The latest ZK supports dynamic reconfiguration.
Keep one DC as the quorum and the other as observers.
When a DC goes down, initiate a ZK reconfigure action to flip the quorum and
observers.
When I tested this, Solr survived just fine, but it's been a while.
Ani
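For the record, the flip Ani describes would be driven by ZK 3.5's `reconfig` command from zkCli; the server id and host below are hypothetical, and this needs a live ensemble, so it is only a sketch:

```
# zkCli.sh connected to a surviving node (ZK 3.5+ dynamic reconfiguration)
# Promote server 6 from observer to voting participant:
reconfig -add server.6=zk3-dc2:2888:3888:participant;2181
```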
On Wed, May 24, 2017 at 6:35 PM Pushkar Raste
A setup I have used in the past was to have an observer in DC2. If DC1
goes boom, you need manual intervention to change the observer's role to make
it a follower.
When DC1 comes back up, change one instance in DC2 to make it an observer
again.
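In ZK 3.4 that manual role change means editing zoo.cfg and restarting; roughly (hypothetical host name):

```
# Before: the DC2 node runs as an observer
#   server.6=zk3-dc2:2888:3888:observer   (in every node's server list)
#   peerType=observer                     (on zk3-dc2 only)
#
# To make it a follower: drop the :observer suffix everywhere, remove
# peerType=observer on zk3-dc2, and restart the affected nodes.
#   server.6=zk3-dc2:2888:3888
```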
On May 24, 2017 6:15 PM, "Jan Høydahl"
-Original message-
> From:Jan Høydahl <jan@cominvent.com>
> Sent: Thursday 25th May 2017 0:15
> To: solr-user@lucene.apache.org
> Subject: Re: Spread SolrCloud across two locations
>
> Sure, ZK does by design not support a two-node/two-location setup. But
Sure, ZK does by design not support a two-node/two-location setup. But still,
users may want/need to deploy that,
and my question was if there are smart ways to make such a setup as little
painful as possible in case of failure.
Take the example of DC1: 3xZK and DC2: 2xZK again. And then DC1
On 5/23/2017 10:12 AM, Susheel Kumar wrote:
Hi Jan, FYI - Since last year, I have been running a Solr 6.0 cluster
in one of our lower envs with 6 shards/replicas in dc1 & 6 shards/replicas
in dc2 (each shard replicated across data centers) with 3 ZK in dc1 and 2
ZK in dc2. (I didn't have the
Agreed, Erick. Since this setup is in our test env, we haven't really
invested in adding another DC, but for prod, sure, we will go with DC3 if we
do go with this setup.
On Tue, May 23, 2017 at 12:38 PM, Erick Erickson
wrote:
> Susheel:
>
> The issue is that if, for any reason at
Susheel:
The issue is that if, for any reason at all, the connection between
dc1 and dc2 is broken, there will be no indexing on dc2 since the Solr
servers there will not sense ZK quorum. You'll have to do something
manual to reconfigure.
That's not a flaw in your setup, just the way things
Hi Jan, FYI - Since last year, I have been running a Solr 6.0 cluster in
one of our lower envs with 6 shards/replicas in dc1 & 6 shards/replicas in dc2
(each shard replicated across data centers) with 3 ZK in dc1 and 2 ZK in dc2.
(I didn't have the availability of a 3rd data center for ZK so went with only 2
I.e. tell the customer that in order to have automatic failover and recovery
in a 2-location setup we require at least one ZK instance in a separate third
location. Kind of a tough requirement, but necessary to safeguard against
split-brain during a network partition.
If a third location is not
I would probably start by renting a VM at a third location to run Zookeeper.
Markus
-Original message-
> From:Jan Høydahl
> Sent: Tuesday 23rd May 2017 11:09
> To: solr-user
> Subject: Spread SolrCloud across two locations
>
> Hi,
>