Re: [ClusterLabs] Recovering after split-brain

2016-06-21 Thread Nikhil Utane
Hmm. I will then work towards bringing this in. Thanks for your input.

On Wed, Jun 22, 2016 at 10:44 AM, Digimer wrote:

> On 22/06/16 01:07 AM, Nikhil Utane wrote:
> > I don't get it. Pacemaker + Corosync is providing me so much
> > functionality.
> > For example, if we leave split-brain aside for a moment, it provides:
> > 1) Discovery and cluster formation
> > 2) Synchronization of data
> > 3) Heartbeat mechanism
> > 4) Swift failover of the resource
> > 5) A guarantee that a given resource will be started on only 1 node
> >
> > So in the case of a normal failover we need the basic functionality of
> > the resource being migrated to a standby node.
> > And it is giving me all of that.
> > So I don't agree that it needs to be as black and white as you say. Our
> > solution has different requirements than a typical HA solution. But that
> > is only for now. In the future we might have to implement all of it. So
> > in that sense Pacemaker gives us a good framework that we can extend.
> >
> > BTW, we are not even using a virtual IP resource, which again I believe
> > is something that everyone employs.
> > Because of the nature of the service a small glitch is going to happen
> > either way, so virtual IPs don't give us any real benefit.
> > And with regard to the question of why we even have a standby rather
> > than keeping it active all the time: a two-node cluster is one possible
> > configuration, but the main requirement is to support N + 1. So the
> > standby node doesn't know which active node it has to take over for
> > until a failover occurs.
> >
> > Your comments, however, have made me reconsider using fencing. It was
> > not that we didn't want to do it.
> > I just felt it may not be needed. So I'll definitely explore this
> > further.
>
> It is needed, and it is that black and white. Ask yourself, for your
> particular installation: can I run X in two places at the same time
> without coordination?
>
> If the answer is "yes", then just do that and be done with it.
>
> If the answer is "no", then you need fencing to allow pacemaker to know
> the state of all nodes (otherwise, the ability to coordinate is lost).
>
> I've never once seen a valid HA setup where fencing was not needed. I
> don't claim to be the best by any means, but I've been around long
> enough to say this with some confidence.
>
> digimer
>
> > Thanks everyone for the comments.
> >
> > -Regards
> > Nikhil
> >
> > On Tue, Jun 21, 2016 at 10:17 PM, Digimer wrote:
> >
> > On 21/06/16 10:57 AM, Dmitri Maziuk wrote:
> > > On 2016-06-20 17:19, Digimer wrote:
> > >
> > >> Nikhil indicated that they could switch where traffic went up-stream
> > >> without issue, if I understood properly.
> > >
> > > They have some interesting setup, but that notwithstanding: if split
> > > brain happens, some clients will connect to the "old master" and some
> > > to the "new master", depending on ARP updates. If there's a shared
> > > resource unavailable on one node, clients going there will error out.
> > > The other ones will not. It will work for some clients.
> > >
> > > Cf. both nodes going into a stonith deathmatch and killing each other:
> > > the service is now unavailable for all clients. What I don't get is
> > > the blanket assertion that this is "more highly" available than
> > > option #1.
> > >
> > > Dimitri
> >
> > As I've explained many times (here and on IRC):
> >
> > If you don't need to coordinate services/access, you don't need HA.
> >
> > If you do need to coordinate services/access, you need fencing.
> >
> > So if Nikhil really believes s/he doesn't need fencing and that
> > split-brains are OK, then drop HA. If that is not the case, then s/he
> > needs to implement fencing in pacemaker. It's pretty much that simple.
> >
> > --
> > Digimer
> > Papers and Projects: https://alteeve.ca/w/
> > What if the cure for cancer is trapped in the mind of a person
> without
> > access to education?
> >
>
>
> --
> Digimer
> Papers and Projects: https://alteeve.ca/w/
> What if the cure for cancer is trapped in the mind of a person without
> access to education?
>

Re: [ClusterLabs] Recovering after split-brain

2016-06-21 Thread Digimer
On 22/06/16 01:09 AM, Nikhil Utane wrote:
> We are not using a virtual IP. There is a separate discovery mechanism
> between the server and the client. The client will reach out to the new
> server only if it is incommunicado with the old one.

That's fine, but it really doesn't change anything. Whether you're using
a shared IP, shared storage or something else, it's all the same to
pacemaker in the end.

> On Tue, Jun 21, 2016 at 8:27 PM, Dmitri Maziuk wrote:
> 
> On 2016-06-20 17:19, Digimer wrote:
> 
>> Nikhil indicated that they could switch where traffic went up-stream
>> without issue, if I understood properly.
> 
> They have some interesting setup, but that notwithstanding: if split
> brain happens, some clients will connect to the "old master" and some
> to the "new master", depending on ARP updates. If there's a shared
> resource unavailable on one node, clients going there will error out.
> The other ones will not. It will work for some clients.
> 
> Cf. both nodes going into a stonith deathmatch and killing each other:
> the service is now unavailable for all clients. What I don't get is
> the blanket assertion that this is "more highly" available than
> option #1.
> 
> Dimitri
> 
> 


-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



Re: [ClusterLabs] Recovering after split-brain

2016-06-21 Thread Nikhil Utane
I don't get it. Pacemaker + Corosync is providing me so much
functionality.
For example, if we leave split-brain aside for a moment, it provides:
1) Discovery and cluster formation
2) Synchronization of data
3) Heartbeat mechanism
4) Swift failover of the resource
5) A guarantee that a given resource will be started on only 1 node

So in the case of a normal failover we need the basic functionality of the
resource being migrated to a standby node.
And it is giving me all of that.
So I don't agree that it needs to be as black and white as you say. Our
solution has different requirements than a typical HA solution. But that is
only for now. In the future we might have to implement all of it. So in
that sense Pacemaker gives us a good framework that we can extend.

BTW, we are not even using a virtual IP resource, which again I believe is
something that everyone employs.
Because of the nature of the service a small glitch is going to happen
either way, so virtual IPs don't give us any real benefit.
And with regard to the question of why we even have a standby rather than
keeping it active all the time: a two-node cluster is one possible
configuration, but the main requirement is to support N + 1. So the standby
node doesn't know which active node it has to take over for until a
failover occurs.

Your comments, however, have made me reconsider using fencing. It was not
that we didn't want to do it.
I just felt it may not be needed. So I'll definitely explore this further.

Thanks everyone for the comments.

-Regards
Nikhil

On Tue, Jun 21, 2016 at 10:17 PM, Digimer wrote:

> On 21/06/16 10:57 AM, Dmitri Maziuk wrote:
> > On 2016-06-20 17:19, Digimer wrote:
> >
> >> Nikhil indicated that they could switch where traffic went up-stream
> >> without issue, if I understood properly.
> >
> > They have some interesting setup, but that notwithstanding: if split
> > brain happens, some clients will connect to the "old master" and some
> > to the "new master", depending on ARP updates. If there's a shared
> > resource unavailable on one node, clients going there will error out.
> > The other ones will not. It will work for some clients.
> >
> > Cf. both nodes going into a stonith deathmatch and killing each other:
> > the service is now unavailable for all clients. What I don't get is
> > the blanket assertion that this is "more highly" available than
> > option #1.
> >
> > Dimitri
>
> As I've explained many times (here and on IRC):
>
> If you don't need to coordinate services/access, you don't need HA.
>
> If you do need to coordinate services/access, you need fencing.
>
> So if Nikhil really believes s/he doesn't need fencing and that
> split-brains are OK, then drop HA. If that is not the case, then s/he
> needs to implement fencing in pacemaker. It's pretty much that simple.
>
> --
> Digimer
> Papers and Projects: https://alteeve.ca/w/
> What if the cure for cancer is trapped in the mind of a person without
> access to education?
>


Re: [ClusterLabs] Recovering after split-brain

2016-06-21 Thread Ken Gaillot
On 06/20/2016 11:33 PM, Nikhil Utane wrote:
> Let me give the full picture about our solution. It will then make it
> easy to have the discussion.
> 
> We are looking at providing N + 1 redundancy to our application servers,
> i.e. 1 standby for up to N active (currently N<=5). Each server will have
> some unique configuration. The standby will store the configuration of
> all the active servers such that whichever server goes down, the standby
> loads that particular configuration and becomes active. The server that
> went down will now become the standby.
> We have bundled all the configuration that every server has into a
> resource such that during failover the resource is moved to the newly
> active server, and that way it takes up the personality of the server
> that went down. To put it differently, every active server has a
> 'unique' resource that is started by Pacemaker whereas the standby has
> none.
> 
> Our servers do not write anything to an external database; all the
> writing is done to the CIB file under the resource that each is currently
> managing. We also have some clients that connect to the active servers
> (1 client can connect to only 1 server, 1 server can have multiple
> clients) and provide service to end-users. Now the reason I say that
> split-brain is not an issue for us is because the clients can only
> connect to 1 of the active servers at any given time (we have to handle
> the case that all clients move together and do not get distributed). So
> even if two servers become active with the same personality, the clients
> can only connect to 1 of them. (The initial plan was to configure quorum,
> but later I was told that service availability is of utmost importance,
> and since the impact of split-brain is limited, we are thinking of doing
> away with it.)
> 
> Now the concern I have is, once the split is resolved, I would have 2
> actives, each having its own view of the resource, trying to synchronize
> the CIB. At this point I want the one that has the clients attached to
> it to win.
> I am thinking I can implement a monitor function that can bring down the
> resource if it doesn't find any clients attached to it within a given
> period of time. But to understand the Pacemaker behavior, what exactly
> would happen if the same resource is found to be active on two nodes
> after recovery?
> 
> -Thanks
> Nikhil

In general, monitor actions should not change the state of the service
in any way.

Pacemaker's behavior when finding multiple instances of a resource
running when there should be only one is configurable via the
multiple-active property:

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#_resource_meta_attributes

By default, it stops all the instances, and then starts one instance.
The alternatives are to stop all the instances and leave them stopped,
or to unmanage the resource (i.e. refuse to stop or start it).
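
For example, a minimal sketch of setting this with pcs (the resource name
"my_resource" is hypothetical; the crm shell has an equivalent):

  # Default: stop all instances, then start one.
  pcs resource meta my_resource multiple-active=stop_start

  # Stop all instances and leave them stopped:
  pcs resource meta my_resource multiple-active=stop_only

  # Unmanage the resource (refuse to stop or start it):
  pcs resource meta my_resource multiple-active=block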



Re: [ClusterLabs] Recovering after split-brain

2016-06-21 Thread Digimer
On 21/06/16 01:27 PM, Dimitri Maziuk wrote:
> On 06/21/2016 12:13 PM, Andrei Borzenkov wrote:
> 
>> You should not run pacemaker without some sort of fencing. This need not
>> be a network-controlled power socket (and a tiebreaker is not directly
>> related to fencing).
> 
> Yes, it can be a sysadmin-controlled power socket. It has to be a power
> socket; if you don't trust me, read Dejan's list of fencing devices.

You can now use redundant and complex fencing configurations in pacemaker.

Our company always uses this setup:

IPMI is the primary fence method (when it works, we can trust 'off'
100%, but it draws power from the host and is thus vulnerable).

A pair of switched PDUs serves as backup fencing (when it works, you are
confident that the outlets are opened, but you have to make sure the
cables are in the right place; however, it is entirely external to the
target).
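
Roughly, with pcs (all names, addresses and credentials below are made
up, and fence agent parameter names vary by agent and version):

  # Level 1: IPMI fencing, with a delay on node1 so that in a split the
  # two nodes don't fence each other simultaneously (a fence deathmatch):
  pcs stonith create fence_node1_ipmi fence_ipmilan \
      ip=10.0.0.1 username=admin password=secret \
      pcmk_host_list=node1 delay=15
  pcs stonith create fence_node2_ipmi fence_ipmilan \
      ip=10.0.0.2 username=admin password=secret \
      pcmk_host_list=node2

  # Level 2: switched PDU outlets as backup if IPMI is unreachable:
  pcs stonith create fence_node1_pdu fence_apc_snmp \
      ip=10.0.0.10 port=1 pcmk_host_list=node1
  pcs stonith create fence_node2_pdu fence_apc_snmp \
      ip=10.0.0.10 port=2 pcmk_host_list=node2

  pcs stonith level add 1 node1 fence_node1_ipmi
  pcs stonith level add 2 node1 fence_node1_pdu
  pcs stonith level add 1 node2 fence_node2_ipmi
  pcs stonith level add 2 node2 fence_node2_pdu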

> Tiebreaking is directly related to figuring out which of the two nodes
> is to be fenced, because neither of them can tell on its own.

See my comment on 'delay="15"'. You do NOT need a 3-node cluster/tie
breaker. We've run nothing but 2-node clusters for years all over North
America, and we've heard of people running our system globally. With the
above fence setup and a proper delay, it has never once been a problem.
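
For reference, the 2-node case leans on corosync's two_node vote handling
instead of a tiebreaker; a minimal corosync 2.x quorum stanza is something
like:

  quorum {
      provider: corosync_votequorum
      # two_node lets each node keep quorum when its peer dies;
      # fencing (with a delay) then resolves an actual split.
      two_node: 1
  }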

>> I fail to see how heartbeat makes any difference here, sorry.
> 
> Third node and remote-controlled PDU were not a requirement for
> haresources mode. If I wanted to run it so that when it breaks I get to
> keep the pieces, I could.

You technically can in pacemaker, too, but it's dumb in any HA
environment. As soon as you make assumptions, you open up the chance of
being wrong.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



Re: [ClusterLabs] Recovering after split-brain

2016-06-21 Thread Andrei Borzenkov
On 21.06.2016 20:05, Dimitri Maziuk wrote:
> On 06/21/2016 11:47 AM, Digimer wrote:
> 
>> If you don't need to coordinate services/access, you don't need HA.
>>
>> If you do need to coordinate services/access, you need fencing.
> 
> So what you're saying is we *cannot* run a pacemaker cluster without a
> tiebreaker node *and* a network-controlled power socket.
> 

You should not run pacemaker without some sort of fencing. This need not
be a network-controlled power socket (and a tiebreaker is not directly
related to fencing).

If you do not care about fencing, why not simply start services on both
nodes at boot time and be done with it?

> I knew that, actually; that's why I hung on to heartbeat for as long as

I fail to see how heartbeat makes any difference here, sorry.

> I could. It'd be nice to have it spelled out in bold at the start of
> every "explained from scratch" document on clusterlabs.org for the young
> players.
> 
> 
> 


Re: [ClusterLabs] Recovering after split-brain

2016-06-21 Thread Dimitri Maziuk
On 06/21/2016 11:47 AM, Digimer wrote:

> If you don't need to coordinate services/access, you don't need HA.
> 
> If you do need to coordinate services/access, you need fencing.

So what you're saying is we *cannot* run a pacemaker cluster without a
tiebreaker node *and* a network-controlled power socket.

I knew that, actually; that's why I hung on to heartbeat for as long as
I could. It'd be nice to have it spelled out in bold at the start of
every "explained from scratch" document on clusterlabs.org for the young
players.

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu





Re: [ClusterLabs] Recovering after split-brain

2016-06-21 Thread Digimer
On 21/06/16 10:57 AM, Dmitri Maziuk wrote:
> On 2016-06-20 17:19, Digimer wrote:
> 
>> Nikhil indicated that they could switch where traffic went up-stream
>> without issue, if I understood properly.
> 
> They have some interesting setup, but that notwithstanding: if split
> brain happens, some clients will connect to the "old master" and some
> to the "new master", depending on ARP updates. If there's a shared
> resource unavailable on one node, clients going there will error out.
> The other ones will not. It will work for some clients.
> 
> Cf. both nodes going into a stonith deathmatch and killing each other:
> the service is now unavailable for all clients. What I don't get is
> the blanket assertion that this is "more highly" available than
> option #1.
> 
> Dimitri

As I've explained many times (here and on IRC):

If you don't need to coordinate services/access, you don't need HA.

If you do need to coordinate services/access, you need fencing.

So if Nikhil really believes s/he doesn't need fencing and that
split-brains are OK, then drop HA. If that is not the case, then s/he
needs to implement fencing in pacemaker. It's pretty much that simple.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



Re: [ClusterLabs] Recovering after split-brain

2016-06-21 Thread Dmitri Maziuk

On 2016-06-20 17:19, Digimer wrote:

> Nikhil indicated that they could switch where traffic went up-stream
> without issue, if I understood properly.

They have some interesting setup, but that notwithstanding: if split
brain happens, some clients will connect to the "old master" and some
to the "new master", depending on ARP updates. If there's a shared
resource unavailable on one node, clients going there will error out.
The other ones will not. It will work for some clients.

Cf. both nodes going into a stonith deathmatch and killing each other:
the service is now unavailable for all clients. What I don't get is
the blanket assertion that this is "more highly" available than
option #1.

Dimitri



Re: [ClusterLabs] Recovering after split-brain

2016-06-20 Thread Nikhil Utane
Let me give the full picture about our solution. It will then make it easy
to have the discussion.

We are looking at providing N + 1 redundancy to our application servers,
i.e. 1 standby for up to N active (currently N<=5). Each server will have
some unique configuration. The standby will store the configuration of all
the active servers such that whichever server goes down, the standby loads
that particular configuration and becomes active. The server that went down
will now become the standby.
We have bundled all the configuration that every server has into a resource
such that during failover the resource is moved to the newly active server,
and that way it takes up the personality of the server that went down. To
put it differently, every active server has a 'unique' resource that is
started by Pacemaker whereas the standby has none.
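
As a rough sketch of what I mean (all names are made up, and
ocf:heartbeat:Dummy stands in for our real agent):

  # One placeholder resource per active "personality"; its instance
  # attributes hold that server's configuration in the CIB.
  pcs resource create server1_personality ocf:heartbeat:Dummy \
      op monitor interval=30s

  # Writing a configuration value "under the resource" as an instance
  # attribute stored in the CIB:
  crm_resource --resource server1_personality \
      --set-parameter config_blob --parameter-value "..."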

Our servers do not write anything to an external database; all the writing
is done to the CIB file under the resource that each is currently managing.
We also have some clients that connect to the active servers (1 client can
connect to only 1 server, 1 server can have multiple clients) and provide
service to end-users. Now the reason I say that split-brain is not an issue
for us is because the clients can only connect to 1 of the active servers at
any given time (we have to handle the case that all clients move together
and do not get distributed). So even if two servers become active with the
same personality, the clients can only connect to 1 of them. (The initial
plan was to configure quorum, but later I was told that service availability
is of utmost importance, and since the impact of split-brain is limited, we
are thinking of doing away with it.)

Now the concern I have is, once the split is resolved, I would have 2
actives, each having its own view of the resource, trying to synchronize
the CIB. At this point I want the one that has the clients attached to it
to win.
I am thinking I can implement a monitor function that can bring down the
resource if it doesn't find any clients attached to it within a given
period of time. But to understand the Pacemaker behavior, what exactly
would happen if the same resource is found to be active on two nodes after
recovery?

-Thanks
Nikhil



On Tue, Jun 21, 2016 at 3:49 AM, Digimer wrote:

> On 20/06/16 05:58 PM, Dimitri Maziuk wrote:
> > On 06/20/2016 03:58 PM, Digimer wrote:
> >
> >> Then wouldn't it be a lot better to just run your services on both nodes
> >> all the time and take HA out of the picture? Availability is predicated
> >> on building the simplest system possible. If you have no concerns about
> >> uncoordinated access, then make life simpler and remove pacemaker
> >> entirely.
> >
> > Obviously you'd have to remove the other node as well since you now
> > can't have the single service access point anymore.
>
> Nikhil indicated that they could switch where traffic went up-stream
> without issue, if I understood properly.
>
> --
> Digimer
> Papers and Projects: https://alteeve.ca/w/
> What if the cure for cancer is trapped in the mind of a person without
> access to education?
>


Re: [ClusterLabs] Recovering after split-brain

2016-06-20 Thread Digimer
On 20/06/16 05:58 PM, Dimitri Maziuk wrote:
> On 06/20/2016 03:58 PM, Digimer wrote:
> 
>> Then wouldn't it be a lot better to just run your services on both nodes
>> all the time and take HA out of the picture? Availability is predicated
>> on building the simplest system possible. If you have no concerns about
>> uncoordinated access, then make life simpler and remove pacemaker entirely.
> 
> Obviously you'd have to remove the other node as well since you now
> can't have the single service access point anymore.

Nikhil indicated that they could switch where traffic went up-stream
without issue, if I understood properly.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



Re: [ClusterLabs] Recovering after split-brain

2016-06-20 Thread Dimitri Maziuk
On 06/20/2016 03:58 PM, Digimer wrote:

> Then wouldn't it be a lot better to just run your services on both nodes
> all the time and take HA out of the picture? Availability is predicated
> on building the simplest system possible. If you have no concerns about
> uncoordinated access, then make life simpler and remove pacemaker entirely.

Obviously you'd have to remove the other node as well since you now
can't have the single service access point anymore.

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu





Re: [ClusterLabs] Recovering after split-brain

2016-06-20 Thread Digimer
On 20/06/16 09:30 AM, Nikhil Utane wrote:
> Hi,
> 
> For our solution we are making a conscious choice to not use
> quorum/fencing, as for us service availability is more important than
> having 2 nodes take up the same active role. Split-brain is not an issue
> for us (at least I think that way) since we have a second line of
Then wouldn't it be a lot better to just run your services on both nodes
all the time and take HA out of the picture? Availability is predicated
on building the simplest system possible. If you have no concerns about
uncoordinated access, then make life simpler and remove pacemaker entirely.

> defense. We have clients who can connect to only one of the two active
> nodes. So in that sense, even if we end up with 2 nodes becoming active,
> since the clients can connect to only 1 of the active nodes, we should
> not have any issue.
> 
> Now my question is what happens after recovering from split-brain, since
> the resource will be active on both nodes. From the application point of
> view we want to be able to find out which node is servicing the clients
> and keep that operational and make the other one the standby.
> 
> Does Pacemaker make it easy to do this kind of thing through some means?
> Are there any issues that I am completely unaware of due to letting
> split-brain occur?
> 
> -Thanks
> Nikhil


-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



Re: [ClusterLabs] Recovering after split-brain

2016-06-20 Thread Ken Gaillot
On 06/20/2016 08:30 AM, Nikhil Utane wrote:
> Hi,
> 
> For our solution we are making a conscious choice to not use
> quorum/fencing, as for us service availability is more important than
> having 2 nodes take up the same active role. Split-brain is not an issue
> for us (at least I think that way) since we have a second line of
> defense. We have clients who can connect to only one of the two active
> nodes. So in that sense, even if we end up with 2 nodes becoming active,
> since the clients can connect to only 1 of the active nodes, we should
> not have any issue.
> 
> Now my question is what happens after recovering from split-brain, since
> the resource will be active on both nodes. From the application point of
> view we want to be able to find out which node is servicing the clients
> and keep that operational and make the other one the standby.
> 
> Does Pacemaker make it easy to do this kind of thing through some means?
> Are there any issues that I am completely unaware of due to letting
> split-brain occur?
> 
> -Thanks
> Nikhil

Usually, split brain is most destructive when the two nodes need to
synchronize data in some way (DRBD, shared storage, cluster file
systems, replication, etc.). If both nodes attempt to write without
being able to coordinate with each other, it usually results in
incompatible data stores that cause big recovery headaches (sometimes
throw-it-away-and-restore-from-backup headaches). For a resource such as
a floating IP, the consequences are less severe, but it can result in
the service becoming unusable (if both nodes claim the IP, packets go
every which way).
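
For illustration, such a floating IP resource might be defined like this
(the name and address are hypothetical):

  pcs resource create cluster_vip ocf:heartbeat:IPaddr2 \
      ip=192.168.122.100 cidr_netmask=24 \
      op monitor interval=30s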

In the scenario you describe, if a split brain occurs and then is
resolved, Pacemaker will likely stop the services on both nodes, then
start them on one node.

The main questions I see are (1) does your service require any sort of
coordination/synchronization between the two nodes, especially of data;
and (2) how do clients know which node to connect to?



Re: [ClusterLabs] Recovering after split-brain

2016-06-20 Thread Dmitri Maziuk

On 2016-06-20 09:13, Jehan-Guillaume de Rorthais wrote:

> I've heard this kind of argument multiple times in the field, but sooner
> or later these clusters actually had a split-brain scenario with clients
> connected on both sides, some very bad corruption, data loss, etc.

I'm sure it's a very helpful answer, but the question was about
suspending pacemaker while I manually fix a problem with the resource.


I too would very much like to know how to get pacemaker to "unmonitor"
my resources and not get in the way while I'm updating and/or fixing them.

In heartbeat, mon was a completely separate component that could be moved
out of the way when needed.
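
The closest pacemaker equivalents I've found are maintenance mode and
unmanaging a single resource; roughly (the resource name is made up, and
the crm shell has equivalents):

  # Cluster-wide: pacemaker stops starting/stopping resources.
  pcs property set maintenance-mode=true
  # ... fix things by hand ...
  pcs property set maintenance-mode=false

  # Per-resource alternative: leave it running but unmanaged.
  pcs resource unmanage my_service
  pcs resource manage my_service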


In pacemaker I have now had to power-cycle the nodes several times,
because in a 2-node active/passive cluster without quorum and fencing,
set up like

- drbd master-slave
- drbd filesystem (colocated and ordered after the master)
- symlink (colocated and ordered after the fs)
- service (colocated and ordered after the symlink)

when the service fails to start due to user error, pacemaker fscks up
everything up to and including the master-slave drbd, and "clearing"
errors on the service does not fix the symlink and the rest of it. (So
far I've been unable to reliably reproduce it in testing environments;
Murphy made sure it only happens on production clusters.)


Right now it seems to me that for a DRBD split brain I'll have to stop the
cluster on the victim node, do the manual split-brain recovery, and restart
the cluster after sync is complete. Is that correct?
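
In other words, something like this sketch (the resource name "r0" is made
up; this is DRBD 8.4-style syntax, older releases use
"drbdadm -- --discard-my-data connect r0"):

  # On the split-brain victim (the node whose changes get discarded):
  pcs cluster stop
  drbdadm disconnect r0
  drbdadm secondary r0
  drbdadm connect --discard-my-data r0

  # On the survivor, only if it dropped to StandAlone:
  drbdadm connect r0

  # Once resync completes, rejoin the cluster on the victim:
  pcs cluster start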


Dimitri




Re: [ClusterLabs] Recovering after split-brain

2016-06-20 Thread Jehan-Guillaume de Rorthais
On Mon, 20 Jun 2016 19:00:12 +0530,
Nikhil Utane wrote:

> Hi,
> 
> For our solution we are making a conscious choice to not use
> quorum/fencing, as for us service availability is more important than
> having 2 nodes take up the same active role. Split-brain is not an issue
> for us (at least I think that way) since we have a second line of defense.
> We have clients who can connect to only one of the two active nodes. So in
> that sense, even if we end up with 2 nodes becoming active, since the
> clients can connect to only 1 of the active nodes, we should not have any
> issue.

I've heard this kind of argument multiple times in the field, but sooner or
later these clusters actually had a split-brain scenario with clients
connected on both sides, some very bad corruption, data loss, etc.

Never underestimate the chaos. It will always find a way to surprise you. If
there is a breach somewhere, sooner or later everything will blow up.

Regards,
-- 
Jehan-Guillaume de Rorthais
Dalibo
