Re: [ClusterLabs] Recovering after split-brain

2016-06-20 Thread Nikhil Utane
Let me give the full picture of our solution. That will make it easier to
have the discussion.

We are looking at providing N+1 redundancy for our application servers,
i.e., 1 standby for up to N active servers (currently N<=5). Each server will have
some unique configuration. The standby will store the configuration of all
the active servers such that whichever server goes down, the standby loads
that particular configuration and becomes active. The server that went down
will now become standby.
We have bundled each server's configuration into a resource so that during
failover the resource is moved to the newly active server, which thereby
takes on the personality of the server that went down. To put it
differently, every active server has a 'unique' resource that is started by
Pacemaker, whereas the standby has none.
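
Roughly, the per-server configuration looks like the sketch below (a sketch
only; the resource, agent and node names are placeholders, with
ocf:heartbeat:Dummy standing in for our custom agent):

    # One 'personality' resource per active server, each preferring its home
    # node; the standby (node6 here) carries none by default.
    pcs resource create server1_conf ocf:heartbeat:Dummy op monitor interval=30s
    pcs constraint location server1_conf prefers node1=100
    pcs constraint location server1_conf prefers node6=50
    # ...and similarly for server2_conf .. server5_conf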

Our servers do not write anything to an external database; all the writing
is done to the CIB file, under the resource that each server is currently
managing. We also have some clients that connect to the active servers (1
client can connect to only 1 server; 1 server can have multiple clients)
and provide service to end-users. Now, the reason I say that split-brain is
not an issue for us is because the clients can only connect to 1 of the
active servers at any given time (we have to handle the case where all
clients move together and do not get distributed). So even if two servers
become active with the same personality, the clients can only connect to 1
of them. (The initial plan was to configure quorum, but later I was told
that service availability is of utmost importance, and since the impact of
split-brain is limited, we are thinking of doing away with it.)
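
For reference, roughly what we have in mind on the cluster side (a sketch
only, assuming the pcs shell and crm_resource are available; whether to
actually run without quorum/fencing is exactly the trade-off described
above):

    # Run without fencing and keep services up even without quorum.
    pcs property set stonith-enabled=false
    pcs property set no-quorum-policy=ignore
    # Configuration updates are written into the CIB as parameters of the
    # resource the server currently owns (placeholder parameter name):
    crm_resource --resource server1_conf --set-parameter config_blob --parameter-value "..."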

Now the concern I have is that once the split is resolved, I would have two
actives, each having its own view of the resource, trying to synchronize
the CIB. At this point I want the one that has the clients attached to it
to win.
I am thinking I can implement a monitor function that can bring down the
resource if it doesn't find any clients attached to it within a given
period of time. But to understand the Pacemaker behavior, what exactly
would happen if the same resource is found to be active on two nodes after
recovery?
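
Something along these lines for the agent's monitor action (a rough sketch,
not our final agent; is_service_running, check_clients_connected and the
grace-period handling are placeholders for whatever detection we implement):

    personality_monitor() {
        if ! is_service_running; then
            return $OCF_NOT_RUNNING      # 7: resource is cleanly stopped
        fi
        if ! check_clients_connected; then
            # No clients seen within the grace period: report a failure so
            # Pacemaker stops this copy and the instance that still has
            # clients attached "wins".
            return $OCF_ERR_GENERIC      # 1: generic failure
        fi
        return $OCF_SUCCESS              # 0: running and serving clients
    }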

-Thanks
Nikhil



On Tue, Jun 21, 2016 at 3:49 AM, Digimer  wrote:

> On 20/06/16 05:58 PM, Dimitri Maziuk wrote:
> > On 06/20/2016 03:58 PM, Digimer wrote:
> >
> >> Then wouldn't it be a lot better to just run your services on both nodes
> >> all the time and take HA out of the picture? Availability is predicated
> >> on building the simplest system possible. If you have no concerns about
> >> uncoordinated access, then make life simpler and remove pacemaker
> entirely.
> >
> > Obviously you'd have to remove the other node as well since you now
> > can't have the single service access point anymore.
>
> Nikhil indicated that they could switch where traffic went up-stream
> without issue, if I understood properly.
>
> --
> Digimer
> Papers and Projects: https://alteeve.ca/w/
> What if the cure for cancer is trapped in the mind of a person without
> access to education?
>


Re: [ClusterLabs] Recovering after split-brain

2016-06-20 Thread Digimer
On 20/06/16 05:58 PM, Dimitri Maziuk wrote:
> On 06/20/2016 03:58 PM, Digimer wrote:
> 
>> Then wouldn't it be a lot better to just run your services on both nodes
>> all the time and take HA out of the picture? Availability is predicated
>> on building the simplest system possible. If you have no concerns about
>> uncoordinated access, then make life simpler and remove pacemaker entirely.
> 
> Obviously you'd have to remove the other node as well since you now
> can't have the single service access point anymore.

Nikhil indicated that they could switch where traffic went up-stream
without issue, if I understood properly.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



Re: [ClusterLabs] Recovering after split-brain

2016-06-20 Thread Dimitri Maziuk
On 06/20/2016 03:58 PM, Digimer wrote:

> Then wouldn't it be a lot better to just run your services on both nodes
> all the time and take HA out of the picture? Availability is predicated
> on building the simplest system possible. If you have no concerns about
> uncoordinated access, then make life simpler and remove pacemaker entirely.

Obviously you'd have to remove the other node as well since you now
can't have the single service access point anymore.

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu





Re: [ClusterLabs] Recovering after split-brain

2016-06-20 Thread Digimer
On 20/06/16 09:30 AM, Nikhil Utane wrote:
> Hi,
> 
> For our solution we are making a conscious choice to not use
> quorum/fencing as for us service availability is more important than
> having 2 nodes take up the same active role. Split-brain is not an issue
> for us (at least I think that way) since we have a second line of

Then wouldn't it be a lot better to just run your services on both nodes
all the time and take HA out of the picture? Availability is predicated
on building the simplest system possible. If you have no concerns about
uncoordinated access, then make life simpler and remove pacemaker entirely.

> defense. We have clients who can connect to only one of the two active
> nodes. So in that sense, even if we end up with 2 nodes becoming active,
> since the clients can connect to only 1 of the active nodes, we should
> not have any issue.
> 
> Now my question is what happens after recovering from split-brain since
> the resource will be active on both the nodes. From an application point
> of view we want to be able to find out which node is servicing the clients
> and keep that operational and make the other one standby.
> 
> Does Pacemaker make it easy to do this kind of thing through some means?
> Are there any issues that I am completely unaware of due to letting
> split-brain occur?
> 
> -Thanks
> Nikhil


-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



Re: [ClusterLabs] Recovering after split-brain

2016-06-20 Thread Ken Gaillot
On 06/20/2016 08:30 AM, Nikhil Utane wrote:
> Hi,
> 
> For our solution we are making a conscious choice to not use
> quorum/fencing as for us service availability is more important than
> having 2 nodes take up the same active role. Split-brain is not an issue
> for us (at least I think that way) since we have a second line of
> defense. We have clients who can connect to only one of the two active
> nodes. So in that sense, even if we end up with 2 nodes becoming active,
> since the clients can connect to only 1 of the active nodes, we should
> not have any issue.
> 
> Now my question is what happens after recovering from split-brain since
> the resource will be active on both the nodes. From an application point
> of view we want to be able to find out which node is servicing the clients
> and keep that operational and make the other one standby.
> 
> Does Pacemaker make it easy to do this kind of thing through some means?
> Are there any issues that I am completely unaware of due to letting
> split-brain occur?
> 
> -Thanks
> Nikhil

Usually, split brain is most destructive when the two nodes need to
synchronize data in some way (DRBD, shared storage, cluster file
systems, replication, etc.). If both nodes attempt to write without
being able to coordinate with each other, it usually results in
incompatible data stores that cause big recovery headaches (sometimes
throw-it-away-and-restore-from-backup headaches). For a resource such as
a floating IP, the consequences are less severe, but it can result in
the service becoming unusable (if both nodes claim the IP, packets go
every which way).

In the scenario you describe, if a split brain occurs and then is
resolved, Pacemaker will likely stop the services on both nodes, then
start them on one node.

The main questions I see are (1) does your service require any sort of
coordination/synchronization between the two nodes, especially of data;
and (2) how do clients know which node to connect to?



Re: [ClusterLabs] Cluster reboot for maintenance

2016-06-20 Thread Ken Gaillot
On 06/20/2016 07:45 AM, ma...@nucleus.it wrote:
> Hi,
> I have a two-node cluster with some VMs (pacemaker resources) running on
> the two hypervisors:
> pacemaker-1.0.10
> corosync-1.3.0
> 
> I need to do some maintenance, so I need to:
> - put the cluster in maintenance mode so the cluster doesn't
>   touch/start/stop/monitor the VMs
> - update the VMs
> - stop the VMs
> - stop the cluster stack (corosync/pacemaker) so it does not
>   start/stop/monitor the VMs
> - reboot the hypervisors
> - start the cluster stack
> - take the cluster out of maintenance mode so it starts all the VMs
> 
> What is the correct way to do that on the corosync/pacemaker side?
> 
> 
> Best regards
> Marco

Maintenance mode provides this ability. Set the maintenance-mode cluster
property to true, do whatever you want, then set it back to false when
done.

That said, I've never used pacemaker/corosync versions that old, so I'm
not 100% sure that applies to those versions, though I would guess it does.
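
Something along these lines (a sketch; with a stack that old you most likely
have the crm shell rather than pcs, so both forms are shown):

    # crm shell (likely what is available with pacemaker 1.0):
    crm configure property maintenance-mode=true
    #   ...update VMs, stop them, stop corosync/pacemaker, reboot, restart...
    crm configure property maintenance-mode=false

    # pcs equivalent on newer stacks:
    pcs property set maintenance-mode=true
    pcs property set maintenance-mode=false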



Re: [ClusterLabs] restarting pacemakerd

2016-06-20 Thread Ken Gaillot
On 06/18/2016 05:15 AM, Ferenc Wágner wrote:
> Hi,
> 
> Could somebody please elaborate a little why the pacemaker systemd
> service file contains "Restart=on-failure"?  I mean that a failed node
> gets fenced anyway, so most of the time this would be a futile effort.
> On the other hand, one could argue that restarting failed services
> should be the default behavior of systemd (or any init system).  Still,
> it is not.  I'd be grateful for some insight into the matter.

To clarify one point, the configuration mentioned here is systemd
configuration, not part of pacemaker configuration or operation. Systemd
monitors the processes it launches. With "Restart=on-failure", systemd
will re-launch pacemaker in situations systemd considers a "failure"
(exiting nonzero, exiting with a core dump, etc.).

Systemd does have various rate-limiting options, which we leave as
default in the pacemaker unit file. Perhaps one day we could try to come
up with ideal values, but it should be a rare situation, and admins can
always tune them as desired for their system using an override file.
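
For example, an override might look like this (illustrative only, not a
recommendation of specific values):

    # "systemctl edit pacemaker" creates a drop-in such as
    # /etc/systemd/system/pacemaker.service.d/override.conf
    [Service]
    Restart=on-abnormal
    # Rate limiting can be tuned as well; on recent systemd the knobs are
    # StartLimitIntervalSec= and StartLimitBurst=.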

The goal of restart is of course to have a slightly better shot at
recovery. You're right, if fencing is configured and quorum is retained,
the node will almost certainly get fenced anyway, but those conditions
aren't always true.

Systemd upstream recommends Restart=on-failure or Restart=on-abnormal
for all long-running services. on-abnormal would probably be better for
pacemaker, but it's not supported in older systemd versions.



Re: [ClusterLabs] Recovering after split-brain

2016-06-20 Thread Dmitri Maziuk

On 2016-06-20 09:13, Jehan-Guillaume de Rorthais wrote:


I've heard this kind of argument multiple times in the field, but sooner or
later these clusters actually had a split-brain scenario with clients
connected on both sides, some very bad corruption, data loss, etc.


I'm sure it's a very helpful answer but the question was about 
suspending pacemaker while I manually fix a problem with the resource.


I too would very much like to know how to get pacemaker to "unmonitor" 
my resources and not get in the way while I'm updating and/or fixing them.


In heartbeat, mon was a completely separate component that could be moved
out of the way when needed.


In pacemaker I now had to power-cycle the nodes several times because in 
a 2-node active/passive cluster without quorum and fencing set up like

- drbd master-slave
- drbd filesystem (colocated and ordered after the master)
- symlink (colocated and ordered after the fs)
- service (colocated and ordered after the symlink)
-- when the service fails to start due to user error, pacemaker fscks up
everything up to and including the master-slave DRBD, and "clearing"
errors on the service does not fix the symlink and the rest of it. (So
far I've been unable to reliably reproduce it in testing environments;
Murphy made sure it only happens on production clusters.)


Right now it seems to me that for a DRBD split brain I'll have to stop the
cluster on the victim node, do manual split-brain recovery, and restart the
cluster after the sync is complete. Is that correct?
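
Concretely, I'm picturing something like this (a sketch; 'r0' and 'ms_drbd'
are placeholders for the real resource names, and the --discard-my-data
option placement differs slightly between drbd 8.3 and 8.4):

    # On the victim node, keep pacemaker's hands off the DRBD resource first
    crm resource unmanage ms_drbd        # or: pcs resource unmanage ms_drbd
    drbdadm disconnect r0
    drbdadm secondary r0
    drbdadm connect --discard-my-data r0
    # On the survivor, if its connection state shows StandAlone:
    drbdadm connect r0
    # Once the resync has finished, hand control back:
    crm resource manage ms_drbd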


Dimitri




Re: [ClusterLabs] Recovering after split-brain

2016-06-20 Thread Jehan-Guillaume de Rorthais
On Mon, 20 Jun 2016 19:00:12 +0530, Nikhil Utane  wrote:

> Hi,
> 
> For our solution we are making a conscious choice to not use quorum/fencing
> as for us service availability is more important than having 2 nodes take
> up the same active role. Split-brain is not an issue for us (at least I
> think that way) since we have a second line of defense. We have clients who
> can connect to only one of the two active nodes. So in that sense, even if
> we end up with 2 nodes becoming active, since the clients can connect to
> only 1 of the active nodes, we should not have any issue.

I've heard this kind of argument multiple times in the field, but sooner or
later these clusters actually had a split-brain scenario with clients
connected on both sides, some very bad corruption, data loss, etc.

Never underestimate the chaos. It will always find a way to surprise you. If
there is a breach somewhere, sooner or later everything will blow up.

Regards,
-- 
Jehan-Guillaume de Rorthais
Dalibo



[ClusterLabs] Recovering after split-brain

2016-06-20 Thread Nikhil Utane
Hi,

For our solution we are making a conscious choice not to use quorum/fencing,
as for us service availability is more important than having 2 nodes take up
the same active role. Split-brain is not an issue for us (at least I think
so) since we have a second line of defense. We have clients who can connect
to only one of the two active nodes. So in that sense, even if we end up
with 2 nodes becoming active, since the clients can connect to only 1 of the
active nodes, we should not have any issue.

Now my question is what happens after recovering from split-brain, since the
resource will be active on both nodes. From an application point of view we
want to be able to find out which node is servicing the clients, keep that
one operational, and make the other one standby.

Does Pacemaker make it easy to do this kind of thing through some means?
Are there any issues that I am completely unaware of due to letting
split-brain occur?

-Thanks
Nikhil


[ClusterLabs] Cluster reboot for maintenance

2016-06-20 Thread marco
Hi,
I have a two-node cluster with some VMs (pacemaker resources) running on
the two hypervisors:
pacemaker-1.0.10
corosync-1.3.0

I need to do some maintenance, so I need to:
- put the cluster in maintenance mode so the cluster doesn't
  touch/start/stop/monitor the VMs
- update the VMs
- stop the VMs
- stop the cluster stack (corosync/pacemaker) so it does not
  start/stop/monitor the VMs
- reboot the hypervisors
- start the cluster stack
- take the cluster out of maintenance mode so it starts all the VMs

What is the correct way to do that on the corosync/pacemaker side?


Best regards
Marco

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org