Re: [ClusterLabs] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-26 Thread Ken Gaillot
On Wed, 2020-02-26 at 06:52 +0200, Strahil Nikolov wrote:
> On February 26, 2020 12:30:24 AM GMT+02:00, Ken Gaillot <
> kgail...@redhat.com> wrote:
> > Hi all,
> > 
> > We are a couple of months away from starting the release cycle for
> > Pacemaker 2.0.4. I'll highlight some new features between now and
> > then.
> > 
> > First we have shutdown locks. This is a narrow use case that I don't
> > expect a lot of interest in, but it helps give pacemaker feature
> > parity with proprietary HA systems, which can help users feel more
> > comfortable switching to pacemaker and open source.
> >
> > The use case is a large organization with few cluster experts and
> > many junior system administrators who reboot hosts for OS updates
> > during planned maintenance windows, without any knowledge of what the
> > host does. The cluster runs services that have a preferred node and
> > take a very long time to start.
> >
> > In this scenario, pacemaker's default behavior of moving the service
> > to a failover node when the node shuts down, and moving it back when
> > the node comes back up, results in needless downtime compared to just
> > leaving the service down for the few minutes needed for a reboot.
> >
> > The goal could be accomplished with existing pacemaker features.
> > Maintenance mode wouldn't work because the node is being rebooted.
> > But you could figure out what resources are active on the node, and
> > use a location constraint with a rule to ban them on all other nodes
> > before shutting down. That's a lot of work for something the cluster
> > can figure out automatically.
> >
> > Pacemaker 2.0.4 will offer a new cluster property, shutdown-lock,
> > defaulting to false to keep the current behavior. If shutdown-lock is
> > set to true, any resources active on a node when it is cleanly shut
> > down will be "locked" to the node (kept down rather than recovered
> > elsewhere). Once the node comes back up and rejoins the cluster, they
> > will be "unlocked" (free to move again if circumstances warrant).
> >
> > An additional cluster property, shutdown-lock-limit, allows you to
> > set a timeout for the locks so that if the node doesn't come back
> > within that time, the resources are free to be recovered elsewhere.
> > This defaults to no limit.
> >
> > If you decide while the node is down that you need the resource to be
> > recovered, you can manually clear a lock with "crm_resource --refresh"
> > specifying both --node and --resource.
> >
> > There are some limitations using shutdown locks with Pacemaker Remote
> > nodes, so I'd avoid that with the upcoming release, though it is
> > possible.
> 
> Hi Ken,
> 
> Can it be 'shutdown-lock-timeout' instead of 'shutdown-lock-limit' ?

I thought about that, but I wanted to be clear that this is a maximum
bound. "timeout" could be a little ambiguous as to whether it is a
maximum or how long a lock will always last. On the other hand, "limit"
doesn't make it obvious that the value should be a time duration. I
could see it going either way.

> Also, I think that the default value could be something more
> reasonable - like 30min. Usually 30min is OK if you don't patch the
> firmware, and 180min is the maximum if you do patch the firmware.

The primary goal is to ease the transition from other HA software,
which doesn't even offer the equivalent of shutdown-lock-limit, so I
wanted the default to match that behavior. Also, "usually" is a
minefield :)

> The use case is odd. I have been in the same situation, and our
> solution was to train the team (internally) instead of using such a
> feature.

Right, this is designed for situations where that isn't feasible :)

Though even with trained staff, this does make it easier, since you
don't have to figure out yourself what's active on the node.

> The interesting part will be the behaviour of the local cluster
> stack when updates happen. The risk is high that the node will be
> fenced due to unresponsiveness (during the update), or because
> corosync/pacemaker still use an old function that was changed in the
> libs.

That is a risk, but presumably one that a user transitioning from
another product would already be familiar with.

> Best Regards,
> Strahil Nikolov
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-26 Thread Ken Gaillot
On Wed, 2020-02-26 at 14:45 +0900, Ondrej wrote:
> Hi Ken,
> 
> On 2/26/20 7:30 AM, Ken Gaillot wrote:
> > The use case is a large organization with few cluster experts and
> > many junior system administrators who reboot hosts for OS updates
> > during planned maintenance windows, without any knowledge of what the
> > host does. The cluster runs services that have a preferred node and
> > take a very long time to start.
> >
> > In this scenario, pacemaker's default behavior of moving the service
> > to a failover node when the node shuts down, and moving it back when
> > the node comes back up, results in needless downtime compared to just
> > leaving the service down for the few minutes needed for a reboot.
> 
> 1. Do I understand it correctly that this scenario applies when the
> system gracefully reboots (the pacemaker service is stopped by the
> system shutting down), and also when users manually stop the cluster
> without rebooting the node - for example with `pcs cluster stop`?

Exactly. The idea is that the user wants HA for node or resource
failures, but not for clean cluster stops.
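
(To make that concrete - a sketch assuming pcs and systemd - all of the
following count as clean stops and, with shutdown-lock=true, would lock
resources to the node:

    pcs cluster stop          # stop the cluster stack on this node
    systemctl stop pacemaker  # or stop the service directly
    reboot                    # systemd stops pacemaker cleanly on shutdown

whereas a node crash or a fence is still treated as a failure and
recovered as usual.)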

> > If you decide while the node is down that you need the resource to be
> > recovered, you can manually clear a lock with "crm_resource --refresh"
> > specifying both --node and --resource.
> 
> 2. I'm interested in how the situation will look in the 'crm_mon'
> output or in 'crm_simulate'. Will there be some indication of why the
> resources are not moving, like 'blocked-shutdown-lock', or will they
> just appear as not moving (Stopped)?

Yes, resources will be shown as "Stopped (LOCKED)".

> Will this look different from a situation where, for example, the
> resource is just not allowed by a constraint to run on other nodes?

Only in logs and cluster status; internally it is implemented as
implicit constraints banning the resources from every other node.

Another point I should clarify is that the lock/constraint remains in
place until the node rejoins the cluster *and* the resource starts
again on that node. That ensures that the node is preferred even if
stickiness was the only thing holding the resource to the node
previously.
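
To illustrate (and this is only an illustration - the real lock is
internal and never appears as an editable constraint), the effect is
roughly what you would get from an explicit ban like the following,
where "big-db" and "node1" are made-up names:

    <rsc_location id="lock-big-db-to-node1" rsc="big-db">
      <rule id="lock-big-db-to-node1-rule" score="-INFINITY">
        <expression id="lock-big-db-to-node1-expr"
                    attribute="#uname" operation="ne" value="node1"/>
      </rule>
    </rsc_location>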

However, once the resource starts on the node, the lock/constraint is
lifted, and the resource could theoretically immediately move to
another node. An example would be if there were no stickiness and new
resources were added to the configuration while the node was down, so
load balancing calculations end up different. Another would be if a
time-based rule kicked in while the node was down. However, this
feature is only expected or likely to be used in a cluster where there
are preferred nodes, enforced by stickiness and/or location
constraints, so it shouldn't be significant in practice.

Special care was taken in a number of corner cases:

* If the resource fails to start on the rejoined node, the lock is lifted.

* If the node is fenced (e.g. manually via stonith_admin; see the
sketch after this list) while it is down, the lock is lifted.

* If the resource somehow started on another node while the node was
down (which shouldn't be possible, but just as a fail-safe), the lock
is ignored when the node rejoins.

* Maintenance mode, unmanaged resources, etc., work the same with
shutdown locks as they would with any other constraint.
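
For completeness, manually fencing the down node would look something
like this (a sketch; "node1" is a made-up name):

    # fence the node while it is down; any shutdown locks it held are lifted
    stonith_admin --fence node1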

> Thanks for the heads up
> 
> --
> Ondrej Famera
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-25 Thread Ondrej

Hi Ken,

On 2/26/20 7:30 AM, Ken Gaillot wrote:

The use case is a large organization with few cluster experts and many
junior system administrators who reboot hosts for OS updates during
planned maintenance windows, without any knowledge of what the host
does. The cluster runs services that have a preferred node and take a
very long time to start.

In this scenario, pacemaker's default behavior of moving the service to
a failover node when the node shuts down, and moving it back when the
node comes back up, results in needless downtime compared to just
leaving the service down for the few minutes needed for a reboot.


1. Do I understand it correctly that this scenario applies when the
system gracefully reboots (the pacemaker service is stopped by the
system shutting down), and also when users manually stop the cluster
without rebooting the node - for example with `pcs cluster stop`?



If you decide while the node is down that you need the resource to be
recovered, you can manually clear a lock with "crm_resource --refresh"
specifying both --node and --resource.


2. I'm interested in how the situation will look in the 'crm_mon'
output or in 'crm_simulate'. Will there be some indication of why the
resources are not moving, like 'blocked-shutdown-lock', or will they
just appear as not moving (Stopped)?


Will this look different from a situation where, for example, the
resource is just not allowed by a constraint to run on other nodes?


Thanks for the heads up

--
Ondrej Famera
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-25 Thread Strahil Nikolov
On February 26, 2020 12:30:24 AM GMT+02:00, Ken Gaillot  
wrote:
>Hi all,
>
>We are a couple of months away from starting the release cycle for
>Pacemaker 2.0.4. I'll highlight some new features between now and then.
>
>First we have shutdown locks. This is a narrow use case that I don't
>expect a lot of interest in, but it helps give pacemaker feature parity
>with proprietary HA systems, which can help users feel more comfortable
>switching to pacemaker and open source.
>
>The use case is a large organization with few cluster experts and many
>junior system administrators who reboot hosts for OS updates during
>planned maintenance windows, without any knowledge of what the host
>does. The cluster runs services that have a preferred node and take a
>very long time to start.
>
>In this scenario, pacemaker's default behavior of moving the service to
>a failover node when the node shuts down, and moving it back when the
>node comes back up, results in needless downtime compared to just
>leaving the service down for the few minutes needed for a reboot.
>
>The goal could be accomplished with existing pacemaker features.
>Maintenance mode wouldn't work because the node is being rebooted. But
>you could figure out what resources are active on the node, and use a
>location constraint with a rule to ban them on all other nodes before
>shutting down. That's a lot of work for something the cluster can
>figure out automatically.
>
>Pacemaker 2.0.4 will offer a new cluster property, shutdown-lock,
>defaulting to false to keep the current behavior. If shutdown-lock is
>set to true, any resources active on a node when it is cleanly shut
>down will be "locked" to the node (kept down rather than recovered
>elsewhere). Once the node comes back up and rejoins the cluster, they
>will be "unlocked" (free to move again if circumstances warrant).
>
>An additional cluster property, shutdown-lock-limit, allows you to set
>a timeout for the locks so that if the node doesn't come back within
>that time, the resources are free to be recovered elsewhere. This
>defaults to no limit.
>
>If you decide while the node is down that you need the resource to be
>recovered, you can manually clear a lock with "crm_resource --refresh"
>specifying both --node and --resource.
>
>There are some limitations using shutdown locks with Pacemaker Remote
>nodes, so I'd avoid that with the upcoming release, though it is
>possible.

Hi Ken,

Can it be 'shutdown-lock-timeout' instead of 'shutdown-lock-limit' ?
Also, I think that the default value could be something more reasonable - like
30min. Usually 30min is OK if you don't patch the firmware, and 180min is the
maximum if you do patch the firmware.

The use case is odd. I have been in the same situation, and our solution was to
train the team (internally) instead of using such a feature.
The interesting part will be the behaviour of the local cluster stack when
updates happen. The risk is high that the node will be fenced due to
unresponsiveness (during the update), or because corosync/pacemaker still use
an old function that was changed in the libs.

Best Regards,
Strahil Nikolov
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


[ClusterLabs] Coming in Pacemaker 2.0.4: shutdown locks

2020-02-25 Thread Ken Gaillot
Hi all,

We are a couple of months away from starting the release cycle for
Pacemaker 2.0.4. I'll highlight some new features between now and then.

First we have shutdown locks. This is a narrow use case that I don't
expect a lot of interest in, but it helps give pacemaker feature parity
with proprietary HA systems, which can help users feel more comfortable
switching to pacemaker and open source.

The use case is a large organization with few cluster experts and many
junior system administrators who reboot hosts for OS updates during
planned maintenance windows, without any knowledge of what the host
does. The cluster runs services that have a preferred node and take a
very long time to start.

In this scenario, pacemaker's default behavior of moving the service to
a failover node when the node shuts down, and moving it back when the
node comes back up, results in needless downtime compared to just
leaving the service down for the few minutes needed for a reboot.

The goal could be accomplished with existing pacemaker features.
Maintenance mode wouldn't work because the node is being rebooted. But
you could figure out what resources are active on the node, and use a
location constraint with a rule to ban them on all other nodes before
shutting down. That's a lot of work for something the cluster can
figure out automatically.
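
For anyone curious what that manual workaround looks like, here is a
minimal sketch using the pcs shell ("big-db" and "node1" are made-up
names; the crm shell has equivalents):

    # see what is currently active on the node you plan to reboot
    crm_mon -1

    # for each such resource, pin it by banning it from every other node
    pcs constraint location big-db rule score=-INFINITY '#uname' ne node1

    # ...reboot node1, wait for it to rejoin and the resource to start...

    # then remove the temporary constraint (find its id with 'pcs constraint --full')
    pcs constraint remove <constraint-id>

Tracking those constraints and cleaning them up afterwards is exactly
the busywork shutdown-lock is meant to automate.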

Pacemaker 2.0.4 will offer a new cluster property, shutdown-lock,
defaulting to false to keep the current behavior. If shutdown-lock is
set to true, any resources active on a node when it is cleanly shut
down will be "locked" to the node (kept down rather than recovered
elsewhere). Once the node comes back up and rejoins the cluster, they
will be "unlocked" (free to move again if circumstances warrant).

An additional cluster property, shutdown-lock-limit, allows you to set
a timeout for the locks so that if the node doesn't come back within
that time, the resources are free to be recovered elsewhere. This
defaults to no limit.
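
Once 2.0.4 is available, enabling this should be a matter of setting
the two cluster properties, e.g. with crm_attribute (higher-level
shells like pcs or crm can set cluster properties as well, once they
know about the new options):

    # turn shutdown locks on
    crm_attribute --type crm_config --name shutdown-lock --update true

    # optionally cap how long a lock may last, e.g. 30 minutes
    crm_attribute --type crm_config --name shutdown-lock-limit --update 30min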

If you decide while the node is down that you need the resource to be
recovered, you can manually clear a lock with "crm_resource --refresh"
specifying both --node and --resource.
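
For example (a sketch; "big-db" and "node1" are made-up names):

    # release the lock so big-db is free to be recovered elsewhere
    crm_resource --refresh --resource big-db --node node1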

There are some limitations using shutdown locks with Pacemaker Remote
nodes, so I'd avoid that with the upcoming release, though it is
possible.
-- 
Ken Gaillot 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/