Re: [ClusterLabs] trigger something at ?

2024-02-08 Thread lejeczek via Users



On 31/01/2024 16:37, lejeczek via Users wrote:



On 31/01/2024 16:06, Jehan-Guillaume de Rorthais wrote:

On Wed, 31 Jan 2024 16:02:12 +0100
lejeczek via Users  wrote:



On 29/01/2024 17:22, Ken Gaillot wrote:
On Fri, 2024-01-26 at 13:55 +0100, lejeczek via Users wrote:

Hi guys.

Is it possible to trigger some... action - I'm thinking specifically at shutdown/start.
If not within the cluster then - if you do that - perhaps outside.
I would like to create/remove constraints when the cluster starts & stops, respectively.

many thanks, L.

You could use node status alerts for that, but it's risky for alert agents to change the configuration (since that may result in more alerts and potentially some sort of infinite loop).

Pacemaker has no concept of a full cluster start/stop, only node start/stop. You could approximate that by checking whether the node receiving the alert is the only active node.
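The alert-agent approach above could be sketched roughly as follows. This is a hypothetical script, not a shipped agent; it assumes Pacemaker's standard CRM_alert_* environment variables and uses an illustrative constraint taken from later in this thread. It can only run on a live cluster, so treat it as a configuration sketch:

```shell
#!/bin/sh
# Hypothetical node-status alert agent (sketch, untested).
# Pacemaker exports CRM_alert_* variables when invoking alert agents.

case "$CRM_alert_kind" in
    node)
        # 'crm_node -p' prints the members of the current partition;
        # counting them approximates "how many nodes are active".
        active=$(crm_node -p | wc -w)
        if [ "$active" -eq 1 ] && [ "$CRM_alert_desc" = "member" ]; then
            # First node up: approximate a "cluster start" event.
            # Constraint command is illustrative only.
            pcs constraint location PGSQL-PAF-5438-clone prefers ubusrv1=1002
        fi
        ;;
esac
exit 0
```

As Ken notes, modifying the CIB from an alert agent risks triggering further alerts, so any real version would need guards against re-entry.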

Another possibility would be to write a resource agent that does what you want and order everything else after it. However it's even more risky for a resource agent to modify the configuration.

Finally you could write a systemd unit to do what you want and order it after pacemaker.
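The systemd approach might look like the unit below. This is a sketch: the unit name, paths, and pcs commands are illustrative, and the constraint id in ExecStop assumes pcs's auto-generated naming, which should be checked with `pcs constraint --full`:

```ini
# /etc/systemd/system/cluster-constraints.service (hypothetical name)
[Unit]
Description=Add/remove constraints around Pacemaker start/stop
After=pacemaker.service
Requires=pacemaker.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Commands are illustrative - substitute the constraints you need.
ExecStart=/usr/sbin/pcs constraint location PGSQL-PAF-5438-clone prefers ubusrv1=1002
ExecStop=/usr/sbin/pcs constraint remove location-PGSQL-PAF-5438-clone-ubusrv1-1002

[Install]
WantedBy=multi-user.target
```

With `After=pacemaker.service`, systemd starts this unit after Pacemaker is up and, on shutdown, stops it before Pacemaker, which is the ordering the original question asks for.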

What's wrong with leaving the constraints permanently configured?

yes, that would be for a node start/stop
I struggle with using constraints to move the pgsql (PAF) master onto a given node - it seems that co/locating PAF's master results in troubles (replication breaks) at/after node shutdown/reboot (not always, but way too often)

What? What's wrong with colocating PAF's masters exactly? How does it break any replication? What are these constraints you are dealing with?

Could you share your configuration?
Constraints beyond/above what is required by the PAF agent itself, say... you have multiple pgSQL clusters with PAF - thus multiple masters (separate, one for each pgSQL cluster) - and you want to spread/balance those across the HA cluster (or, in other words, avoid having more than 1 pgsql master per HA node).
These below I've tried; they move the master onto the chosen node but... then the issues I mentioned.

-> $ pcs constraint location PGSQL-PAF-5438-clone prefers ubusrv1=1002

or

-> $ pcs constraint colocation set PGSQL-PAF-5435-clone PGSQL-PAF-5434-clone PGSQL-PAF-5433-clone role=Master require-all=false setoptions score=-1000
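The colocation set above gives all three promoted roles a shared score of -1000, i.e. a soft anti-colocation. An alternative worth noting - only a sketch, using the same resource names from this thread - is pairwise anti-colocation constraints, which avoid the internal ordering a resource set implies:

```shell
# Hypothetical pairwise anti-colocation (sketch): each pair of PAF
# masters repels the other with a finite negative score, so they
# prefer separate nodes but can still share one if the cluster has
# no other placement available.
pcs constraint colocation add master PGSQL-PAF-5435-clone with master PGSQL-PAF-5434-clone -1000
pcs constraint colocation add master PGSQL-PAF-5434-clone with master PGSQL-PAF-5433-clone -1000
pcs constraint colocation add master PGSQL-PAF-5435-clone with master PGSQL-PAF-5433-clone -1000
```

Whether set-based or pairwise scoring behaves better around node shutdown is exactly the kind of thing the reporter's replication issue would need testing to settle.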


Wanted to share an observation - not a measurement of anything, I did not take those - of the different, latest pgSQL version, which I put in place of version 14 that I've been using all this time.
(Also with that upgrade - from Postgres' own repos - came an update of PAF.)
So, with pgSQL ver. 16 and everything else the same, the paf/pgSQL resources now behave a lot, lot better and survive just fine all those cases - with the extra constraints, of course - where previously they had replication failures.

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] pacemaker resource configure issue

2024-02-08 Thread Ken Gaillot
On Thu, 2024-02-08 at 10:12 +0800, hywang via Users wrote:
> hello, everyone,
>  I want to make a node fenced or the cluster stopped after a
> resource start failed 3 times. How can I configure the resource to
> achieve it?
> Thanks!
> 

The current design doesn't allow it. You can set start-failure-is-fatal 
to false to let the cluster reattempt the start and migration-threshold 
to 3 to have it try to start on a different node after three failures,
or you can set on-fail to fence to have it fence the node if the
(first) start fails, but you can't combine those approaches.
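In pcs terms, the two mutually exclusive setups Ken describes might look like this (a sketch; the resource name `myres` is a placeholder):

```shell
# Option 1: keep retrying the start in place, and move the resource
# to another node after 3 failures. start-failure-is-fatal is a
# cluster property; migration-threshold is per-resource meta data.
pcs property set start-failure-is-fatal=false
pcs resource meta myres migration-threshold=3

# Option 2 (instead of option 1): fence the node as soon as the
# first start operation fails.
pcs resource update myres op start on-fail=fence
```

As the reply explains, you cannot combine them: with `start-failure-is-fatal=false` the failed start is retried rather than escalated, and with `on-fail=fence` the first failure fences immediately, so there is no "fence after 3 failed starts" middle ground in the current design.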

It's a longstanding goal to allow more flexibility in failure handling,
but there hasn't been time to deal with it.
-- 
Ken Gaillot 



[ClusterLabs] pacemaker resource configure issue

2024-02-08 Thread hywang via Users
hello, everyone,
  I want to make a node fenced or the cluster stopped after a resource start failed 3 times. How can I configure the resource to achieve it?
Thanks!