On 31/01/2024 18:11, Ken Gaillot wrote:
On Wed, 2024-01-31 at 16:37 +0100, lejeczek via Users wrote:
On 31/01/2024 16:06, Jehan-Guillaume de Rorthais wrote:
On Wed, 31 Jan 2024 16:02:12 +0100
lejeczek via Users <users@clusterlabs.org> wrote:

On 29/01/2024 17:22, Ken Gaillot wrote:
On Fri, 2024-01-26 at 13:55 +0100, lejeczek via Users wrote:
Hi guys.

Is it possible to trigger some... action - I'm thinking specifically of shutdown/start?
If not within the cluster itself, then perhaps outside of it.
I would like to create/remove constraints when the cluster starts & stops, respectively.

many thanks, L.

You could use node status alerts for that, but it's risky for alert agents to change the configuration (since that may result in more alerts and potentially some sort of infinite loop).

Pacemaker has no concept of a full cluster start/stop, only node start/stop. You could approximate that by checking whether the node receiving the alert is the only active node.
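
A rough, untested sketch of that approach (the alert id, agent path and the constraint commands inside it are all made up here):

-> $ pcs alert create path=/usr/local/bin/cluster-edge.sh id=cluster-edge

with /usr/local/bin/cluster-edge.sh being something like:

#!/bin/sh
# only react to node-level alerts
[ "$CRM_alert_kind" = "node" ] || exit 0
# if this node is the only member of its partition, treat it as a
# "cluster start/stop" and create/remove constraints here -- with the
# caveats above about alert agents changing the configuration
active=$(crm_node -p | wc -w)
if [ "$active" -le 1 ]; then
    : # e.g. pcs constraint ... / pcs constraint remove ...
fi
exit 0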

Another possibility would be to write a resource agent that does what you want and order everything else after it. However, it's even more risky for a resource agent to modify the configuration.
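
For example, with such an agent in place (the resource name and agent spec below are invented), everything else would be ordered after it:

-> $ pcs resource create constraint-hook ocf:local:constraint-hook
-> $ pcs constraint order start constraint-hook then PGSQL-PAF-5438-clone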

Finally, you could write a systemd unit to do what you want and order it after pacemaker.
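
Something along these lines might do (the unit name and the two scripts are invented):

# /etc/systemd/system/cluster-constraints.service
[Unit]
Description=Create/remove constraints around Pacemaker start/stop
After=pacemaker.service
Requires=pacemaker.service

[Service]
Type=oneshot
RemainAfterExit=yes
# runs after pacemaker has started on this node
ExecStart=/usr/local/sbin/add-constraints.sh
# and, thanks to the ordering above, is stopped before pacemaker on shutdown
ExecStop=/usr/local/sbin/remove-constraints.sh

[Install]
WantedBy=multi-user.target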

What's wrong with leaving the constraints permanently configured?
yes, that would be for a node start/stop
I struggle with using constraints to move the pgsql (PAF) master onto a given node - it seems that colocating PAF's master results in trouble (replication breaks) at/after node shutdown/reboot (not always, but way too often)
What? What's wrong with colocating PAF's masters exactly? How does it break any replication? What are these constraints you are dealing with?

Could you share your configuration?
Constraints beyond/above what is required by the PAF agent itself, say...
you have multiple pgSQL clusters with PAF - thus multiple (separate, one per pgSQL cluster) masters - and you want to spread/balance those across the HA cluster (or in other words - avoid having more than 1 pgsql master per HA node).
These below I've tried; they move the master onto the chosen node, but... then the issues I mentioned.

-> $ pcs constraint location PGSQL-PAF-5438-clone prefers ubusrv1=1002
or
-> $ pcs constraint colocation set PGSQL-PAF-5435-clone PGSQL-PAF-5434-clone PGSQL-PAF-5433-clone role=Master require-all=false setoptions score=-1000

Anti-colocation sets tend to be tricky currently -- if the first resource can't be assigned to a node, none of them can. We have an idea for a better implementation:

  https://projects.clusterlabs.org/T383

In the meantime, a possible workaround is to use placement-strategy=balanced and define utilization for the clones only. The promoted roles will each get a slight additional utilization, and the cluster should spread them out across nodes whenever possible. I don't know if that will avoid the replication issues but it may be worth a try.
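
In pcs terms that might look something like this (untested; the attribute name and numbers are arbitrary, and -- if I remember the utilization docs right -- the nodes need a matching capacity defined as well once the strategy is no longer "default", otherwise instances with utilization have nowhere to fit):

-> $ pcs node utilization ubusrv1 pgsql=100
-> $ pcs node utilization ubusrv2 pgsql=100
-> $ pcs node utilization ubusrv3 pgsql=100
-> $ pcs resource utilization PGSQL-PAF-5433 pgsql=10
-> $ pcs resource utilization PGSQL-PAF-5434 pgsql=10
-> $ pcs resource utilization PGSQL-PAF-5435 pgsql=10
-> $ pcs resource utilization PGSQL-PAF-5438 pgsql=10
-> $ pcs property set placement-strategy=balanced
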
using _balanced_ causes a bit of mayhem for PAF/pgsql:

-> $ pcs property
Cluster Properties:
 REDIS-6380_REPL_INFO: ubusrv3
 REDIS-6381_REPL_INFO: ubusrv2
 REDIS-6382_REPL_INFO: ubusrv2
 REDIS-6385_REPL_INFO: ubusrv1
 REDIS_REPL_INFO: ubusrv1
 cluster-infrastructure: corosync
 cluster-name: ubusrv
 dc-version: 2.1.2-ada5c3b36e2
 have-watchdog: false
 last-lrm-refresh: 1706711588
 placement-strategy: default
 stonith-enabled: false

-> $ pcs resource utilization PGSQL-PAF-5438 cpu="20"

-> $ pcs property set placement-strategy=balanced
then the resource stops, so I change it back:
-> $ pcs property set placement-strategy=default
and pgSQL/PAF works again

I've not used _utilization_ nor _placement-strategy_ before, so there's a solid chance I'm missing something.