Hi!

IMHO there are better ways to make configuration changes than editing the CIB 
directly; I prefer the crm shell.
It’s also good for documentation.
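
For example, a minimal sketch of adding a resource and reviewing the result 
with crmsh (the resource name and IP are made up for illustration):

  # define a simple IPaddr2 resource with a monitor operation
  crm configure primitive vip ocf:heartbeat:IPaddr2 \
      params ip=192.0.2.10 op monitor interval=30s

  # show the whole configuration in crm syntax (nice for documentation/diffs)
  crm configure show

  # or edit the configuration in an editor instead of touching the raw XML
  crm configure edit
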
Regarding the other question, I would never start a new configuration change 
when the CRM is not in “idle” state.
Mostly because debugging is hard when something goes wrong under such 
circumstances.
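
Two ways to check or wait for that, assuming a reasonably recent Pacemaker 
(the node name is just a placeholder):

  # block until the cluster has settled and no transition is pending
  crm_resource --wait

  # or ask which node is the DC and query its state; "S_IDLE" means
  # no transition is in progress
  crmadmin --dc-lookup
  crmadmin --status=<dc-node-name>
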

Kind regards,
Ulrich Windl

From: Users <[email protected]> On Behalf Of [email protected]
Sent: Friday, December 12, 2025 9:56 PM
To: [email protected]
Subject: [EXT] [ClusterLabs] Action scheduling on cib change


Hi All,

We use Pacemaker as the cluster engine under the hood and manage it by getting 
the CIB, changing it, and then putting it back. I can make a lot of changes at 
once.
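
Roughly like this (a sketch; the file name is arbitrary):

  # get the current CIB
  cibadmin --query > cib.xml

  # ... apply many changes to cib.xml at once ...

  # put the modified CIB back in one step
  cibadmin --replace --xml-file cib.xml
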

The question is about the internal action scheduler. As far as I understand, 
the logic of applying a new config looks like this:

  1.  Validate and analyze the new config, and build the target resource 
layout with respect to the config, location constraints, and the current state
  2.  Build the action list for the resource state changes with respect to 
ordering constraints
  3.  Schedule the actions into a queue
  4.  Run the actions from the queue
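
A way to see the result of steps 1 and 2 without touching the live cluster, 
if that is useful (just a sketch; the file name is arbitrary):

  # show what the scheduler would do with the current live CIB
  crm_simulate --simulate --live-check

  # or against a saved/modified CIB file
  crm_simulate --simulate --xml-file cib.xml
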

The question is: if I push two config changes very quickly, one after the 
other, and the second arrives before every scheduled action from the first has 
finished, will every scheduled action from the first config still complete? In 
other words, will all the actions coming from the second config be queued 
after all the actions scheduled on the first push?

The background of the question: I use “constraint chains”, meaning an order 
like start A => B => C and stop C => B => A, and it is essential to keep that 
order. However, if A, B and C are running and I push a config where all of 
them are deleted, Pacemaker sees no constraints in the new config and 
schedules the stops of A, B and C as independent actions, which leads to 
resource failures. I already asked in this group and was told that this is 
behavior by design: Pacemaker only uses the new config for change planning, so 
A, B and C become unconstrained orphans.

My workaround was to send an intermediate config, keeping the resources to be 
deleted in the config but setting their target state to “stopped”. For now, I 
send the intermediate config and poll the status until every resource to be 
deleted is stopped. However, if the scheduler queues new actions after the 
already-planned ones, I do not need to wait and poll the status; I only need 
to send a CIB with the target state set to stopped and then immediately send a 
CIB with the resources deleted. Since the stop sequence is already in the 
scheduler queue, it should work correctly and faster than my first approach.
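
For reference, the current two-step workaround has roughly this shape when 
expressed with the command-line tools instead of full CIB replaces (just a 
sketch; the resource names A, B, C are as above and the file name is 
hypothetical):

  # intermediate step: keep A, B, C (and their constraints) in the config,
  # but ask for them to be stopped -- the ordering constraints still apply
  crm_resource --resource C --meta --set-parameter target-role --parameter-value Stopped
  crm_resource --resource B --meta --set-parameter target-role --parameter-value Stopped
  crm_resource --resource A --meta --set-parameter target-role --parameter-value Stopped

  # wait until the resulting transition (the ordered stops) has completed
  crm_resource --wait

  # only now push the CIB with A, B, C and their constraints removed
  cibadmin --replace --xml-file cib-without-abc.xml
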

Thank you in advance for any advice and suggestions!

Sincerely,

Alex

