>>> martin doc <db1...@hotmail.com> wrote on 08.10.2021 at 09:24 in message
<ps2p216mb0546195efc2e0a1730bcf825c2...@ps2p216mb0546.korp216.prod.outlook.com>:

> Hi,
> 
> Yes, the suggestion to use a rule helped some. I had tried that but what I 
> got wrong is that the name for the score stored by ping is not ping but pingd 
> (yay backwards compat.) Thanks Ken for the pointer and getting me to go back 
> to that.

Actually the RA has a "name" parameter; I used params name=val_net_gw1, so I 
get:

Node Attributes:
  * Node: h16:
    * val_net_gw1                       : 1000
  * Node: h18:
    * val_net_gw1                       : 1000
  * Node: h19:
    * val_net_gw1                       : 1000
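For reference, a minimal sketch of how such a ping clone with a custom attribute name might be configured with pcs (the resource id, host_list, and multiplier here are illustrative assumptions, not taken from the thread):

```shell
# Hypothetical sketch: ocf:pacemaker:ping clone that writes its score
# into the custom attribute "val_net_gw1" via the RA's "name" parameter.
# Resource id and host_list are made up for illustration.
pcs resource create gw-ping ocf:pacemaker:ping \
    name=val_net_gw1 host_list=192.168.1.1 multiplier=1000 \
    op monitor interval=10s clone
```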

> 
> Now I'm stuck with the problem of getting resources to rebalance when all of 
> the clones are available. If I arbitrarily set the node utilization of cpu to 
> 8 and memory to 10000 and then assign cpu=5 and memory=5000 to each resource, 
> it does not rebalance once all of the pingd resources have a value > 0. A 

Utilization basically does not "rebalance"; it limits load, also depending on 
the placement strategy.
The other thing is whether you really want stickiness=0 for your resources; 
then the cluster will reshuffle your resources frequently.
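For example, a non-zero default stickiness can be set cluster-wide; the value 100 is only an illustration, and the exact pcs subcommand varies with the pcs version:

```shell
# Hedged example: give all resources some stickiness so the cluster
# does not move them without a sufficiently large scoring reason.
# crmsh syntax:
crm configure rsc_defaults resource-stickiness=100
# roughly equivalent with recent pcs:
pcs resource defaults update resource-stickiness=100
```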

> "crm_simulate" shows one node with 10 cpu & 10000 memory free, one with 0 
> cpu/memory free and one half used. The utilization will prevent over 
> allocation but doesn't balance out resources.

Did you try "placement-strategy=balanced"?
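A sketch of setting the placement strategy together with the utilization figures mentioned in the thread; the node name h16 is taken from the attribute listing above, while the resource name dummy1 is an assumption:

```shell
# Hedged sketch: enable balanced placement and declare capacities
# (cpu=8 / memory=10000 per node, cpu=5 / memory=5000 per resource,
# as described in the thread).
pcs property set placement-strategy=balanced
pcs node utilization h16 cpu=8 memory=10000
pcs resource utilization dummy1 cpu=5 memory=5000
```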

> 
> A change in the state of pingd's value does cause the policy engine to do 
> something but it just decides to keep all of the resources where they are.

You need a constraint rule, I guess.
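A location rule keyed on the ping attribute might look like the following, using the val_net_gw1 attribute name from above; the resource name dummy1 is an assumption:

```shell
# Hedged sketch: ban a resource from nodes where connectivity is lost
# (attribute below the multiplier) or the attribute was never set.
pcs constraint location dummy1 rule score=-INFINITY \
    val_net_gw1 lt 1 or not_defined val_net_gw1
```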

> 
> I don't know anything about pcs colors.
> 
> I will keep trying variations.



