Klaus,

yes, these constraints were defined by pcs after a manual move (pcs resource
move), and the help for this action is clear:

Usage: pcs resource move...
    move <resource id> [destination node] [--master] [lifetime=<lifetime>]
         [--wait[=n]]
        Move the resource off the node it is currently running on by creating
        a -INFINITY location constraint to ban the node. If destination node
        is specified the resource will be moved to that node by creating
        an INFINITY location constraint to prefer the destination node. If
        --master is used the scope of the command is limited to the master
        role and you must use the promotable clone id (instead of the
        resource id).

        If lifetime is specified then the constraint will expire after that
        time, otherwise it defaults to infinity and the constraint can be
        cleared manually with 'pcs resource clear' or 'pcs constraint delete'.
        Lifetime is expected to be specified as ISO 8601 duration (see
        https://en.wikipedia.org/wiki/ISO_8601#Durations).

        If --wait is specified, pcs will wait up to 'n' seconds for the
        resource to move and then return 0 on success or 1 on error. If 'n'
        is not specified it defaults to 60 minutes.

        If you want the resource to preferably avoid running on some nodes
        but be able to failover to them use 'pcs constraint location avoids'.

It wasn't obvious that 'move' works simply by defining a location constraint.
I should have read the help more carefully.
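
For the record, this is roughly how such a constraint gets created and how to
clean it up - the group and node names below are just taken from my cluster as
an example, and the output is abbreviated:

  # a manual move creates a score:INFINITY location constraint
  # preferring the destination node
  pcs resource move fsmt-41CC55C0 vdc16

  # the constraint stays behind after the move has finished
  pcs constraint
  Location Constraints:
    Resource: fsmt-41CC55C0
      Enabled on:
        Node: vdc16 (score:INFINITY) (role:Started)

  # remove the leftover constraint so that stickiness decides placement again
  pcs resource clear fsmt-41CC55C0

After clearing the leftover constraints, stickiness should decide placement
again and the groups should stay put when a node rejoins.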

Thank you for your help!


Tue, May 28, 2024 at 16:30, Klaus Wenninger <kwenn...@redhat.com>:

>
>
> On Tue, May 28, 2024 at 12:34 PM Александр Руденко <a.rud...@gmail.com>
> wrote:
>
>> Andrei, thank you!
>>
>> I tried to find the nodes' scores and found location constraints for
>> these 3 resources:
>>
>> pcs constraint
>> Location Constraints:
>>   Resource: fsmt-28085F00
>>     Enabled on:
>>       Node: vdc16 (score:INFINITY) (role:Started)
>>   Resource: fsmt-41CC55C0
>>     Enabled on:
>>       Node: vdc16 (score:INFINITY) (role:Started)
>>   Resource: fsmt-A7C0E2A0
>>     Enabled on:
>>       Node: vdc16 (score:INFINITY) (role:Started)
>>
>> but I can't understand how these constraints were set. Can they be defined
>> by Pacemaker under some conditions, or is it only a manual configuration?
>>
>
> Interesting: I didn't have that mail yet when I answered your previous one.
> Anyway - the constraints are probably leftovers from deliberately moving
> resources from one node to another earlier using pcs commands.
> IIRC pcs meanwhile has a way to remove them automatically.
>
> Klaus
>
>
>>
>> BTW, how can I see the nodes' scores?
>>
>> Tue, May 28, 2024 at 11:59, Andrei Borzenkov <arvidj...@gmail.com>:
>>
>>> On Tue, May 28, 2024 at 11:39 AM Александр Руденко <a.rud...@gmail.com>
>>> wrote:
>>> >
>>> > Hi!
>>> >
>>> > I can't understand this strange behavior, help me please.
>>> >
>>> > I have 3 nodes in my cluster, 4 vCPU/8GB RAM each, and about 70
>>> > groups with 2 resources in each group. The first resource is our custom
>>> > resource which configures a Linux VRF, and the second one is a systemd
>>> > unit. Everything works fine.
>>> >
>>> > We have the following defaults:
>>> > pcs resource defaults
>>> > Meta Attrs: rsc_defaults-meta_attributes
>>> >   resource-stickiness=100
>>> >
>>> > When I shut down the pacemaker service on NODE1, all the resources move
>>> > to NODE2 and NODE3, which is fine. But when I start the pacemaker service
>>> > on NODE1 again, 3 of the 70 groups move back to NODE1.
>>> > But I expected that no resources would be moved back to NODE1.
>>> >
>>> > I tried to set resource-stickiness=100 specifically for these 3 groups,
>>> > but it didn't help.
>>> >
>>> > pcs resource config fsmt-41CC55C0
>>> >  Group: fsmt-41CC55C0
>>> >   Meta Attrs: resource-stickiness=100
>>> > ...
>>> >
>>> > Why are these 3 resource groups moving back?
>>> >
>>>
>>> Because NODE1's score is higher than NODE2's score + 100. E.g. NODE1's
>>> score may be infinity.
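>>> For instance, if NODE1 has a +INFINITY location preference for a group,
>>> the comparison after NODE1 rejoins is roughly INFINITY (NODE1) versus
>>> 0 + stickiness 100 (the node currently running it), and INFINITY wins,
>>> so the group moves back.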
_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
