>>> Strahil Nikolov <hunter86...@yahoo.com> wrote on 02.12.2020 at 22:42 in
message <311137659.2419591.1606945369...@mail.yahoo.com>:
> Constraint scores range from:
> INFINITY, which equals a score of 1000000,
> down to:
> -INFINITY, which equals a score of -1000000.
> 
> You usually set a positive score on the preferred node that is higher than 
> the score on the other node.
> 
> For example, setting location constraints like this will prefer node1:
> node1 - score 10000
> node2 - score 5000
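> 
> A minimal sketch of those constraints with pcs (the resource name 
> my_resource is made up for illustration; node1/node2 are from the 
> example above):
> 
>     pcs constraint location my_resource prefers node1=10000
>     pcs constraint location my_resource prefers node2=5000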
> 

The bad thing with those numbers is that you are never sure which number to
use: Is 50 enough? Maybe 100? 1000? 10000? 100000?

> In order to prevent unnecessary downtime, you should also consider setting 
> stickiness.
> 
> For example, a stickiness of 20000 will overwhelm the score of 10000 on the 
> recently recovered node1 and will prevent the resource from being stopped 
> and relocated from node2 back to node1.
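> 
> A sketch of setting that stickiness with pcs (again assuming a resource 
> named my_resource):
> 
>     pcs resource meta my_resource resource-stickiness=20000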

Playing through all the "what happens if" scenarios is rather hard to do, too.

> 
> Note: default stickiness is per resource, while the total stickiness score 
> of a group is calculated from the scores of all resources in it.
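> 
> As a sketch of that arithmetic (the value 100 here is only an 
> illustration): after setting a cluster-wide default with
> 
>     pcs resource defaults resource-stickiness=100
> 
> a group of three resources resists relocation with a combined stickiness 
> of 3 * 100 = 300, and that combined number is what gets compared against 
> any location constraint score.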
> 
> Best Regards,
> Strahil Nikolov
> 
> On Wednesday, December 2, 2020 at 16:54:43 GMT+2, Dan Swartzendruber 
> <dswa...@druber.com> wrote: 
> 
> On 2020-11-30 23:21, Petr Bena wrote:
>> Hello,
>> 
>> Is there a way to set up a preferred node for a service? I know how to
>> create a constraint that makes it possible to run a service ONLY on a
>> certain node, or a constraint that makes it impossible to run 2 services
>> on the same node, but I don't want any of that: in catastrophic scenarios
>> where services would have to be located together on the same node, this
>> would instead disable them.
>> 
>> Essentially, what I want is for the service to always be started on the
>> preferred node when possible; if that is not possible (e.g. the node is
>> down), it would freely run on any other node with no restrictions, and
>> when the preferred node is back up, it would migrate back.
>> 
>> How can I do that?
> 
> I do precisely this for an active/passive NFS/ZFS storage appliance pair.
> One of the VSAs has more memory and is less used, so I have the group set 
> to prefer that host.
> 
> https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/_prefer_one_node_over_another.html
> 
> I believe I used the value INFINITY, so it will prefer the 2nd host over 
> the 1st if at all possible.  My 'pcs constraint':
> 
> [root@centos-vsa2 ~]# pcs constraint
> Location Constraints:
>   Resource: group-zfs
>     Enabled on: centos-vsa2 (score:INFINITY)
> Ordering Constraints:
> Colocation Constraints:
> Ticket Constraints:
> 
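> A constraint like that can be created with a single pcs command; a sketch 
> using the resource and node names from the output above:
> 
>     pcs constraint location group-zfs prefers centos-vsa2=INFINITY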



_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
