On 22.01.2018 16:44, Ken Gaillot wrote:
> On Mon, 2018-01-22 at 14:18 +0100, Ulrich Windl wrote:
>> Did you try meta clone-node-min=3?
>
> clone-min does not affect the clone it's configured on, but rather
> anything ordered relative to it via a constraint. It's for the case
> where a resource needs a certain number of instances running before
> other services can consider it effective.
>
> It could be helpful here if the reason you want the service to stop is
> to get some other resource to stop.
>
>> "alu...@poczta.onet.pl" <alu...@poczta.onet.pl> wrote on 22.01.2018 at 13:29
>> in message <844fdf99-1680-3ed8-6afc-8b3e2ddea...@poczta.onet.pl>:
>>> I need to create a configuration where one resource is active on all
>>> nodes and is only active when all nodes are active. It should have the
>
> There's no built-in capability for that.
>
> It's possible you could get something to work using clone
> notifications. You could write a custom OCF agent that simply sets a
> local node attribute to 0 or 1. If
> OCF_RESKEY_CRM_meta_notify_inactive_resource is empty, it would set it
> to 1; otherwise it would set it to 0. (You could use
> ocf:pacemaker:attribute from a recent Pacemaker as a starting point.)
>
> Then you could use a rule to locate your desired resource where the
> attribute is set to 1.
>
> http://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Explained/index.html#_clone_resource_agent_requirements
>
> http://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Explained/index.html#idm140583511697312

It seems that idea is going to work :)
Thank you for all the suggestions.

>>> same
>>> priority on all nodes. The major target is to have my service started
>>> when all nodes are active and to have my service stopped in the
>>> "degraded" state.
>>>
>>> I tried a clone resource with different "requires" options, but when
>>> one of the nodes is suspended the resource is still active on the
>>> online node.
>>>
>>> Are there any options to have such a type of resource?
>>
>> _______________________________________________
>> Users mailing list: Users@clusterlabs.org
>> http://lists.clusterlabs.org/mailman/listinfo/users
>>
>> Project Home: http://www.clusterlabs.org
>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>> Bugs: http://bugs.clusterlabs.org
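[Editor's note: the location rule Ken mentions might then look something like the following in crm shell syntax. The resource name "my-service", the constraint id, and the attribute name "all_active" are illustrative assumptions.]

```
# Hypothetical: keep my-service off any node unless all_active is 1 there
location my-service-all-active my-service \
    rule -inf: not_defined all_active or all_active ne 1
```

With the notify agent cloned across all nodes, the attribute drops to 0 as soon as any instance is inactive, and this rule then forces my-service to stop everywhere.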
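[Editor's note: for readers finding this thread in the archives, the notify logic Ken describes could be sketched roughly as below. This is a minimal illustration, not a complete OCF agent; the attribute name "all_active", the helper function, and the example clone name are assumptions, not from the thread.]

```shell
#!/bin/sh
# Hypothetical sketch of the notify-driven attribute update Ken suggests.
# Decide the local node attribute value: 1 when no clone instance is
# inactive (i.e. OCF_RESKEY_CRM_meta_notify_inactive_resource is empty),
# otherwise 0.
decide_attr_value() {
    if [ -z "$1" ]; then
        echo 1
    else
        echo 0
    fi
}

# Inside a real agent's notify action, the result would be written to a
# local node attribute, e.g. (attribute name "all_active" is assumed):
#   attrd_updater -n all_active -U \
#       "$(decide_attr_value "$OCF_RESKEY_CRM_meta_notify_inactive_resource")"

decide_attr_value ""            # all instances active -> prints 1
decide_attr_value "myclone:1"   # some instance inactive -> prints 0
```

A full agent would still need the usual start/stop/monitor/meta-data actions; ocf:pacemaker:attribute, as Ken notes, is a reasonable starting point to copy from.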