Unstandby-ing a node automatically at some point after a failure on
certain resources actually fits our use cases well, but the problem is
that the automatic unstandby does not put DRBD into secondary mode once it
occurs.
A manual pcs cluster standby $(uname -n) and pcs cluster unstandby
$(uname -n) with 'on-fail=standby' works well; however, setting a
failure-timeout appears to automatically bring the node out of standby
after it expires.
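For anyone trying to reproduce the setup described above, it can be expressed with stock pcs commands; a minimal sketch, assuming a resource named drbd_data (the resource name and the intervals are placeholders, not taken from the original cluster):

```shell
# Put the node in standby when the monitor operation fails,
# and set the failure-timeout whose expiry triggers the automatic
# unstandby described above.
pcs resource update drbd_data op monitor interval=10s on-fail=standby
pcs resource meta drbd_data failure-timeout=60s

# Manual equivalents, which are reported to work as expected:
pcs cluster standby $(uname -n)
pcs cluster unstandby $(uname -n)
```

These commands only configure a running Pacemaker cluster; they have no effect outside one.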
--
Sam Gardner
Trustwave | SMART SECURITY ON DEMAND
On 3/28/16, 3:31 PM, "Ken Gaillot" wrote:
>On 03/28/2016 02:19 PM, Sam Gardner wrote:
>> Is there any way to modify the behavior of a resource group N of A, B, and C
>> so that either A, B, and C are running on the same node, or none of them are?
>>
>> With Pacemaker 1.1.12 and Corosync 1.4.8, if a group N is defined via:
>> pcs resource group N A B C
>> if resource C cannot run, A and B still do.
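For reference, the group in question can be built with stock pcs syntax; a sketch using ocf:heartbeat:Dummy as a stand-in agent (the resource names A, B, C come from the question, the agent is assumed for illustration):

```shell
# Group members are ordered and colocated: each member depends on the
# members *before* it, not after. A failure of C (the last member)
# therefore leaves A and B running, which is the behavior asked about.
pcs resource create A ocf:heartbeat:Dummy
pcs resource create B ocf:heartbeat:Dummy
pcs resource create C ocf:heartbeat:Dummy
pcs resource group add N A B C
```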
On 28/03/16 12:44 PM, Sam Gardner wrote:
> I have a simple resource defined:
>
> [root@ha-d1 ~]# pcs resource show dmz1
> Resource: dmz1 (class=ocf provider=internal type=ip-address)
> Attributes: address=172.16.10.192 monitor_link=true
> Meta Attrs: migration-threshold=3 failure-timeout=30s
> Operations: monitor interval=7s (dmz1-monitor-inter
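Since migration-threshold=3 is compared against the per-resource fail count that failure-timeout=30s later clears, the counter can also be inspected and reset by hand; a hedged sketch using standard pcs subcommands:

```shell
# Show the fail count that migration-threshold=3 is compared against
pcs resource failcount show dmz1

# Clear recorded failures immediately, rather than waiting the 30s
# failure-timeout for them to expire on their own
pcs resource cleanup dmz1
```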