Re: [ClusterLabs] stonith device locate on same host in active/passive cluster

2017-05-11 Thread Albert Weng
Hi Ken,

thank you for your comment.

I think this case can be closed. I used your suggested constraint and the
problem is resolved.

thanks a lot~~

On Thu, May 4, 2017 at 10:28 PM, Ken Gaillot  wrote:

> On 05/03/2017 09:04 PM, Albert Weng wrote:
> > Hi Marek,
> >
> > Thanks for your reply.
> >
> > On Tue, May 2, 2017 at 5:15 PM, Marek Grac wrote:
> >
> >
> >
> > On Tue, May 2, 2017 at 11:02 AM, Albert Weng wrote:
> >
> >
> > Hi Marek,
> >
> > thanks for your quick response.
> >
> > Based on your explanation, when I run "pcs status" I see
> > the following for the fence devices:
> > ipmi-fence-node1    (stonith:fence_ipmilan):    Started clusterb
> > ipmi-fence-node2    (stonith:fence_ipmilan):    Started clusterb
> >
> > Does it mean both IPMI stonith devices are working correctly?
> > (rest of resources can failover to another node correctly)
> >
> >
> > Yes, they are working correctly.
> >
> > When it becomes important to run a fence agent to kill the other
> > node, it will be executed from the other node, so where the fence
> > agent currently resides is not important.
> >
> > Does "started on node" means which node is controlling fence behavior?
> > even all fence agents and resources "started on same node", the cluster
> > fence behavior still work correctly?
> >
> >
> > Thanks a lot.
> >
> > m,
>
> Correct. Fencing is *executed* independently of where or even whether
> fence devices are running. The node that is "running" a fence device
> performs the recurring monitor on the device; that's the only real effect.
>
> > Should I use location constraints to keep the stonith devices from
> > running on the same node?
> > # pcs constraint location ipmi-fence-node1 prefers clustera
> > # pcs constraint location ipmi-fence-node2 prefers clusterb
> >
> > thanks a lot
>
> It's a good idea, so that a node isn't monitoring its own fence device,
> but that's the only reason -- it doesn't affect whether or how the node
> can be fenced. I would configure it as an anti-location, e.g.
>
>pcs constraint location ipmi-fence-node1 avoids node1=100
>
> In a 2-node cluster, there's no real difference, but in a larger
> cluster, it's the simplest config. I wouldn't use INFINITY (there's no
> harm in a node monitoring its own fence device if it's the last node
> standing), but I would use a score high enough to outweigh any stickiness.
>
> > On Tue, May 2, 2017 at 4:25 PM, Marek Grac wrote:
> >
> > Hi,
> >
> >
> >
> > On Tue, May 2, 2017 at 3:39 AM, Albert Weng wrote:
> >
> > Hi All,
> >
> > I have created an active/passive Pacemaker cluster on RHEL 7.
> >
> > here is my environment:
> > clustera : 192.168.11.1
> > clusterb : 192.168.11.2
> > clustera-ilo4 : 192.168.11.10
> > clusterb-ilo4 : 192.168.11.11
> >
> > Both nodes are connected to SAN storage for shared storage.
> >
> > I used the following commands to create a stonith device for
> > each node:
> > # pcs -f stonith_cfg stonith create ipmi-fence-node1
> > fence_ipmilan parms lanplus="true"
> > pcmk_host_list="clustera" pcmk_host_check="static-list"
> > action="reboot" ipaddr="192.168.11.10"
> > login=administrator passwd=1234322 op monitor
> > interval=60s
> >
> > # pcs -f stonith_cfg stonith create ipmi-fence-node02
> > fence_ipmilan parms lanplus="true"
> > pcmk_host_list="clusterb" pcmk_host_check="static-list"
> > action="reboot" ipaddr="192.168.11.11" login=USERID
> > passwd=password op monitor interval=60s
> >
> > # pcs status
> > ipmi-fence-node1 clustera
> > ipmi-fence-node2 clusterb
> >
> > but when I fail over to the passive node and run
> > # pcs status
> >
> > ipmi-fence-node1    clusterb
> > ipmi-fence-node2    clusterb
> >
> > why are both fence devices located on the same node?
> >
> >
> > When node 'clustera' is down, is there any place where
> > ipmi-fence-node* can be executed?
> >
> > If you are worried that a node cannot fence itself, you are
> > right. But once 'clustera' becomes available again, an attempt
> > to fence clusterb will work as expected.
> >
> > m,
> >

Re: [ClusterLabs] stonith device locate on same host in active/passive cluster

2017-05-04 Thread Ken Gaillot
On 05/03/2017 09:04 PM, Albert Weng wrote:
> Hi Marek,
> 
> Thanks for your reply.
> 
> On Tue, May 2, 2017 at 5:15 PM, Marek Grac wrote:
> 
> 
> 
> On Tue, May 2, 2017 at 11:02 AM, Albert Weng wrote:
> 
> 
> Hi Marek,
> 
> thanks for your quick response.
> 
> Based on your explanation, when I run "pcs status" I see
> the following for the fence devices:
> ipmi-fence-node1    (stonith:fence_ipmilan):    Started clusterb
> ipmi-fence-node2    (stonith:fence_ipmilan):    Started clusterb
> 
> Does it mean both IPMI stonith devices are working correctly?
> (rest of resources can failover to another node correctly)
> 
> 
> Yes, they are working correctly. 
> 
> When it becomes important to run a fence agent to kill the other
> node, it will be executed from the other node, so where the fence
> agent currently resides is not important.
> 
> Does "started on node" means which node is controlling fence behavior?
> even all fence agents and resources "started on same node", the cluster
> fence behavior still work correctly?
>  
> 
> Thanks a lot.
> 
> m,

Correct. Fencing is *executed* independently of where or even whether
fence devices are running. The node that is "running" a fence device
performs the recurring monitor on the device; that's the only real effect.
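One way to see this for yourself is to trigger a test fence by hand; the
request can be issued from any node, regardless of which node shows the
device as Started. A rough sketch (substitute the node you really want
rebooted; the second command talks to the fencer directly):

   pcs stonith fence clusterb
   stonith_admin --reboot clusterb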

> Should I use location constraints to keep the stonith devices from
> running on the same node?
> # pcs constraint location ipmi-fence-node1 prefers clustera
> # pcs constraint location ipmi-fence-node2 prefers clusterb
> 
> thanks a lot

It's a good idea, so that a node isn't monitoring its own fence device,
but that's the only reason -- it doesn't affect whether or how the node
can be fenced. I would configure it as an anti-location, e.g.

   pcs constraint location ipmi-fence-node1 avoids node1=100

In a 2-node cluster, there's no real difference, but in a larger
cluster, it's the simplest config. I wouldn't use INFINITY (there's no
harm in a node monitoring its own fence device if it's the last node
standing), but I would use a score high enough to outweigh any stickiness.
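With the node names used in this thread, that works out to roughly the
following (a sketch only -- the 100 is just the example score above, so pick
a value that outweighs your resource-stickiness):

   pcs constraint location ipmi-fence-node1 avoids clustera=100
   pcs constraint location ipmi-fence-node2 avoids clusterb=100

and then check the result with "pcs constraint show".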

> On Tue, May 2, 2017 at 4:25 PM, Marek Grac wrote:
> 
> Hi,
> 
> 
> 
> On Tue, May 2, 2017 at 3:39 AM, Albert Weng wrote:
> 
> Hi All,
> 
> I have created an active/passive Pacemaker cluster on RHEL 7.
> 
> here is my environment:
> clustera : 192.168.11.1
> clusterb : 192.168.11.2
> clustera-ilo4 : 192.168.11.10
> clusterb-ilo4 : 192.168.11.11
> 
> Both nodes are connected to SAN storage for shared storage.
> 
> I used the following commands to create a stonith device for
> each node:
> # pcs -f stonith_cfg stonith create ipmi-fence-node1
> fence_ipmilan parms lanplus="true"
> pcmk_host_list="clustera" pcmk_host_check="static-list"
> action="reboot" ipaddr="192.168.11.10"
> login=administrator passwd=1234322 op monitor interval=60s
> 
> # pcs -f stonith_cfg stonith create ipmi-fence-node02
> fence_ipmilan parms lanplus="true"
> pcmk_host_list="clusterb" pcmk_host_check="static-list"
> action="reboot" ipaddr="192.168.11.11" login=USERID
> passwd=password op monitor interval=60s
> 
> # pcs status
> ipmi-fence-node1 clustera
> ipmi-fence-node2 clusterb
> 
> but when I fail over to the passive node and run
> # pcs status
> 
> ipmi-fence-node1    clusterb
> ipmi-fence-node2    clusterb
> 
> why are both fence devices located on the same node?
> 
> 
> When node 'clustera' is down, is there any place where
> ipmi-fence-node* can be executed?
> 
> If you are worried that a node cannot fence itself, you are
> right. But once 'clustera' becomes available again, an attempt
> to fence clusterb will work as expected.
> 
> m, 
> 

Re: [ClusterLabs] stonith device locate on same host in active/passive cluster

2017-05-03 Thread Albert Weng
Hi Marek,

Thanks for your reply.

On Tue, May 2, 2017 at 5:15 PM, Marek Grac  wrote:

>
>
> On Tue, May 2, 2017 at 11:02 AM, Albert Weng 
> wrote:
>
>>
>> Hi Marek,
>>
>> thanks for your quick response.
>>
>> Based on your explanation, when I run "pcs status" I see the
>> following for the fence devices:
>> ipmi-fence-node1    (stonith:fence_ipmilan):    Started clusterb
>> ipmi-fence-node2    (stonith:fence_ipmilan):    Started clusterb
>>
>> Does it mean both IPMI stonith devices are working correctly? (rest of
>> resources can failover to another node correctly)
>>
>
> Yes, they are working correctly.
>
> When it becomes important to run a fence agent to kill the other node, it
> will be executed from the other node, so where the fence agent currently
> resides is not important.
>
> Does "started on node" means which node is controlling fence behavior?
even all fence agents and resources "started on same node", the cluster
fence behavior still work correctly?


Thanks a lot.

> m,
>
>
>>
>> Should I use location constraints to keep the stonith devices from running
>> on the same node?
>> # pcs constraint location ipmi-fence-node1 prefers clustera
>> # pcs constraint location ipmi-fence-node2 prefers clusterb
>>
>> thanks a lot
>>
>> On Tue, May 2, 2017 at 4:25 PM, Marek Grac  wrote:
>>
>>> Hi,
>>>
>>>
>>>
>>> On Tue, May 2, 2017 at 3:39 AM, Albert Weng 
>>> wrote:
>>>
 Hi All,

 I have created an active/passive Pacemaker cluster on RHEL 7.

 here is my environment:
 clustera : 192.168.11.1
 clusterb : 192.168.11.2
 clustera-ilo4 : 192.168.11.10
 clusterb-ilo4 : 192.168.11.11

 Both nodes are connected to SAN storage for shared storage.

 I used the following commands to create a stonith device for each node:
 # pcs -f stonith_cfg stonith create ipmi-fence-node1 fence_ipmilan
 parms lanplus="true" pcmk_host_list="clustera"
 pcmk_host_check="static-list" action="reboot" ipaddr="192.168.11.10"
 login=administrator passwd=1234322 op monitor interval=60s

 # pcs -f stonith_cfg stonith create ipmi-fence-node02 fence_ipmilan
 parms lanplus="true" pcmk_host_list="clusterb"
 pcmk_host_check="static-list" action="reboot" ipaddr="192.168.11.11"
 login=USERID passwd=password op monitor interval=60s

 # pcs status
 ipmi-fence-node1 clustera
 ipmi-fence-node2 clusterb

 but when I fail over to the passive node and run
 # pcs status

 ipmi-fence-node1    clusterb
 ipmi-fence-node2    clusterb

 why are both fence devices located on the same node?

>>>
>>> When node 'clustera' is down, is there any place where ipmi-fence-node*
>>> can be executed?
>>>
>>> If you are worried that a node cannot fence itself, you are right.
>>> But once 'clustera' becomes available again, an attempt to fence clusterb
>>> will work as expected.
>>>
>>> m,
>>>
>>
>>
>> --
>> Kind regards,
>> Albert Weng
>>
>>
>>
>>
>
>


-- 
Kind regards,
Albert Weng
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] stonith device locate on same host in active/passive cluster

2017-05-02 Thread Marek Grac
On Tue, May 2, 2017 at 11:02 AM, Albert Weng  wrote:

>
> Hi Marek,
>
> thanks for your quick response.
>
> Based on your explanation, when I run "pcs status" I see the
> following for the fence devices:
> ipmi-fence-node1    (stonith:fence_ipmilan):    Started clusterb
> ipmi-fence-node2    (stonith:fence_ipmilan):    Started clusterb
>
> Does it mean both IPMI stonith devices are working correctly? (rest of
> resources can failover to another node correctly)
>

Yes, they are working correctly.

When it becomes important to run a fence agent to kill the other node, it
will be executed from the other node, so where the fence agent currently
resides is not important.

m,


>
> Should I use location constraints to keep the stonith devices from running
> on the same node?
> # pcs constraint location ipmi-fence-node1 prefers clustera
> # pcs constraint location ipmi-fence-node2 prefers clusterb
>
> thanks a lot
>
> On Tue, May 2, 2017 at 4:25 PM, Marek Grac  wrote:
>
>> Hi,
>>
>>
>>
>> On Tue, May 2, 2017 at 3:39 AM, Albert Weng 
>> wrote:
>>
>>> Hi All,
>>>
>>> I have created an active/passive Pacemaker cluster on RHEL 7.
>>>
>>> here is my environment:
>>> clustera : 192.168.11.1
>>> clusterb : 192.168.11.2
>>> clustera-ilo4 : 192.168.11.10
>>> clusterb-ilo4 : 192.168.11.11
>>>
>>> Both nodes are connected to SAN storage for shared storage.
>>>
>>> I used the following commands to create a stonith device for each node:
>>> # pcs -f stonith_cfg stonith create ipmi-fence-node1 fence_ipmilan parms
>>> lanplus="true" pcmk_host_list="clustera" pcmk_host_check="static-list"
>>> action="reboot" ipaddr="192.168.11.10" login=administrator passwd=1234322
>>> op monitor interval=60s
>>>
>>> # pcs -f stonith_cfg stonith create ipmi-fence-node02 fence_ipmilan
>>> parms lanplus="true" pcmk_host_list="clusterb"
>>> pcmk_host_check="static-list" action="reboot" ipaddr="192.168.11.11"
>>> login=USERID passwd=password op monitor interval=60s
>>>
>>> # pcs status
>>> ipmi-fence-node1 clustera
>>> ipmi-fence-node2 clusterb
>>>
>>> but when I fail over to the passive node and run
>>> # pcs status
>>>
>>> ipmi-fence-node1    clusterb
>>> ipmi-fence-node2    clusterb
>>>
>>> why are both fence devices located on the same node?
>>>
>>
>> When node 'clustera' is down, is there any place where ipmi-fence-node*
>> can be executed?
>>
>> If you are worried that a node cannot fence itself, you are right.
>> But once 'clustera' becomes available again, an attempt to fence clusterb
>> will work as expected.
>>
>> m,
>>
>>
>
>
> --
> Kind regards,
> Albert Weng
>
>
>
>
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] stonith device locate on same host in active/passive cluster

2017-05-02 Thread Albert Weng
Hi Marek,

thanks for your quick response.

Based on your explanation, when I run "pcs status" I see the following
for the fence devices:
ipmi-fence-node1    (stonith:fence_ipmilan):    Started clusterb
ipmi-fence-node2    (stonith:fence_ipmilan):    Started clusterb

Does it mean both IPMI stonith devices are working correctly? (rest of
resources can failover to another node correctly)

Should I use location constraints to keep the stonith devices from running
on the same node?
# pcs constraint location ipmi-fence-node1 prefers clustera
# pcs constraint location ipmi-fence-node2 prefers clusterb

thanks a lot

On Tue, May 2, 2017 at 4:25 PM, Marek Grac  wrote:

> Hi,
>
>
>
> On Tue, May 2, 2017 at 3:39 AM, Albert Weng  wrote:
>
>> Hi All,
>>
>> I have created an active/passive Pacemaker cluster on RHEL 7.
>>
>> here is my environment:
>> clustera : 192.168.11.1
>> clusterb : 192.168.11.2
>> clustera-ilo4 : 192.168.11.10
>> clusterb-ilo4 : 192.168.11.11
>>
>> Both nodes are connected to SAN storage for shared storage.
>>
>> I used the following commands to create a stonith device for each node:
>> # pcs -f stonith_cfg stonith create ipmi-fence-node1 fence_ipmilan parms
>> lanplus="true" pcmk_host_list="clustera" pcmk_host_check="static-list"
>> action="reboot" ipaddr="192.168.11.10" login=administrator passwd=1234322
>> op monitor interval=60s
>>
>> # pcs -f stonith_cfg stonith create ipmi-fence-node02 fence_ipmilan parms
>> lanplus="true" pcmk_host_list="clusterb" pcmk_host_check="static-list"
>> action="reboot" ipaddr="192.168.11.11" login=USERID passwd=password op
>> monitor interval=60s
>>
>> # pcs status
>> ipmi-fence-node1 clustera
>> ipmi-fence-node2 clusterb
>>
>> but when I fail over to the passive node and run
>> # pcs status
>>
>> ipmi-fence-node1    clusterb
>> ipmi-fence-node2    clusterb
>>
>> why are both fence devices located on the same node?
>>
>
> When node 'clustera' is down, is there any place where ipmi-fence-node*
> can be executed?
>
> If you are worried that a node cannot fence itself, you are right. But
> once 'clustera' becomes available again, an attempt to fence clusterb
> will work as expected.
>
> m,
>
>


-- 
Kind regards,
Albert Weng


___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] stonith device locate on same host in active/passive cluster

2017-05-02 Thread Marek Grac
Hi,



On Tue, May 2, 2017 at 3:39 AM, Albert Weng  wrote:

> Hi All,
>
> I have created an active/passive Pacemaker cluster on RHEL 7.
>
> here is my environment:
> clustera : 192.168.11.1
> clusterb : 192.168.11.2
> clustera-ilo4 : 192.168.11.10
> clusterb-ilo4 : 192.168.11.11
>
> Both nodes are connected to SAN storage for shared storage.
>
> I used the following commands to create a stonith device for each node:
> # pcs -f stonith_cfg stonith create ipmi-fence-node1 fence_ipmilan parms
> lanplus="true" pcmk_host_list="clustera" pcmk_host_check="static-list"
> action="reboot" ipaddr="192.168.11.10" login=administrator passwd=1234322
> op monitor interval=60s
>
> # pcs -f stonith_cfg stonith create ipmi-fence-node02 fence_ipmilan parms
> lanplus="true" pcmk_host_list="clusterb" pcmk_host_check="static-list"
> action="reboot" ipaddr="192.168.11.11" login=USERID passwd=password op
> monitor interval=60s
>
> # pcs status
> ipmi-fence-node1 clustera
> ipmi-fence-node2 clusterb
>
> but when I fail over to the passive node and run
> # pcs status
>
> ipmi-fence-node1    clusterb
> ipmi-fence-node2    clusterb
>
> why are both fence devices located on the same node?
>

When node 'clustera' is down, is there any place where ipmi-fence-node* can
be executed?

If you are worried that a node cannot fence itself, you are right. But
once 'clustera' becomes available again, an attempt to fence clusterb
will work as expected.

m,
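For anyone reproducing this setup later, the two device definitions quoted
above would look roughly like this when rewrapped as complete commands (a
sketch only: the options and credentials are taken from the post, except that
the bare "parms" word is omitted since pcs takes the device options directly;
argument names may differ on newer pcs/fence_ipmilan releases, and the final
cib-push line is just the usual way to apply a CIB file built with -f, not
something shown in the thread):

   pcs -f stonith_cfg stonith create ipmi-fence-node1 fence_ipmilan \
       lanplus="true" pcmk_host_list="clustera" pcmk_host_check="static-list" \
       action="reboot" ipaddr="192.168.11.10" login=administrator \
       passwd=1234322 op monitor interval=60s
   pcs -f stonith_cfg stonith create ipmi-fence-node02 fence_ipmilan \
       lanplus="true" pcmk_host_list="clusterb" pcmk_host_check="static-list" \
       action="reboot" ipaddr="192.168.11.11" login=USERID \
       passwd=password op monitor interval=60s
   pcs cluster cib-push stonith_cfg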
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] stonith device locate on same host in active/passive cluster

2017-05-01 Thread Albert Weng
Hi All,

Here are the logs from corosync.log that might help:

Apr 25 10:29:32 [15334] gmlcdbw02pengine: info: native_print:
ipmi-fence-db01(stonith:fence_ipmilan):Started gmlcdbw01
Apr 25 10:29:32 [15334] gmlcdbw02pengine: info: native_print:
ipmi-fence-db02(stonith:fence_ipmilan):Started gmlcdbw02

Apr 25 10:29:32 [15334] gmlcdbw02pengine: info: RecurringOp:
 Start recurring monitor (60s) for ipmi-fence-db01 on gmlcdbw02
Apr 25 10:29:32 [15334] gmlcdbw02pengine:   notice: LogActions:
Moveipmi-fence-db01(Started gmlcdbw01 -> gmlcdbw02)
Apr 25 10:29:32 [15334] gmlcdbw02pengine: info: LogActions:
Leave   ipmi-fence-db02(Started gmlcdbw02)
Apr 25 10:29:32 [15335] gmlcdbw02   crmd:   notice: te_rsc_command:
Initiating action 11: stop ipmi-fence-db01_stop_0 on gmlcdbw01
Apr 25 10:29:32 [15330] gmlcdbw02cib: info: cib_perform_op:
+
/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='ipmi-fence-db01']/lrm_rsc_op[@id='ipmi-fence-db01_last_0']:
@operation_key=ipmi-fence-db01_stop_0, @operation=stop,
@transition-key=11:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@transition-magic=0:0;11:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@call-id=75, @last-run=1493087372, @last-rc-change=1493087372, @exec-time=0
Apr 25 10:29:32 [15330] gmlcdbw02cib: info: cib_perform_op:
+
/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='ipmi-fence-db01']/lrm_rsc_op[@id='ipmi-fence-db01_last_0']:
@operation_key=ipmi-fence-db01_stop_0, @operation=stop,
@transition-key=11:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@transition-magic=0:0;11:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@call-id=75, @last-run=1493087372, @last-rc-change=1493087372, @exec-time=0
Apr 25 10:29:32 [15330] gmlcdbw02cib: info: cib_perform_op:
+
/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='ipmi-fence-db01']/lrm_rsc_op[@id='ipmi-fence-db01_last_0']:
@operation_key=ipmi-fence-db01_stop_0, @operation=stop,
@transition-key=11:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@transition-magic=0:0;11:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@call-id=75, @last-run=1493087372, @last-rc-change=1493087372, @exec-time=0
Apr 25 10:29:32 [15335] gmlcdbw02   crmd: info:
match_graph_event:Action ipmi-fence-db01_stop_0 (11) confirmed on
gmlcdbw01 (rc=0)
Apr 25 10:29:32 [15335] gmlcdbw02   crmd:   notice: te_rsc_command:
Initiating action 12: start ipmi-fence-db01_start_0 on gmlcdbw02 (local)
Apr 25 10:29:32 [15335] gmlcdbw02   crmd: info: do_lrm_rsc_op:
Performing key=12:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850
op=ipmi-fence-db01_start_0
Apr 25 10:29:32 [15332] gmlcdbw02   lrmd: info: log_execute:
executing - rsc:ipmi-fence-db01 action:start call_id:65
Apr 25 10:29:32 [15332] gmlcdbw02   lrmd: info: log_finished:
finished - rsc:ipmi-fence-db01 action:start call_id:65  exit-code:0
exec-time:45ms queue-time:0ms
Apr 25 10:29:33 [15335] gmlcdbw02   crmd:   notice:
process_lrm_event:Operation ipmi-fence-db01_start_0: ok
(node=gmlcdbw02, call=65, rc=0, cib-update=2571, confirmed=true)
Apr 25 10:29:33 [15330] gmlcdbw02cib: info: cib_perform_op:
+
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='ipmi-fence-db01']/lrm_rsc_op[@id='ipmi-fence-db01_last_0']:
@operation_key=ipmi-fence-db01_start_0, @operation=start,
@transition-key=12:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@transition-magic=0:0;12:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@call-id=65, @last-run=1493087372, @last-rc-change=1493087372, @exec-time=45
Apr 25 10:29:33 [15330] gmlcdbw02cib: info: cib_perform_op:
+
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='ipmi-fence-db01']/lrm_rsc_op[@id='ipmi-fence-db01_last_0']:
@operation_key=ipmi-fence-db01_start_0, @operation=start,
@transition-key=12:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@transition-magic=0:0;12:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@call-id=65, @last-run=1493087372, @last-rc-change=1493087372, @exec-time=45
Apr 25 10:29:33 [15330] gmlcdbw02cib: info: cib_perform_op:
+
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='ipmi-fence-db01']/lrm_rsc_op[@id='ipmi-fence-db01_last_0']:
@operation_key=ipmi-fence-db01_start_0, @operation=start,
@transition-key=12:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@transition-magic=0:0;12:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@call-id=65, @last-run=1493087372, @last-rc-change=1493087372, @exec-time=45
Apr 25 10:29:33 [15335] gmlcdbw02   crmd: info:
match_graph_event:Action ipmi-fence-db01_start_0 (12) confirmed on
gmlcdbw02 (rc=0)
Apr 25 10:29:33 [15335] gmlcdbw02   crmd:   notice: te_rsc_command:
Initiating action 13: monitor ipmi-fence-db01_monitor_6 on gmlcdbw02
(local)
Apr 25 10:29:33 [15335] gmlcdbw02   crmd: info: do_lrm_