Hello, 

I have a 3-node Pacemaker + Heartbeat cluster (two real nodes and one quorum node
that cannot run resources) running on Ubuntu 12.04 Server amd64. The cluster
manages a DRBD resource that it mounts and then runs a KVM virtual machine from.
I have configured the cluster to use ocf:pacemaker:ping against two other devices
on the network (192.168.0.128 and 192.168.0.129), with location constraints that
move the resources to the best-connected node (whichever node can see more of
these two devices):

primitive p_ping ocf:pacemaker:ping \
params name="p_ping" host_list="192.168.0.128 192.168.0.129" multiplier="1000" attempts="8" debug="true" \
op start interval="0" timeout="60" \
op monitor interval="10s" timeout="60"
... 

clone cl_ping p_ping \ 
meta interleave="true" 

... 
location loc_run_on_most_connected g_vm \ 
rule $id="loc_run_on_most_connected-rule" p_ping: defined p_ping 
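
For what it's worth, the ping dampen is at its default of 5s (the syslog below
shows "attrd_updater ... -d 5s"). One change I have been considering, but have
not tested, is raising dampen so that attribute updates from both nodes are more
likely to be flushed in the same window; the 30s below is only an example value:

primitive p_ping ocf:pacemaker:ping \
params name="p_ping" host_list="192.168.0.128 192.168.0.129" multiplier="1000" attempts="8" dampen="30s" debug="true" \
op start interval="0" timeout="60" \
op monitor interval="10s" timeout="60"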


Today, 192.168.0.128's network cable was unplugged for a few seconds and then
plugged back in. During this time, Pacemaker recognized that it could not ping
192.168.0.128 and restarted all of the resources, but left them on the same
node. My understanding was that since neither node could ping 192.168.0.128
during this period, Pacemaker would leave the resources alone (keep them
running where they are), and would only migrate or restart them if, for
example, node2 could ping 192.168.0.128 while node1 could not (i.e., move the
resources to the better-connected node). Is this understanding incorrect? If
so, is there a way I can change my configuration so that it only restarts or
migrates resources when one node is actually better connected than the other?
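
For example, would something along these lines be closer to what I want, i.e.
only repel the VM group from a node that has lost both ping targets, instead of
scoring by the raw attribute value? (Untested sketch; the constraint name is
made up.)

location loc_avoid_disconnected g_vm \
rule $id="loc_avoid_disconnected-rule" -inf: not_defined p_ping or p_ping lte 0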

Can anyone tell me why these resources were restarted? I have attached my full
CIB configuration, and the relevant syslog excerpt from node1 is included below.

Thanks, 

Andrew Martin 
Aug 22 10:40:31 node1 ping[1668]: [1823]: WARNING: 192.168.0.128 is inactive: PING 192.168.0.128 (192.168.0.128) 56(84) bytes of data.#012#012--- 192.168.0.128 ping statistics ---#0128 packets transmitted, 0 received, 100% packet loss, time 7055ms
Aug 22 10:40:38 node1 attrd_updater: [1860]: info: Invoked: attrd_updater -n p_ping -v 1000 -d 5s 
Aug 22 10:40:43 node1 attrd: [4402]: notice: attrd_trigger_update: Sending flush op to all hosts for: p_ping (1000)
Aug 22 10:40:44 node1 attrd: [4402]: notice: attrd_perform_update: Sent update 265: p_ping=1000
Aug 22 10:40:44 node1 crmd: [4403]: info: abort_transition_graph: te_update_diff:164 - Triggered transition abort (complete=1, tag=nvpair, id=status-1ab0690c-5aa0-4d9c-ae4e-b662e0ca54e5-p_ping, name=p_ping, value=1000, magic=NA, cib=0.121.49) : Transient attribute: update
Aug 22 10:40:44 node1 crmd: [4403]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Aug 22 10:40:44 node1 crmd: [4403]: info: do_state_transition: All 3 cluster nodes are eligible to run resources.
Aug 22 10:40:44 node1 crmd: [4403]: info: do_pe_invoke: Query 1023: Requesting the current CIB: S_POLICY_ENGINE
Aug 22 10:40:44 node1 crmd: [4403]: info: do_pe_invoke_callback: Invoking the PE: query=1023, ref=pe_calc-dc-1345650044-1095, seq=130, quorate=1
Aug 22 10:40:44 node1 pengine: [13079]: notice: unpack_rsc_op: Hard error - p_drbd_mount1:0_last_failure_0 failed with rc=5: Preventing ms_drbd_tools from re-starting on quorum
Aug 22 10:40:44 node1 pengine: [13079]: notice: unpack_rsc_op: Hard error - p_drbd_vmstore:0_last_failure_0 failed with rc=5: Preventing ms_drbd_vmstore from re-starting on quorum
Aug 22 10:40:44 node1 pengine: [13079]: notice: unpack_rsc_op: Hard error - p_vm_myvm_last_failure_0 failed with rc=5: Preventing p_vm_myvm from re-starting on quorum
Aug 22 10:40:44 node1 pengine: [13079]: notice: unpack_rsc_op: Hard error - p_drbd_mount2:0_last_failure_0 failed with rc=5: Preventing ms_drbd_crm from re-starting on quorum
Aug 22 10:40:44 node1 pengine: [13079]: notice: unpack_rsc_op: Operation p_drbd_vmstore:0_last_failure_0 found resource p_drbd_vmstore:0 active on node1
Aug 22 10:40:44 node1 pengine: [13079]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active on node1
Aug 22 10:40:44 node1 pengine: [13079]: notice: unpack_rsc_op: Operation p_drbd_mount1:0_last_failure_0 found resource p_drbd_mount1:0 active on node1
Aug 22 10:40:44 node1 pengine: [13079]: notice: RecurringOp:  Start recurring monitor (20s) for p_drbd_mount2:0 on node1
Aug 22 10:40:44 node1 pengine: [13079]: notice: RecurringOp:  Start recurring monitor (10s) for p_drbd_mount2:1 on node2
Aug 22 10:40:44 node1 pengine: [13079]: notice: RecurringOp:  Start recurring monitor (20s) for p_drbd_mount2:0 on node1
Aug 22 10:40:44 node1 pengine: [13079]: notice: RecurringOp:  Start recurring monitor (10s) for p_drbd_mount2:1 on node2
Aug 22 10:40:44 node1 pengine: [13079]: notice: RecurringOp:  Start recurring monitor (20s) for p_drbd_mount1:0 on node1
Aug 22 10:40:44 node1 pengine: [13079]: notice: RecurringOp:  Start recurring monitor (10s) for p_drbd_mount1:1 on node2
Aug 22 10:40:44 node1 pengine: [13079]: notice: RecurringOp:  Start recurring monitor (20s) for p_drbd_mount1:0 on node1
Aug 22 10:40:44 node1 pengine: [13079]: notice: RecurringOp:  Start recurring monitor (10s) for p_drbd_mount1:1 on node2
Aug 22 10:40:44 node1 pengine: [13079]: notice: RecurringOp:  Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
Aug 22 10:40:44 node1 pengine: [13079]: notice: RecurringOp:  Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
Aug 22 10:40:44 node1 pengine: [13079]: notice: RecurringOp:  Start recurring monitor (20s) for p_drbd_vmstore:0 on node1
Aug 22 10:40:44 node1 pengine: [13079]: notice: RecurringOp:  Start recurring monitor (10s) for p_drbd_vmstore:1 on node2
Aug 22 10:40:44 node1 pengine: [13079]: notice: RecurringOp:  Start recurring monitor (20s) for p_fs_vmstore on node2
Aug 22 10:40:44 node1 pengine: [13079]: notice: RecurringOp:  Start recurring monitor (10s) for p_vm_myvm on node2
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Leave   p_ping:0#011(Started node1)
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Leave   p_ping:1#011(Started node2)
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Leave   p_ping:2#011(Stopped)
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Leave   p_sysadmin_notify:0#011(Started node1)
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Leave   p_sysadmin_notify:1#011(Started node2)
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Leave   p_sysadmin_notify:2#011(Stopped)
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Demote  p_drbd_mount2:0#011(Master -> Slave node1)
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Promote p_drbd_mount2:1#011(Slave -> Master node2)
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Demote  p_drbd_mount1:0#011(Master -> Slave node1)
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Promote p_drbd_mount1:1#011(Slave -> Master node2)
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Demote  p_drbd_vmstore:0#011(Master -> Slave node1)
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Promote p_drbd_vmstore:1#011(Slave -> Master node2)
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Move    p_fs_vmstore#011(Started node1 -> node2)
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Move    p_vm_myvm#011(Started node1 -> node2)
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Leave   stonithnode1#011(Started node2)
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Leave   stonithnode2#011(Started node1)
Aug 22 10:40:44 node1 pengine: [13079]: notice: LogActions: Leave   stonithquorum#011(Started node2)
Aug 22 10:40:44 node1 crmd: [4403]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Aug 22 10:40:44 node1 crmd: [4403]: info: unpack_graph: Unpacked transition 760: 89 actions in 89 synapses
Aug 22 10:40:44 node1 crmd: [4403]: info: do_te_invoke: Processing graph 760 (ref=pe_calc-dc-1345650044-1095) derived from /var/lib/pengine/pe-input-2952.bz2
Aug 22 10:40:44 node1 crmd: [4403]: info: te_rsc_command: Initiating action 6: cancel p_drbd_mount2:0_monitor_10000 on node1 (local)
Aug 22 10:40:44 node1 lrmd: [4400]: info: cancel_op: operation monitor[91] on p_drbd_mount2:0 for client 4403, its parameters: drbd_resource=[crm] CRM_meta_role=[Master] CRM_meta_timeout=[30000] CRM_meta_name=[monitor] crm_feature_set=[3.0.5] CRM_meta_notify=[true] CRM_meta_clone_node_max=[1] CRM_meta_clone=[0] CRM_meta_clone_max=[2] CRM_meta_master_node_max=[1] CRM_meta_interval=[10000] CRM_meta_globally_unique=[false] CRM_meta_master_max=[1]  cancelled
Aug 22 10:40:44 node1 crmd: [4403]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_monitor_10000 from 6:760:0:bc91a070-5215-4409-9d67-6ae8c99caeb5: lrm_invoke-lrmd-1345650044-1097
Aug 22 10:40:44 node1 crmd: [4403]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1345650044-1097 from node1
Aug 22 10:40:44 node1 crmd: [4403]: info: match_graph_event: Action p_drbd_mount2:0_monitor_10000 (6) confirmed on node1 (rc=0)
Aug 22 10:40:44 node1 crmd: [4403]: info: te_rsc_command: Initiating action 12: cancel p_drbd_mount2:1_monitor_20000 on node2
Aug 22 10:40:44 node1 crmd: [4403]: info: te_pseudo_action: Pseudo action 63 fired and confirmed
Aug 22 10:40:44 node1 crmd: [4403]: info: te_rsc_command: Initiating action 7: cancel p_drbd_mount1:0_monitor_10000 on node1 (local)
Aug 22 10:40:44 node1 lrmd: [4400]: info: cancel_op: operation monitor[92] on p_drbd_mount1:0 for client 4403, its parameters: drbd_resource=[tools] CRM_meta_role=[Master] CRM_meta_timeout=[30000] CRM_meta_name=[monitor] crm_feature_set=[3.0.5] CRM_meta_notify=[true] CRM_meta_clone_node_max=[1] CRM_meta_clone=[0] CRM_meta_clone_max=[2] CRM_meta_master_node_max=[1] CRM_meta_interval=[10000] CRM_meta_globally_unique=[false] CRM_meta_master_max=[1]  cancelled
Aug 22 10:40:44 node1 crmd: [4403]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:0_monitor_10000 from 7:760:0:bc91a070-5215-4409-9d67-6ae8c99caeb5: lrm_invoke-lrmd-1345650044-1100
Aug 22 10:40:44 node1 crmd: [4403]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1345650044-1100 from node1
Aug 22 10:40:44 node1 crmd: [4403]: info: match_graph_event: Action p_drbd_mount1:0_monitor_10000 (7) confirmed on node1 (rc=0)
Aug 22 10:40:44 node1 crmd: [4403]: info: te_rsc_command: Initiating action 8: cancel p_drbd_mount1:1_monitor_20000 on node2
Aug 22 10:40:44 node1 crmd: [4403]: info: te_pseudo_action: Pseudo action 96 fired and confirmed
Aug 22 10:40:44 node1 crmd: [4403]: info: te_rsc_command: Initiating action 2: cancel p_drbd_vmstore:0_monitor_10000 on node1 (local)
Aug 22 10:40:44 node1 lrmd: [4400]: info: cancel_op: operation monitor[93] on p_drbd_vmstore:0 for client 4403, its parameters: drbd_resource=[vmstore] CRM_meta_role=[Master] CRM_meta_timeout=[30000] CRM_meta_name=[monitor] crm_feature_set=[3.0.5] CRM_meta_notify=[true] CRM_meta_clone_node_max=[1] CRM_meta_clone=[0] CRM_meta_clone_max=[2] CRM_meta_master_node_max=[1] CRM_meta_interval=[10000] CRM_meta_globally_unique=[false] CRM_meta_master_max=[1]  cancelled
Aug 22 10:40:44 node1 crmd: [4403]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_monitor_10000 from 2:760:0:bc91a070-5215-4409-9d67-6ae8c99caeb5: lrm_invoke-lrmd-1345650044-1103
Aug 22 10:40:44 node1 crmd: [4403]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1345650044-1103 from node1
Aug 22 10:40:44 node1 crmd: [4403]: info: match_graph_event: Action p_drbd_vmstore:0_monitor_10000 (2) confirmed on node1 (rc=0)
Aug 22 10:40:44 node1 crmd: [4403]: info: te_rsc_command: Initiating action 9: cancel p_drbd_vmstore:1_monitor_20000 on node2
Aug 22 10:40:44 node1 crmd: [4403]: info: te_pseudo_action: Pseudo action 129 fired and confirmed
Aug 22 10:40:44 node1 crmd: [4403]: info: te_pseudo_action: Pseudo action 141 fired and confirmed
Aug 22 10:40:44 node1 crmd: [4403]: info: te_rsc_command: Initiating action 136: stop p_vm_myvm_stop_0 on node1 (local)
Aug 22 10:40:44 node1 lrmd: [4400]: info: cancel_op: operation monitor[99] on p_vm_myvm for client 4403, its parameters: crm_feature_set=[3.0.5] CRM_meta_name=[monitor] config=[/mnt/storage/vmstore/config/myvm.xml] CRM_meta_interval=[10000] CRM_meta_timeout=[30000]  cancelled
Aug 22 10:40:44 node1 crmd: [4403]: info: do_lrm_rsc_op: Performing key=136:760:0:bc91a070-5215-4409-9d67-6ae8c99caeb5 op=p_vm_myvm_stop_0 )
Aug 22 10:40:44 node1 lrmd: [4400]: info: rsc:p_vm_myvm stop[100] (pid 2011)
Aug 22 10:40:44 node1 crmd: [4403]: info: process_lrm_event: LRM operation p_drbd_mount2:0_monitor_10000 (call=91, status=1, cib-update=0, confirmed=true) Cancelled
Aug 22 10:40:44 node1 crmd: [4403]: info: process_lrm_event: LRM operation p_drbd_mount1:0_monitor_10000 (call=92, status=1, cib-update=0, confirmed=true) Cancelled
Aug 22 10:40:44 node1 crmd: [4403]: info: process_lrm_event: LRM operation p_drbd_vmstore:0_monitor_10000 (call=93, status=1, cib-update=0, confirmed=true) Cancelled
Aug 22 10:40:44 node1 crmd: [4403]: info: process_lrm_event: LRM operation p_vm_myvm_monitor_10000 (call=99, status=1, cib-update=0, confirmed=true) Cancelled
Aug 22 10:40:44 node1 crmd: [4403]: info: te_rsc_command: Initiating action 168: notify p_drbd_mount2:0_pre_notify_demote_0 on node1 (local)
Aug 22 10:40:44 node1 crmd: [4403]: info: do_lrm_rsc_op: Performing key=168:760:0:bc91a070-5215-4409-9d67-6ae8c99caeb5 op=p_drbd_mount2:0_notify_0 )
Aug 22 10:40:44 node1 lrmd: [4400]: info: rsc:p_drbd_mount2:0 notify[101] (pid 2013)
Aug 22 10:40:44 node1 crmd: [4403]: info: te_rsc_command: Initiating action 170: notify p_drbd_mount2:1_pre_notify_demote_0 on node2
Aug 22 10:40:44 node1 crmd: [4403]: info: te_rsc_command: Initiating action 184: notify p_drbd_mount1:0_pre_notify_demote_0 on node1 (local)
Aug 22 10:40:44 node1 crmd: [4403]: info: do_lrm_rsc_op: Performing key=184:760:0:bc91a070-5215-4409-9d67-6ae8c99caeb5 op=p_drbd_mount1:0_notify_0 )
Aug 22 10:40:44 node1 lrmd: [4400]: info: rsc:p_drbd_mount1:0 notify[102] (pid 2015)
Aug 22 10:40:44 node1 crmd: [4403]: info: te_rsc_command: Initiating action 186: notify p_drbd_mount1:1_pre_notify_demote_0 on node2
Aug 22 10:40:44 node1 crmd: [4403]: info: te_rsc_command: Initiating action 200: notify p_drbd_vmstore:0_pre_notify_demote_0 on node1 (local)
Aug 22 10:40:44 node1 crmd: [4403]: info: do_lrm_rsc_op: Performing key=200:760:0:bc91a070-5215-4409-9d67-6ae8c99caeb5 op=p_drbd_vmstore:0_notify_0 )
Aug 22 10:40:44 node1 lrmd: [4400]: info: rsc:p_drbd_vmstore:0 notify[103] (pid 2016)
Aug 22 10:40:44 node1 crmd: [4403]: info: te_rsc_command: Initiating action 202: notify p_drbd_vmstore:1_pre_notify_demote_0 on node2
Aug 22 10:40:44 node1 VirtualDomain[2011]: [2076]: INFO: Issuing graceful shutdown request for domain myvm.
Aug 22 10:40:44 node1 lrmd: [4400]: info: operation notify[101] on p_drbd_mount2:0 for client 4403: pid 2013 exited with return code 0
Aug 22 10:40:44 node1 crmd: [4403]: info: send_direct_ack: ACK'ing resource op p_drbd_mount2:0_notify_0 from 168:760:0:bc91a070-5215-4409-9d67-6ae8c99caeb5: lrm_invoke-lrmd-1345650044-1112
Aug 22 10:40:44 node1 crmd: [4403]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1345650044-1112 from node1
Aug 22 10:40:44 node1 crmd: [4403]: info: match_graph_event: Action p_drbd_mount2:0_notify_0 (168) confirmed on node1 (rc=0)
Aug 22 10:40:44 node1 crmd: [4403]: info: process_lrm_event: LRM operation p_drbd_mount2:0_notify_0 (call=101, rc=0, cib-update=0, confirmed=true) ok
Aug 22 10:40:44 node1 lrmd: [4400]: info: operation notify[102] on p_drbd_mount1:0 for client 4403: pid 2015 exited with return code 0
Aug 22 10:40:44 node1 crmd: [4403]: info: send_direct_ack: ACK'ing resource op p_drbd_mount1:0_notify_0 from 184:760:0:bc91a070-5215-4409-9d67-6ae8c99caeb5: lrm_invoke-lrmd-1345650044-1113
Aug 22 10:40:44 node1 crmd: [4403]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1345650044-1113 from node1
Aug 22 10:40:44 node1 crmd: [4403]: info: match_graph_event: Action p_drbd_mount1:0_notify_0 (184) confirmed on node1 (rc=0)
Aug 22 10:40:44 node1 crmd: [4403]: info: process_lrm_event: LRM operation p_drbd_mount1:0_notify_0 (call=102, rc=0, cib-update=0, confirmed=true) ok
Aug 22 10:40:44 node1 lrmd: [4400]: info: operation notify[103] on p_drbd_vmstore:0 for client 4403: pid 2016 exited with return code 0
Aug 22 10:40:44 node1 crmd: [4403]: info: send_direct_ack: ACK'ing resource op p_drbd_vmstore:0_notify_0 from 200:760:0:bc91a070-5215-4409-9d67-6ae8c99caeb5: lrm_invoke-lrmd-1345650044-1114
Aug 22 10:40:44 node1 crmd: [4403]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1345650044-1114 from node1
Aug 22 10:40:44 node1 crmd: [4403]: info: match_graph_event: Action p_drbd_vmstore:0_notify_0 (200) confirmed on node1 (rc=0)
Aug 22 10:40:44 node1 crmd: [4403]: info: process_lrm_event: LRM operation p_drbd_vmstore:0_notify_0 (call=103, rc=0, cib-update=0, confirmed=true) ok
Aug 22 10:40:44 node1 lrmd: [4400]: info: RA output: (p_vm_myvm:stop:stdout) Domain myvm is being shutdown
Aug 22 10:40:44 node1 pengine: [13079]: notice: process_pe_message: Transition 760: PEngine Input stored in: /var/lib/pengine/pe-input-2952.bz2
Aug 22 10:40:45 node1 crmd: [4403]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1345650044-33 from node2
Aug 22 10:40:45 node1 crmd: [4403]: info: match_graph_event: Action p_drbd_mount2:1_monitor_20000 (12) confirmed on node2 (rc=0)
Aug 22 10:40:45 node1 crmd: [4403]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1345650044-34 from node2
Aug 22 10:40:45 node1 crmd: [4403]: info: match_graph_event: Action p_drbd_mount1:1_monitor_20000 (8) confirmed on node2 (rc=0)
Aug 22 10:40:45 node1 crmd: [4403]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1345650044-35 from node2
Aug 22 10:40:45 node1 crmd: [4403]: info: match_graph_event: Action p_drbd_vmstore:1_monitor_20000 (9) confirmed on node2 (rc=0)
Aug 22 10:40:45 node1 crmd: [4403]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1345650044-36 from node2
Aug 22 10:40:45 node1 crmd: [4403]: info: match_graph_event: Action p_drbd_mount2:1_notify_0 (170) confirmed on node2 (rc=0)
Aug 22 10:40:45 node1 crmd: [4403]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1345650044-37 from node2
Aug 22 10:40:45 node1 crmd: [4403]: info: match_graph_event: Action p_drbd_mount1:1_notify_0 (186) confirmed on node2 (rc=0)
Aug 22 10:40:45 node1 crmd: [4403]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1345650044-38 from node2
Aug 22 10:40:45 node1 crmd: [4403]: info: match_graph_event: Action p_drbd_vmstore:1_notify_0 (202) confirmed on node2 (rc=0)
Aug 22 10:40:45 node1 crmd: [4403]: info: te_pseudo_action: Pseudo action 64 fired and confirmed
Aug 22 10:40:45 node1 crmd: [4403]: info: te_pseudo_action: Pseudo action 97 fired and confirmed
Aug 22 10:40:45 node1 crmd: [4403]: info: te_pseudo_action: Pseudo action 130 fired and confirmed
Aug 22 10:40:50 node1 crmd: [4403]: info: abort_transition_graph: te_update_diff:164 - Triggered transition abort (complete=0, tag=nvpair, id=status-645e09b4-aee5-4cec-a241-8bd4e03a78c3-p_ping, name=p_ping, value=1000, magic=NA, cib=0.121.57) : Transient attribute: update
Aug 22 10:40:50 node1 crmd: [4403]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Aug 22 10:40:50 node1 crmd: [4403]: info: update_abort_priority: Abort action done superceeded by restart
Aug 22 10:41:02 node1 attrd_updater: [2261]: info: Invoked: attrd_updater -n p_ping -v 2000 -d 5s 
Aug 22 10:41:07 node1 attrd: [4402]: notice: attrd_trigger_update: Sending flush op to all hosts for: p_ping (2000)
Aug 22 10:41:08 node1 attrd: [4402]: notice: attrd_perform_update: Sent update 269: p_ping=2000
Aug 22 10:41:08 node1 crmd: [4403]: info: abort_transition_graph: te_update_diff:164 - Triggered transition abort (complete=0, tag=nvpair, id=status-1ab0690c-5aa0-4d9c-ae4e-b662e0ca54e5-p_ping, name=p_ping, value=2000, magic=NA, cib=0.121.59) : Transient attribute: update
Aug 22 10:41:14 node1 crmd: [4403]: info: abort_transition_graph: te_update_diff:164 - Triggered transition abort (complete=0, tag=nvpair, id=status-645e09b4-aee5-4cec-a241-8bd4e03a78c3-p_ping, name=p_ping, value=2000, magic=NA, cib=0.121.61) : Transient attribute: update
Aug 22 10:41:26 node1 attrd_updater: [2515]: info: Invoked: attrd_updater -n p_ping -v 2000 -d 5s 
Aug 22 10:41:50 node1 attrd_updater: [2684]: info: Invoked: attrd_updater -n p_ping -v 2000 -d 5s 
Aug 22 10:42:10 node1 VirtualDomain[2011]: [2926]: ERROR: Virtual domain myvm has unknown status "in shutdown"!
Aug 22 10:42:10 node1 VirtualDomain[2011]: [2928]: INFO: Issuing forced shutdown (destroy) request for domain myvm.
Aug 22 10:42:10 node1 kernel: [646819.400576] br0: port 2(vnet0) entering forwarding state
Aug 22 10:42:10 node1 kernel: [646819.402688] br0: port 2(vnet0) entering disabled state
Aug 22 10:42:10 node1 kernel: [646819.402937] device vnet0 left promiscuous mode
Aug 22 10:42:10 node1 kernel: [646819.402941] br0: port 2(vnet0) entering disabled state
Aug 22 10:42:12 node1 ntpd[4442]: Deleting interface #14 vnet0, fe80::fc16:3eff:fe32:3582#123, interface stats: received=0, sent=0, dropped=0, active_time=636760 secs
Aug 22 10:42:12 node1 ntpd[4442]: peers refreshed
Aug 22 10:42:12 node1 kernel: [646821.705681] type=1400 audit(1345650132.902:48): apparmor="STATUS" operation="profile_remove" name="libvirt-14a9dd6b-7a80-b286-8558-8c0c1f0324dc" pid=2941 comm="apparmor_parser"
Aug 22 10:42:13 node1 lrmd: [4400]: info: RA output: (p_vm_myvm:stop:stderr) Domain myvm destroyed
Aug 22 10:42:13 node1 lrmd: [4400]: info: operation stop[100] on p_vm_myvm for client 4403: pid 2011 exited with return code 0
Aug 22 10:42:13 node1 crmd: [4403]: info: process_lrm_event: LRM operation p_vm_myvm_stop_0 (call=100, rc=0, cib-update=1027, confirmed=true) ok
Aug 22 10:42:13 node1 crmd: [4403]: info: match_graph_event: Action p_vm_myvm_stop_0 (136) confirmed on node1 (rc=0)
Aug 22 10:42:13 node1 crmd: [4403]: info: run_graph: ====================================================
Aug 22 10:42:13 node1 crmd: [4403]: notice: run_graph: Transition 760 (Complete=20, Pending=0, Fired=0, Skipped=39, Incomplete=30, Source=/var/lib/pengine/pe-input-2952.bz2): Stopped
Aug 22 10:42:13 node1 crmd: [4403]: info: te_graph_trigger: Transition 760 is now complete
Aug 22 10:42:13 node1 crmd: [4403]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Aug 22 10:42:13 node1 crmd: [4403]: info: do_state_transition: All 3 cluster nodes are eligible to run resources.
Aug 22 10:42:13 node1 crmd: [4403]: info: do_pe_invoke: Query 1028: Requesting the current CIB: S_POLICY_ENGINE
Aug 22 10:42:13 node1 crmd: [4403]: info: do_pe_invoke_callback: Invoking the PE: query=1028, ref=pe_calc-dc-1345650133-1115, seq=130, quorate=1
Aug 22 10:42:13 node1 pengine: [13079]: notice: unpack_rsc_op: Hard error - p_drbd_mount1:0_last_failure_0 failed with rc=5: Preventing ms_drbd_tools from re-starting on quorum
Aug 22 10:42:13 node1 pengine: [13079]: notice: unpack_rsc_op: Hard error - p_drbd_vmstore:0_last_failure_0 failed with rc=5: Preventing ms_drbd_vmstore from re-starting on quorum
Aug 22 10:42:13 node1 pengine: [13079]: notice: unpack_rsc_op: Hard error - p_vm_myvm_last_failure_0 failed with rc=5: Preventing p_vm_myvm from re-starting on quorum
Aug 22 10:42:13 node1 pengine: [13079]: notice: unpack_rsc_op: Hard error - p_drbd_mount2:0_last_failure_0 failed with rc=5: Preventing ms_drbd_crm from re-starting on quorum
Aug 22 10:42:13 node1 pengine: [13079]: notice: unpack_rsc_op: Operation p_drbd_vmstore:0_last_failure_0 found resource p_drbd_vmstore:0 active on node1
Aug 22 10:42:13 node1 pengine: [13079]: notice: unpack_rsc_op: Operation p_drbd_mount2:0_last_failure_0 found resource p_drbd_mount2:0 active on node1
Aug 22 10:42:13 node1 pengine: [13079]: notice: unpack_rsc_op: Operation p_drbd_mount1:0_last_failure_0 found resource p_drbd_mount1:0 active on node1
Aug 22 10:42:13 node1 pengine: [13079]: notice: RecurringOp:  Start recurring monitor (10s) for p_vm_myvm on node1
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Leave   p_ping:0#011(Started node1)
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Leave   p_ping:1#011(Started node2)
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Leave   p_ping:2#011(Stopped)
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Leave   p_sysadmin_notify:0#011(Started node1)
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Leave   p_sysadmin_notify:1#011(Started node2)
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Leave   p_sysadmin_notify:2#011(Stopped)
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Leave   p_drbd_mount2:0#011(Master node1)
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Leave   p_drbd_mount2:1#011(Slave node2)
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Leave   p_drbd_mount1:0#011(Master node1)
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Leave   p_drbd_mount1:1#011(Slave node2)
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Leave   p_drbd_vmstore:0#011(Master node1)
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Leave   p_drbd_vmstore:1#011(Slave node2)
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Leave   p_fs_vmstore#011(Started node1)
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Start   p_vm_myvm#011(node1)
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Leave   stonithnode1#011(Started node2)
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Leave   stonithnode2#011(Started node1)
Aug 22 10:42:13 node1 pengine: [13079]: notice: LogActions: Leave   stonithquorum#011(Started node2)
Aug 22 10:42:13 node1 crmd: [4403]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Aug 22 10:42:13 node1 crmd: [4403]: WARN: destroy_action: Cancelling timer for action 6 (src=1915)
Aug 22 10:42:13 node1 crmd: [4403]: WARN: destroy_action: Cancelling timer for action 7 (src=1917)
Aug 22 10:42:13 node1 crmd: [4403]: WARN: destroy_action: Cancelling timer for action 2 (src=1919)
Aug 22 10:42:13 node1 crmd: [4403]: info: unpack_graph: Unpacked transition 761: 4 actions in 4 synapses
Aug 22 10:42:13 node1 crmd: [4403]: info: do_te_invoke: Processing graph 761 (ref=pe_calc-dc-1345650133-1115) derived from /var/lib/pengine/pe-input-2953.bz2
Aug 22 10:42:13 node1 crmd: [4403]: info: te_pseudo_action: Pseudo action 130 fired and confirmed
Aug 22 10:42:13 node1 crmd: [4403]: info: te_rsc_command: Initiating action 128: start p_vm_myvm_start_0 on node1 (local)
Aug 22 10:42:13 node1 crmd: [4403]: info: do_lrm_rsc_op: Performing key=128:761:0:bc91a070-5215-4409-9d67-6ae8c99caeb5 op=p_vm_myvm_start_0 )
Aug 22 10:42:13 node1 lrmd: [4400]: info: rsc:p_vm_myvm start[104] (pid 2957)
Aug 22 10:42:13 node1 pengine: [13079]: notice: process_pe_message: Transition 761: PEngine Input stored in: /var/lib/pengine/pe-input-2953.bz2
Aug 22 10:42:14 node1 VirtualDomain[2957]: [2981]: INFO: Domain name "myvm" saved to /var/run/resource-agents/VirtualDomain-p_vm_myvm.state.
Aug 22 10:42:14 node1 kernel: [646823.129002] type=1400 audit(1345650134.326:49): apparmor="DENIED" operation="open" parent=2950 profile="/usr/lib/libvirt/virt-aa-helper" name="/dev/drbd1" pid=2987 comm="virt-aa-helper" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Aug 22 10:42:14 node1 kernel: [646823.129177] type=1400 audit(1345650134.330:50): apparmor="DENIED" operation="open" parent=2950 profile="/usr/lib/libvirt/virt-aa-helper" name="/dev/drbd2" pid=2987 comm="virt-aa-helper" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Aug 22 10:42:14 node1 kernel: [646823.363494] type=1400 audit(1345650134.562:51): apparmor="STATUS" operation="profile_load" name="libvirt-14a9dd6b-7a80-b286-8558-8c0c1f0324dc" pid=2988 comm="apparmor_parser"
Aug 22 10:42:14 node1 attrd_updater: [2994]: info: Invoked: attrd_updater -n p_ping -v 2000 -d 5s 
Aug 22 10:42:15 node1 kernel: [646823.832323] device vnet0 entered promiscuous mode

Attachment: config.cib
