Hi,

We are trying to build an HA setup for our servers using the DRBD + Corosync + Pacemaker stack.

Attached are the configuration files for Corosync/Pacemaker and DRBD.

We are getting errors while testing this setup.
1. When we stop Corosync on the master, server1 (lock), that node is STONITHed and the slave, server2 (sher), is promoted to master. From there the sequence is as follows (a rough sketch of the test commands is below):
   - When server1 (lock) reboots, res_exportfs_export1 is reported as started on both servers; the resource then goes into a failed state and both nodes are marked unclean.
   - Server1 (lock) is rebooted again; server2 (sher) remains master, but unclean.
   - After server1 (lock) comes back up, server2 (sher) is STONITHed, leaving server1 (lock) as a slave and the only online node.
   - When server2 (sher) comes back up, both nodes are slaves and the resource group (rg_export) is stopped.
   - Server2 (sher) then becomes master, server1 (lock) stays slave, and the resource group is started.
   At this point the configuration becomes stable.
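
   For reference, the test amounts to roughly the following (a sketch only; exact service names depend on the distribution):

       # on the current master (server1/lock): stop cluster membership, which should trigger fencing
       service corosync stop

       # on the surviving node (server2/sher): watch promotion, inactive resources and fail counts
       crm_mon -1rf

       # once the fenced node has rejoined
       crm status
       crm_verify -LV    # sanity-check the live CIB for configuration problems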


Please find attached the syslog of server2 (sher) (also pasted below), covering the period from its promotion to master until its first reboot, when the exportfs resource goes into the failed state.

Please let us know whether the configuration is appropriate. From the logs we could not figure out the exact reason for the resource failure, so any comments on this scenario would be very helpful.

Thanks,
Priyanka

sher(new master) =>

Oct  8 18:01:20 sher kernel: [  886.867496] e1000e: eth0 NIC Link is Down
Oct  8 18:01:22 sher exportfs(res_exportfs_root)[5566]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:01:27 sher exportfs(res_exportfs_export1)[5580]: INFO: Directory 
/mnt/vms/export1 is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:01:30 sher kernel: [  896.771854] e1000e: eth0 NIC Link is Up 1000 
Mbps Full Duplex, Flow Control: Rx/Tx
Oct  8 18:01:30 sher corosync[1320]:  [TOTEM ] A new membership 
(192.168.0.21:3444) was formed. Members joined: 102
Oct  8 18:01:30 sher crmd[1465]:    error: pcmk_cpg_membership: Node lock[102] 
appears to be online even though we think it is dead
Oct  8 18:01:30 sher crmd[1465]:   notice: crm_update_peer_state: 
pcmk_cpg_membership: Node lock[102] - state is now member (was lost)
Oct  8 18:01:30 sher corosync[1320]:  [QUORUM] Members[2]: 101 102
Oct  8 18:01:30 sher corosync[1320]:  [MAIN  ] Completed service 
synchronization, ready to provide service.
Oct  8 18:01:30 sher pacemakerd[1458]:   notice: crm_update_peer_state: 
pcmk_quorum_notification: Node lock[102] - state is now member (was lost)
Oct  8 18:01:32 sher exportfs(res_exportfs_root)[5652]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:01:37 sher exportfs(res_exportfs_export1)[5666]: INFO: Directory 
/mnt/vms/export1 is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:01:43 sher exportfs(res_exportfs_root)[5738]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:01:47 sher exportfs(res_exportfs_export1)[5779]: INFO: Directory 
/mnt/vms/export1 is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:01:49 sher crmd[1465]:   notice: do_state_transition: State 
transition S_IDLE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL 
origin=do_election_count_vote ]
Oct  8 18:01:49 sher crmd[1465]:   notice: do_state_transition: State 
transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC 
cause=C_FSA_INTERNAL origin=do_election_check ]
Oct  8 18:01:49 sher attrd[1463]:   notice: attrd_local_callback: Sending full 
refresh (origin=crmd)
Oct  8 18:01:49 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: fail-count-res_exportfs_root (2)
Oct  8 18:01:49 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: master-res_drbd_export (10000)
Oct  8 18:01:49 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: last-failure-res_exportfs_root (1444306659)
Oct  8 18:01:49 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: probe_complete (true)
Oct  8 18:01:49 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: fail-count-fence_lock (INFINITY)
Oct  8 18:01:49 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: last-failure-fence_lock (1444306771)
Oct  8 18:01:50 sher pengine[1464]:   notice: unpack_config: On loss of CCM 
Quorum: Ignore
Oct  8 18:01:50 sher pengine[1464]:  warning: unpack_rsc_op: Processing failed 
op monitor for res_exportfs_root:0 on sher: not running (7)
Oct  8 18:01:50 sher pengine[1464]:  warning: unpack_rsc_op: Processing failed 
op start for fence_lock on sher: unknown error (1)
Oct  8 18:01:50 sher pengine[1464]:  warning: common_apply_stickiness: Forcing 
fence_lock away from sher after 1000000 failures (max=1000000)
Oct  8 18:01:50 sher pengine[1464]:   notice: LogActions: Start   
fence_sher#011(lock)
Oct  8 18:01:50 sher pengine[1464]:   notice: LogActions: Start   
res_drbd_export:1#011(lock)
Oct  8 18:01:50 sher pengine[1464]:   notice: LogActions: Start   
res_exportfs_root:1#011(lock)
Oct  8 18:01:50 sher pengine[1464]:   notice: LogActions: Start   
res_nfsserver:1#011(lock)
Oct  8 18:01:50 sher pengine[1464]:   notice: process_pe_message: Calculated 
Transition 10: /var/lib/pacemaker/pengine/pe-input-2641.bz2
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 9: 
monitor fence_sher_monitor_0 on lock
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
10: monitor fence_lock_monitor_0 on lock
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
11: monitor res_drbd_export:1_monitor_0 on lock
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
12: monitor res_fs_monitor_0 on lock
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
13: monitor res_ip_monitor_0 on lock
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
14: monitor res_exportfs_export1_monitor_0 on lock
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
15: monitor res_exportfs_root:1_monitor_0 on lock
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
16: monitor res_nfsserver:1_monitor_0 on lock
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
79: notify res_drbd_export_pre_notify_start_0 on sher (local)
Oct  8 18:01:50 sher crmd[1465]:   notice: process_lrm_event: LRM operation 
res_drbd_export_notify_0 (call=132, rc=0, cib-update=0, confirmed=true) ok
Oct  8 18:01:50 sher crmd[1465]:  warning: status_from_rc: Action 14 
(res_exportfs_export1_monitor_0) on lock failed (target: 7 vs. rc: 0): Error
Oct  8 18:01:50 sher crmd[1465]:  warning: status_from_rc: Action 15 
(res_exportfs_root:1_monitor_0) on lock failed (target: 7 vs. rc: 0): Error
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 8: 
probe_complete probe_complete on lock - no waiting
Oct  8 18:01:50 sher crmd[1465]:   notice: run_graph: Transition 10 
(Complete=15, Pending=0, Fired=0, Skipped=9, Incomplete=7, 
Source=/var/lib/pacemaker/pengine/pe-input-2641.bz2): Stopped
Oct  8 18:01:50 sher pengine[1464]:   notice: unpack_config: On loss of CCM 
Quorum: Ignore
Oct  8 18:01:50 sher pengine[1464]:  warning: unpack_rsc_op: Processing failed 
op monitor for res_exportfs_root:1 on sher: not running (7)
Oct  8 18:01:50 sher pengine[1464]:  warning: unpack_rsc_op: Processing failed 
op start for fence_lock on sher: unknown error (1)
Oct  8 18:01:50 sher pengine[1464]:  warning: common_apply_stickiness: Forcing 
fence_lock away from sher after 1000000 failures (max=1000000)
Oct  8 18:01:50 sher pengine[1464]:    error: native_create_actions: Resource 
res_exportfs_export1 (ocf::exportfs) is active on 2 nodes attempting recovery
Oct  8 18:01:50 sher pengine[1464]:  warning: native_create_actions: See 
http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Oct  8 18:01:50 sher pengine[1464]:   notice: LogActions: Start   
fence_sher#011(lock)
Oct  8 18:01:50 sher pengine[1464]:   notice: LogActions: Start   
res_drbd_export:1#011(lock)
Oct  8 18:01:50 sher pengine[1464]:   notice: LogActions: Move    
res_exportfs_export1#011(Started lock -> sher)
Oct  8 18:01:50 sher pengine[1464]:   notice: LogActions: Start   
res_nfsserver:1#011(lock)
Oct  8 18:01:50 sher pengine[1464]:    error: process_pe_message: Calculated 
Transition 11: /var/lib/pacemaker/pengine/pe-error-303.bz2
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
10: start fence_sher_start_0 on lock
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
47: stop res_exportfs_export1_stop_0 on sher (local)
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
46: stop res_exportfs_export1_stop_0 on lock
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
55: monitor res_exportfs_root_monitor_10000 on lock
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 8: 
probe_complete probe_complete on lock - no waiting
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
73: notify res_drbd_export_pre_notify_start_0 on sher (local)
Oct  8 18:01:50 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
64: start res_nfsserver:1_start_0 on lock
Oct  8 18:01:50 sher exportfs(res_exportfs_export1)[5874]: INFO: Directory 
/mnt/vms/export1 is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:01:50 sher crmd[1465]:  warning: status_from_rc: Action 46 
(res_exportfs_export1_stop_0) on lock failed (target: 0 vs. rc: 5): Error
Oct  8 18:01:50 sher crmd[1465]:  warning: update_failcount: Updating failcount 
for res_exportfs_export1 on lock after failed stop: rc=5 (update=INFINITY, 
time=1444307510)
Oct  8 18:01:50 sher exportfs(res_exportfs_export1)[5874]: INFO: Un-exporting 
file system ...
Oct  8 18:01:50 sher exportfs(res_exportfs_export1)[5874]: INFO: unexporting 
10.105.0.0/255.255.0.0:/mnt/vms/export1
Oct  8 18:01:50 sher exportfs(res_exportfs_export1)[5874]: INFO: Sleeping 92 
seconds to accommodate for NFSv4 lease expiry
Oct  8 18:01:50 sher crmd[1465]:  warning: update_failcount: Updating failcount 
for res_exportfs_export1 on lock after failed stop: rc=5 (update=INFINITY, 
time=1444307510)
Oct  8 18:01:50 sher crmd[1465]:   notice: process_lrm_event: LRM operation 
res_drbd_export_notify_0 (call=139, rc=0, cib-update=0, confirmed=true) ok
Oct  8 18:01:53 sher exportfs(res_exportfs_root)[5929]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:02:00 sher crmd[1465]:  warning: status_from_rc: Action 55 
(res_exportfs_root_monitor_10000) on lock failed (target: 0 vs. rc: 7): Error
Oct  8 18:02:00 sher crmd[1465]:  warning: update_failcount: Updating failcount 
for res_exportfs_root on lock after failed monitor: rc=7 (update=value++, 
time=1444307520)
Oct  8 18:02:03 sher exportfs(res_exportfs_root)[6005]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:02:10 sher lrmd[1462]:  warning: child_timeout_callback: 
res_exportfs_export1_stop_0 process (PID 5874) timed out
Oct  8 18:02:10 sher lrmd[1462]:  warning: operation_finished: 
res_exportfs_export1_stop_0:5874 - timed out after 20000ms
Oct  8 18:02:00 sher crmd[1465]:  warning: update_failcount: Updating failcount 
for res_exportfs_root on lock after failed monitor: rc=7 (update=value++, 
time=1444307520)
Oct  8 18:02:10 sher crmd[1465]:    error: process_lrm_event: LRM operation 
res_exportfs_export1_stop_0 (136) Timed Out (timeout=20000ms)
Oct  8 18:02:10 sher crmd[1465]:  warning: status_from_rc: Action 47 
(res_exportfs_export1_stop_0) on sher failed (target: 0 vs. rc: 1): Error
Oct  8 18:02:10 sher crmd[1465]:  warning: update_failcount: Updating failcount 
for res_exportfs_export1 on sher after failed stop: rc=1 (update=INFINITY, 
time=1444307530)
Oct  8 18:02:10 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: fail-count-res_exportfs_export1 (INFINITY)
Oct  8 18:02:10 sher attrd[1463]:   notice: attrd_perform_update: Sent update 
86: fail-count-res_exportfs_export1=INFINITY
Oct  8 18:02:10 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: last-failure-res_exportfs_export1 (1444307530)
Oct  8 18:02:10 sher attrd[1463]:   notice: attrd_perform_update: Sent update 
89: last-failure-res_exportfs_export1=1444307530
Oct  8 18:02:10 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: fail-count-res_exportfs_export1 (INFINITY)
Oct  8 18:02:10 sher attrd[1463]:   notice: attrd_perform_update: Sent update 
91: fail-count-res_exportfs_export1=INFINITY
Oct  8 18:02:10 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: last-failure-res_exportfs_export1 (1444307530)
Oct  8 18:02:10 sher attrd[1463]:   notice: attrd_perform_update: Sent update 
93: last-failure-res_exportfs_export1=1444307530
Oct  8 18:02:13 sher exportfs(res_exportfs_root)[6081]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:02:23 sher exportfs(res_exportfs_root)[6157]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:02:33 sher exportfs(res_exportfs_root)[6260]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:02:43 sher exportfs(res_exportfs_root)[6336]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:02:53 sher exportfs(res_exportfs_root)[6412]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:03:03 sher exportfs(res_exportfs_root)[6488]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:03:13 sher exportfs(res_exportfs_root)[6564]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).


Oct  8 18:03:22 sher crmd[1465]:   notice: run_graph: Transition 11 
(Complete=11, Pending=0, Fired=0, Skipped=12, Incomplete=5, 
Source=/var/lib/pacemaker/pengine/pe-error-303.bz2): Stopped
Oct  8 18:03:22 sher pengine[1464]:   notice: unpack_config: On loss of CCM 
Quorum: Ignore
Oct  8 18:03:22 sher pengine[1464]:   notice: unpack_rsc_op: Preventing 
res_exportfs_export1 from re-starting on lock: operation stop failed 'not 
installed' (rc=5)
Oct  8 18:03:22 sher pengine[1464]:  warning: unpack_rsc_op: Processing failed 
op stop for res_exportfs_export1 on lock: not installed (5)
Oct  8 18:03:22 sher pengine[1464]:  warning: pe_fence_node: Node lock will be 
fenced because of resource failure(s)
Oct  8 18:03:22 sher pengine[1464]:  warning: unpack_rsc_op: Processing failed 
op monitor for res_exportfs_root:0 on lock: not running (7)
Oct  8 18:03:22 sher pengine[1464]:  warning: unpack_rsc_op: Processing failed 
op stop for res_exportfs_export1 on sher: unknown error (1)
Oct  8 18:03:22 sher pengine[1464]:  warning: pe_fence_node: Node sher will be 
fenced because of resource failure(s)
Oct  8 18:03:22 sher pengine[1464]:  warning: unpack_rsc_op: Processing failed 
op monitor for res_exportfs_root:1 on sher: not running (7)
Oct  8 18:03:22 sher pengine[1464]:  warning: unpack_rsc_op: Processing failed 
op start for fence_lock on sher: unknown error (1)
Oct  8 18:03:22 sher pengine[1464]:  warning: common_apply_stickiness: Forcing 
res_exportfs_export1 away from lock after 1000000 failures (max=1000000)
Oct  8 18:03:22 sher pengine[1464]:  warning: common_apply_stickiness: Forcing 
fence_lock away from sher after 1000000 failures (max=1000000)
Oct  8 18:03:22 sher pengine[1464]:  warning: common_apply_stickiness: Forcing 
res_exportfs_export1 away from sher after 1000000 failures (max=1000000)
Oct  8 18:03:22 sher pengine[1464]:    error: native_create_actions: Resource 
res_exportfs_export1 (ocf::exportfs) is active on 2 nodes attempting recovery
Oct  8 18:03:22 sher pengine[1464]:  warning: native_create_actions: See 
http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Oct  8 18:03:22 sher pengine[1464]:  warning: stage6: Scheduling Node lock for 
STONITH
Oct  8 18:03:22 sher pengine[1464]:   notice: native_stop_constraints: Stop of 
failed resource res_exportfs_export1 is implicit after lock is fenced
Oct  8 18:03:22 sher pengine[1464]:  warning: stage6: Scheduling Node sher for 
STONITH
Oct  8 18:03:22 sher pengine[1464]:   notice: native_stop_constraints: Stop of 
failed resource res_exportfs_export1 is implicit after sher is fenced
Oct  8 18:03:22 sher pengine[1464]:   notice: LogActions: Stop    
fence_sher#011(lock)
Oct  8 18:03:22 sher pengine[1464]:   notice: LogActions: Demote  
res_drbd_export:0#011(Master -> Stopped sher)
Oct  8 18:03:22 sher pengine[1464]:   notice: LogActions: Stop    
res_fs#011(sher)
Oct  8 18:03:22 sher pengine[1464]:   notice: LogActions: Stop    
res_ip#011(sher)
Oct  8 18:03:22 sher pengine[1464]:   notice: LogActions: Stop    
res_exportfs_export1#011(lock)
Oct  8 18:03:22 sher pengine[1464]:   notice: LogActions: Stop    
res_exportfs_export1#011(sher)
Oct  8 18:03:22 sher pengine[1464]:   notice: LogActions: Stop    
res_exportfs_root:0#011(lock)
Oct  8 18:03:22 sher pengine[1464]:   notice: LogActions: Stop    
res_exportfs_root:1#011(sher)
Oct  8 18:03:22 sher pengine[1464]:   notice: LogActions: Stop    
res_nfsserver:0#011(lock)
Oct  8 18:03:22 sher pengine[1464]:   notice: LogActions: Stop    
res_nfsserver:1#011(sher)
Oct  8 18:03:22 sher pengine[1464]:    error: process_pe_message: Calculated 
Transition 12: /var/lib/pacemaker/pengine/pe-error-304.bz2
Oct  8 18:03:22 sher crmd[1465]:   notice: te_fence_node: Executing reboot 
fencing operation (56) on lock (timeout=60000)
Oct  8 18:03:22 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
73: notify res_drbd_export_pre_notify_demote_0 on sher (local)
Oct  8 18:03:22 sher stonith-ng[1461]:   notice: handle_request: Client 
crmd.1465.e3ae1cca wants to fence (reboot) 'lock' with device '(any)'
Oct  8 18:03:22 sher stonith-ng[1461]:   notice: initiate_remote_stonith_op: 
Initiating remote operation reboot for lock: 
375aeb28-a774-4c2f-82c4-a1c0250d8265 (0)
Oct  8 18:03:22 sher crmd[1465]:   notice: process_lrm_event: LRM operation 
res_drbd_export_notify_0 (call=143, rc=0, cib-update=0, confirmed=true) ok
Oct  8 18:03:22 sher stonith-ng[1461]:   notice: can_fence_host_with_device: 
fence_lock can fence lock: dynamic-list
Oct  8 18:03:23 sher exportfs(res_exportfs_root)[6686]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:03:23 sher kernel: [ 1009.741795] e1000e: eth0 NIC Link is Down
Oct  8 18:03:23 sher corosync[1320]:  [TOTEM ] A processor failed, forming new 
configuration.
Oct  8 18:03:24 sher stonith-ng[1461]:   notice: log_operation: Operation 
'reboot' [6670] (call 3 from crmd.1465) for host 'lock' with device 
'fence_lock' returned: 0 (OK)
Oct  8 18:03:25 sher corosync[1320]:  [TOTEM ] A new membership 
(192.168.0.21:3448) was formed. Members left: 102
Oct  8 18:03:25 sher corosync[1320]:  [QUORUM] Members[1]: 101
Oct  8 18:03:25 sher corosync[1320]:  [MAIN  ] Completed service 
synchronization, ready to provide service.
Oct  8 18:03:25 sher crmd[1465]:   notice: crm_update_peer_state: 
pcmk_quorum_notification: Node lock[102] - state is now lost (was member)
Oct  8 18:03:25 sher pacemakerd[1458]:   notice: crm_update_peer_state: 
pcmk_quorum_notification: Node lock[102] - state is now lost (was member)
Oct  8 18:03:25 sher kernel: [ 1011.487119] e1000e: eth0 NIC Link is Up 10 Mbps 
Full Duplex, Flow Control: None
Oct  8 18:03:25 sher kernel: [ 1011.487228] e1000e 0000:00:19.0 eth0: 10/100 
speed: disabling TSO
Oct  8 18:03:25 sher stonith-ng[1461]:   notice: remote_op_done: Operation 
reboot of lock by sher for crmd.1465@sher.375aeb28: OK
Oct  8 18:03:25 sher crmd[1465]:   notice: tengine_stonith_callback: Stonith 
operation 3/56:12:0:6643f7d4-dd65-49dc-a9db-c9ede2838434: OK (0)
Oct  8 18:03:25 sher crmd[1465]:   notice: tengine_stonith_notify: Peer lock 
was terminated (reboot) by sher for sher: OK 
(ref=375aeb28-a774-4c2f-82c4-a1c0250d8265) by client crmd.1465
Oct  8 18:03:25 sher crmd[1465]:   notice: te_fence_node: Executing reboot 
fencing operation (57) on sher (timeout=60000)
Oct  8 18:03:25 sher stonith-ng[1461]:   notice: handle_request: Client 
crmd.1465.e3ae1cca wants to fence (reboot) 'sher' with device '(any)'
Oct  8 18:03:25 sher stonith-ng[1461]:   notice: initiate_remote_stonith_op: 
Initiating remote operation reboot for sher: 
57357f57-d2e5-4254-ba47-716662b916a9 (0)
Oct  8 18:03:25 sher stonith-ng[1461]:   notice: can_fence_host_with_device: 
fence_lock can not fence sher: dynamic-list
Oct  8 18:03:25 sher stonith-ng[1461]:    error: remote_op_done: Operation 
reboot of sher by sher for crmd.1465@sher.57357f57: No such device
Oct  8 18:03:25 sher crmd[1465]:   notice: tengine_stonith_callback: Stonith 
operation 4/57:12:0:6643f7d4-dd65-49dc-a9db-c9ede2838434: No such device (-19)
Oct  8 18:03:25 sher crmd[1465]:   notice: tengine_stonith_callback: Stonith 
operation 4 for sher failed (No such device): aborting transition.
Oct  8 18:03:25 sher crmd[1465]:   notice: tengine_stonith_notify: Peer sher 
was not terminated (reboot) by sher for sher: No such device 
(ref=57357f57-d2e5-4254-ba47-716662b916a9) by client crmd.1465
Oct  8 18:03:25 sher crmd[1465]:   notice: run_graph: Transition 12 
(Complete=9, Pending=0, Fired=0, Skipped=20, Incomplete=9, 
Source=/var/lib/pacemaker/pengine/pe-error-304.bz2): Stopped
Oct  8 18:03:25 sher crmd[1465]:   notice: too_many_st_failures: No devices 
found in cluster to fence sher, giving up
Oct  8 18:03:25 sher crmd[1465]:   notice: do_state_transition: State 
transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS 
cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct  8 18:03:25 sher crmd[1465]:   notice: do_state_transition: State 
transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL 
origin=abort_transition_graph ]
Oct  8 18:03:25 sher pengine[1464]:   notice: unpack_config: On loss of CCM 
Quorum: Ignore
Oct  8 18:03:25 sher pengine[1464]:  warning: unpack_rsc_op: Processing failed 
op stop for res_exportfs_export1 on sher: unknown error (1)
Oct  8 18:03:25 sher pengine[1464]:  warning: pe_fence_node: Node sher will be 
fenced because of resource failure(s)
Oct  8 18:03:25 sher pengine[1464]:  warning: unpack_rsc_op: Processing failed 
op monitor for res_exportfs_root:0 on sher: not running (7)
Oct  8 18:03:25 sher pengine[1464]:  warning: unpack_rsc_op: Processing failed 
op start for fence_lock on sher: unknown error (1)
Oct  8 18:03:25 sher pengine[1464]:  warning: common_apply_stickiness: Forcing 
fence_lock away from sher after 1000000 failures (max=1000000)
Oct  8 18:03:25 sher pengine[1464]:  warning: common_apply_stickiness: Forcing 
res_exportfs_export1 away from sher after 1000000 failures (max=1000000)
Oct  8 18:03:25 sher pengine[1464]:  warning: stage6: Scheduling Node sher for 
STONITH
Oct  8 18:03:25 sher pengine[1464]:   notice: native_stop_constraints: Stop of 
failed resource res_exportfs_export1 is implicit after sher is fenced
Oct  8 18:03:25 sher pengine[1464]:   notice: LogActions: Demote  
res_drbd_export:0#011(Master -> Stopped sher)
Oct  8 18:03:25 sher pengine[1464]:   notice: LogActions: Stop    
res_fs#011(sher)
Oct  8 18:03:25 sher pengine[1464]:   notice: LogActions: Stop    
res_ip#011(sher)
Oct  8 18:03:25 sher pengine[1464]:   notice: LogActions: Stop    
res_exportfs_export1#011(sher)
Oct  8 18:03:25 sher pengine[1464]:   notice: LogActions: Stop    
res_exportfs_root:0#011(sher)
Oct  8 18:03:25 sher pengine[1464]:   notice: LogActions: Stop    
res_nfsserver:0#011(sher)
Oct  8 18:03:25 sher pengine[1464]:  warning: process_pe_message: Calculated 
Transition 13: /var/lib/pacemaker/pengine/pe-warn-364.bz2
Oct  8 18:03:25 sher crmd[1465]:   notice: te_fence_node: Executing reboot 
fencing operation (51) on sher (timeout=60000)
Oct  8 18:03:25 sher stonith-ng[1461]:   notice: handle_request: Client 
crmd.1465.e3ae1cca wants to fence (reboot) 'sher' with device '(any)'
Oct  8 18:03:25 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
67: notify res_drbd_export_pre_notify_demote_0 on sher (local)
Oct  8 18:03:25 sher stonith-ng[1461]:   notice: initiate_remote_stonith_op: 
Initiating remote operation reboot for sher: 
8bc5d6a4-b32e-4424-80c5-83f686b4f2a0 (0)
Oct  8 18:03:25 sher stonith-ng[1461]:   notice: can_fence_host_with_device: 
fence_lock can not fence sher: dynamic-list
Oct  8 18:03:25 sher stonith-ng[1461]:    error: remote_op_done: Operation 
reboot of sher by sher for crmd.1465@sher.8bc5d6a4: No such device
Oct  8 18:03:25 sher crmd[1465]:   notice: tengine_stonith_callback: Stonith 
operation 5/51:13:0:6643f7d4-dd65-49dc-a9db-c9ede2838434: No such device (-19)
Oct  8 18:03:25 sher crmd[1465]:   notice: tengine_stonith_callback: Stonith 
operation 5 for sher failed (No such device): aborting transition.
Oct  8 18:03:25 sher crmd[1465]:   notice: tengine_stonith_notify: Peer sher 
was not terminated (reboot) by sher for sher: No such device 
(ref=8bc5d6a4-b32e-4424-80c5-83f686b4f2a0) by client crmd.1465
Oct  8 18:03:25 sher crmd[1465]:   notice: process_lrm_event: LRM operation 
res_drbd_export_notify_0 (call=146, rc=0, cib-update=0, confirmed=true) ok
Oct  8 18:03:25 sher crmd[1465]:   notice: run_graph: Transition 13 
(Complete=5, Pending=0, Fired=0, Skipped=19, Incomplete=9, 
Source=/var/lib/pacemaker/pengine/pe-warn-364.bz2): Stopped
Oct  8 18:03:25 sher crmd[1465]:   notice: too_many_st_failures: No devices 
found in cluster to fence sher, giving up
Oct  8 18:03:25 sher crmd[1465]:   notice: do_state_transition: State 
transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS 
cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct  8 18:03:31 sher kernel: [ 1017.803975] e1000e: eth0 NIC Link is Down
Oct  8 18:03:33 sher exportfs(res_exportfs_root)[6784]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:03:33 sher kernel: [ 1020.262569] e1000e: eth0 NIC Link is Up 1000 
Mbps Full Duplex, Flow Control: None
Oct  8 18:03:43 sher exportfs(res_exportfs_root)[6860]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:03:53 sher kernel: [ 1039.432561] e1000e: eth0 NIC Link is Down
Oct  8 18:03:53 sher exportfs(res_exportfs_root)[6936]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:03:55 sher kernel: [ 1041.895210] e1000e: eth0 NIC Link is Up 1000 
Mbps Full Duplex, Flow Control: Rx/Tx
Oct  8 18:04:03 sher exportfs(res_exportfs_root)[7039]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:04:13 sher exportfs(res_exportfs_root)[7115]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:04:23 sher exportfs(res_exportfs_root)[7191]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:04:33 sher exportfs(res_exportfs_root)[7267]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).


Oct  8 18:04:43 sher exportfs(res_exportfs_root)[7343]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:04:46 sher kernel: [ 1092.757491] e1000e: eth0 NIC Link is Down
Oct  8 18:04:53 sher exportfs(res_exportfs_root)[7419]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:04:56 sher kernel: [ 1102.481657] e1000e: eth0 NIC Link is Up 1000 
Mbps Full Duplex, Flow Control: Rx/Tx
Oct  8 18:04:56 sher corosync[1320]:  [TOTEM ] A new membership 
(192.168.0.21:3452) was formed. Members joined: 102
Oct  8 18:04:56 sher corosync[1320]:  [QUORUM] Members[2]: 101 102
Oct  8 18:04:56 sher corosync[1320]:  [MAIN  ] Completed service 
synchronization, ready to provide service.
Oct  8 18:04:56 sher crmd[1465]:   notice: crm_update_peer_state: 
pcmk_quorum_notification: Node lock[102] - state is now member (was lost)
Oct  8 18:04:56 sher pacemakerd[1458]:   notice: crm_update_peer_state: 
pcmk_quorum_notification: Node lock[102] - state is now member (was lost)
Oct  8 18:05:03 sher exportfs(res_exportfs_root)[7495]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:05:13 sher exportfs(res_exportfs_root)[7571]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).
Oct  8 18:05:15 sher crmd[1465]:   notice: do_state_transition: State 
transition S_IDLE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL 
origin=do_election_count_vote ]
Oct  8 18:05:15 sher crmd[1465]:   notice: do_state_transition: State 
transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC 
cause=C_FSA_INTERNAL origin=do_election_check ]
Oct  8 18:05:15 sher attrd[1463]:   notice: attrd_local_callback: Sending full 
refresh (origin=crmd)
Oct  8 18:05:15 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: fail-count-res_exportfs_export1 (INFINITY)
Oct  8 18:05:15 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: fail-count-res_exportfs_root (2)
Oct  8 18:05:15 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: master-res_drbd_export (10000)
Oct  8 18:05:15 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: last-failure-res_exportfs_export1 (1444307530)
Oct  8 18:05:15 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: last-failure-res_exportfs_root (1444306659)
Oct  8 18:05:15 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: probe_complete (true)
Oct  8 18:05:15 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: fail-count-fence_lock (INFINITY)
Oct  8 18:05:15 sher attrd[1463]:   notice: attrd_trigger_update: Sending flush 
op to all hosts for: last-failure-fence_lock (1444306771)
Oct  8 18:05:16 sher pengine[1464]:   notice: unpack_config: On loss of CCM 
Quorum: Ignore
Oct  8 18:05:16 sher pengine[1464]:  warning: unpack_rsc_op: Processing failed 
op stop for res_exportfs_export1 on sher: unknown error (1)
Oct  8 18:05:16 sher pengine[1464]:  warning: pe_fence_node: Node sher will be 
fenced because of resource failure(s)
Oct  8 18:05:16 sher pengine[1464]:  warning: unpack_rsc_op: Processing failed 
op monitor for res_exportfs_root:0 on sher: not running (7)
Oct  8 18:05:16 sher pengine[1464]:  warning: unpack_rsc_op: Processing failed 
op start for fence_lock on sher: unknown error (1)
Oct  8 18:05:16 sher pengine[1464]:  warning: common_apply_stickiness: Forcing 
fence_lock away from sher after 1000000 failures (max=1000000)
Oct  8 18:05:16 sher pengine[1464]:  warning: common_apply_stickiness: Forcing 
res_exportfs_export1 away from sher after 1000000 failures (max=1000000)
Oct  8 18:05:16 sher pengine[1464]:  warning: stage6: Scheduling Node sher for 
STONITH
Oct  8 18:05:16 sher pengine[1464]:   notice: native_stop_constraints: Stop of 
failed resource res_exportfs_export1 is implicit after sher is fenced
Oct  8 18:05:16 sher pengine[1464]:   notice: LogActions: Start   
fence_sher#011(lock)
Oct  8 18:05:16 sher pengine[1464]:   notice: LogActions: Demote  
res_drbd_export:0#011(Master -> Slave sher - blocked)
Oct  8 18:05:16 sher pengine[1464]:   notice: LogActions: Move    
res_drbd_export:0#011(Slave sher -> lock)
Oct  8 18:05:16 sher pengine[1464]:   notice: LogActions: Stop    
res_fs#011(sher)
Oct  8 18:05:16 sher pengine[1464]:   notice: LogActions: Stop    
res_ip#011(sher)
Oct  8 18:05:16 sher pengine[1464]:   notice: LogActions: Stop    
res_exportfs_export1#011(sher)
Oct  8 18:05:16 sher pengine[1464]:   notice: LogActions: Move    
res_exportfs_root:0#011(Started sher -> lock)
Oct  8 18:05:16 sher pengine[1464]:   notice: LogActions: Move    
res_nfsserver:0#011(Started sher -> lock)
Oct  8 18:05:16 sher pengine[1464]:  warning: process_pe_message: Calculated 
Transition 14: /var/lib/pacemaker/pengine/pe-warn-365.bz2
Oct  8 18:05:16 sher crmd[1465]:   notice: te_rsc_command: Initiating action 8: 
monitor fence_sher_monitor_0 on lock
Oct  8 18:05:16 sher crmd[1465]:   notice: te_rsc_command: Initiating action 9: 
monitor fence_lock_monitor_0 on lock
Oct  8 18:05:16 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
10: monitor res_drbd_export_monitor_0 on lock
Oct  8 18:05:16 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
11: monitor res_fs_monitor_0 on lock
Oct  8 18:05:16 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
12: monitor res_ip_monitor_0 on lock
Oct  8 18:05:16 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
13: monitor res_exportfs_export1_monitor_0 on lock
Oct  8 18:05:16 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
14: monitor res_exportfs_root_monitor_0 on lock
Oct  8 18:05:16 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
15: monitor res_nfsserver_monitor_0 on lock
Oct  8 18:05:16 sher crmd[1465]:   notice: te_fence_node: Executing reboot 
fencing operation (68) on sher (timeout=60000)
Oct  8 18:05:16 sher stonith-ng[1461]:   notice: handle_request: Client 
crmd.1465.e3ae1cca wants to fence (reboot) 'sher' with device '(any)'
Oct  8 18:05:16 sher stonith-ng[1461]:   notice: initiate_remote_stonith_op: 
Initiating remote operation reboot for sher: 
a98028b6-08e1-4641-9268-06c449cfde86 (0)
Oct  8 18:05:16 sher crmd[1465]:   notice: te_rsc_command: Initiating action 
82: notify res_drbd_export_pre_notify_demote_0 on sher (local)
Oct  8 18:05:16 sher crmd[1465]:   notice: process_lrm_event: LRM operation 
res_drbd_export_notify_0 (call=149, rc=0, cib-update=0, confirmed=true) ok
Oct  8 18:05:16 sher crmd[1465]:   notice: te_rsc_command: Initiating action 7: 
probe_complete probe_complete on lock - no waiting
Oct  8 18:05:23 sher exportfs(res_exportfs_root)[7699]: INFO: Directory 
/mnt/vms is exported to 10.105.0.0/255.255.0.0 (started).




rebooted!




node $id="101" sher
node $id="102" lock
primitive fence_lock stonith:external/ipmi \
        params hostname="lock" ipaddr="10.105.4.114" userid="test2" 
passwd="passwd1" interface="lan" \
        op start interval="0" timeout="60" start-delay="90" \
        op stop interval="0" timeout="60" \
        op monitor interval="60" timeout="60" start-delay="90"
primitive fence_sher stonith:external/ipmi \
        params hostname="sher" ipaddr="10.105.4.113" userid="test2" 
passwd="passwd2" interface="lan" \
        op start interval="0" timeout="60" start-delay="90" \
        op stop interval="0" timeout="60" \
        op monitor interval="60" timeout="60" start-delay="90"
primitive res_drbd_export ocf:linbit:drbd \
        params drbd_resource="r0" \
        op start interval="0" timeout="150" \
        op stop interval="0" timeout="100" \
        op monitor role="Master" interval="9" timeout="30" \
        op monitor role="Slave" interval="10" timeout="30"
primitive res_exportfs_export1 ocf:heartbeat:exportfs \
        params fsid="1" directory="/mnt/vms/export1" 
options="rw,mountpoint,no_root_squash" clientspec="10.105.0.0/255.255.0.0" 
wait_for_leasetime_on_stop="true" \
        op monitor interval="10s"
primitive res_exportfs_root ocf:heartbeat:exportfs \
        params fsid="0" directory="/mnt/vms" 
options="rw,crossmnt,no_root_squash" clientspec="10.105.0.0/255.255.0.0" \
        op monitor interval="10s"
primitive res_fs ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/mnt/vms" fstype="ext4" \
        op monitor interval="10s" \
        meta target-role="Started"
primitive res_ip ocf:heartbeat:IPaddr2 \
        params ip="10.105.0.23" cidr_netmask="16" nic="eth1" \
        meta target-role="Started"
primitive res_nfsserver lsb:nfs-kernel-server \
        op monitor interval="10s"
group rg_export res_fs res_ip res_exportfs_export1
ms ms_drbd_export res_drbd_export \
        meta notify="true" master-max="1" master-node-max="1" clone-max="2" 
clone-node-max="1" target-role="Started"
clone cl_exportfs_root res_exportfs_root \
        meta target-role="Started"
clone cl_nfsserver res_nfsserver \
        meta target-role="Started"
location l_fence_lock fence_lock -inf: lock
location l_fence_sher fence_sher -inf: sher
colocation c_export_on_drbd inf: rg_export ms_drbd_export:Master
colocation c_nfs_on_root inf: rg_export cl_exportfs_root
order o_drbd_before_nfs inf: ms_drbd_export:promote rg_export:start
order o_root_before_nfs inf: cl_exportfs_root rg_export:start
property $id="cib-bootstrap-options" \
        dc-version="1.1.10-42f2063" \
        cluster-infrastructure="corosync" \
        stonith-enabled="true" \
        no-quorum-policy="ignore" \
        migration-threshold="1" \
        last-lrm-refresh="1441374526"
### global_common.conf ##################

global {
        usage-count no;
        # minor-count dialog-refresh disable-ip-verification
}

common {
        syncer {
                rate 100M;
                c-plan-ahead 20;
                c-fill-target 50k;
                c-min-rate 10M;
                al-extents 3833;
                use-rle;
        }

        handlers {
                # These are EXAMPLE handlers only.
                # They may have severe implications,
                # like hard resetting the node under certain circumstances.
                # Be careful when chosing your poison.

                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; 
/usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot 
-f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; 
/usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot 
-f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; 
/usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt 
-f";
                out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";

                ## avoid split-brain in pacemaker cluster
                fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";

                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }

        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        }

        options {
                # cpu-mask on-no-data-accessible
        }

        disk {
                # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
                # disk-drain md-flushes resync-rate resync-after al-extents
                # c-plan-ahead c-delay-target c-fill-target c-max-rate
                # c-min-rate disk-timeout

                ## avoid split-brain in pacemaker cluster
                fencing resource-only;
        }

        net {
                # protocol timeout max-epoch-size max-buffers unplug-watermark
                # connect-int ping-int sndbuf-size rcvbuf-size ko-count
                # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
                # after-sb-1pri after-sb-2pri always-asbp rr-conflict
                # ping-timeout data-integrity-alg tcp-cork on-congestion
                # congestion-fill congestion-extents csums-alg verify-alg
                # use-rle

                ## DRBD recovery policy
                after-sb-0pri discard-least-changes; 
                after-sb-1pri call-pri-lost-after-sb; 
                after-sb-2pri call-pri-lost-after-sb; 
        }
}


####  r0.res file content #############
resource r0 {
        protocol C;
        startup {
                wfc-timeout  0;     # non-zero wfc-timeout can be dangerous (http://forum.proxmox.com/threads/3465-Is-it-safe-to-use-wfc-timeout-in-DRBD-configuration)
                degr-wfc-timeout 60;
                become-primary-on sher;
        }
        net {
                shared-secret "S3cr3t";
                sndbuf-size 0;
                max-buffers 8000;
                max-epoch-size 8000;
        }
        on sher {
                device /dev/drbd0;
                disk /dev/md0p1;
                address 192.168.0.21:7788;
                meta-disk internal;
        }
        on lock {
                device /dev/drbd0;
                disk /dev/md0p1;
                address 192.168.0.22:7788;
                meta-disk internal;
        }
        disk {
                # no-disk-barrier and no-disk-flushes should be applied only to systems with non-volatile (battery backed) controller caches.
                # Follow links for more information:
                # http://www.drbd.org/users-guide-8.3/s-throughput-tuning.html#s-tune-disable-barriers
                # http://www.drbd.org/users-guide/s-throughput-tuning.html#s-tune-disable-barriers
                # no-disk-barrier;
                # no-disk-flushes;
                no-md-flushes;
        }
}
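
# DRBD health can be checked before a failover test roughly as follows (a sketch; drbd-overview may
# not be installed on every system):
#
#       drbdadm dump r0       # print the r0 configuration as DRBD parses it
#       cat /proc/drbd        # expect cs:Connected and ds:UpToDate/UpToDate on both nodes
#       drbd-overview         # one-line summary per resource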
