Doing some experiments and reading TFM, I found this:

5.2.2. Advisory Ordering
When the kind=Optional option is specified for an order constraint, the constraint is considered optional and only has an effect when both resources are stopping and/or starting. Any change in state of the first resource you specified has no effect on the second resource you specified.

(From https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Configuring_the_Red_Hat_High_Availability_Add-On_with_Pacemaker/index.html)

This seems to tickle the right area. Adding "kind=Optional" to the drbd -> gfs2 order constraint (promote the DRBD master, then start the gfs2 clone) makes it all work as desired: start-up and shut-down are correctly ordered, and bringing the other node out of standby doesn't force a gratuitous restart of the gfs2 filesystem, and of the VMs that rely on it, on the already-active node.
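For reference, the pcs incantation for that is roughly the following (the exact option syntax may vary between pcs versions, so treat this as a sketch rather than gospel):

# pcs constraint order promote vm_storage_core_dev-master then start vm_storage_core-clone kind=Optional

which is what shows up in the config below as the "(Optional)" ordering constraint.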

Is that the correct solution, I wonder? The term "optional" makes me nervous, but the description matches the desired behaviour, in normal cases at least.


FYI, here's the "working" configuration:

# pcs config
Cluster Name: jusme
Corosync Nodes:

Pacemaker Nodes:
 sv06 sv07

Resources:
 Master: vm_storage_core_dev-master
  Meta Attrs: master-max=2 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
  Group: vm_storage_core_dev
   Resource: res_drbd_vm1 (class=ocf provider=linbit type=drbd)
    Attributes: drbd_resource=vm1
    Operations: monitor interval=60s (res_drbd_vm1-monitor-interval-60s)
 Clone: vm_storage_core-clone
  Group: vm_storage_core
   Resource: res_fs_vm1 (class=ocf provider=heartbeat type=Filesystem)
    Attributes: device=/dev/drbd/by-res/vm1 directory=/data/vm1 fstype=gfs2 options=noatime,nodiratime
    Operations: monitor interval=60s (res_fs_vm1-monitor-interval-60s)
 Master: nfs_server_dev-master
  Meta Attrs: master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
  Group: nfs_server_dev
   Resource: res_drbd_live (class=ocf provider=linbit type=drbd)
    Attributes: drbd_resource=live
    Operations: monitor interval=60s (res_drbd_live-monitor-interval-60s)
 Resource: res_vm_nfs_server (class=ocf provider=heartbeat type=VirtualDomain)
  Attributes: config=/etc/libvirt/qemu/vm09.xml
  Meta Attrs: resource-stickiness=100
  Operations: monitor interval=60s (res_vm_nfs_server-monitor-interval-60s)

Stonith Devices:
Fencing Levels:

Location Constraints:
Ordering Constraints:
  promote vm_storage_core_dev-master then start vm_storage_core-clone (Optional) (id:order-vm_storage_core_dev-master-vm_storage_core-clone-Optional)
  promote nfs_server_dev-master then start res_vm_nfs_server (Mandatory) (id:order-nfs_server_dev-master-res_vm_nfs_server-mandatory)
  start vm_storage_core-clone then start res_vm_nfs_server (Mandatory) (id:order-vm_storage_core-clone-res_vm_nfs_server-mandatory)
Colocation Constraints:
  vm_storage_core-clone with vm_storage_core_dev-master (INFINITY) (rsc-role:Started) (with-rsc-role:Master) (id:colocation-vm_storage_core-clone-vm_storage_core_dev-master-INFINITY)
  res_vm_nfs_server with nfs_server_dev-master (INFINITY) (rsc-role:Started) (with-rsc-role:Master) (id:colocation-res_vm_nfs_server-nfs_server_dev-master-INFINITY)
  res_vm_nfs_server with vm_storage_core-clone (INFINITY) (id:colocation-res_vm_nfs_server-vm_storage_core-clone-INFINITY)


Ian.

