I have a RHEL 6.3 cluster with RHCS and DRBD. When I kill the master node, DRBD
on the slave calls rhcs_fence, but the script thinks it fails, returning (1),
because the fence device is not on the same subnet as the clusternode name
defined in cluster.conf. The fencing actually does occur, but when the fenced
node reboots and tries to come back in, the new master DRBD always reports
Primary/Unknown. This requires a reboot of both nodes.
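
Should a manual reconnect even be enough to clear that state? A sketch of what
I mean, assuming the DRBD resource is named r0 (substitute the real resource
name):

        # on the node reporting Primary/Unknown (resource name r0 assumed)
        drbdadm disconnect r0
        drbdadm connect r0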

Is this by design, or a problem?

I switched back to obliterate-peer.sh and the problem goes away.
Here is an excerpt from my cluster.conf:
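
For completeness, the fence handler is wired up in drbd.conf roughly like this
(a sketch; the handler path and fencing policy shown are the stock DRBD
packaging defaults, not copied verbatim from my config):

        resource r0 {
                disk {
                        fencing resource-and-stonith;
                }
                handlers {
                        fence-peer "/usr/lib/drbd/rhcs_fence";
                }
        }

Swapping the fence-peer line to /usr/lib/drbd/obliterate-peer.sh is the only
change between the two setups described above.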

                </clusternode>
                <clusternode name="cl_lm04.ionharris.com" nodeid="2">  <!-- 10.10.10.x -->
                        <fence>
                                <method name="lm04_fence">
                                        <device name="lm04_ipmi"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" transport="udpu" two_node="1"/>
        <fencedevices>
                <fencedevice agent="fence_ipmilan" ipaddr="192.168.155.119" login="root" name="lm03_ipmi" passwd="nvslab"/>
                <fencedevice agent="fence_ipmilan" ipaddr="192.168.155.100" login="root" name="lm04_ipmi" passwd="nvslab"/>
        </fencedevices>

Best regards

John Matchett

-- 
Linux-cluster mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/linux-cluster