----- Original Message -----
> From: "Digimer" <li...@alteeve.ca>
> To: "linux clustering" <linux-cluster@redhat.com>
> Cc: "GouNiNi" <gounini.geeka...@gmail.com>
> Sent: Monday, 30 July 2012 17:10:10
> Subject: Re: [Linux-cluster] Quorum device brain the cluster when master lose 
> network
> 
> On 07/30/2012 10:43 AM, GouNiNi wrote:
> > Hello,
> >
> > I ran some tests on a 4-node cluster with a quorum device and found a
> > bad situation in one of them, so I need your knowledge to correct
> > my configuration.
> >
> > Configuration:
> > 4 nodes, 1 vote each
> > quorum device has 1 vote (to keep services up with a minimum of 2 nodes)
> > cman expected votes: 5
> >
> > Situation:
> > I shut down the network on 2 nodes, one of them being the master.
> >
> > Observation:
> > One node (the master) is fenced... Quorum device offline, quorum
> > dissolved! Services stopped.
> > The fenced node reboots, the cluster is quorate again, and the 2nd
> > offline node is fenced. Services restart.
> > The 2nd offline node reboots.
> >
> > My cluster was not quorate for 8 minutes (very long hardware boot :-)
> > and my services were offline.
> >
> > Do you know how to prevent this situation?
> >
> > Regards,
> 
> Please tell us the name and version of the cluster software you are
> using. Please also share your configuration file(s).
> 
> --
> Digimer
> Papers and Projects: https://alteeve.com
> 

Sorry, RHEL 5.6, 64-bit.
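
For context, the vote arithmetic this setup relies on (as I understand
cman's calculation, quorum = expected_votes/2 + 1 with integer division):

    expected votes : 4 node votes + 1 qdisk vote = 5
    quorum         : 5/2 + 1                     = 3
    2 nodes down   : 2 node votes + 1 qdisk vote = 3

so with two nodes down the cluster stays quorate only as long as the
qdisk vote is still being counted.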

# rpm -q cman rgmanager
cman-2.0.115-68.el5
rgmanager-2.0.52-9.el5
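
For reference, these are the standard tools I can use to watch the quorum
state on this release while the test runs (nothing exotic, just the cman
and rgmanager utilities):

# cman_tool status    # expected votes, total votes and the quorum value
# cman_tool nodes     # membership state of each node
# clustat             # quorum disk and service states as rgmanager sees them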


<?xml version="1.0"?>
<cluster alias="cluname" config_version="144" name="cluname">
        <clusternodes>
                <clusternode name="node1" nodeid="1" votes="1">
                        <fence>
                                <method name="single">
                                        <device name="fenceIBM_307" port="12"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="node2" nodeid="2" votes="1">
                        <fence>
                                <method name="single">
                                        <device name="fenceIBM_307" port="11"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="node3" nodeid="3" votes="1">
                        <fence>
                                <method name="single">
                                        <device name="fenceIBM_308" port="6"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="node4" nodeid="4" votes="1">
                        <fence>
                                <method name="single">
                                        <device name="fenceIBM_308" port="7"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice agent="fence_bladecenter" ipaddr="XX.XX.XX.XX" login="xxxx" name="fenceIBM_307" passwd="yyyy"/>
                <fencedevice agent="fence_bladecenter" ipaddr="YY.YY.YY.YY" login="xxxx" name="fenceIBM_308" passwd="yyyy"/>
        </fencedevices>
        <rm log_level="7">
                <failoverdomains/>
                <resources/>
                <service ...>
                        <...>
                </service>
        </rm>
        <fence_daemon clean_start="0" post_fail_delay="15" post_join_delay="300"/>
        <cman expected_votes="5">
                <multicast addr="ZZ.ZZ.ZZ.ZZ"/>
        </cman>
        <quorumd interval="7" label="quorum" tko="12" votes="1"/>
</cluster>
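
In case it is relevant: if I read qdisk(5) correctly, <quorumd> can also
carry heuristics, so that a node which loses the network fails its own
heuristic and gives up the qdisk vote instead of keeping it as master.
This is only a sketch, untested here, and GW.GW.GW.GW is a placeholder
for our gateway address:

<quorumd interval="7" label="quorum" tko="12" votes="1">
        <heuristic program="ping -c1 -w1 GW.GW.GW.GW" score="1" interval="2"/>
</quorumd>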

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
