Hi,

On Tue, May 18, 2010 at 10:26:37AM +0200, patrik.rappo...@knapp.com wrote:
> so, that I get you right:
> if the NIC fails or the cat cable gets pulled out, my cluster will be
> unable to continue its work?
No, that's a different event from bringing the interface down. Though if
that's the only comm path between the nodes, you're in trouble.

> so would there be other options than bonding?

Not with openais.

Thanks,

Dejan

> thx 4 your help.
>
> patrik
>
> From: Dejan Muhamedagic <deja...@fastmail.fm>
> Date: 17.05.2010 17:35
> To: The Pacemaker cluster resource manager <pacemaker@oss.clusterlabs.org>
> Subject: Re: [Pacemaker] sles11, ocfs2, 2 node cluster with one storage,
>          failover problem
>
> Hi,
>
> On Mon, May 17, 2010 at 12:24:12PM +0200, patrik.rappo...@knapp.com wrote:
> > hy,
> >
> > I have the following problem:
> >
> > I configured a 2 node cluster running SLES11 with the HAE extension. I
> > use "pacemaker-1.0.3-4.1", "openais-0.80.3-26.1" and "ocfs2 1.4.1".
> > I used the SLES high availability guide to configure my cluster.
> >
> > I have 2 ocfs2 mount clone resources running on the cluster.
> >
> > My first problem was that the cluster didn't work at all, because I
> > defined 2 rings in the openais.conf, which will only be supported by
> > Novell with SLES SP1, so I switched back to one ring.
> >
> > The major problem I have now is the following:
> >
> > When I reboot one node, or stop openais on one node, the cluster
> > behaves as it should; that means the surviving node is still able to
> > access the ocfs2 mounts and works normally.
> >
> > But if I trigger an ifdown command on the interface which is defined
> > in the openais.conf, the node hangs and restarts after a while.
>
> openais/pacemaker can't handle ifdown. Neither can corosync with
> multiple rings. You have to make sure that the interface stays up
> at all times (i.e. don't use dhcp).
>
> Thanks,
>
> Dejan
> > The surviving node doesn't allow opening any other ssh sessions, and
> > if I try to open a mounted directory, the existing session hangs and
> > I have to wait till the first node is up again.
> >
> > I don't have a clue how this can happen and hope that you can help me.
> >
> > Attached you can find my configs and failure logs.
> >
> > (See attached file: config_plus_failure_logs.rtf)
> >
> > Mit freundlichen Grüßen / Best Regards
> >
> > Patrik Rapposch, BSc.
> > System Administration
> >
> > KNAPP Systemintegration GmbH
> > Waltenbachstraße 9
> > 8700 Leoben, Austria
> > Phone: +43 3842 805
> > Mobile:
> > Fax: +43 3842 82930-990
> > patrik.rappo...@knapp.com
> > www.KNAPP.com
> >
> > Commercial register number: FN 138870x
> > Commercial register court: Leoben
> >
> > _______________________________________________
> > Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
> > http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> >
> > Project Home: http://www.clusterlabs.org
> > Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
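P.P.S. For reference, the interface openais binds to is the one whose network
matches bindnetaddr in the totem section of openais.conf. A sketch, assuming
the bond carries the 192.168.1.0/24 network (addresses are hypothetical):

```
totem {
    version: 2
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0   # network of bond0, not a host address
        mcastaddr: 226.94.1.1
        mcastport: 5405
    }
}
```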
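P.S. Since bonding is the only option with openais here, a minimal sketch of
what an active-backup bond could look like on SLES is below. The device names
(eth0/eth1, bond0) and the address are assumptions for illustration, not taken
from your configs; note BOOTPROTO is static, per the "don't use dhcp" point
above:

```shell
# /etc/sysconfig/network/ifcfg-bond0  (hypothetical example)
STARTMODE='auto'
BOOTPROTO='static'                  # keep the address static -- no dhcp
IPADDR='192.168.1.10/24'            # assumed cluster network
BONDING_MASTER='yes'
BONDING_SLAVE0='eth0'               # first physical NIC
BONDING_SLAVE1='eth1'               # second physical NIC
BONDING_MODULE_OPTS='mode=active-backup miimon=100'
```

With active-backup mode, pulling the cable on one slave NIC fails traffic
over to the other, so the interface openais binds to stays up.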