I think your order constraints are in the wrong order:

order Cluster-FS-Mount-Order 0: Cluster-FS-O2CB-Clone Cluster-FS-Mount-Clone

You mount the fs before starting the DLM?
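If the intent is the usual DLM → O2CB → filesystem stack, the full constraint chain in crm shell syntax, using the resource names from the quoted config, would look roughly like this. Note that a score of `inf:` makes the ordering mandatory, whereas the `0:` used below is only advisory. This is a sketch, not verified against this cluster:

```
order Cluster-FS-DLM-Order inf: Cluster-FS-DRBD-Master:promote Cluster-FS-DLM-Clone
order Cluster-FS-O2CB-Order inf: Cluster-FS-DLM-Clone Cluster-FS-O2CB-Clone
order Cluster-FS-Mount-Order inf: Cluster-FS-O2CB-Clone Cluster-FS-Mount-Clone
```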
order Cluster-FS-O2CB-Order 0: Cluster-FS-DLM-Clone Cluster-FS-O2CB-Clone

On 22 February 2012 11:54, Johan Rosing Bergkvist <jbergkvi...@gmail.com> wrote:

> Hi
> So I tried to configure some dlm and o2cb resources; now I get a new error.
>
> Here's the output of crm_mon:
>
> ============
> Last updated: Wed Feb 22 11:51:22 2012
> Stack: openais
> Current DC: cluster01 - partition with quorum
> Version: 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b
> 2 Nodes configured, 2 expected votes
> 5 Resources configured.
> ============
>
> Online: [ cluster01 cluster02 ]
>
> ClusterIP (ocf::heartbeat:IPaddr2): Started cluster01
> Master/Slave Set: Cluster-FS-DRBD-Master
>     Masters: [ cluster02 cluster01 ]
> Clone Set: Cluster-FS-DLM-Clone
>     Started: [ cluster02 cluster01 ]
>
> Failed actions:
>     Cluster-FS-O2CB:0_monitor_0 (node=cluster01, call=5, rc=5, status=complete): not installed
>     Cluster-FS-O2CB:0_monitor_0 (node=cluster02, call=5, rc=5, status=complete): not installed
>
> The new config looks like this:
>
> node cluster01
> node cluster02
> primitive Cluster-FS-DLM ocf:pacemaker:controld \
>     op monitor interval="120s"
> primitive Cluster-FS-DRBD ocf:linbit:drbd \
>     params drbd_resource="cluster-ocfs" \
>     operations $id="Cluster-FS-DRBD-ops" \
>     op monitor interval="20" role="Master" timeout="20" \
>     op monitor interval="30" role="Slave" timeout="20"
> primitive Cluster-FS-Mount ocf:heartbeat:Filesystem \
>     params device="/dev/drbd/by-res/cluster-ocfs" directory="/cluster" fstype="ocfs2" \
>     op monitor interval="120"
> primitive Cluster-FS-O2CB ocf:pacemaker:o2cb \
>     op monitor interval="120"
> primitive ClusterIP ocf:heartbeat:IPaddr2 \
>     params ip="212.70.2.110" cidr_netmask="24" \
>     op monitor interval="30s" \
>     meta target-role="Started"
> ms Cluster-FS-DRBD-Master Cluster-FS-DRBD \
>     meta resource-stickiness="100" notify="true" interleave="true" master-max="2"
> clone Cluster-FS-DLM-Clone Cluster-FS-DLM \
>     meta globally-unique="false" interleave="true"
> clone Cluster-FS-Mount-Clone Cluster-FS-Mount \
>     meta interleave="true" ordered="true"
> clone Cluster-FS-O2CB-Clone Cluster-FS-O2CB \
>     meta globally-unique="false" interleave="true"
> colocation Cluster-FS-Colocation inf: Cluster-FS-DLM-Clone Cluster-FS-DRBD-Master:Master
> colocation Cluster-FS-Mount-Colocation inf: Cluster-FS-Mount-Clone Cluster-FS-O2CB-Clone
> colocation Cluster-FS-O2CB-Colocation inf: Cluster-FS-O2CB-Clone Cluster-FS-DLM-Clone
> order Cluster-FS-DLM-Order 0: Cluster-FS-DRBD-Master:promote Cluster-FS-DLM-Clone
> order Cluster-FS-Mount-Order 0: Cluster-FS-O2CB-Clone Cluster-FS-Mount-Clone
> order Cluster-FS-O2CB-Order 0: Cluster-FS-DLM-Clone Cluster-FS-O2CB-Clone
>
> I can get log output if necessary, but honestly I'm not sure what to look for :/ I'm still new to this.
>
> Thanks
>
> On 22 Feb 2012 07:12, Dejan Muhamedagic <deja...@fastmail.fm> wrote:
>
>> On Tue, Feb 21, 2012 at 04:31:50PM +0100, Florian Haas wrote:
>> > On Tue, Feb 21, 2012 at 4:22 PM, Dejan Muhamedagic <deja...@fastmail.fm> wrote:
>> > > Hi,
>> > >
>> > > On Tue, Feb 21, 2012 at 02:26:31PM +0100, Florian Haas wrote:
>> > >> On 02/21/12 13:39, Johan wrote:
>> > >> >
>> > >> > I keep getting the:
>> > >> > info: RA output: (Cluster-FS-Mount:1:start:stderr) FATAL: Module
>> > >> > scsi_hostadapter not found.
>> > >>
>> > >> That's a red herring. Why the Filesystem RA is still trying to modprobe
>> > >> scsi_hostadapter, and is even logging any failure to do so with a FATAL
>> > >> priority, don't ask. :)
>> > >
>> > > Removed. Let's see who'll complain, then perhaps we'll know why
>> > > it was there ;-)
>> >
>> > Could you zap that from the Raid1 RA too, please?
>>
>> Zapped.
>>
>> Cheers,
>>
>> Dejan
>>
>> > Cheers,
>> > Florian
>> >
>> > --
>> > Need help with High Availability?
>> > http://www.hastexo.com/now
>> >
>> > _______________________________________________
>> > Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
>> > http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>> >
>> > Project Home: http://www.clusterlabs.org
>> > Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>> > Bugs: http://bugs.clusterlabs.org
>
> --
> Best regards
> Johan Bergkvist

--
this is my life and I live it for as long as God wills
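As an aside on the rc=5 "not installed" failures quoted above: that return code from an OCF agent (OCF_ERR_INSTALLED) generally means the agent itself or its required tools are missing on the node. A minimal check, assuming the standard OCF install path (the package shipping the agent varies by distribution):

```shell
#!/bin/sh
# rc=5 (OCF_ERR_INSTALLED) usually means the agent or its tools are absent.
# Standard OCF root assumed; adjust if your OCF_ROOT differs from /usr/lib/ocf.
RA=/usr/lib/ocf/resource.d/pacemaker/o2cb

if [ -x "$RA" ]; then
    echo "o2cb agent present: $RA"
else
    echo "o2cb agent missing on this node - install the package providing ocf:pacemaker:o2cb (ocfs2-tools or similar, name varies by distro)"
fi
```

Run this on each node; the agent must be installed everywhere the clone can run, since the initial probe (`monitor_0`) fires on every node.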