On Thu, Sep 02, 2010 at 05:48:53PM -0700, Michael Shadle wrote:
> running latest DRBD/Heartbeat/Pacemaker available in Ubuntu Lucid (10.04)
>
> I have /dev/drbd1, formatted as xfs. I can mount it manually and it
> has data on it, but I can't seem to get it triggered properly
> using heartbeat/pacemaker. At first I followed the DRBD user manual,
> which had me using cibadmin, which kept telling me about an invalid
> schema/DTD; then on #drbd someone said to use crm, so I tried
> that. It seems like things are configured properly, but "crm_verify
> -L" told me that it would not start without stonith being configured.
> So I configured that with some dummy thing, and it seems to want to
> start everything up but doesn't work. Looks like it still doesn't
> understand what I am going for - which is simply to have an
> active/passive /home XFS partition mounted on two machines. That's it.
> No DRBD+MySQL/etc.
>
> mirror1 is active/primary; mirror2 does not exist yet (as soon as
> mirror1 is functional I will be reformatting a machine to -make- it
> mirror2).
>
> Most of the writeup is MySQL-specific, so I tried to tweak it, but I'm
> still at a loss here. Any help?
>
> drbd 8.3.7 (api 88) - from ubuntu repo
> heartbeat version: 1:3.0.3-1ubuntu1
> pacemaker version: 1.0.8+hg15494-2ubuntu2 (same as cibadmin, crmadmin)
>
> Here's a bunch of daemon.log extract from start and while it's
> running: http://pastebin.com/XFpKxeqp
>
> Here's my /etc/ha.d/ha.cf:
>
> autojoin none
> ucast eth0 10.9.185.4 10.36.148.112
> crm yes
> use_logd on
> bcast eth1
> warntime 5
> deadtime 15
> initdead 15
> keepalive 2
> node mirror1 mirror2
>
> Here's the output from crm...
>
> # crm
> crm(live)# configure
> crm(live)configure# show
> node $id="fd4053b1-a50b-4c01-9e54-56bc24fdebc1" mirror1
> primitive drbd_r0 ocf:linbit:drbd \
>         params drbd_resource="r0" \
>         op monitor interval="15s"
> primitive fs_r0 ocf:heartbeat:Filesystem \
>         params device="/dev/drbd/by-res/r0" directory="/home" fstype="xfs"
> primitive st-null stonith:null \
>         params hostlist="mirror1 mirror2"
> group r0 fs_r0
> clone fencing st-null
> property $id="cib-bootstrap-options" \
>         dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
>         cluster-infrastructure="Heartbeat"
You are missing colocation and order dependencies between your drbd,
Filesystem, and whatever else needs to be started.

And make sure you have the udev rules for drbd (package drbd-udev),
if you want to access it via /dev/drbd/by-res/*.

> drbd config:
>
> global {
>     usage-count no;
> }
>
> common {
>     protocol C;
>
>     handlers {
>         pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh;
>             /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
>         pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh;
>             /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
>         local-io-error "/usr/lib/drbd/notify-io-error.sh;
>             /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
>         fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
>         after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
>     }
>
>     startup {
>         degr-wfc-timeout 15;
>         wfc-timeout 15;
>     }
>
>     disk {
>         fencing resource-only;
>     }
> }
>
> resource r0 {
>     device /dev/drbd1;
>     meta-disk internal;
>     on mirror1 {
>         disk /dev/sda7;
>         address 10.9.185.4:7789;
>     }
>     on mirror2 {
>         disk /dev/sda7;
>         address 10.36.148.112:7789;
>     }
> }

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list -- I'm subscribed

_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
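
[Editor's note] For the archives, a minimal sketch of the missing pieces in crm
shell syntax, using the resource names from the original post (drbd_r0, fs_r0).
The ocf:linbit:drbd agent is normally wrapped in a master/slave (ms) resource so
Pacemaker can promote one side to Primary; the constraint and resource names
below (ms_drbd_r0, fs_on_drbd, fs_after_drbd) are illustrative, not from the
thread:

    # run drbd_r0 as a master/slave resource; one Master (DRBD Primary),
    # up to two clone instances, one per node
    ms ms_drbd_r0 drbd_r0 \
            meta master-max="1" master-node-max="1" \
            clone-max="2" clone-node-max="1" notify="true"
    # mount /home only on the node where DRBD is Master...
    colocation fs_on_drbd inf: fs_r0 ms_drbd_r0:Master
    # ...and only after the promotion has happened
    order fs_after_drbd inf: ms_drbd_r0:promote fs_r0:start

To confirm the drbd-udev rules are in place, check that the by-res symlink
exists and resolves: "ls -l /dev/drbd/by-res/r0" should point at ../../drbd1.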