On Fri, Nov 19, 2010 at 3:16 PM, Frank Lazzarini <flazzar...@gmail.com> wrote:
> Hi all,
>
> I've been playing around a little bit with setting up a 3-node cluster,
> and everything works fine when I start the drbd resources manually.
> Here is my setup. In its current form I don't use a virtual IP to
> connect the stacked resource to, but I don't think that's the issue
> right now. Let me break it down by first showing you my drbd resource
> files.
>
> On every node I have 3 ethernet connections for the moment: ubox1 and
> ubox2 are connected via crosslink on the network 10.10.0.0, and ubox1,
> ubox2 and ubox3 are connected on the network 10.15.0.0.
>
>
> r0.res
> ---------
> resource r0 {
>     syncer { rate 100M; }
>
>     device    /dev/drbd0;
>     disk      /dev/sdb1;
>     meta-disk internal;
>
>     on ubox1 {
>         address 10.10.0.1:8801;
>     }
>
>     on ubox2 {
>         address 10.10.0.2:8801;
>     }
> }
>
>
> r0-stacked.res
> ---------------------
> resource r0-stacked {
>     protocol A;
>     syncer { rate 100M; }
>
>     stacked-on-top-of r0 {
>         device  /dev/drbd10;
>         address 10.15.0.1:7789;
>     }
>
>     on ubox3 {
>         device    /dev/drbd10;
>         disk      /dev/sdb1;
>         address   10.15.0.3:7789;
>         meta-disk internal;
>     }
> }
>
>
> ha.cf
> -------
> logfacility local0
> warntime 5
> deadtime 15
> initdead 30
> bcast eth1 eth2
> auto_failback off
> node ubox1
> node ubox2
> node ubox3
> compression bz2
> crm respawn
>
>
> pacemaker rules
> -----------------------
> node $id="0d1cc20e-bb19-46c3-82b8-fbf3306c915a" ubox2 \
>     attributes standby="off"
> node $id="49a12a6b-f893-4e61-89f9-b078e6769fec" ubox3 \
>     attributes standby="off"
> node $id="c59c73b3-518b-4420-ab56-4364b2cf3e0d" ubox1 \
>     attributes standby="off"
>
> primitive p_drbd_r0 ocf:linbit:drbd \
>     params drbd_resource="r0"
>
> primitive p_drbd_r0S ocf:linbit:drbd \
>     params drbd_resource="r0-stacked"
>
> ms ms_drbd_r0 p_drbd_r0 \
>     meta master-max="1" master-node-max="1" clone-max="2" \
>         clone-node-max="1" notify="true" globally-unique="false" \
>         target-role="Started"
>
> ms ms_drbd_r0S p_drbd_r0S \
>     meta master-max="1" clone-max="1" clone-node-max="1" \
>         master-node-max="1" notify="true" globally-unique="false"
>
> colocation c_drbd_r0S_on_drbd_r0 inf: ms_drbd_r0S ms_drbd_r0:Master
> order o_drbd_r0_before_r0S inf: ms_drbd_r0:promote ms_drbd_r0S:start
>
> property $id="cib-bootstrap-options" \
>     dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
>     cluster-infrastructure="Heartbeat" \
>     stonith-enabled="false" \
>     no-quorum-policy="ignore" \
>     last-lrm-refresh="1290095782"
>
>
> Once every node is started I get this in crm_mon and /proc/drbd:
>
> Online: [ ubox2 ubox3 ubox1 ]
>
> Full list of resources:
>
>  Master/Slave Set: ms_drbd_r0
>      Masters: [ ubox1 ]
>      Slaves: [ ubox2 ]
>  Master/Slave Set: ms_drbd_r0S
>      Masters: [ ubox1 ]
>
> Migration summary:
> * Node ubox2:
> * Node ubox1:
> * Node ubox3:
>
>
> Now I don't really know why my master/slave resource ms_drbd_r0S doesn't
> show me a Slave in crm_mon -rf,
Good question. Can you send through cibadmin -Ql when the cluster is in that state?

> well I hope someone can give me a hint about what I did wrong; it has to
> be something in the pacemaker rules. Even the target-role doesn't change
> to Started. Thanks a lot for helping me out, I am really kind of lost.
> Thanks
>
> Btw, this is the log output I get on ubox3:
>
> Nov 19 13:52:31 ubox3 attrd: [6960]: info: find_hash_entry: Creating
>   hash entry for master-p_drbd_r0S:0
> Nov 19 13:52:31 ubox3 attrd: [6960]: info: find_hash_entry: Creating
>   hash entry for master-p_drbd_r0:1
> Nov 19 13:52:31 ubox3 attrd: [6960]: info: find_hash_entry: Creating
>   hash entry for master-p_drbd_r0:0
> Nov 19 13:52:33 ubox3 crmd: [6961]: info: do_lrm_rsc_op: Performing
>   key=5:6:7:4b0b81d0-e6f4-4a33-a330-0a474483e7c6 op=p_drbd_r0:0_monitor_0 )
> Nov 19 13:52:33 ubox3 lrmd: [6958]: info: rsc:p_drbd_r0:0:2: probe
> Nov 19 13:52:33 ubox3 crmd: [6961]: info: do_lrm_rsc_op: Performing
>   key=6:6:7:4b0b81d0-e6f4-4a33-a330-0a474483e7c6 op=p_drbd_r0S:0_monitor_0 )
> Nov 19 13:52:33 ubox3 lrmd: [6958]: info: rsc:p_drbd_r0S:0:3: probe
> Nov 19 13:52:33 ubox3 lrmd: [6958]: info: RA output:
>   (p_drbd_r0:0:probe:stderr) 'r0' ignored, since this host (ubox3) is not
>   mentioned with an 'on' keyword.
> Nov 19 13:52:33 ubox3 crmd: [6961]: info: process_lrm_event: LRM
>   operation p_drbd_r0:0_monitor_0 (call=2, rc=7, cib-update=7,
>   confirmed=true) not running
> Nov 19 13:52:33 ubox3 crm_attribute: [7014]: info: Invoked:
>   crm_attribute -N ubox3 -n master-p_drbd_r0S:0 -l reboot -D
> Nov 19 13:52:33 ubox3 crmd: [6961]: info: process_lrm_event: LRM
>   operation p_drbd_r0S:0_monitor_0 (call=3, rc=7, cib-update=8,
>   confirmed=true) not running
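
For reference, one way to capture that state while ms_drbd_r0S is missing its
slave; the output file names are just examples, and the crm_mon and /proc/drbd
snapshots are optional extras beyond the cibadmin dump that was asked for:

    # full live CIB, including the status section, taken on any cluster node
    cibadmin -Q -l > cib-live.xml

    # one-shot crm_mon snapshot showing inactive resources and failcounts
    crm_mon -rf1 > crm_mon.txt

    # DRBD state, taken on each of ubox1, ubox2 and ubox3
    cat /proc/drbd > proc-drbd-$(hostname).txt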