Hi,

did you set notify="true" for the drbd master_slave resource? That seemed to help get drbd promoted when I was playing around with the drbd OCF RA.
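For reference, a master_slave definition with notify set might look like the sketch below. All resource names and ids here are made up for illustration, and depending on the heartbeat 2.x version these nvpairs may belong in instance_attributes or meta_attributes; treat this as a starting point, not a verified config:

```xml
<master_slave id="ms_drbd_r0">
  <instance_attributes id="ms_drbd_r0_attrs">
    <attributes>
      <!-- notify="true" asks the CRM to send pre/post notifications,
           which the drbd OCF RA can use around promote/demote -->
      <nvpair id="ms_drbd_r0_notify" name="notify" value="true"/>
      <nvpair id="ms_drbd_r0_clone_max" name="clone_max" value="2"/>
      <nvpair id="ms_drbd_r0_master_max" name="master_max" value="1"/>
    </attributes>
  </instance_attributes>
  <primitive id="drbd_r0" class="ocf" provider="heartbeat" type="drbd">
    <instance_attributes id="drbd_r0_prim_attrs">
      <attributes>
        <nvpair id="drbd_r0_resource" name="drbd_resource" value="r0"/>
      </attributes>
    </instance_attributes>
  </primitive>
</master_slave>
```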
Still, I did not manage to get it running smoothly and had no time to investigate further, so I reverted to drbddisk. I'm very curious about the results of your testing, please keep us informed ;o))

Regards,
Bernhard

-------- Original Message --------
Date: Thu, 19 Apr 2007 13:51:12 -0400
From: Doug Knight <[EMAIL PROTECTED]>
To: General Linux-HA mailing list <linux-ha@lists.linux-ha.org>
Subject: Re: [Linux-HA] Cannot create group containing drbd using HB GUI

> I made the ID change indicated below (for the colocation constraints),
> and everything configured fine using cibadmin. Now, I started JUST the
> drbd master/slave resource, with the rsc_location rule setting the
> expression uname to one of the two nodes in the cluster. Both drbd
> processes come up and sync the partition, but both are still in
> slave/secondary mode (i.e. the rsc_location rule did not cause a
> promotion). Am I missing something here? This is the rsc_location
> constraint:
>
> <rsc_location id="locate_drbd" rsc="rsc_drbd_7788">
>   <rule id="rule_drbd_on_dk" role="master" score="100">
>     <expression id="exp_drbd_on_dk" attribute="#uname"
>       operation="eq" value="arc-dknightlx"/>
>   </rule>
> </rsc_location>
>
> (By the way, the example from the Idioms/MasterConstraints web page does
> not have an ID specified in the expression tag, so I added one to mine.)
>
> Doug
>
> On Thu, 2007-04-19 at 13:04 -0400, Doug Knight wrote:
> > [...]
> > >> For example:
> > >>
> > >> <rsc_location id="drbd1_loc_nodeA" rsc="drbd1">
> > >>   <rule id="pref_drbd1_loc_nodeA" score="600">
> > >>     <expression attribute="#uname" operation="eq" value="nodeA"
> > >>       id="pref_drbd1_loc_nodeA_attr"/>
> > >>   </rule>
> > >>   <rule id="pref_drbd1_loc_nodeB" score="800">
> > >>     <expression attribute="#uname" operation="eq" value="nodeB"
> > >>       id="pref_drbd1_loc_nodeB_attr"/>
> > >>   </rule>
> > >> </rsc_location>
> > >>
> > >> In this case, nodeB will be primary for resource drbd1. Is that what
> > >> you were looking for?
> > >
> > > Not like this, not when using the drbd OCF Resource Agent as a
> > > master-slave one. In that case, you need to bind the rsc_location to
> > > the role=Master as well.
> >
> > I was missing this in the CIB idioms page. I just added it.
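Putting the two quoted points together: for a master-slave drbd resource, the node preference needs role="Master" on the rule so the score influences promotion rather than just placement. A sketch following the MasterConstraints idiom (ids and node names are placeholders, and note the capitalized "Master" — some CRM versions appear to match the role string case-sensitively, which could explain a rule with role="master" being ignored):

```xml
<rsc_location id="drbd1_master_loc" rsc="drbd1">
  <!-- role="Master" binds this rule to the Master role, so the
       score affects which node gets promoted -->
  <rule id="drbd1_master_on_nodeB" role="Master" score="100">
    <expression id="drbd1_master_on_nodeB_expr" attribute="#uname"
      operation="eq" value="nodeB"/>
  </rule>
</rsc_location>
```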
> >
> > http://linux-ha.org/CIB/Idioms
>
> I tried setting up colocation constraints similar to those shown in the
> example referenced in the URL above, and it complained about the
> identical ids:
>
> [EMAIL PROTECTED] xml]# more rule_fs_on_drbd_slave.xml
> <rsc_colocation id="fs_on_drbd" to="rsc_drbd_7788" to_role="slave"
>   from="fs_mirror" score="-infinity"/>
>
> [EMAIL PROTECTED] xml]# more rule_fs_on_drbd_stopped.xml
> <rsc_colocation id="fs_on_drbd" to="rsc_drbd_7788" to_role="stopped"
>   from="fs_mirror" score="-infinity"/>
>
> [EMAIL PROTECTED] xml]# cibadmin -o constraints -C -x rule_fs_on_drbd_stopped.xml
>
> [EMAIL PROTECTED] xml]# cibadmin -o constraints -C -x rule_fs_on_drbd_slave.xml
> Call cib_create failed (-21): The object already exists
> <failed>
>   <failed_update id="fs_on_drbd" object_type="rsc_colocation"
>     operation="add" reason="The object already exists">
>     <rsc_colocation id="fs_on_drbd" to="rsc_drbd_7788" to_role="slave"
>       from="fs_mirror" score="-infinity"/>
>   </failed_update>
> </failed>
>
> I'm going to change the ids to be unique and try again, but wanted to
> point this out since it is very similar to the example on the web page.
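For the record, the same two constraints with unique ids, which is what the cib_create failure calls for. The id values here are just examples:

```xml
<rsc_colocation id="fs_on_drbd_slave" to="rsc_drbd_7788" to_role="slave"
  from="fs_mirror" score="-infinity"/>
<rsc_colocation id="fs_on_drbd_stopped" to="rsc_drbd_7788" to_role="stopped"
  from="fs_mirror" score="-infinity"/>
```

Each can then be loaded with cibadmin as in the quoted session, and the "object already exists" error should no longer occur since every id in the CIB must be unique.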
> Doug
>
> > http://linux-ha.org/CIB/Idioms/MasterConstraints

_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems