Here's an additional question, one that stems from running crm_verify -L
on both nodes (as recommended by an error message in the debug log).
My master/slave resource is configured as:

       <master_slave notify="true" id="ms_drbd_7788">
         <instance_attributes id="ms_drbd_7788_instance_attrs">
           <attributes>
             <nvpair id="ms_drbd_7788_clone_max" name="clone_max" value="2"/>
             <nvpair id="ms_drbd_7788_clone_node_max" name="clone_node_max" value="1"/>
             <nvpair id="ms_drbd_7788_master_max" name="master_max" value="1"/>
             <nvpair id="ms_drbd_7788_master_node_max" name="master_node_max" value="1"/>
             <nvpair name="target_role" id="ms_drbd_7788_target_role" value="started"/>
           </attributes>
         </instance_attributes>
         <primitive class="ocf" type="drbd" provider="heartbeat" id="rsc_drbd_7788">
           <instance_attributes id="rsc_drbd_7788_instance_attrs">
             <attributes>
               <nvpair id="fdb586b1-d439-4dfb-867c-3eefbe5d585f" name="drbd_resource" value="pgsql"/>
               <nvpair id="rsc_drbd_7788:0_target_role" name="target_role" value="started"/>
             </attributes>
           </instance_attributes>
         </primitive>
       </master_slave>

The error message is:

[EMAIL PROTECTED] xml]# crm_verify -L
crm_verify[31297]: 2007/04/20_09:21:02 ERROR: unpack_rsc_colocation: No resource (con=fs_on_drbd_slave, rsc=rsc_drbd_7788)
crm_verify[31297]: 2007/04/20_09:21:02 ERROR: unpack_rsc_colocation: No resource (con=fs_on_drbd_stopped, rsc=rsc_drbd_7788)
crm_verify[31297]: 2007/04/20_09:21:02 ERROR: unpack_rsc_order: Constraint drbf_before_fs: no resource found for LHS of fs_mirror
Errors found during check: config not valid
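
Side note: each extra -V raises crm_verify's log level, which may show
what the unpacker is actually looking for:

[EMAIL PROTECTED] xml]# crm_verify -L -V -V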

And the constraints in question are:

       <rsc_colocation id="fs_on_drbd_slave" to="rsc_drbd_7788"
to_role="slave" from="fs_mirror" score="-infinity"/>
       <rsc_colocation id="fs_on_drbd_stopped" to="rsc_drbd_7788"
to_role="stopped" from="fs_mirror" score="-infinity"/>

Question: in a master/slave resource configuration, what should the "to"
attribute of the colocation constraints point to: the primitive ID or
the master_slave ID?
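
My guess is the master_slave ID, since inside the clone the primitive
only seems to exist as numbered instances (note the rsc_drbd_7788:0 in
the target_role nvpair above), which would explain the "No resource"
errors. If that's right, the constraints would presumably read:

       <rsc_colocation id="fs_on_drbd_slave" to="ms_drbd_7788" to_role="slave" from="fs_mirror" score="-infinity"/>
       <rsc_colocation id="fs_on_drbd_stopped" to="ms_drbd_7788" to_role="stopped" from="fs_mirror" score="-infinity"/>

(and the drbf_before_fs order constraint would need the same change on
its drbd side). Can anyone confirm?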

Doug
WSI Inc.



On Fri, 2007-04-20 at 08:14 -0400, Knight, Doug wrote:

> OK, here's what happened. The drbd resources were both successfully
> running in Secondary mode on both servers, and both partitions were
> synched. My Filesystem resource was stopped, with the colocation, order,
> and location constraints in place. When I started the Filesystem resource,
> which is part of a group, it triggered the appropriate drbd slave to
> promote to master and transition to Primary. However, the Filesystem
> resource did not complete or mount the partition, which I believe is
> because notify is not enabled on it. A manual cleanup finally got it to
> start and mount, following all of the constraints I had defined. Next, I
> tried putting the server which was drbd primary into Standby state,
> which caused all kinds of problems (hung process, hung GUI, heartbeat
> shutdown wouldn't complete, etc.). I finally had to restart heartbeat on
> the server I was trying to send into Standby state (note that this node
> was also the DC at the time). So I'm back to where I have drbd in
> slave/slave, secondary/secondary mode, and the Filesystem resource stopped.
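
(Note for anyone reproducing this: both the cleanup and the standby
request can be issued from the command line; a rough sketch only,
assuming the heartbeat 2.0.x tools and the node name used elsewhere in
this thread:

[EMAIL PROTECTED] xml]# crm_resource -C -r fs_mirror -H arc-dknightlx
[EMAIL PROTECTED] xml]# crm_standby -U arc-dknightlx -v on

crm_resource -C clears the resource's failure history on that node so
the CRM will retry it; crm_standby sets the node's standby attribute.)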
> 
> I wanted to add notify="true" to either the filesystem resource itself
> or to its group, but the DTD does not define notify for groups (even
> though, for some reason, the GUI thinks you CAN define the notify
> attribute). I plan on eventually adding an IPaddr and a pgsql resource
> to this group. So I have two questions: 1) where does it make more sense
> to add notify: at the group level or on the individual resource? And 2)
> should the DTD define notify as an attribute of groups?
> 
> Doug
> 
> On Fri, 2007-04-20 at 09:56 +0200, Bernhard Limbach wrote:
> 
> > Hi,
> > 
> > Did you set notify="true" for the drbd master_slave resource? That seemed
> > to help get drbd promoted when I was playing around with the drbd OCF RA.
> > 
> > Still, I did not manage to get it running smoothly and had no time to
> > investigate further, so I reverted to drbddisk.
> > 
> > I'm very curious about the results of your testing, please keep us informed
> > ;o))
> > 
> > Regards,
> > Bernhard
> > 
> > 
> > -------- Original Message --------
> > Date: Thu, 19 Apr 2007 13:51:12 -0400
> > From: Doug Knight <[EMAIL PROTECTED]>
> > To: General Linux-HA mailing list <linux-ha@lists.linux-ha.org>
> > Subject: Re: [Linux-HA] Cannot create group containing drbd using HB GUI
> > 
> > > I made the ID change indicated below (for the colocation constraints),
> > > and everything configured fine using cibadmin. Now, I started JUST the
> > > drbd master/slave resource, with the rsc_location rule setting the
> > > expression uname to one of the two nodes in the cluster. Both drbd
> > > processes come up and sync up the partition, but both are still in
> > > slave/secondary mode (i.e. the rsc_location rule did not cause a
> > > promotion). Am I missing something here? This is the rsc_location
> > > constraint:
> > > 
> > > <rsc_location id="locate_drbd" rsc="rsc_drbd_7788">
> > >         <rule id="rule_drbd_on_dk" role="master" score="100">
> > >                 <expression id="exp_drbd_on_dk" attribute="#uname" operation="eq" value="arc-dknightlx"/>
> > >         </rule>
> > > </rsc_location>
> > > 
> > > (By the way, the example on the Idioms/MasterConstraints web page does
> > > not have an ID specified in the expression tag, so I added one to mine.)
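
(Rereading this while writing up the question at the top: the same
primitive-vs-master_slave ambiguity may apply here, i.e. the location
rule might have to name the master_slave resource. Purely a guess, but
that variant would be:

<rsc_location id="locate_drbd" rsc="ms_drbd_7788">
        <rule id="rule_drbd_on_dk" role="master" score="100">
                <expression id="exp_drbd_on_dk" attribute="#uname" operation="eq" value="arc-dknightlx"/>
        </rule>
</rsc_location>

Not verified.)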
> > > Doug
> > > On Thu, 2007-04-19 at 13:04 -0400, Doug Knight wrote:
> > > 
> > > > ...
> > > > 
> > > > > > > >>>> For example:
> > > > > > > >>>> <rsc_location id="drbd1_loc_nodeA" rsc="drbd1">
> > > > > > > >>>>     <rule id="pref_drbd1_loc_nodeA" score="600">
> > > > > > > >>>>          <expression attribute="#uname" operation="eq" value="nodeA" id="pref_drbd1_loc_nodeA_attr"/>
> > > > > > > >>>>     </rule>
> > > > > > > >>>>     <rule id="pref_drbd1_loc_nodeB" score="800">
> > > > > > > >>>>          <expression attribute="#uname" operation="eq" value="nodeB" id="pref_drbd1_loc_nodeB_attr"/>
> > > > > > > >>>>     </rule>
> > > > > > > >>>> </rsc_location>
> > > > > > > >>>>
> > > > > > > >>>> In this case, nodeB will be primary for resource drbd1. Is that what you were looking for?
> > > > > > > >>> Not like this, not when using the drbd OCF Resource Agent as a master-slave one. In that case, you need to bind the rsc_location to the role=Master as well.
> > > > > > > >> I was missing this in the CIB idioms page.  I just added it.
> > > > > > > >>
> > > > > > > >>        http://linux-ha.org/CIB/Idioms
> > > > 
> > > > 
> > > > I tried setting up colocation constraints similar to those shown in the
> > > > example referenced in the URL above, and cibadmin complained about the
> > > > identical ids:
> > > > 
> > > > [EMAIL PROTECTED] xml]# more rule_fs_on_drbd_slave.xml 
> > > > <rsc_colocation id="fs_on_drbd" to="rsc_drbd_7788" to_role="slave" from="fs_mirror" score="-infinity"/>
> > > > 
> > > > [EMAIL PROTECTED] xml]# more rule_fs_on_drbd_stopped.xml 
> > > > <rsc_colocation id="fs_on_drbd" to="rsc_drbd_7788" to_role="stopped" from="fs_mirror" score="-infinity"/>
> > > > 
> > > > [EMAIL PROTECTED] xml]# cibadmin -o constraints -C -x rule_fs_on_drbd_stopped.xml
> > > > 
> > > > [EMAIL PROTECTED] xml]# cibadmin -o constraints -C -x rule_fs_on_drbd_slave.xml
> > > > Call cib_create failed (-21): The object already exists
> > > >  <failed>
> > > >    <failed_update id="fs_on_drbd" object_type="rsc_colocation" operation="add" reason="The object already exists">
> > > >      <rsc_colocation id="fs_on_drbd" to="rsc_drbd_7788" to_role="slave" from="fs_mirror" score="-infinity"/>
> > > >    </failed_update>
> > > >  </failed>
> > > > 
> > > > I'm going to change the ids to be unique and try again, but I wanted to
> > > > point this out since it is very similar to the example on the web page.
> > > > 
> > > > Doug
> > > > 
> > > > 
> > > > 
> > > > > > > >>        http://linux-ha.org/CIB/Idioms/MasterConstraints
_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
