Hello!

I am working on a cluster with two Master/Slave instances.

I have: 2 DRBD Master/Slave instances, 1 pingd clone instance, and 2 groups, each containing a Filesystem resource.

ms-drbd0 is the first Master/Slave resource.
ms-drbd1 is the second Master/Slave resource.
mail_Group is the first group; it depends on ms-drbd0.
samba_Group is the second group; it depends on ms-drbd1.

I have the following rules:

<rsc_order id="mail-drbd0_before_fs0" from="Montaxe_mail" action="start" to="ms-drbd0" to_action="promote"/> <rsc_order id="samba-drbd1_before_fs0" from="Montaxe_samba" action="start" to="ms-drbd1" to_action="promote"/> (starts Montaxe_mail when ms-drbd0 has been promoted, start Montaxe_samba when ms-drbd1 has been promoted. These rules are ok, I think)

<rsc_colocation id="mail_Group_on_ms-drbd0" to="ms-drbd0" to_role="master" from="mail_Group" score="INFINITY"/> <rsc_colocation id="samba_Group_on_ms-drbd1" to="ms-drbd1" to_role="master" from="samba_Group" score="INFINITY"/> (Run mail_Group only on the master node, run samba_Group on the master node)

<rsc_location id="mail:drbd" rsc="ms-drbd0">
<rule id="rule:ms-drbd0" role="master" score="100">
<expression attribute="#uname" operation="eq" value="debianquagga2"/>
</rule>
<rule id="mail_Group:pingd:rule" score="-INFINITY" boolean_op="or">
<expression id="mail_Group:pingd:expr:undefined" attribute="pingd" operation="not_defined"/> <expression id="mail_Group:pingd:expr:zero" attribute="pingd" operation="lte" value="0"/>
</rule>
</rsc_location>
<rsc_location id="samba:drbd" rsc="ms-drbd1">
<rule id="rule:ms-drbd1" role="master" score="100">
<expression attribute="#uname" operation="eq" value="debianquagga2"/>
</rule>
<rule id="samba_Group:pingd:rule" score="-INFINITY" boolean_op="or">
<expression id="samba_Group:pingd:expr:undefined" attribute="pingd" operation="not_defined"/> <expression id="samba_Group:pingd:expr:zero" attribute="pingd" operation="lte" value="0"/>
</rule>
</rsc_location>
(Prefer debianquagga2 as Master; if the node loses its network connectivity, the -INFINITY score forces a failover. This applies to both ms-drbd0 and ms-drbd1.)
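To confirm that pingd is really setting the attribute these rules test, I grep for it in the status section of the live CIB (a quick check; the attribute name has to match the "pingd" used in the rules above):

   # show the pingd node attribute as recorded in the cluster status section
   cibadmin -Q -o status | grep pingd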

With these rules everything works very well, but the node selected as Master isn't "debianquagga2". What could be the reason?

I am using Heartbeat 2.1.4.
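To see how the policy engine scores each node for the Master role, I run ptest against the live CIB with high verbosity so the placement scores show up in the output (options as on my install; I am not sure every 2.1.4 build accepts exactly these flags):

   # dump the policy engine's placement decisions and scores for the current CIB
   ptest -L -VVVV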


I have attached the CIB XML file. If I delete the two groups, the Master is debianQuagga2; if not, the Master is debianQuagga1.
   <configuration>
     <crm_config>
       <cluster_property_set id="cib-bootstrap-options">
         <attributes>
           <nvpair id="symmetric-cluster" name="symmetric-cluster" value="true"/>
           <nvpair id="stonith-enabled" name="stonith-enabled" value="true"/>
           <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="2.1.4-node: aa909246edb386137b986c5773344b98c6969999"/>
         </attributes>
       </cluster_property_set>
     </crm_config>
     <nodes>
       <node id="369975ae-9c9a-497d-9aca-2a47cda0e4ce" uname="debianquagga2" type="normal"/>
       <node id="90375d05-1004-43f9-992e-4b516d75d50b" uname="debianquagga1" type="normal"/>
     </nodes>
     <resources>
       <master_slave id="ms-drbd0">
         <meta_attributes id="ma-ms-drbd0">
           <attributes>
             <nvpair id="ma-ms-drbd0-1" name="clone_max" value="2"/>
             <nvpair id="ma-ms-drbd0-2" name="clone_node_max" value="1"/>
             <nvpair id="ma-ms-drbd0-3" name="master_max" value="1"/>
             <nvpair id="ma-ms-drbd0-4" name="master_node_max" value="1"/>
             <nvpair id="ma-ms-drbd0-5" name="notify" value="yes"/>
             <nvpair id="ma-ms-drbd0-6" name="globally_unique" value="true"/>
           </attributes>
         </meta_attributes>
         <primitive id="drbd0" class="ocf" provider="heartbeat" type="drbd">
           <instance_attributes id="ia-drbd0">
             <attributes>
               <nvpair id="ia-drbd0-1" name="drbd_resource" value="mail_disk"/>
             </attributes>
           </instance_attributes>
           <operations>
             <op id="op-ms-drbd2-1" name="monitor" interval="59s" timeout="60s" start_delay="30s" role="Master"/>
             <op id="op-ms-drbd2-2" name="monitor" interval="60s" timeout="60s" start_delay="30s" role="Slave"/>
           </operations>
         </primitive>
       </master_slave>
       <master_slave id="ms-drbd1">
         <meta_attributes id="ma-ms-drbd1">
           <attributes>
             <nvpair id="ma-ms-drbd1-1" name="clone_max" value="2"/>
             <nvpair id="ma-ms-drbd1-2" name="clone_node_max" value="1"/>
             <nvpair id="ma-ms-drbd1-3" name="master_max" value="1"/>
             <nvpair id="ma-ms-drbd1-4" name="master_node_max" value="1"/>
             <nvpair id="ma-ms-drbd1-5" name="notify" value="yes"/>
             <nvpair id="ma-ms-drbd1-6" name="globally_unique" value="true"/>
           </attributes>
         </meta_attributes>
         <primitive id="drbd1" class="ocf" provider="heartbeat" type="drbd">
           <instance_attributes id="ia-drbd1">
             <attributes>
               <nvpair id="ia-drbd1-1" name="drbd_resource" value="samba_disk"/>
             </attributes>
           </instance_attributes>
           <operations>
             <op id="op-ms-drbd21-1" name="monitor" interval="59s" timeout="60s" start_delay="30s" role="Master"/>
             <op id="op-ms-drbd21-2" name="monitor" interval="60s" timeout="60s" start_delay="30s" role="Slave"/>
           </operations>
         </primitive>
       </master_slave>
       <group id="mail_Group">
         <primitive class="ocf" id="Montaxe_mail" provider="heartbeat" type="Filesystem">
           <operations>
             <op id="op_mail_Group" name="start" timeout="500s"/>
           </operations>
           <instance_attributes id="ia_mail_Group">
             <attributes>
               <nvpair id="drbd_mail_Group_1" name="device" value="/dev/drbd0"/>
               <nvpair id="drbd_mail_Group_2" name="directory" value="/mnt/mail"/>
               <nvpair id="drbd_mail_Group_3" name="fstype" value="ext3"/>
             </attributes>
           </instance_attributes>
         </primitive>
       </group>
       <group id="samba_Group">
         <primitive class="ocf" id="Montaxe_samba" provider="heartbeat" type="Filesystem">
           <operations>
             <op id="op_samba_Group" name="start" timeout="500s"/>
           </operations>
           <instance_attributes id="ia_samba_Group">
             <attributes>
               <nvpair id="drbd_samba_Group_1" name="device" value="/dev/drbd1"/>
               <nvpair id="drbd_samba_Group_2" name="directory" value="/mnt/samba"/>
               <nvpair id="drbd_samba_Group_3" name="fstype" value="ext3"/>
             </attributes>
           </instance_attributes>
         </primitive>
       </group>
       <clone id="pingd">
         <instance_attributes id="pingd">
           <attributes>
             <nvpair id="pingd-clone_node_max" name="clone_node_max" value="1"/>
           </attributes>
         </instance_attributes>
         <primitive id="pingd-child" provider="heartbeat" class="ocf" type="pingd">
           <operations>
             <op id="pingd-child-monitor" name="monitor" interval="20s" timeout="40s" prereq="nothing"/>
             <op id="pingd-child-start" name="start" prereq="nothing"/>
           </operations>
           <instance_attributes id="pingd_inst_attr">
             <attributes>
               <nvpair id="pingd-dampen" name="dampen" value="5s"/>
               <nvpair id="pingd-multiplier" name="multiplier" value="100"/>
             </attributes>
           </instance_attributes>
         </primitive>
       </clone>
     </resources>
     <constraints>
       <rsc_order id="mail-drbd0_before_fs0" from="Montaxe_mail" action="start" to="ms-drbd0" to_action="promote"/>
       <rsc_order id="samba-drbd1_before_fs0" from="Montaxe_samba" action="start" to="ms-drbd1" to_action="promote"/>
       <rsc_colocation id="mail_Group_on_ms-drbd0" to="ms-drbd0" to_role="master" from="mail_Group" score="INFINITY"/>
       <rsc_colocation id="samba_Group_on_ms-drbd1" to="ms-drbd1" to_role="master" from="samba_Group" score="INFINITY"/>
       <rsc_location id="pingd-rules" rsc="pingd">
         <rule id="pingd-clone-rules" score="100" boolean_op="or">
           <expression attribute="#hostname" operation="eq" value="debianQuagga1" id="aa400a39-a27e-4a64-8bf2-2d14d0ef756e"/>
           <expression attribute="#hostname" operation="eq" value="debianQuagga2" id="123c3e27-a114-409b-86d3-2ef33f7325ed"/>
         </rule>
       </rsc_location>
       <rsc_location id="mail:drbd" rsc="ms-drbd0">
         <rule id="rule:ms-drbd0" role="master" score="100">
           <expression attribute="#hostname" operation="eq" value="debianQuagga2" id="c18482e3-ae25-4f6b-9cf7-c92f8458ebff"/>
         </rule>
       </rsc_location>
       <rsc_location id="samba:drbd" rsc="ms-drbd1">
         <rule id="rule:ms-drbd1" role="master" score="100">
           <expression attribute="#hostname" operation="eq" value="debianQuagga2" id="c7fe12c9-3d30-421e-b954-28745928bd18"/>
         </rule>
       </rsc_location>
     </constraints>
   </configuration>
