Hi All, I need the DRBD configuration file for an Active-Active DRBD setup.
Please send me the file, or information on how to write drbd.conf for an active-active cluster configuration.

Thanks & Regards,
Jugal Shah

jugal shah <[EMAIL PROTECTED]> wrote:

Hi All, thank you very much for your help. Could anybody please send me the drbd.conf file for the Active-Active node configuration? Thanks in advance.

Best Regards,
Jugal Shah

jugal shah <[EMAIL PROTECTED]> wrote:

Hi All, thank you very much, Fabio Martins. I still have some confusion regarding the Active-Active node configuration. Heartbeat creates a virtual IP for the active node; if the active node fails, it transfers requests to the secondary node by activating it and assigning it the virtual IP. In this scenario, however, I also need Heartbeat to do load balancing, i.e. I need to know how Heartbeat can distribute requests across both nodes. Is there any way Heartbeat can send requests to both nodes? For example, I have two servers, DBServer1 and DBServer2. DBServer1 serves the USA customers, DBServer2 serves the UK customers. When each group makes entries on its own server, the changes are merged into both database servers, so neither server carries the full load. How do I configure Heartbeat so that it distributes requests to both servers as needed? Thank you very much once again.

Best Regards,
Jugal.

Hi Jugal!

linux-ha-bounces[at]lists.linux-ha.org wrote on 24/05/2007 07:33:22:
> Hi All,
>
> I have done the configuration of DRBD with the help of Heartbeat.
>
> Could anybody please guide me on how to do the Active/Active node
> configuration with the help of Heartbeat and DRBD?
>
> I need DRBD to work like merge replication, so that it captures all
> the changes from both MySQL databases and merges them into both. So in
> my case both MySQL servers are active.

In this case you must have different active disks on each node. For example, on node A you will have the disk drbd0 active and the same disk passive on node B.
On node B you must have drbd1 active and the same disk passive on node A. If you want to have all the disks mounted on both nodes at the same time, you need a cluster filesystem such as GPFS.

Here follows a cib.xml example for what you need. In this case I'm starting drbd0 and drbd2 on node s0580crmdb2pr1 as primary disks and mounting them; on node s0580crmdb2pr2 I'm starting drbd1 and drbd3 as primary and mounting them. Since I am using the drbddisk resource (from version 1), DRBD must already be running (service drbd start; chkconfig --level 35 drbd on):

<cib admin_epoch="0" have_quorum="true" num_peers="1" cib_feature_revision="1.3"
     generated="true" ccm_transition="11" dc_uuid="a985c07a-84e9-4062-a57c-9cbb3799b5ed"
     epoch="47" num_updates="1989" crm-debug-origin="create_node_entry"
     cib-last-written="Tue May 22 14:12:25 2007">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <attributes>
          <nvpair id="cib-bootstrap-options-symmetric_cluster" name="symmetric_cluster" value="true"/>
          <nvpair id="cib-bootstrap-options-no_quorum_policy" name="no_quorum_policy" value="stop"/>
          <nvpair id="cib-bootstrap-options-default_resource_stickiness" name="default_resource_stickiness" value="0"/>
          <nvpair id="cib-bootstrap-options-default_resource_failure_stickiness" name="default_resource_failure_stickiness" value="0"/>
          <nvpair name="stonith_enabled" id="cib-bootstrap-options-stonith_enabled" value="False"/>
          <nvpair id="cib-bootstrap-options-stonith_action" name="stonith_action" value="reboot"/>
          <nvpair id="cib-bootstrap-options-stop_orphan_resources" name="stop_orphan_resources" value="false"/>
          <nvpair id="cib-bootstrap-options-stop_orphan_actions" name="stop_orphan_actions" value="true"/>
          <nvpair id="cib-bootstrap-options-remove_after_stop" name="remove_after_stop" value="false"/>
          <nvpair id="cib-bootstrap-options-short_resource_names" name="short_resource_names" value="true"/>
          <nvpair id="cib-bootstrap-options-transition_idle_timeout" name="transition_idle_timeout" value="120s"/>
          <nvpair id="cib-bootstrap-options-default_action_timeout" name="default_action_timeout" value="1200s"/>
          <nvpair id="cib-bootstrap-options-is_managed_default" name="is_managed_default" value="true"/>
          <nvpair name="last-lrm-refresh" id="cib-bootstrap-options-last-lrm-refresh" value="1161269979"/>
        </attributes>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="a985c07a-84e9-4062-a57c-9cbb3799b5ed" uname="s0580crmdb2pr2" type="normal"/>
      <node id="e63ba264-c9e7-48a9-80a7-f6d12a22d4b0" uname="s0580crmdb2pr1" type="normal"/>
    </nodes>
    <resources>
      <group ordered="true" description="Resource group db2pr1" restart_type="ignore"
             resource_stickiness="0" is_managed="default" collocated="true"
             multiple_active="stop_start" id="group_db2pr1">
        <primitive id="IP_db2pr1" class="ocf" type="IPaddr" provider="heartbeat"
                   restart_type="ignore" is_managed="default" resource_stickiness="0"
                   description="IP used by dbpr1" multiple_active="stop_start">
          <instance_attributes id="IP_db2pr1_instance_attrs">
            <attributes>
              <nvpair id="eb355e01-1f73-4ce7-9d43-edf0f160d77d" name="ip" value="10.226.13.12"/>
              <nvpair id="IP_db2pr1_target_role" name="target_role" value="started"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive id="resource_drbd0" class="heartbeat" type="drbddisk" provider="heartbeat"
                   restart_type="ignore" is_managed="default" resource_stickiness="0"
                   multiple_active="stop_start">
          <instance_attributes id="resource_drbd0_instance_attrs">
            <attributes>
              <nvpair id="resource_drbd0_target_role" name="target_role" value="started"/>
              <nvpair id="add7aa21-a89a-40a1-ab34-6c626d336629" name="1" value="rmpath0-part1"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive id="resource_drbd2" class="heartbeat" type="drbddisk" provider="heartbeat"
                   restart_type="ignore" is_managed="default" resource_stickiness="0"
                   multiple_active="stop_start">
          <instance_attributes id="resource_drbd2_instance_attrs">
            <attributes>
              <nvpair id="resource_drbd2_target_role" name="target_role" value="started"/>
              <nvpair id="cf0e6792-e667-47f0-9dc4-784b6e8f98a7" name="1" value="rmpath2-part1"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive id="resource_drbd0_fs" class="ocf" type="Filesystem" provider="heartbeat"
                   restart_type="ignore" is_managed="default" resource_stickiness="0"
                   multiple_active="stop_start">
          <instance_attributes id="resource_drbd0_fs_instance_attrs">
            <attributes>
              <nvpair id="resource_drbd0_fs_target_role" name="target_role" value="started"/>
              <nvpair id="3b4e294c-7521-46a6-b3d3-9c3df991e4d0" name="device" value="/dev/drbd0"/>
              <nvpair id="cfd540a4-989f-4a5f-a9a0-90f95d77bb20" name="directory" value="/dbtbs"/>
              <nvpair id="c9788a08-5070-45a0-beab-33b34cc6e2b2" name="fstype" value="ext3"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive id="resource_drbd2_fs" class="ocf" type="Filesystem" provider="heartbeat"
                   restart_type="ignore" is_managed="default" resource_stickiness="0"
                   multiple_active="stop_start">
          <instance_attributes id="resource_drbd2_fs_instance_attrs">
            <attributes>
              <nvpair id="resource_drbd2_fs_target_role" name="target_role" value="started"/>
              <nvpair id="343f1b2c-9f16-4aa1-8456-18c098ecc147" name="device" value="/dev/drbd2"/>
              <nvpair id="d6fea532-33ad-4574-a580-e0f6f358b691" name="directory" value="/dbtemp"/>
              <nvpair id="45441dad-ca3b-47a5-ae86-d10e5e636a2e" name="fstype" value="ext3"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive id="resource_db2_prod" class="ocf" type="db2" provider="heartbeat"
                   restart_type="ignore" is_managed="default" resource_stickiness="0"
                   description="DB2 Production" multiple_active="stop_start">
          <instance_attributes id="resource_db2_prod_instance_attrs">
            <attributes>
              <nvpair id="resource_db2_prod_target_role" name="target_role" value="started"/>
              <nvpair id="94c70633-51d4-4cd8-90d2-4e691be4f44f" name="instance" value="db2admin"/>
            </attributes>
          </instance_attributes>
        </primitive>
      </group>
      <group id="group_db2pr2" ordered="true" description="Resource group db2pr2"
             restart_type="ignore" resource_stickiness="0" is_managed="default"
             collocated="true" multiple_active="stop_start">
        <primitive id="IP_db2pr2" class="ocf" type="IPaddr" provider="heartbeat"
                   restart_type="ignore" is_managed="default" resource_stickiness="0"
                   description="IP used by db2pr2" multiple_active="stop_start">
          <instance_attributes id="IP_db2pr2_instance_attrs">
            <attributes>
              <nvpair id="IP_db2pr2_target_role" name="target_role" value="started"/>
              <nvpair id="10c42a39-8feb-4bd7-a9e6-1b07c8f90b7e" name="ip" value="10.226.13.13"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive id="resource_drbd1" class="heartbeat" type="drbddisk" provider="heartbeat"
                   restart_type="ignore" is_managed="default" resource_stickiness="0"
                   multiple_active="stop_start">
          <instance_attributes id="resource_drbd1_instance_attrs">
            <attributes>
              <nvpair id="resource_drbd1_target_role" name="target_role" value="started"/>
              <nvpair id="59672e78-5be2-479b-aee9-e1ebee042022" name="1" value="rmpath1-part1"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive id="resource_drbd3" class="heartbeat" type="drbddisk" provider="heartbeat"
                   restart_type="ignore" is_managed="default" resource_stickiness="0"
                   multiple_active="stop_start">
          <instance_attributes id="resource_drbd3_instance_attrs">
            <attributes>
              <nvpair id="resource_drbd3_target_role" name="target_role" value="started"/>
              <nvpair id="418e16c6-c5a3-4dd8-8534-727c10bd182c" name="1" value="rmpath2-part2"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive id="resource_drbd1_fs" class="ocf" type="Filesystem" provider="heartbeat"
                   restart_type="ignore" is_managed="default" resource_stickiness="0"
                   multiple_active="stop_start">
          <instance_attributes id="resource_drbd1_fs_instance_attrs">
            <attributes>
              <nvpair id="resource_drbd1_fs_target_role" name="target_role" value="started"/>
              <nvpair id="ad6d9329-94ed-4d91-b72c-0e32a978de37" name="device" value="/dev/drbd1"/>
              <nvpair id="a7dc2e3e-d520-404b-9f41-d128e14ec6b9" name="directory" value="/dbtbs2"/>
              <nvpair id="14c454de-8310-42ab-9a3f-5ff303ed007f" name="fstype" value="ext3"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive id="resource_drbd3_fs" class="ocf" type="Filesystem" provider="heartbeat"
                   restart_type="ignore" is_managed="default" resource_stickiness="0"
                   multiple_active="stop_start">
          <instance_attributes id="resource_drbd3_fs_instance_attrs">
            <attributes>
              <nvpair id="resource_drbd3_fs_target_role" name="target_role" value="started"/>
              <nvpair id="a9a70879-a902-479f-a148-359b13b9ffe6" name="device" value="/dev/drbd3"/>
              <nvpair id="0479c1c6-3d70-4b12-a9d9-6b9328f208f8" name="directory" value="/dblog"/>
              <nvpair id="98451c5d-29bf-4469-8a7d-c22798020d6f" name="fstype" value="ext3"/>
            </attributes>
          </instance_attributes>
        </primitive>
      </group>
    </resources>
    <constraints>
      <rsc_location id="place_db2pr1" rsc="group_db2pr1">
        <rule id="prefered_place_db2pr1" score="100">
          <expression attribute="#uname" id="745c2e82-e6cb-4c56-9611-797a2533a47d" operation="eq" value="s0580crmdb2pr1"/>
        </rule>
      </rsc_location>
      <rsc_location id="place_db2pr2" rsc="group_db2pr2">
        <rule id="prefered_place_db2pr2" score="100">
          <expression attribute="#uname" id="4008e22f-7953-4398-bf10-4680d233ce1b" operation="eq" value="s0580crmdb2pr2"/>
        </rule>
      </rsc_location>
    </constraints>
  </configuration>
</cib>

I hope it helps you! :)

Best Regards,
Fabio Martins
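For the drbd.conf Jugal asked about: with this kind of active-active layout, drbd.conf itself stays symmetric — it only defines one resource per device, and which node is Primary for each resource is decided by Heartbeat's drbddisk agent, not by drbd.conf. A minimal sketch for the first two resources, using the DRBD syntax of that era and hypothetical hostnames (node-a, node-b), backing disks, and IPs (none of these names come from the thread):

```
# /etc/drbd.conf — sketch, assuming hypothetical hosts node-a/node-b,
# backing disks /dev/sda1 and /dev/sdb1, and replication IPs 10.0.0.1/10.0.0.2.
# Heartbeat will make node-a Primary for r0 and node-b Primary for r1.

resource r0 {
  protocol C;                 # synchronous replication
  on node-a {
    device    /dev/drbd0;
    disk      /dev/sda1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd0;
    disk      /dev/sda1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}

resource r1 {
  protocol C;
  on node-a {
    device    /dev/drbd1;
    disk      /dev/sdb1;
    address   10.0.0.1:7789;  # each resource needs its own port
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd1;
    disk      /dev/sdb1;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```

The file would be identical on both nodes. After starting DRBD on both sides (service drbd start, as Fabio notes), you would make node-a Primary for r0 and node-b Primary for r1 once by hand (drbdadm primary r0 on node-a, drbdadm primary r1 on node-b); from then on, the drbddisk resources in the cib.xml above flip the roles on failover.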
_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems