Guy wrote:
Hi guys,

After much fiddling and learning (still loads to do, though) I've got
my two-node primary/secondary & secondary/primary setup more or less
working. On failure of node 1, node 2 takes both DRBD partitions as
primary, mounts them and starts NFS and so on.
When node 1 is brought back up I wait for node 2 to sync back over the
old primary on node 1, and then use crm_resource to move the one
primary back to node 1. Is there any way to do this automatically,
i.e. wait for DRBD to sync and then push the primary back to node 1?
This is just curiosity; doing it manually gives me a chance to see
that all is well.

The problem I've really got is what happens if one node just loses
network connectivity. I've played around with dopd and pingd but this
hasn't given the desired results. I have one interface into the
network and one connecting the machines by a crossover cable.
If node 1 loses connectivity (with dopd running), it fences /dev/drbd0
on node 2, thus stopping node 2 from taking it over as primary. What I
really need is for any primary partitions on the disconnected node to
be forced into secondary mode, so that the "live" node can sync back
to it once connectivity is restored. I don't see that dopd can help me
with this, so do I make some sort of constraint with pingd to demote
the partitions if there's no connectivity?

Just set a score of -INFINITY for the master role when the pingd attribute is 0 or not defined.

Something like:

<rsc_location id="my_resource:connected" rsc="my_resource">
  <rule role="master" id="my_resource:connected:rule" score="-INFINITY" boolean_op="or">
    <expression id="my_resource:connected:expr:undefined"
      attribute="pingd" operation="not_defined"/>
    <expression id="my_resource:connected:expr:zero"
      attribute="pingd" operation="lte" value="0"/>
  </rule>
</rsc_location>
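
You'd want one constraint like that for each master/slave resource, and the pingd attribute obviously has to be populated by a running pingd. As a rough sketch (paths, option letters and the ping target are only examples and may differ on your installation), the usual heartbeat way is a ping node plus a respawned pingd in ha.cf:

# ha.cf: address to ping to judge network connectivity
ping 192.168.1.254
# keep pingd running; -m is the score added per reachable ping
# node, -d dampens flapping before the attribute is updated
respawn hacluster /usr/lib/heartbeat/pingd -m 100 -d 5s

pingd can also be run as a cloned resource in the CIB instead; either way, the attribute it sets (pingd by default) has to match the one referenced in the constraint.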

I have location constraints putting one primary partition on each
node, so would I need to do something with the scoring to ensure that
demoted partitions stayed that way until the DRBD resync had finished?

That does not seem possible right now.

I'd go with keeping the primary on the second node until you have manually verified that DRBD has synced, and then migrating it back by hand.
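
The manual move itself is just crm_resource; something along these lines (ms_drbd0 and node1 are placeholders for whatever your master/slave resource and nodes are called, and the exact option letters depend on your crm_resource version):

# once DRBD reports UpToDate/UpToDate, move the resource
# (and with it the master role) back to node 1
crm_resource -M -r ms_drbd0 -H node1
# -M works by inserting a location constraint into the CIB,
# so remove it again once the move has happened
crm_resource -U -r ms_drbd0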

I've attached my conf files. As you can see, the only constraints I
currently have are the location preferences for the primary partitions,
plus the colocation and order constraints to ensure the groups for the
fs, nfs and ipaddr only start on the appropriate node.
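
The constraints described would look roughly like this in pacemaker-style CIB syntax (ms_drbd0, grp_nfs0 and node1 are placeholders, and attribute names differ slightly between the heartbeat 2.x and newer schemas):

<!-- prefer the master role of the first DRBD resource on node1 -->
<rsc_location id="loc_drbd0_master_node1" rsc="ms_drbd0">
  <rule id="loc_drbd0_master_node1_rule" role="master" score="100">
    <expression id="loc_drbd0_master_node1_expr"
      attribute="#uname" operation="eq" value="node1"/>
  </rule>
</rsc_location>
<!-- run the fs/nfs/ip group where that DRBD resource is master -->
<rsc_colocation id="col_nfs0_with_drbd0_master" score="INFINITY"
  rsc="grp_nfs0" with-rsc="ms_drbd0" with-rsc-role="Master"/>
<!-- and only start the group after the promotion has happened -->
<rsc_order id="ord_drbd0_promote_before_nfs0"
  first="ms_drbd0" first-action="promote"
  then="grp_nfs0" then-action="start"/>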

Regards
Dominik
