Hi Darren,

I believe this is handled by DRBD fencing the Master/Slave resource during resync through Pacemaker. See http://www.drbd.org/users-guide/s-pacemaker-fencing.html. This would prevent NodeA from promoting or starting services with outdated data (the fence-peer handler), and it would be forced to wait with the takeover until the resync is completed (the after-resync-target handler).
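
For reference, a minimal sketch of the drbd.conf pieces that section of the guide describes (the resource name r0 and the script paths are illustrative and may differ between distributions):

  resource r0 {
    disk {
      fencing resource-only;
    }
    handlers {
      # Called when the peer becomes unreachable: places a constraint in
      # the CIB so Pacemaker will not promote DRBD on the outdated node.
      fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
      # Called on the sync target once resync finishes: removes that
      # constraint again, allowing normal promotion/failback.
      after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
  }

While the fence is in place, crm-fence-peer.sh keeps a -INF location constraint (named something like drbd-fence-by-handler-ms-drbd0) in the CIB, so the Master role simply cannot move back until crm-unfence-peer.sh removes it.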

Regards,
Menno

On 11-3-2010 15:52, darren.mans...@opengi.co.uk wrote:
I’ve been reading the DRBD Pacemaker guide on the DRBD.org site and I’m
not sure I can find the answer to my question.

Imagine a scenario:

NodeA, NodeB

Order and group:

  M/S DRBD promote/demote
  FS mount
  Other resource that depends on the FS mount

DRBD master location score of 100 on NodeA
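
In crm shell terms, I mean roughly this (resource names, device, and parameters are placeholders for illustration; the Dummy primitive stands in for the dependent resource):

  primitive drbd0 ocf:linbit:drbd \
          params drbd_resource="r0" \
          op monitor interval="29s" role="Master" \
          op monitor interval="31s" role="Slave"
  ms ms-drbd0 drbd0 \
          meta master-max="1" master-node-max="1" \
               clone-max="2" clone-node-max="1" notify="true"
  primitive fs0 ocf:heartbeat:Filesystem \
          params device="/dev/drbd0" directory="/mnt/data" fstype="ext3"
  # app0 stands in for whatever depends on the filesystem
  primitive app0 ocf:heartbeat:Dummy
  group grp-services fs0 app0
  colocation col-grp-on-master inf: grp-services ms-drbd0:Master
  order ord-promote-then-start inf: ms-drbd0:promote grp-services:start
  location loc-master-on-nodea ms-drbd0 \
          rule $role="Master" 100: #uname eq NodeA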

NodeA is down, resources fail over to NodeB and everything happily runs
for days. When NodeA is brought back online it isn’t treated as
split-brain, so a normal demote/promote would happen. But the data on
NodeA would be very old and could take a long time to sync from NodeB.

What would happen in this scenario? Would the RA defer the promote until
the sync is completed? Would the inability to promote cause the failback
not to happen, requiring a resource cleanup once the sync has completed?

I guess this is really down to how advanced the Linbit DRBD RA is?

Thanks

Darren



_______________________________________________
Pacemaker mailing list
Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
