I ran into similar behavior with an earlier version of GlusterFS
on a raw disk (not DRBD).  In that case it was a bug in Gluster:
although the nodes were supposed to be operating in a
"mirror" configuration, the one remaining node would refuse to
service requests after the other node was STONITH'd, because
(surprise, surprise) it had lost contact with that node.

That bug is fixed in the current version.

I know that Gluster != OCFS2, but maybe the cause in your
case is analogous?

Devin


_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
