On 3/3/12 2:14 PM, Florian Haas wrote:
On Sat, Mar 3, 2012 at 6:55 PM, William Seligman
<selig...@nevis.columbia.edu>  wrote:
On 3/3/12 12:03 PM, emmanuel segura wrote:

are you sure the exportfs agent can be used as a clone in an active/active setup?

a) I've been through the script. If there's some problem associated with it
being cloned, I haven't seen it. (It can't handle globally-unique="true",
but I didn't turn that on.)

It shouldn't have a problem with being cloned. Obviously, cloning that
RA _really_ makes sense only with the export that manages an NFSv4
virtual root (fsid=0). Otherwise, the export clone has to be hosted on
a clustered filesystem, and you'd have to have a pNFS implementation
that doesn't suck (tough to come by on Linux), and if you want that
sort of replicated, parallel-access NFS you might as well use Gluster.
The downside of the latter, though, is that it's currently NFSv3-only,
without sideband locking.

I'll look this over when I have a chance. I think I can get away without an NFSv4 virtual root, because everything I export to my cluster is either read-only or written by only one system at a time. Now that you've warned me, I'll do some more checking.
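
For my own notes, here's a rough sketch of what I understand a cloned exportfs resource for an NFSv4 virtual root to look like; the directory, clientspec, and resource names below are placeholders, not my actual config:

  primitive p_exportfs_root ocf:heartbeat:exportfs \
          params fsid="0" directory="/srv/nfs" \
                 options="rw,crossmnt" clientspec="10.0.0.0/24" \
          op monitor interval="30s"
  clone cl_exportfs_root p_exportfs_root \
          meta interleave="true"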

b) I had similar problems using the exportfs resource in a primary-secondary
setup without clones.

Why would a resource being cloned create an ordering problem? I haven't set
the interleave parameter (even with the documentation I'm not sure what it
does) but A before B before C seems pretty clear, even for cloned resources.

As for what interleave does: suppose you have two clones, A and B,
linked with an order constraint like this:

order A_before_B inf: A B

... then if interleave is false, _all_ instances of A must be started
before _any_ instance of B gets to start anywhere in the cluster.
However, if interleave is true, then on any given node only the _local_
instance of A needs to be started before the corresponding _local_
instance of B can start there.

In other words, interleave=true is actually the reasonable thing to
set on all clones by default, and I believe the pengine has
defaulted to interleave=true on defined clone sets
since some 1.1.x release (I don't recall which).

Thanks, Florian. That's a great explanation. I'll probably stick "interleave=true" on most of my clones just to make sure.
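
Concretely, I take that to mean adding the meta attribute to each clone definition, something like this (sticking with Florian's A/B names; the primitives are made up):

  clone A A_primitive meta interleave="true"
  clone B B_primitive meta interleave="true"
  order A_before_B inf: A B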

It explains an error message I've seen in the logs:

Mar 2 18:15:19 hypatia-tb pengine: [4414]: ERROR: clone_rsc_colocation_rh: Cannot interleave clone ClusterIPClone and Gfs2Clone because they do not support the same number of resources per node

Because ClusterIPClone has globally-unique=true and clone-max=2, it's possible for both instances to be running on a single node; I've seen this a few times in my testing when cycling power on one of the nodes. Interleaving doesn't make sense in such a case.
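
For context, the ClusterIP clone is configured roughly along these lines (the address and hash values here are illustrative, not copied from my real config):

  primitive ClusterIP ocf:heartbeat:IPaddr2 \
          params ip="192.168.1.100" cidr_netmask="24" \
                 clusterip_hash="sourceip-sourceport" \
          op monitor interval="30s"
  clone ClusterIPClone ClusterIP \
          meta globally-unique="true" clone-max="2" clone-node-max="2"

With clone-node-max="2", both instances are allowed to land on the same node, which is exactly what I saw after cycling power.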

Bill, seeing as you've already pastebinned your config and crm_mon
output, could you also pastebin your whole CIB, i.e. the output of
"cibadmin -Q"? Thanks.

Sure: <http://pastebin.com/pjSJ79H6>. It doesn't have the exportfs resources in it; I took them out before leaving for the weekend. If it helps, I'll put them back in and try to get the "cibadmin -Q" output before any nodes crash.
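
(When I do, I'll dump the CIB straight to a file, something like

  cibadmin -Q > cib-with-exportfs.xml

so I have a copy on hand even if a node goes down before I can pastebin it.)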

--
Bill Seligman             | mailto://selig...@nevis.columbia.edu
Nevis Labs, Columbia Univ | http://www.nevis.columbia.edu/~seligman/
PO Box 137                |
Irvington NY 10533  USA   | Phone: (914) 591-2823

