"Gabriel, Edgar" <egabr...@central.uh.edu> writes:

> It was originally for performance reasons, but this should be fixed at
> this point. I am not aware of correctness problems.
>
> However, let me try to clarify your question about: What do you
> precisely mean by "MPI I/O on Lustre mounts without flock"? Was the
> Lustre filesystem mounted without flock?

No, it wasn't mounted with flock (and romio complains about that).

> If yes, that could lead to
> some problems, we had that on our Lustre installation for a while, but
> problems were even occurring without MPI I/O in that case (although I
> do not recall all details, just that we had to change the mount
> options).

Yes, without at least localflock you might expect problems with things
like bdb and sqlite, but I couldn't see any file locking calls in the
Lustre component.  If it is a problem, shouldn't the component fail
without it, like romio does?
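
For what it's worth, here is roughly the kind of probe I had in mind for
checking whether advisory locking works at all on the mount.  It's just
a sketch of an fcntl(F_SETLK) test, not the actual check romio or the
ompio Lustre component performs, and whether it fails presumably depends
on whether the client was mounted with flock, localflock, or neither:

/* Sketch only: probe whether POSIX advisory locking works on a path.
 * On a Lustre client this typically depends on the mount options
 * (-o flock for cluster-wide locks, -o localflock for node-local ones,
 * neither by default). */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "./lock_probe.tmp";
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    struct flock fl = { 0 };
    fl.l_type   = F_WRLCK;   /* exclusive lock ...      */
    fl.l_whence = SEEK_SET;
    fl.l_start  = 0;
    fl.l_len    = 0;         /* ... over the whole file */

    if (fcntl(fd, F_SETLK, &fl) < 0) {
        perror("fcntl(F_SETLK)");  /* this is what I'd expect to fail */
        close(fd);
        return EXIT_FAILURE;
    }
    printf("advisory locking looks usable on %s\n", path);

    fl.l_type = F_UNLCK;
    fcntl(fd, F_SETLK, &fl);
    close(fd);
    unlink(path);
    return EXIT_SUCCESS;
}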

I have suggested ephemeral PVFS^WOrangeFS but I doubt that will be
thought useful.

> Maybe just take a testsuite (either ours or HDF5), make sure
> to run it in a multi-node configuration and see whether it works
> correctly.

For some reason I thought MTT (if that's what you mean) wasn't
available, but I see it is; I'll try to drive it when I have a chance.
Tests from HDF5 might be easiest, thanks for the suggestion.  I had
tried ANL's "testmpio", which was the only thing I found immediately,
but it threw up errors even on a local filesystem, at which point I
thought it best to ask...  I'll report back if I get useful results.
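
In the meantime, the sort of quick sanity check I had in mind is just a
small collective-I/O program like the sketch below (the output path is a
placeholder, and it's obviously no substitute for a real suite): build
it with mpicc, run it with mpirun across at least two nodes, and point
the argument at a file on the Lustre mount.

/* Sketch of a minimal MPI-IO smoke test, nothing more: each rank writes
 * its own block with a collective call, reads it back and checks it.
 * The file name is just a placeholder. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define COUNT 1024

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const char *path = (argc > 1) ? argv[1] : "smoke_test.dat";

    int wbuf[COUNT], rbuf[COUNT];
    for (int i = 0; i < COUNT; i++)
        wbuf[i] = rank * COUNT + i;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, path,
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);

    MPI_Offset offset = (MPI_Offset)rank * COUNT * sizeof(int);
    MPI_File_write_at_all(fh, offset, wbuf, COUNT, MPI_INT,
                          MPI_STATUS_IGNORE);
    MPI_File_read_at_all(fh, offset, rbuf, COUNT, MPI_INT,
                         MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    /* count mismatches on each rank, then combine across ranks */
    int errors = 0;
    for (int i = 0; i < COUNT; i++)
        if (rbuf[i] != wbuf[i])
            errors++;

    int total = 0;
    MPI_Allreduce(&errors, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0)
        printf("%d mismatched elements across %d ranks\n", total, size);

    MPI_Finalize();
    return total ? EXIT_FAILURE : EXIT_SUCCESS;
}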