The preference for romio was originally for performance reasons, but that should be fixed at this point. I am not aware of correctness problems with ompio.
However, let me try to clarify your question: what precisely do you mean by "MPI I/O on Lustre mounts without flock"? Was the Lustre filesystem mounted without the flock option? If so, that could lead to problems. We had that on our Lustre installation for a while, and problems occurred even without MPI I/O (I do not recall all the details, only that we had to change the mount options).

Maybe take a test suite (either ours or HDF5's), make sure to run it in a multi-node configuration, and see whether it works correctly.

Thanks
Edgar

> -----Original Message-----
> From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Dave Love
> Sent: Friday, October 5, 2018 5:15 AM
> To: users@lists.open-mpi.org
> Subject: [OMPI users] ompio on Lustre
>
> Is romio preferred over ompio on Lustre for performance or correctness?
> If it's relevant, the context is MPI-IO on Lustre mounts without flock, which
> ompio doesn't seem to require.
> Thanks.
> _______________________________________________
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users
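As a follow-up, here is a minimal sketch of how one might check the mount options and pin the MPI-IO component for such a test run. This is a config/diagnostic fragment, not a definitive recipe: the exact romio component name varies by Open MPI release (e.g. romio314, romio321), and `./mpiio_test` is a hypothetical placeholder for whatever test binary you use.

```shell
# Check whether the Lustre client mount includes the flock option.
# If neither "flock" nor "localflock" appears, the filesystem was
# mounted without cross-node file locking support.
mount -t lustre

# Force ompio for a test run (hypothetical test binary):
mpirun --mca io ompio ./mpiio_test

# Or exclude ompio so the romio component is selected instead:
mpirun --mca io ^ompio ./mpiio_test
```

Comparing results from the two runs on the same multi-node Lustre mount should show whether the flock-related behavior differs between the components.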