On 12/19/2014 09:52 PM, Rob Latham wrote:
Please don't use NFS for MPI-IO. ROMIO makes a best effort, but there's no way to guarantee you won't corrupt a block of data (NFS clients are allowed to cache... arbitrarily, it seems). There are so many good parallel file systems with saner consistency semantics.

OK. But how can I know the type of filesystem my users will work on? For small jobs, they may have data on NFS and not care too much about read/write speed... and I want only one file format that can be used on any filesystem...

Do you recommend disabling ROMIO/NFS support when configuring MPICH (how do you ask configure to do that)?

What other library is recommended if I have to write distributed data on NFS? Does HDF5, for example, switch from MPI I/O to something else when doing collective I/O on NFS?

I don't want to write a file-output function that depends on the final type of filesystem... I expect the library to do a good job for me... and I have chosen MPI I/O to do that job... ;-)

I can't tell anything about whether NFS is usable or not with MPI I/O... I just use it because our nightly tests write results to NFS partitions... as our users may do...
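For the build-time question above: ROMIO's configure accepts a list of file-system drivers to compile in, so NFS support can be left out entirely. A sketch, assuming the `--with-file-system` option as described in ROMIO's README (verify the exact spelling and supported driver list for your MPICH release):

```shell
# Build MPICH with ROMIO restricted to the plain-Unix (ufs) driver,
# omitting the NFS driver entirely.
# (Option name per ROMIO's README; check it for your MPICH version.)
./configure --prefix=/opt/mpich --with-file-system=ufs
make && make install
```

At run time, ROMIO also honors a file-system prefix on the filename passed to MPI_File_open (for example "ufs:output.dat") to force a particular driver instead of autodetecting one; that can be another way to keep a job off the NFS code path.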
This looks like maybe a calloc would clean it right up.
Ok, the point is: is there a bug, and can it be fixed (even if it is not recommended to use ROMIO/NFS) or at least tracked?
Thanks! Eric