On 01/07/2015 11:51 AM, Dave Love wrote:
> Emmanuel Florac <[email protected]> writes:
>
>>> 2. Run code that uses MPI-IO (or HDF5) to read or write (big) files.
>>>    If yes, do I have to modify my code and/or recompile MPI (or
>>>    HDF5) with specific options to account for PVFS?
>>
>> If your PVFS cluster is installed with the MPIO libraries, it should
>> work without code modification. You may need some configuration, though,
>> such as running HDF5 with the proper destination storage parameters,
>> something like
>>
>>   HDF5_PREFIX=pvfs2:/myorangefs mpiexec /whatever/program
> I'm not sure what "MPIO libraries" means, but at least with Open MPI
> and using ROMIO, you need to configure it to build against pvfs2 with
> something like
>
>   ./configure ... --with-io-romio-flags="--with-file_system=ufs+pvfs2"
>
> (Some Open MPI versions need patching, but I think the latest 1.8 is OK.)
> I don't know MPICH, but perhaps Rob Latham will chime in with
> authoritative answers.
We did some PVFS cleanups back in 2012, so I hope they have propagated
to recent Open MPI releases by now!
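For reference, here is a minimal sketch of what "no code modification" means in practice: standard MPI-IO calls, with the "pvfs2:" prefix on the path steering ROMIO to its PVFS2 driver (the mount point /myorangefs and file name are placeholders; this assumes an MPI built with the configure flags above and requires an MPI runtime to execute):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* The "pvfs2:" prefix is optional if the path is under a mounted
       PVFS2 volume, but it lets ROMIO bypass the kernel client and
       talk to the file system directly. Path is a placeholder. */
    int rc = MPI_File_open(MPI_COMM_WORLD, "pvfs2:/myorangefs/testfile",
                           MPI_MODE_CREATE | MPI_MODE_WRONLY,
                           MPI_INFO_NULL, &fh);
    if (rc == MPI_SUCCESS) {
        /* Each rank writes its rank number at a disjoint offset. */
        MPI_File_write_at(fh, (MPI_Offset)rank * sizeof(int),
                          &rank, 1, MPI_INT, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);
    } else {
        fprintf(stderr, "rank %d: MPI_File_open failed (rc=%d)\n", rank, rc);
    }

    MPI_Finalize();
    return 0;
}
```

The same binary works on a plain local file system by dropping the "pvfs2:" prefix; only the build-time ROMIO configuration changes.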
==rob
--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users