Mark Allen via users writes:
> At least for the topic of why romio fails with HDF5, I believe this is the
> fix we need (it has to do with how romio processes the MPI datatypes in its
> flatten routine). I made a different fix a long time ago in SMPI for that,
> then somewhat more recently it was r
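(For readers who haven't looked at that code path: ROMIO "flattens" any derived
datatype used as the file view into an offset/length list before performing
collective I/O, and HDF5 builds exactly that kind of datatype under the hood.
The C sketch below is purely illustrative of an access pattern that exercises
that routine; the file name and extents are made up, and it is not the
HDF5-generated type or the fix from the pull request.)

/* Illustrative only: a collective write through a derived filetype,
 * i.e. the kind of non-contiguous access that ROMIO's flatten code
 * turns into an (offset, length) list. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each rank owns one row-block of a global 2-D array (sizes are made up). */
    int gsizes[2] = { 4 * nprocs, 8 };   /* global extent       */
    int lsizes[2] = { 4, 8 };            /* this rank's block   */
    int starts[2] = { 4 * rank, 0 };     /* offset of the block */

    MPI_Datatype filetype;
    MPI_Type_create_subarray(2, gsizes, lsizes, starts,
                             MPI_ORDER_C, MPI_INT, &filetype);
    MPI_Type_commit(&filetype);

    int buf[4 * 8];
    for (int i = 0; i < 4 * 8; i++)
        buf[i] = rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "flatten_demo.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* The derived filetype set as the file view here is what ROMIO
     * flattens before carrying out the collective write below. */
    MPI_File_set_view(fh, 0, MPI_INT, filetype, "native", MPI_INFO_NULL);
    MPI_File_write_all(fh, buf, 4 * 8, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Type_free(&filetype);
    MPI_Finalize();
    return 0;
}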
Hi Mark,
Thanks so much for this - yes, applying that pull request against ompi
4.0.5 allows hdf5 1.10.7's parallel tests to pass on our Lustre
filesystem.
I'll certainly be applying it on our local clusters!
Best wishes,
Mark
Just a point to consider. OMPI does _not_ want to get in the mode of modifying
imported software packages. That is a black hole of effort we simply cannot
afford.
The correct thing to do would be to flag Rob Latham on that PR and ask that he
upstream the fix into ROMIO so we can absorb it. We sh
Hi,
I'm using an old (but required by the codes) version of HDF5 (1.8.12) in
parallel mode in two Fortran applications. It relies on MPI/IO. The
storage is NFS-mounted on the nodes of a small cluster.
With OpenMPI 1.7 it runs fine, but with the more recent OpenMPI 3.1 or 4.0.5
the I/O is 10x to 100x slower.
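For reference, the place where an application (or HDF5 on its behalf) can talk
to the MPI-IO layer is the MPI_Info attached to the file access property list.
Below is a rough C sketch of that plumbing; the real applications here are
Fortran (the Fortran interface is analogous), and the hint names/values shown
are standard ROMIO hints given only as examples, since whether they help on
this particular NFS setup would have to be measured. It is a sketch of the
mechanism, not a claim about the cause of the slowdown.

/* Sketch: handing an MPI_Info with ROMIO hints to parallel HDF5 via the
 * file access property list. Requires a parallel HDF5 build. */
#include <hdf5.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Info info;
    MPI_Info_create(&info);
    /* Standard ROMIO hints for data sieving / collective buffering;
     * the values below are illustrative, not a recommendation. */
    MPI_Info_set(info, "romio_ds_write", "disable");
    MPI_Info_set(info, "romio_cb_write", "enable");

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);

    hid_t file = H5Fcreate("demo.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
    /* ... create datasets and write collectively as the application does ... */

    H5Fclose(file);
    H5Pclose(fapl);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}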