This is in theory still correct: the default MPI I/O library used by Open MPI 
on Lustre file systems is ROMIO in all release versions. That being said, ompio 
has supported Lustre as well since the 2.1 series, so you can use it too. The 
main reason we did not make ompio the default MPI I/O library on Lustre is a 
performance issue that can arise under certain circumstances.
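For reference, the MPI I/O component can be selected explicitly at run time via the `io` MCA parameter. A minimal sketch (the application name is a placeholder, and the exact ROMIO component name varies by release, e.g. `romio314` in the 2.x series and `romio321` in 4.x):

```shell
# Select the ompio component for MPI I/O at launch time:
mpirun --mca io ompio -np 4 ./my_mpiio_app

# Equivalently, via the environment (as in the report below):
export OMPI_MCA_io=ompio

# Or force ROMIO instead (component name depends on the Open MPI release):
mpirun --mca io romio321 -np 4 ./my_mpiio_app
```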

Which version of Open MPI are you using? Sometime in the 4.0 series, a fix went 
into the Open MPI-to-ROMIO integration layer for a datatype handling bug that 
caused failures in the HDF5 tests. You might be hitting that problem.
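To check which release you are running, and which io components your build actually provides, the standard Open MPI tools can be queried like this (a sketch; output naturally varies by installation):

```shell
# Print the Open MPI version:
mpirun --version

# List the MPI I/O components compiled into this build (ompio, romio, ...):
ompi_info | grep "MCA io"
```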

Thanks
Edgar

-----Original Message-----
From: users <users-boun...@lists.open-mpi.org> On Behalf Of Mark Dixon via users
Sent: Monday, November 16, 2020 4:32 AM
To: users@lists.open-mpi.org
Cc: Mark Dixon <mark.c.di...@durham.ac.uk>
Subject: [OMPI users] MPI-IO on Lustre - OMPIO or ROMIO?

Hi all,

I'm confused about how Open MPI supports MPI-IO on Lustre these days, and am 
hoping that someone can help.

Back in the Open MPI 2.0.0 release notes, it said that OMPIO is the default 
MPI-IO implementation on everything apart from Lustre, where ROMIO is used. 
Those release notes are pretty old, but this still appears to be true.

However, I cannot get HDF5 1.10.7 to pass its MPI-IO tests unless I tell 
Open MPI to use OMPIO (OMPI_MCA_io=ompio) and tell UCX not to print warning 
messages (UCX_LOG_LEVEL=ERROR).
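Concretely, the environment I set before running the tests is:

```shell
# Force the ompio MPI-IO component and silence UCX warnings:
export OMPI_MCA_io=ompio
export UCX_LOG_LEVEL=ERROR
```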

Can I just check: are we still supposed to be using ROMIO?

Thanks,

Mark
