My understanding is that MPICH has typically been the reference
implementation: higher quality, but less performant, particularly
across the range of network fabrics. Certainly I've mostly seen
OpenMPI rather than MPICH on various HPC machines. People would use
either OpenMPI or a vendor's MPI (which may be a fork of MPICH with
added network hardware support).
I'd really like to hear from upstream users whether they are still
encountering issues with OpenMPI.
Personally I favour splitting: using MPICH on 32-bit archs to flush
out bugs, and doing so early in the dev cycle (now) so there is time
to change course if necessary.
Thank you both for your comments.
I don't think as the Release Team we have a preference one way or the
other. We'll let you pick the approach that you consider better.
Obviously the freeze is still a long way off, so if something comes
up it can be changed later.
Cheers,
Emilio
OK, as a concrete proposal: I propose to leave OpenMPI as the default
MPI for 64-bit archs, and to move 32-bit archs to MPICH.
I will update mpi-defaults in one week, in order to give time for
further responses on the matter.
I'm not sure how to write a ben file for transition tracking when the
SOVERSION doesn't change (as is the case here with OpenMPI). Help
appreciated.
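For context, a ben file for the transition tracker normally
distinguishes good/bad builds by the old vs. new library package name
derived from the SOVERSION. The sketch below shows the usual shape;
since the SOVERSION doesn't change here, one possible approach (an
assumption on my part, not tested against the tracker) is to match on
a versioned dependency instead, so rebuilt packages picking up the
new versioned symbols count as good. The package name and version
threshold below are placeholders for illustration only:

```
title = "openmpi (no SOVERSION bump)";
is_affected = .depends ~ /libopenmpi3/;
// Hypothetical: rebuilt packages gain a tightened versioned dependency.
is_good = .depends ~ /libopenmpi3 \(>= 4\.1\.1\)/;
// Anything still depending on the library without that constraint.
is_bad = .depends ~ /libopenmpi3/;
notes = "is_good is checked first, so the overlapping is_bad match
         only catches packages not yet rebuilt (my understanding of
         the tracker's evaluation order; corrections welcome).";
```

If that doesn't work, the release team can presumably set up a
rebuild-style tracker instead; hence the request for help.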
regards
Alastair
--
Alastair McKinstry,
GPG: 82383CE9165B347C787081A2CBE6BB4E5D9AD3A5
e: mckins...@debian.org, im: @alastair:mckinstry.ie