The Open MPI Team, representing a consortium of research, academic, and
industry partners, is pleased to announce the release of Open MPI version
4.0.0.

v4.0.0 is the start of a new release series for Open MPI.  Starting with
this release, the OpenIB BTL supports only iWARP and RoCE by default,
and UCX is the preferred transport for InfiniBand interconnects.  The
embedded PMIx runtime has been updated to 3.0.2, and the embedded ROMIO
has been updated to 3.2.1.  This release is ABI compatible with the 3.x
release streams.  There have been numerous other bug fixes and
performance improvements.

Note that starting with Open MPI v4.0.0, prototypes for several
MPI-1 symbols that were deleted in the MPI-3.0 specification
(which was published in 2012) are no longer available by default in
mpi.h. See the README for further details.
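
For code that still uses the removed calls, the usual fix is to move to
the MPI-2 replacements.  The sketch below shows one such migration,
replacing MPI_Type_struct/MPI_Address with
MPI_Type_create_struct/MPI_Get_address.  The struct layout here is
illustrative, not taken from any particular application:

    #include <mpi.h>

    /* Illustrative example type; the fields are assumptions for this
       sketch. */
    struct particle {
        int    id;
        double pos[3];
    };

    int build_particle_type(MPI_Datatype *newtype)
    {
        /* MPI_Type_struct and MPI_Address are among the prototypes no
           longer exposed by default; MPI_Type_create_struct and
           MPI_Get_address are their MPI-2 replacements. */
        int          blocklens[2] = { 1, 3 };
        MPI_Aint     displs[2];
        MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };
        MPI_Aint     base;
        struct particle p;

        MPI_Get_address(&p,        &base);
        MPI_Get_address(&p.id,     &displs[0]);
        MPI_Get_address(&p.pos[0], &displs[1]);
        displs[0] -= base;
        displs[1] -= base;

        MPI_Type_create_struct(2, blocklens, displs, types, newtype);
        return MPI_Type_commit(newtype);
    }

Applications that cannot be updated right away can restore the old
prototypes by configuring Open MPI with --enable-mpi1-compatibility, as
described in the README.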

Version 4.0.0 can be downloaded from the main Open MPI web site:

  https://www.open-mpi.org/software/ompi/v4.0/


4.0.0 -- September, 2018
------------------------

- OSHMEM updated to the OpenSHMEM 1.4 API.
- Do not build the OpenSHMEM layer when there are no SPMLs available.
  Currently, this means the OpenSHMEM layer will only build if an MXM
  or UCX library is found.
- A UCX BTL was added for enhanced MPI RMA support (see the RMA sketch
  after this list).
- The OpenIB BTL now supports only iWARP and RoCE by default.
- Updated internal HWLOC to 2.0.2.
- Updated internal PMIx to 3.0.2.
- Changed the priority for selecting external versus internal HWLOC
  and PMIx packages.  Starting with this release, configure by default
  selects available external HWLOC and PMIx packages over the internal
  ones.
- Updated internal ROMIO to 3.2.1.
- Removed support for the MXM MTL.
- Removed support for SCIF.
- Improved CUDA support when using UCX.
- Enable use of CUDA-allocated buffers for OMPIO.
- Improved support for two phase MPI I/O operations when using OMPIO.
- Added support for Software-based Performance Counters (SPCs); see
  https://github.com/davideberius/ompi/wiki/How-to-Use-Software-Based-Performance-Counters-(SPCs)-in-Open-MPI
- Changed the OFI MTL provider selection from an opt-in list
  ("psm,psm2,gni") to an opt-out list ("shm,sockets,tcp,udp,rstream").
- Various improvements to MPI RMA performance when using RDMA
  capable interconnects.
- Update memkind component to use the memkind 1.6 public API.
- Fix a problem with javadoc builds using OpenJDK 11.  Thanks to
  Siegmar Gross for reporting.
- Fix a memory leak using UCX.  Thanks to Charles Taylor for reporting.
- Fix hangs in MPI_FINALIZE when using UCX.
- Fix a problem with building Open MPI using an external PMIx 2.1.2
  library.  Thanks to Marcin Krotkiewski for reporting.
- Fix race conditions in Vader (shared memory) transport.
- Fix problems with the use of newer --map-by mpirun options.  Thanks
  to Tony Reina for reporting.
- Fix rank-by algorithms to properly rank by object and span.
- Allow for running as root if two environment variables
  (OMPI_ALLOW_RUN_AS_ROOT and OMPI_ALLOW_RUN_AS_ROOT_CONFIRM) are set.
  Requested by Axel Huebl.
- Fix a problem with building the Java bindings when using Java 10.
  Thanks to Bryce Glover for reporting.
- Fix a problem with ORTE not reporting error messages if an application
  terminated normally but exited with a non-zero error code.  Thanks to
  Emre Brookes for reporting.
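
As a companion to the UCX BTL and RMA items above, here is a minimal
MPI-3 one-sided sketch of the kind of traffic those changes target.
Nothing in it is UCX-specific: the transport is selected by Open MPI at
run time, and the window size and ranks are illustrative:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs, buf = 0;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Expose one int per process as an RMA window. */
        MPI_Win_create(&buf, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        if (rank == 0 && nprocs > 1) {
            int val = 42;
            /* One-sided write into rank 1's window. */
            MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);

        if (rank == 1) {
            printf("rank 1 received %d via MPI_Put\n", buf);
        }

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }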

Thanks,

Your MPI release team

