[OMPI devel] Please test Open MPI v4.0.4rc1
Open MPI v4.0.4rc1 has been posted to https://www.open-mpi.org/software/ompi/v4.0/

4.0.4 -- May, 2020
------------------

- Fix an ABI compatibility issue with the Fortran 2008 bindings.
  Thanks to Alastair McKinstry for reporting.
- Fix an issue with rpath of /usr/lib64 when building OMPI on systems
  with Lustre. Thanks to David Shrader for reporting.
- Fix a memory leak occurring with certain MPI RMA operations.
- Fix an issue with ORTE's mapping of MPI processes to resources.
  Thanks to Alex Margolin for reporting and providing a fix.
- Correct a problem with incorrect error codes being returned by OMPI
  MPI_T functions.
- Fix an issue with debugger tools not being able to attach to mpirun
  more than once. Thanks to Gregory Lee for reporting.
- Fix an issue with the Fortran compiler wrappers when using NAG
  compilers. Thanks to Peter Brady for reporting.
- Fix an issue with the ORTE ssh based process launcher at scale.
  Thanks to Benjamín Hernández for reporting.
- Address an issue when using shared MPI I/O operations. OMPIO will now
  successfully return from the file open statement but will raise an
  error if the file system does not support shared I/O operations
  (see the sketch after these notes). Thanks to Romain Hild for reporting.
- Fix an issue with MPI_WIN_DETACH. Thanks to Thomas Naughton for reporting.

Note this release addresses an ABI compatibility issue for the Fortran 2008
bindings. It will not be backward compatible with releases 4.0.0 through
4.0.3 for applications making use of the Fortran 2008 bindings.
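For anyone testing the shared I/O change above, here is a minimal sketch of the pattern it affects. It is not taken from the release notes; the file name, access mode, and buffer are illustrative assumptions. With the change, MPI_File_open itself returns successfully, and a file system without shared file pointer support reports an error from the shared-pointer operation instead.

/* Hypothetical test sketch, not part of the release: exercises a shared
 * file pointer write and checks where an unsupported file system reports
 * the error. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    int buf = 42, rc, len;
    char msg[MPI_MAX_ERROR_STRING];

    MPI_Init(&argc, &argv);

    /* With the fix, the open succeeds even if the underlying file system
     * cannot support shared file pointers. */
    rc = MPI_File_open(MPI_COMM_WORLD, "testfile.out",
                       MPI_MODE_CREATE | MPI_MODE_WRONLY,
                       MPI_INFO_NULL, &fh);
    if (rc != MPI_SUCCESS) {
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "open failed: %s\n", msg);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* The default file error handler is MPI_ERRORS_RETURN, so a lack of
     * shared I/O support shows up as a non-success code here. */
    rc = MPI_File_write_shared(fh, &buf, 1, MPI_INT, MPI_STATUS_IGNORE);
    if (rc != MPI_SUCCESS) {
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "shared write failed: %s\n", msg);
    }

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}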
[OMPI devel] Open MPI 4.0.4rc3 available for testing
Open MPI v4.0.4rc3 has been posted to https://www.open-mpi.org/software/ompi/v4.0/

This rc includes a fix for a problem discovered with the memory patcher code.
As described in the README:

- Open MPI v4.0.4 fixed an issue with the memory patcher's ability to
  intercept shmat and shmdt that could cause wrong answers. This was
  observed on RHEL8.1 running on ppc64le, but it may affect other systems.
  For more information, please see:
  https://github.com/open-mpi/ompi/pull/7778

4.0.4 -- June, 2020
-------------------

- Fix a memory patcher issue intercepting shmat and shmdt. This was
  observed on RHEL 8.x ppc64le (see README for more info, and the sketch
  after these notes).
- Fix an illegal access issue caught using gcc's address sanitizer.
  Thanks to Georg Geiser for reporting.
- Add checks to avoid conflicts with a libevent library shipped with LSF.
- Switch to linking against libevent_core rather than libevent, if present.
- Add improved support for UCX 1.9 and later.
- Fix an ABI compatibility issue with the Fortran 2008 bindings.
  Thanks to Alastair McKinstry for reporting.
- Fix an issue with rpath of /usr/lib64 when building OMPI on systems
  with Lustre. Thanks to David Shrader for reporting.
- Fix a memory leak occurring with certain MPI RMA operations.
- Fix an issue with ORTE's mapping of MPI processes to resources.
  Thanks to Alex Margolin for reporting and providing a fix.
- Correct a problem with incorrect error codes being returned by OMPI
  MPI_T functions.
- Fix an issue with debugger tools not being able to attach to mpirun
  more than once. Thanks to Gregory Lee for reporting.
- Fix an issue with the Fortran compiler wrappers when using NAG
  compilers. Thanks to Peter Brady for reporting.
- Fix an issue with the ORTE ssh based process launcher at scale.
  Thanks to Benjamín Hernández for reporting.
- Address an issue when using shared MPI I/O operations. OMPIO will now
  successfully return from the file open statement but will raise an
  error if the file system does not support shared I/O operations.
  Thanks to Romain Hild for reporting.
- Fix an issue with MPI_WIN_DETACH. Thanks to Thomas Naughton for reporting.

Note this release addresses an ABI compatibility issue for the Fortran 2008
bindings. It will not be backward compatible with releases 4.0.0 through
4.0.3 for applications making use of the Fortran 2008 bindings.
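Since the headline fix in this rc is the shmat/shmdt interception in the memory patcher, the following minimal sketch shows the System V shared memory call pattern involved. It is a hypothetical illustration (the segment size, key, and buffer usage are assumptions), not a reproducer taken from the PR above.

/* Hypothetical illustration: an MPI application that attaches and detaches
 * a System V shared memory segment.  shmat and shmdt are the calls that
 * Open MPI's memory patcher intercepts so its registration cache stays
 * consistent with the process address space. */
#include <mpi.h>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (shmid >= 0) {
        int *seg = (int *) shmat(shmid, NULL, 0);   /* intercepted attach */
        if (seg != (void *) -1) {
            /* Using the attached memory as an MPI buffer is the kind of
             * usage that could see wrong answers before this fix. */
            *seg = rank;
            MPI_Bcast(seg, 1, MPI_INT, 0, MPI_COMM_WORLD);
            printf("rank %d sees value %d\n", rank, *seg);
            shmdt(seg);                             /* intercepted detach */
        }
        shmctl(shmid, IPC_RMID, NULL);              /* release the segment */
    }

    MPI_Finalize();
    return 0;
}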
[OMPI devel] Open MPI 4.0.6rc4
Hi All,

Open MPI v4.0.6rc4 (we messed up and had to skip rc3) is now available at
https://www.open-mpi.org/software/ompi/v4.0/

Changes since the 4.0.5 release include:

- Update embedded PMIx to 3.2.3. This update addresses several
  MPI_COMM_SPAWN problems.
- Fix an issue with MPI_FILE_GET_BYTE_OFFSET when supplying a zero size
  file view. Thanks to @shanedsnyder for reporting.
- Fix an issue with MPI_COMM_SPLIT_TYPE not observing key correctly
  (see the sketch at the end of this note). Thanks to Wolfgang Bangerth
  for reporting.
- Fix a derived datatype issue that could lead to potential data
  corruption when using UCX. Thanks to @jayeshkrishna for reporting.
- Fix a problem with shared memory transport file name collisions.
  Thanks to Moritz Kreutzer for reporting.
- Fix a problem when using Flux PMI and UCX. Thanks to Sami Ilvonen for
  reporting and supplying a fix.
- Fix a problem with MPIR breakpoint being compiled out using PGI
  compilers. Thanks to @louisespellacy-arm for reporting.
- Fix some ROMIO issues when using Lustre. Thanks to Mark Dixon for
  reporting.
- Fix a problem using an external PMIx 4 to build Open MPI 4.0.x.
- Fix a compile problem when using the enable-timing configure option
  and UCX. Thanks to Jan Bierbaum for reporting.
- Fix a symbol name collision when using the Cray compiler to build
  Open SHMEM. Thanks to Pak Lui for reporting and fixing.
- Correct an issue encountered when building Open MPI under OSX Big Sur.
  Thanks to FX Coudert for reporting.
- Various fixes to the OFI MTL.
- Fix an issue with allocation of sufficient memory for parsing long
  environment variable values. Thanks to @zrss for reporting.
- Improve reproducibility of builds to assist Open MPI packages.
  Thanks to Bernhard Wiedmann for bringing this to our attention.

Your Open MPI release team.
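As a companion to the MPI_COMM_SPLIT_TYPE item above, here is a minimal sketch showing how the key argument is supposed to order ranks within the new communicator, which is the behavior the fix restores. It is not from the release notes; the descending-key choice is just an illustrative assumption.

/* Hypothetical illustration: ranks in the node-local communicator should
 * be ordered by the key argument.  Passing a descending key gives the
 * highest MPI_COMM_WORLD rank on a node the lowest node-local rank. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int world_rank, world_size, node_rank;
    MPI_Comm node_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* key = world_size - world_rank reverses the ordering relative to
     * MPI_COMM_WORLD; the fix ensures this key is actually honored. */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED,
                        world_size - world_rank, MPI_INFO_NULL, &node_comm);
    MPI_Comm_rank(node_comm, &node_rank);

    printf("world rank %d -> node-local rank %d\n", world_rank, node_rank);

    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}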
Re: [OMPI devel] MPI ABI effort
LANL would be interested in supporting this feature as well.

Howard

On Mon, Aug 28, 2023 at 9:58 AM Jeff Squyres (jsquyres) via devel <devel@lists.open-mpi.org> wrote:

> We got a presentation from the ABI WG (proxied via Quincey from AWS) a few
> months ago.
>
> The proposal looked reasonable.
>
> No one has signed up to do the work yet, but based on what we saw in that
> presentation, the general consensus was "sure, we could probably get on
> board with that."
>
> There are definitely going to be issues to work out (e.g., are we going
> to break Open MPI ABI? Maybe offer 2 flavors of ABI? Is this a
> configure-time option, or do we build "both" ways? ...etc.), but it
> sounded like the community members who heard this proposal were generally
> in favor of moving in this direction.
> --
> *From:* devel on behalf of Gilles Gouaillardet via devel
> *Sent:* Saturday, August 26, 2023 2:20 AM
> *To:* Open MPI Developers
> *Cc:* Gilles Gouaillardet
> *Subject:* [OMPI devel] MPI ABI effort
>
> Folks,
>
> Jeff Hammond et al. published "MPI Application Binary Interface
> Standardization" last week: https://arxiv.org/abs/2308.11214
>
> The paper notes that the (C) ABI has already been prototyped natively in MPICH.
>
> Is there any current interest in prototyping this ABI in Open MPI?
>
> Cheers,
>
> Gilles