My testing is complete.

The only problems not already known are related to PGI's recent "Community
Edition" compilers and have been reported in three separate emails:

[2.0.2rc1] Fortran link failure with PGI fortran on MacOSX
<https://mail-archive.com/devel@lists.open-mpi.org/msg19823.html>
[2.0.2rc1] Build failure w/ PGI compilers on Mac OS X
<https://mail-archive.com/devel@lists.open-mpi.org/msg19824.html>
[2.0.2rc1] runtime error w/ PGI usempif08 on OpenPOWER

For some reason the last one does not appear in the archive!
Perhaps the config.log.bz2 I attached was too large?
Let me know if I should resend it.

BTW: a typo in the ChangeLog of the announcement email:

- Fix a problem with early exit of a MPI process without calling MPI_FINALIZE
  of MPI_ABORT that could lead to job hangs.  Thanks to Christof Koehler for
  reporting.

The "of" that begins the second line was almost certainly intended to be
"or".

-Paul

On Wed, Dec 14, 2016 at 6:58 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:

> Please test!
>
>     https://www.open-mpi.org/software/ompi/v2.0/
>
> Changes since v2.0.1:
>
> - Remove use of DATE in the message queue version string reported to
>   debuggers to ensure bit-wise reproducibility of binaries.  Thanks to
>   Alastair McKinstry for help in fixing this problem.
> - Fix a problem with early exit of a MPI process without calling MPI_FINALIZE
>   of MPI_ABORT that could lead to job hangs.  Thanks to Christof Koehler for
>   reporting.  [sketch below]
> - Fix a problem with forwarding of the SIGTERM signal from mpirun to MPI
>   processes in a job.  Thanks to Noel Rycroft for reporting this problem.
> - Plug some memory leaks in MPI_WIN_FREE discovered using Valgrind.  Thanks
>   to Joseph Schuchart for reporting.
> - Fix a problem with MPI_NEIGHBOR_ALLTOALL when using a communicator with an
>   empty topology graph.  Thanks to Daniel Ibanez for reporting.  [sketch below]
> - Fix a typo in a PMIx component help file.  Thanks to @njoly for reporting
>   this.
> - Fix a problem with Valgrind false positives when using Open MPI's internal
>   memchecker.  Thanks to Yvan Fournier for reporting.
> - Fix a problem with MPI_FILE_DELETE returning MPI_SUCCESS when deleting a
>   non-existent file.  Thanks to Wei-keng Liao for reporting.  [sketch below]
> - Fix a problem with MPI_IMPROBE that could lead to hangs in subsequent MPI
>   point-to-point or collective calls.  Thanks to Chris Pattison for
>   reporting.  [sketch below]
> - Fix a problem when configuring Open MPI for PowerPC with --enable-mpi-cxx
>   enabled.  Thanks to amckinstry for reporting.
> - Fix a problem using MPI_IALLTOALL with the MPI_IN_PLACE argument.  Thanks
>   to Chris Ward for reporting.  [sketch below]
> - Fix a problem using MPI_RACCUMULATE with the Portals4 transport.  Thanks to
>   @PDeveze for reporting.
> - Fix an issue with static linking and duplicate symbols arising from PMIx
>   Slurm components.  Thanks to Limin Gu for reporting.
> - Fix a problem when using MPI dynamic memory windows.  Thanks to Christoph
>   Niethammer for reporting.  [sketch below]
> - Fix a problem with Open MPI's pkgconfig files.  Thanks to Alastair
>   McKinstry for reporting.
> - Fix a problem with MPI_IREDUCE when the same buffer is supplied for the
>   send and recv buffer arguments.  Thanks to Valentin Petrov for reporting.
> - Fix a problem with atomic operations on PowerPC.  Thanks to Paul Hargrove
>   for reporting.
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/
>
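
Since a few of the fixes in the quoted changelog are easy to exercise, I am including some minimal sketches for anyone re-testing.  These are my own untested illustrations of the scenarios as I read them from the changelog text, not the reporters' actual test cases.

The early-exit fix: rank 0 exits without calling MPI_FINALIZE or MPI_ABORT while the remaining ranks sit in a barrier.  Before the fix the job could hang; with it, mpirun should tear the job down.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (0 == rank) {
        exit(1);                      /* neither MPI_Finalize nor MPI_Abort */
    }
    MPI_Barrier(MPI_COMM_WORLD);      /* the other ranks wait here */
    MPI_Finalize();
    return 0;
}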
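
The MPI_NEIGHBOR_ALLTOALL fix: a distributed-graph communicator in which every rank declares zero neighbors; the call should simply complete without touching either buffer.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm graph;
    int sbuf = 0, rbuf = 0;
    MPI_Init(&argc, &argv);
    /* zero sources and zero destinations on every rank */
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   0, NULL, MPI_UNWEIGHTED,
                                   0, NULL, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0, &graph);
    MPI_Neighbor_alltoall(&sbuf, 1, MPI_INT, &rbuf, 1, MPI_INT, graph);
    MPI_Comm_free(&graph);
    MPI_Finalize();
    return 0;
}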
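
The MPI_FILE_DELETE fix: deleting a non-existent file should raise an error of class MPI_ERR_NO_SUCH_FILE rather than return MPI_SUCCESS.  The filename below is arbitrary.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rc, eclass;
    MPI_Init(&argc, &argv);
    /* make sure file errors are returned, not fatal */
    MPI_File_set_errhandler(MPI_FILE_NULL, MPI_ERRORS_RETURN);
    rc = MPI_File_delete("file-that-does-not-exist", MPI_INFO_NULL);
    MPI_Error_class(rc, &eclass);
    printf("rc=%d class=%d (expect class %d = MPI_ERR_NO_SUCH_FILE)\n",
           rc, eclass, MPI_ERR_NO_SUCH_FILE);
    MPI_Finalize();
    return 0;
}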
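
The MPI_IMPROBE fix (run with at least two ranks): rank 1 matches a message with MPI_Improbe/MPI_Mrecv and then enters a barrier, the kind of follow-on call that was reported to hang.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, flag = 0, val = 42;
    MPI_Message msg;
    MPI_Status st;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (0 == rank) {
        MPI_Send(&val, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (1 == rank) {
        while (!flag)                 /* poll until the message is matched */
            MPI_Improbe(0, 0, MPI_COMM_WORLD, &flag, &msg, &st);
        MPI_Mrecv(&val, 1, MPI_INT, &msg, &st);
        printf("rank 1 received %d\n", val);
    }
    MPI_Barrier(MPI_COMM_WORLD);      /* the subsequent collective */
    MPI_Finalize();
    return 0;
}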
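
The MPI_IALLTOALL fix: each rank holds one int per peer and exchanges them in place.  Note that sendcount and sendtype are ignored when sendbuf is MPI_IN_PLACE.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size, i;
    MPI_Request req;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int *buf = malloc(size * sizeof(int));
    for (i = 0; i < size; i++)
        buf[i] = rank * 100 + i;      /* element destined for rank i */
    MPI_Ialltoall(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                  buf, 1, MPI_INT, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    for (i = 0; i < size; i++)
        printf("rank %d: buf[%d] = %d\n", rank, i, buf[i]);
    free(buf);
    MPI_Finalize();
    return 0;
}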
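
The dynamic-windows entry is too terse to reconstruct the exact failure, but the basic lifecycle it refers to (create, attach, detach, free) is:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Win win;
    int *buf;
    MPI_Init(&argc, &argv);
    MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    buf = malloc(sizeof(int));
    *buf = 0;
    MPI_Win_attach(win, buf, sizeof(int));
    /* RMA epochs would target addresses published via MPI_Get_address */
    MPI_Win_detach(win, buf);
    MPI_Win_free(&win);
    free(buf);
    MPI_Finalize();
    return 0;
}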



-- 
Paul H. Hargrove                          phhargr...@lbl.gov
Computer Languages & Systems Software (CLaSS) Group
Computer Science Department               Tel: +1-510-495-2352
Lawrence Berkeley National Laboratory     Fax: +1-510-486-6900
