[...] also mpi_barrier
>
> Care to provide a reasoning for this barrier? Why and where should it
> be placed?
>
> George.
>
> *From:* devel [mailto:devel-boun...@open-mpi.org] *On Behalf Of* George Bosilca
> *Sent:* Monday, July 21, 2014 8:19 PM
> *To:* Open MPI Developers
> *Subject:* Re: [OMPI devel] barrier before calling del_procs
> [...]
[...] to communicate with rank A any more, while it still has work to do.
From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of George Bosilca
Sent: Monday, July 21, 2014 9:11 PM
To: Open MPI Developers
Subject: Re: [OMPI devel] barrier before calling del_procs
On Mon, Jul 21, 2014 at 1:41 PM, [...] wrote:
> *From:* devel [mailto:devel-boun...@open-mpi.org] *On Behalf Of* George Bosilca
> *Sent:* Monday, July 21, 2014 8:19 PM
> *To:* Open MPI Developers
> *Subject:* Re: [OMPI devel] barrier before calling del_procs
>
> There was a long thread of discussion on why we must use an rte_barrier [...]
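To make the ordering being referred to concrete, here is a conceptual sketch, not the actual Open MPI source: rte_barrier() and del_procs_all() below are hypothetical stand-ins for the runtime-level fence and the per-peer del_procs teardown discussed in this thread, given trivial bodies so the sketch compiles. The only point it illustrates is the ordering Ralph and Nathan describe below: the fence is done at the runtime level rather than with an MPI barrier, and the disconnect from peers starts only after every rank has reached the fence.

```c
/* Conceptual sketch only -- not the Open MPI source.  rte_barrier() and
 * del_procs_all() are hypothetical stand-ins for the internals named in
 * this thread, with trivial bodies so the example compiles and runs. */
#include <stdio.h>

/* Stand-in for the runtime-level (rte) fence the thread refers to;
 * unlike an MPI barrier it does not go through the MPI transport
 * (MXM, etc.) that is about to be torn down. */
static int rte_barrier(void)
{
    puts("all ranks have reached the finalize fence");
    return 0;
}

/* Stand-in for the per-peer teardown (the del_procs step). */
static int del_procs_all(void)
{
    puts("disconnecting from all peers");
    return 0;
}

int main(void)
{
    /* Ordering discussed in this thread: fence first, then disconnect,
     * so no rank drops a peer that still has traffic in flight. */
    if (0 != rte_barrier()) {
        return 1;
    }
    return 0 == del_procs_all() ? 0 : 1;
}
```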
From: Nathan Hjelm [mailto:hje...@lanl.gov]
Sent: Monday, July 21, 2014 8:01 PM
To: Open MPI Developers
Cc: Yossi Etigin
Subject: Re: [OMPI devel] barrier before calling del_procs
I should add that it is an rte barrier and not an MPI barrier for technical
reasons.
-Nathan
On Mon, Jul 21, 2014 at 09:42:53AM -0700, Ralph Castain wrote:
> We already have an rte barrier before del procs
>
> Sent from my iPhone
>
> On Jul 21, 2014, at 8:21 AM, Yossi Etigin wrote:
Hi,
We get occasional hangs with MTL/MXM during finalize, because a global
synchronization is needed before calling del_procs.
e.g. rank A may call del_procs() and disconnect from rank B, while rank B is
still working.
What do you think about adding an MPI barrier on COMM_WORLD before calling
del_procs?
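Since del_procs() is internal to Open MPI's MPI_Finalize path, the race described above cannot be reproduced verbatim from user code, but the following minimal sketch shows, at the user level, where the proposed synchronization point sits. The file name and the send/recv pattern are illustrative assumptions; MPI_Finalize stands in for the internal del_procs/disconnect step, and the MPI_Barrier on MPI_COMM_WORLD marks the global synchronization being proposed.

```c
/* User-level analogy of the race described above: rank 1 ("rank B") still
 * has traffic destined for rank 0 ("rank A"), which finishes its own work
 * early.  Inside Open MPI the per-peer disconnect happens in del_procs()
 * during MPI_Finalize; here MPI_Finalize itself stands in for that step,
 * and the barrier marks the proposed global synchronization point.
 * Build (illustrative): mpicc barrier_before_del_procs.c && mpirun -np 2 ./a.out */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, token = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 1) {
            /* "rank B": still has work to do that involves rank A. */
            token = 42;
            MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            /* "rank A": must not disconnect before this traffic is drained. */
            MPI_Recv(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
    }

    /* The proposal in this thread: a global synchronization point before the
     * per-peer teardown (del_procs) begins, so that no rank disconnects from
     * a peer that is still working. */
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize(); /* internally: del_procs() and transport disconnect */
    return 0;
}
```

Whether this synchronization should be an MPI barrier, as proposed here, or the existing rte barrier is exactly the question the rest of the thread debates.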