Great! Thank you Josh! If everything looks good over the next couple of days I
will open up the 2.0.2 PR for this.
-Nathan
> On Sep 7, 2016, at 7:22 PM, Josh Hursey wrote:
>
> I just gained access to the PGI 16.7 compiler for ppc64le. I'm going to add
> it to our nightly MTT, so we can monitor progress on this support.
I just gained access to the PGI 16.7 compiler for ppc64le. I'm going to add
it to our nightly MTT, so we can monitor progress on this support. It might
not make it into tonight's testing, but should be tomorrow. I might also
try to add it to our Jenkins testing too.
Thanks for reporting this! Glad the problem is fixed. We will get this into
2.0.2.
-Nathan
> On Sep 7, 2016, at 9:39 AM, Vallee, Geoffroy R. wrote:
>
> I just tried the fix and I can confirm that it fixes the problem. :)
>
> Thanks!!!
>
>> On Sep 2, 2016, at 6:18 AM, Jeff Squyres (jsquyres) wrote:
As you know, we have been moving all of the Open MPI community infrastructure
to a new home over the past few months. We'd like to call out several
community partners to say "THANK YOU!" for the help and resources they have
provided, each of which has saved the community a fair amount of money:
I just tried the fix and I can confirm that it fixes the problem. :)
Thanks!!!
> On Sep 2, 2016, at 6:18 AM, Jeff Squyres (jsquyres)
> wrote:
>
> Issue filed at https://github.com/open-mpi/ompi/issues/2044.
>
> I asked Nathan and Sylvain to have a look.
>
>
>> On Sep 1, 2016, at 9:20 PM, Pa
Posted a possible fix to the intercomm hang. See
https://github.com/open-mpi/ompi/pull/2061
-Nathan
> On Sep 7, 2016, at 6:53 AM, Nathan Hjelm wrote:
>
> Looking at the code now. This code was more or less directly translated from
> the blocking version. I wouldn’t be surprised if there is an error that I
> didn’t catch with MTT on my laptop.
Looking at the code now. This code was more or less directly translated from
the blocking version. I wouldn’t be surprised if there is an error that I
didn’t catch with MTT on my laptop.
That said, there is an old comment about not using bcast to avoid a possible
deadlock. Since the collective
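For readers following along: the translation pattern Nathan is describing (a
blocking collective rewritten directly into its nonblocking counterpart) looks
roughly like the sketch below. This is not the actual Open MPI internals; the
function names and buffer arguments here are hypothetical, and only the
standard MPI calls are real.

/* Hedged sketch only: illustrates the blocking-to-nonblocking translation
 * pattern under discussion, not the Open MPI source. All buffer, count,
 * and displacement names are hypothetical. */
#include <mpi.h>

static int gather_ranks_blocking(int *sendbuf, int sendcount,
                                 int *recvbuf, int *recvcounts,
                                 int *displs, MPI_Comm comm)
{
    /* Original, blocking form. */
    return MPI_Allgatherv(sendbuf, sendcount, MPI_INT,
                          recvbuf, recvcounts, displs, MPI_INT, comm);
}

static int gather_ranks_nonblocking(int *sendbuf, int sendcount,
                                    int *recvbuf, int *recvcounts,
                                    int *displs, MPI_Comm comm)
{
    /* Direct translation to the nonblocking collective: start the
     * operation, then wait on the request. Argument-handling mistakes
     * introduced during this kind of translation are easy to miss. */
    MPI_Request req;
    int rc = MPI_Iallgatherv(sendbuf, sendcount, MPI_INT,
                             recvbuf, recvcounts, displs, MPI_INT,
                             comm, &req);
    if (rc != MPI_SUCCESS) {
        return rc;
    }
    return MPI_Wait(&req, MPI_STATUS_IGNORE);
}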
Thanks guys,
So I was finally able to reproduce the bug on my (oversubscribed) VM
with tcp.
MPI_Intercomm_merge (indirectly) incorrectly invokes iallgatherv; a minimal
reproducer sketch follows the trace below.
1,main (MPI_Issend_rtoa_c.c:196)
1, MPITEST_get_communicator (libmpitest.c:3544)
1,PMPI_Intercomm_merge (pintercomm_merge.c:131)
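For anyone who wants to poke at this without the MPITEST harness, here is a
hedged, minimal reproducer sketch that exercises the same MPI_Intercomm_merge
call path using only standard MPI calls. The split/leader choices are my own
assumptions, not taken from the test above. Run with at least two ranks, e.g.
mpirun -np 4 ./a.out.

/* Hypothetical minimal reproducer sketch for the reported
 * MPI_Intercomm_merge hang; not the MPITEST code from the trace above. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int world_rank, world_size;
    MPI_Comm local_comm, inter_comm, merged_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Split the world into two groups, then connect them with an
     * intercommunicator. */
    int color = (world_rank < world_size / 2) ? 0 : 1;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &local_comm);

    int remote_leader = (color == 0) ? world_size / 2 : 0;
    MPI_Intercomm_create(local_comm, 0, MPI_COMM_WORLD, remote_leader,
                         0 /* tag */, &inter_comm);

    /* The call under discussion: merge the intercommunicator into a
     * single intracommunicator. */
    MPI_Intercomm_merge(inter_comm, color /* high */, &merged_comm);

    int merged_rank;
    MPI_Comm_rank(merged_comm, &merged_rank);
    printf("world rank %d -> merged rank %d\n", world_rank, merged_rank);

    MPI_Comm_free(&merged_comm);
    MPI_Comm_free(&inter_comm);
    MPI_Comm_free(&local_comm);
    MPI_Finalize();
    return 0;
}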