Here is a quick (and definitely not the cleanest) patch that addresses the MPI_Intercomm issue at the MPI level. It should be applied after r29166 has been backed out. I also added a corrected test case that stresses the corner cases by doing a barrier at every inter-comm creation and a clean disconnect.
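In rough outline, the stress test does something like the following (a sketch only; the split, tag, and variable names are illustrative, not the actual test source):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm local, inter, merged;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Split the world into two halves that will form the inter-comm. */
    MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &local);

    /* The remote leader is the first world rank of the other half. */
    int remote_leader = (rank % 2 == 0) ? 1 : 0;
    MPI_Intercomm_create(local, 0, MPI_COMM_WORLD, remote_leader,
                         42 /* tag */, &inter);

    /* Barrier at every inter-comm creation, as described above. */
    MPI_Barrier(inter);

    MPI_Intercomm_merge(inter, rank % 2, &merged);
    MPI_Barrier(merged);

    /* Clean disconnect rather than a plain free of the inter-comm. */
    MPI_Comm_free(&merged);
    MPI_Comm_disconnect(&inter);
    MPI_Comm_free(&local);

    MPI_Finalize();
    return 0;
}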
Great! I'll welcome the patch - feel free to back mine out when you do.
Thanks!
On Sep 17, 2013, at 2:43 PM, George Bosilca wrote:
> On Sep 17, 2013, at 23:19 , Ralph Castain wrote:
>
>> I very much doubt that it would work, though I can give it a try, as the
>> patch addresses Intercomm_mer
On Sep 17, 2013, at 23:19 , Ralph Castain wrote:
> I very much doubt that it would work, though I can give it a try, as the
> patch addresses Intercomm_merge and not Intercomm_create. I debated about
> putting the patch into "create" instead, but nobody was citing that as being
> a problem. In
On Sep 17, 2013, at 2:01 PM, George Bosilca wrote:
> Ralph,
>
> On Sep 17, 2013, at 20:13 , Ralph Castain wrote:
>
>> I guess we could argue this for awhile, but I personally don't care how it
>> gets fixed. The issue here is that (a) you promised to provide a "better"
>> fix nearly a year
Ralph,
On Sep 17, 2013, at 20:13 , Ralph Castain wrote:
> I guess we could argue this for awhile, but I personally don't care how it
> gets fixed. The issue here is that (a) you promised to provide a "better" fix
> nearly a year ago, (b) it never happened, and (c) a user who has patiently
> wai
I guess we could argue this for awhile, but I personally don't care how it gets
fixed. The issue here is that (a) you promised to provide a "better" fix nearly
a year ago, (b) it never happened, and (c) a user who has patiently waited all
this time has asked if we could please fix it.
It now wo
Ralph,
I don't think your patch is addressing the right issue. In fact, your commit
treats the wrong symptom instead of addressing the core issue that generates
the problem. Let me explain this in terms of MPI.
The MPI_Intercomm_merge function transforms an inter-comm into an intra-comm,
basically
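For reference, here is what that call does, in a minimal sketch (the variable names are illustrative, not taken from the patch):

/* MPI_Intercomm_merge turns an inter-communicator (two disjoint
   groups) into one intra-communicator containing both groups; the
   "high" argument decides which group's ranks are ordered first. */
MPI_Comm intra;
int high = in_second_group;   /* illustrative flag */
MPI_Intercomm_merge(inter, high, &intra);
/* intra now behaves like any ordinary intra-communicator. */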
Siegmar,
I pushed a bunch of fixes; can you please try now?
Best,
Josh
-----Original Message-----
From: Jeff Squyres (jsquyres) [mailto:jsquy...@cisco.com]
Sent: Tuesday, September 17, 2013 6:37 AM
To: Siegmar Gross; Open MPI Developers List
Cc: Joshua Ladd
Subject: Re: [OMPI users] Error in o
I agree on the shmem.fh statements. There are a couple of really painful
interfaces to prototype, but for the most part it should be straightforward.
There's nothing in the OpenSHMEM specification that suggests providing a
Fortran module, so I believe you got bad advice there.
Brian
Thanks!
Takahiro Kawashima,
MPI development team,
Fujitsu
> Pushed in r29187.
>
> George.
>
>
> On Sep 17, 2013, at 12:03 , "Kawashima, Takahiro"
> wrote:
>
> > George,
> >
> > Copyright-added patch is attached.
> > I don't have an svn account, so I'd like someone to commit it.
> >
> > All m
Pushed in r29187.
George.
On Sep 17, 2013, at 12:03 , "Kawashima, Takahiro"
wrote:
> George,
>
> Copyright-added patch is attached.
> I don't have an svn account, so I'd like someone to commit it.
>
> All my reported issues are in the ALLTOALL(V|W) MPI_IN_PLACE code,
> which was implemented tw
Hi Ralph,
Thanks a lot!!! That's really cool!!
Best,
Suraj
On Sep 15, 2013, at 5:01 PM, Ralph Castain wrote:
> I fixed it and have filed a cmr to move it to 1.7.3
>
> Thanks for your patience, and for reminding me
> Ralph
>
> On Sep 13, 2013, at 12:05 PM, Suraj Prabhakaran
> wrote:
>
>> De
...moving over to the devel list...
Dave and I looked at this during a break in the EuroMPI conference, and noticed
several things:
1. Some of the shmem interfaces are functions (i.e., return non-void) and some
are subroutines (i.e., return void). They're currently all using a single
macro to
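To make the function/subroutine distinction concrete, here is a hypothetical sketch (these macro names are made up, not the ones actually in the tree):

/* Hypothetical generator macros -- not the actual Open MPI ones.
   A single void-returning macro covers subroutines such as
   shmem_barrier_all, but functions such as shmem_int_swap must
   return a value, so a second macro (or a return-type parameter)
   is needed. */
#define OSHMEM_GEN_SUBROUTINE(name) \
    void name##_f(void) { /* ... call the C routine ... */ }

#define OSHMEM_GEN_FUNCTION(rtype, name) \
    rtype name##_f(void) { return (rtype)0; /* ... */ }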
George,
Copyright-added patch is attached.
I don't have an svn account, so I'd like someone to commit it.
All my reported issues are in the ALLTOALL(V|W) MPI_IN_PLACE code,
which was implemented two months ago for MPI-2.2 conformance.
Not so surprising.
P.S. Fujitsu has not yet signed the contributi
Takahiro,
Good catches. It's absolutely amazing that some of these errors lasted for so
long before being discovered (especially the extent issue in the MPI_ALLTOALL).
Please feel free to apply your patch and add the correct copyright at the
beginning of all altered files.
Thanks,
George
Hi,
My colleague tested MPI_IN_PLACE for MPI_ALLTOALL, MPI_ALLTOALLV,
and MPI_ALLTOALLW, which were implemented two months ago in the
Open MPI trunk, and he found three bugs and created a patch.
The bugs he found are:
(A) Missing MPI_IN_PLACE support in self COLL component
The attached alltoall-self-in
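For reference, the MPI-2.2 in-place form being tested looks roughly like this (an illustrative sketch, not the attached test program):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* With MPI_IN_PLACE the data is taken from and written back to
       the receive buffer; the send count/type arguments are ignored. */
    int *buf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++)
        buf[i] = rank * size + i;

    MPI_Alltoall(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                 buf, 1, MPI_INT, MPI_COMM_WORLD);

    free(buf);
    MPI_Finalize();
    return 0;
}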