On Mar 7, 2010, at 5:13 PM, Jeff Squyres wrote:
> On Mar 7, 2010, at 12:59 PM, Ralph Castain wrote:
>
>> Quick question about this. We now have an OPAL level progress thread,
>> which enables the machinery at the OPAL level. Unfortunately, this doesn't
>> say anything about what the MPI level will do?
Begin forwarded message:
From: Barry Smith
Date: March 7, 2010 9:17:10 PM CST
To: de...@open-mpi.org
Cc: Satish Balay
Subject: valgrind problem with 1.4.1 and MPI_Allgather()
> ==9066== Source and destination overlap in memcpy(0xa571694, 0xa571698, 8)
> ==9066==    at 0xC5B224: memcpy (
On Mar 7, 2010, at 12:59 PM, Ralph Castain wrote:
> Quick question about this. We now have an OPAL level progress thread,
> which enables the machinery at the OPAL level. Unfortunately, this
> doesn't say anything about what the MPI level will do?
That is correct and has always been the case.
I'm not sure about that -- OPAL_SOS will take some time to propagate
throughout the code base, even after the infrastructure is added to
the trunk.
My point was that it might not be worth it to revamp BTL_ERROR if
OPAL_SOS is coming. But I'd still like to get the new TCP BTL
messages in.
Those are excellent questions that I have asked as well at various times :-)
Some thoughts below
On Mar 7, 2010, at 1:20 PM, George Bosilca wrote:
> Quick question about this. We now have an OPAL level progress thread, which
> enables the machinery at the OPAL level. Unfortunately, this doesn't say
> anything about what the MPI level will do?
Quick question about this. We now have an OPAL level progress thread, which
enables the machinery at the OPAL level. Unfortunately, this doesn't say
anything about what the MPI level will do? Moreover, this is quite confusing as
there are no communications layers in OPAL so one can ask what an O
Then let's just be patient until OPAL_SOS makes it into the trunk, and save
ourselves the burden of a large effort made twice.
george.
On Mar 5, 2010, at 22:35 , Ralph Castain wrote:
>
> On Mar 5, 2010, at 7:22 PM, Jeff Squyres wrote:
>
>> On Mar 5, 2010, at 6:10 PM, Ralph Castain wrote:
>>
I a