George -- can you file a ticket about this?
On Jun 12, 2011, at 1:25 PM, George Bosilca wrote:
Frédéric,
Based on the current version of the MPI standard, the two groups involved in
the intercomm_create have to be disjoint, which means the leader cannot be the
same process.
Regarding the issue in Open MPI, the problem is deep in our modex exchange
(contact information). In the example
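The disjointness rule above can be illustrated with a minimal sketch (this is my own example, not code from the thread): the world is split into two non-overlapping intra-communicators, and each side names the other side's leader through the peer communicator. No process, and in particular neither leader, belongs to both local groups. Run with at least 5 processes, e.g. mpirun -np 5.

```c
/* Sketch: an inter-communicator between two DISJOINT groups.
 * Ranks 0..2 form one group, ranks 3.. form the other. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Split the world into two disjoint intra-communicators. */
    int color = (rank < 3) ? 0 : 1;
    MPI_Comm local;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &local);

    /* Local leader is rank 0 of each half; the remote leader is named
     * by its rank in the peer communicator (MPI_COMM_WORLD here):
     * world rank 0 for one group, world rank 3 for the other. */
    int remote_leader = (color == 0) ? 3 : 0;
    MPI_Comm inter;
    MPI_Intercomm_create(local, 0, MPI_COMM_WORLD, remote_leader,
                         /* tag = */ 42, &inter);

    int rsize;
    MPI_Comm_remote_size(inter, &rsize);
    printf("rank %d sees %d remote processes\n", rank, rsize);

    MPI_Comm_free(&inter);
    MPI_Comm_free(&local);
    MPI_Finalize();
    return 0;
}
```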
Dear all, thank you very much for the time you spent looking at my problem.
After reading your contributions, it's not clear whether there is a bug in
Open MPI or not.
So I created a small, self-contained source code to analyse the behavior,
and the problem is still there.
I was wondering if the loc
On 6/7/2011 10:23 AM, George Bosilca wrote:
On Jun 7, 2011, at 11:00 , Edgar Gabriel wrote:
George,
I did not look over all the details of your test, but it looks to me
like you are violating one of the requirements of intercomm_create,
namely the requirement that the two groups have to be disjoint. In your
case the parent process(es) are part of both local intra-communicators,
aren't they?
I j
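A process that holds both intra-communicators (the parent, in the case discussed above) can check the disjointness requirement before calling MPI_Intercomm_create. This helper is my own sketch, not part of the thread's test code:

```c
/* Sketch (hypothetical helper): returns nonzero if the groups of the
 * two intra-communicators `a` and `b` share no process, which is what
 * the standard requires of intercomm_create's local groups. */
#include <mpi.h>

static int groups_are_disjoint(MPI_Comm a, MPI_Comm b)
{
    MPI_Group ga, gb, common;
    int n;

    MPI_Comm_group(a, &ga);
    MPI_Comm_group(b, &gb);
    MPI_Group_intersection(ga, gb, &common);
    MPI_Group_size(common, &n);   /* n > 0: some process is in both */

    MPI_Group_free(&common);
    MPI_Group_free(&gb);
    MPI_Group_free(&ga);
    return n == 0;
}
```

Only a process that is a member of both communicators (or has obtained both groups) can perform this check locally.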
George --
Do we need to file a bug about this?
On Jun 7, 2011, at 1:57 AM, George Bosilca wrote:
Frederic,
Attached you will find an example that is supposed to work. The main difference
with your code is on T3, T4, where you have swapped the local and remote comms.
As depicted in the picture attached below, during the 3rd step you will create
the intercomm between ab and c (no overlap) using
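The point about the swapped local and remote comms can be sketched as follows. This fragment is my own illustration (the names intra_ab, intra_c, bridge, and the leader ranks are assumptions, not from the attached example): each side passes its OWN group's intra-communicator as the local comm, and only the remote leader is named through the bridge (peer) communicator.

```c
/* Fragment: correct argument order for MPI_Intercomm_create.
 * i_am_in_ab, intra_ab, intra_c, bridge, rank_of_c_leader and
 * rank_of_ab_leader are hypothetical names for this sketch. */
MPI_Comm inter;
if (i_am_in_ab) {
    /* Members of {a,b}: local comm is intra_ab. */
    MPI_Intercomm_create(intra_ab, /* local leader  */ 0,
                         bridge,   /* remote leader */ rank_of_c_leader,
                         /* tag */ 3, &inter);
} else {
    /* Members of {c}: local comm is intra_c -- not the other way round. */
    MPI_Intercomm_create(intra_c, 0,
                         bridge, rank_of_ab_leader,
                         3, &inter);
}
```

Passing the other side's communicator as the local comm, as in the reported code on T3 and T4, is exactly the inversion being corrected here.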
Since your new intra-communicator contains all members, couldn't you
just use the MPI_COMM_WORLD communicator?
2011/6/1 Frédéric Feyel :
Hello,
I have a problem using MPI_Intercomm_create.
I have 5 tasks, let's say T0, T1, T2, T3, T4, resulting from two spawn
operations by T0.
So I have two intra-communicators:
intra0 contains: T0, T1, T2
intra1 contains: T0, T3, T4
my goal is to make a collective loop to build a single intra-communicator
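The setup described above can be reconstructed roughly as follows (my own sketch; the child program name "child" and the spawn counts are assumptions based on the description). Note how T0 ends up in both merged intra-communicators, which is the overlap the rest of the thread discusses:

```c
/* Sketch: parent T0 spawns T1,T2 and then T3,T4, merging each
 * parent<->children inter-communicator into an intra-communicator. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Comm inter01, inter02;   /* parent<->children inter-comms  */
    MPI_Comm intra0, intra1;     /* merged intra-comms             */

    /* First spawn: T1, T2. */
    MPI_Comm_spawn("child", MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0,
                   MPI_COMM_SELF, &inter01, MPI_ERRCODES_IGNORE);
    MPI_Intercomm_merge(inter01, /* high = */ 0, &intra0); /* {T0,T1,T2} */

    /* Second spawn: T3, T4. */
    MPI_Comm_spawn("child", MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0,
                   MPI_COMM_SELF, &inter02, MPI_ERRCODES_IGNORE);
    MPI_Intercomm_merge(inter02, 0, &intra1);              /* {T0,T3,T4} */

    /* T0 is now a member of BOTH intra0 and intra1, so a later
     * MPI_Intercomm_create between them would violate the
     * disjointness requirement discussed in this thread. */

    MPI_Finalize();
    return 0;
}
```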