Hi Gabriele
In OpenMPI 1.3 the order doesn't matter; both variants complete:

[jody@aim-plankton ~]$ mpirun -np 4 mpi_test5
aim-plankton.uzh.ch: rank  0 : MPI_Test # 0 ok. [3...3]
aim-plankton.uzh.ch: rank  1 : MPI_Test # 0 ok. [0...0]
aim-plankton.uzh.ch: rank  2 : MPI_Test # 0 ok. [1...1]
aim-plankton.uzh.ch: rank  3 : MPI_Test # 0 ok. [2...2]
[jody@aim-plankton ~]$ mpirun -np 4 mpi_test5_rev
aim-plankton.uzh.ch: rank  1 : MPI_Test # 0 ok. [0...0]
aim-plankton.uzh.ch: rank  2 : MPI_Test # 0 ok. [1...1]
aim-plankton.uzh.ch: rank  3 : MPI_Test # 0 ok. [2...2]
aim-plankton.uzh.ch: rank  0 : MPI_Test # 0 ok. [3...3]
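
For the record, the exchange being tested looks roughly like this (a minimal
sketch of the ring pattern and the MPI_Test polling loop, not the actual
mpi_test5 source; the buffer size, tag and variable names are my own):

/* Minimal sketch: each rank posts an MPI_Irecv from its left neighbour,
 * does a blocking MPI_Send to its right neighbour, then polls the request
 * with MPI_Test and reports how many calls it took to complete. */
#include <stdio.h>
#include <mpi.h>

#define BUFLEN 10

int main(int argc, char **argv)
{
    int rank, size, i, flag = 0, ntests = 0;
    int buffer_send[BUFLEN], buffer_recv[BUFLEN];
    MPI_Request request;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int send_to   = (rank + 1) % size;          /* right neighbour */
    int recv_from = (rank + size - 1) % size;   /* left neighbour  */
    for (i = 0; i < BUFLEN; i++)
        buffer_send[i] = rank;

    /* Post the receive first, then the blocking send
     * (the first of the two orderings quoted below). */
    MPI_Irecv(buffer_recv, BUFLEN, MPI_INT, recv_from, 0,
              MPI_COMM_WORLD, &request);
    MPI_Send(buffer_send, BUFLEN, MPI_INT, send_to, 0, MPI_COMM_WORLD);

    /* Poll until the Irecv completes, counting the MPI_Test calls. */
    do {
        MPI_Test(&request, &flag, &status);
        ntests++;
    } while (!flag);

    printf("rank %2d : MPI_Test # %d ok. [%d...%d]\n",
           rank, ntests - 1, buffer_recv[0], buffer_recv[BUFLEN - 1]);

    MPI_Finalize();
    return 0;
}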

Jody

On Thu, Feb 5, 2009 at 11:48 AM, Gabriele Fatigati <g.fatig...@cineca.it> wrote:
> Hi Jody,
> thanks for your quick reply. But what's the difference?
>
> 2009/2/5 jody <jody....@gmail.com>:
>> Hi Gabriele
>>
>> Shouldn't you reverse the order of your send and recv from
>>
>>    MPI_Irecv(buffer_recv, bufferLen, MPI_INT, recv_to, tag, MPI_COMM_WORLD, &request);
>>    MPI_Send(buffer_send, bufferLen, MPI_INT, send_to, tag, MPI_COMM_WORLD);
>>
>> to
>>
>>    MPI_Send(buffer_send, bufferLen, MPI_INT, send_to, tag, MPI_COMM_WORLD);
>>    MPI_Irecv(buffer_recv, bufferLen, MPI_INT, recv_to, tag, MPI_COMM_WORLD, &request);
>>
>> ?
>> Jody
>>
>> On Thu, Feb 5, 2009 at 11:37 AM, Gabriele Fatigati <g.fatig...@cineca.it> wrote:
>>> Dear OpenMPI developers,
>>> I have found a very strange behaviour of MPI_Test. I'm using OpenMPI
>>> 1.2 over an Infiniband interconnect.
>>>
>>> I've tried to implement a network check with a series of MPI_Irecv and
>>> MPI_Send calls between processors, using MPI_Test to check for the
>>> completion of each Irecv. Strangely, I've noticed that when I launch
>>> the test on one node it works well. If I launch it over 2 or more
>>> processes on different nodes, MPI_Test fails many times before
>>> reporting that the Irecv has finished.
>>>
>>> I've seen it still failing after one minute, even with a very small
>>> buffer (less than the eager limit). It's impossible that the communication
>>> is still pending after a minute when only 10 integers were sent. To work
>>> around this, I have to call MPI_Test in a loop, and only after 3 or 4
>>> calls does it report that the Irecv finished successfully. Is it possible
>>> that MPI_Test needs to be called many times even if the communication
>>> has already finished?
>>>
>>> My simple C test program is attached.
>>>
>>> Thanks in advance.
>>>
>>> --
>>> Ing. Gabriele Fatigati
>>>
>>> Parallel programmer
>>>
>>> CINECA Systems & Technologies Department
>>>
>>> Supercomputing Group
>>>
>>> Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
>>>
>>> www.cineca.it                    Tel:   +39 051 6171722
>>>
>>> g.fatigati [AT] cineca.it
>>>
>
>
>
> --
> Ing. Gabriele Fatigati
>
> Parallel programmer
>
> CINECA Systems & Technologies Department
>
> Supercomputing Group
>
> Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
>
> www.cineca.it                    Tel:   +39 051 6171722
>
> g.fatigati [AT] cineca.it
