Re: [OMPI users] MPI_Irecv: Confusion with <count> input parameter

2012-08-26 Thread devendra rai
Hello Hristo, Jeff,

Thanks a lot for your note. I understand the concept much better now. In fact,
now I understand what the phrase "maximum number of elements in the receive
buffer" in all of the documentation means.

However, I still think that the online documentation is confusing (and a little
vague) and could be worded better. This is made worse by the fact that all other
sites simply copy the description verbatim.

Thanks a lot anyway!

Devendra





 From: "Iliev, Hristo" <il...@rz.rwth-aachen.de>
To: devendra rai <rai.deven...@yahoo.co.uk> 
Cc: Open MPI Users <us...@open-mpi.org> 
Sent: Tuesday, 21 August 2012, 10:37
Subject: Re: [OMPI users] MPI_Irecv: Confusion with <count> input parameter
 

Hello, Devendra,

Sending and receiving messages in MPI are atomic operations - they complete
only when the whole message has been sent or received. MPI_Test only tells you
if the operation has completed - there is no indication like "30% of the message
was sent/received, stay tuned for more".

On the sender side, the message is constructed by taking bytes from various
locations in memory, specified by the type map of the MPI datatype used. Then,
on the receiver side, the message is deconstructed back into memory by placing
the received bytes according to the type map of the MPI datatype provided. The
combination of receive datatype and receive count gives you a certain number of
bytes (that is, the type size obtainable with MPI_Type_size, times "count"). If
the message is shorter, some elements of the receive buffer will not be filled,
which is OK - you can test exactly how many elements were filled with
MPI_Get_count on the status of the receive operation. If the message is longer,
however, there won't be enough room to put all the data that the message is
carrying, and an overflow error will occur.
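
(As a small illustration of that arithmetic - this fragment is not from the
original mail, and "recv_count" is just a made-up name for the count argument
passed to the receive call:)

int type_size;
MPI_Type_size(MPI_BYTE, &type_size);             /* 1 for MPI_BYTE */
int capacity_bytes = type_size * recv_count;     /* bytes the receive buffer can absorb */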

This works best by example. Imagine that in one process you issue:

MPI_Send(data, 80, MPI_BYTE, ...);

This will send a message containing 80 elements of type byte. Now on the 
receiver side you issue:

MPI_Irecv(data, 160, MPI_BYTE, ..., &request);

What will happen is that the message will be received in its entirety since 80 
times the size of MPI_BYTE is less than or equal to 160 times the size of 
MPI_BYTE. Calling MPI_Test on "request" will produce true in the completion 
flag and you will get back a status variable (unless you provided 
MPI_STATUS_IGNORE) and then you can call:

MPI_Get_count(&status, MPI_BYTE, &count);

Now "count" will contain 80 - the actual number of elements received.

But if the receive operation was instead:

MPI_Irecv(data, 40, MPI_BYTE, ..., &request);

since 40 times the size of MPI_BYTE is less than the size of the message, there
will not be enough space to receive the entire message and an overflow error
will occur. The MPI_Irecv itself only initiates the receive operation and will
not return an error. Rather, you will obtain the overflow error in the MPI_ERROR
field of the status argument returned by MPI_Test (the test call itself will
return MPI_SUCCESS).
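
(For concreteness, the following is a minimal, self-contained sketch of the
exchange described above. It is not from the original mail; the ranks, tag and
buffer sizes simply mirror the numbers used in the example, and it assumes a
run with at least two processes, e.g. "mpirun -np 2 ./a.out".)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char data[160] = {0};
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Send a message of 80 elements of type byte to rank 1. */
        MPI_Send(data, 80, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Request request;
        MPI_Status  status;
        int flag = 0, count;

        /* Post a receive whose buffer can hold up to 160 bytes. */
        MPI_Irecv(data, 160, MPI_BYTE, 0, 0, MPI_COMM_WORLD, &request);

        /* Poll until the operation completes. */
        while (!flag)
            MPI_Test(&request, &flag, &status);

        /* The 80-byte message fits, so the receive completes normally and
           MPI_Get_count reports how many elements actually arrived. Had the
           message been longer than 160 bytes, an overflow (truncation) error
           would have been reported instead, as described above. */
        MPI_Get_count(&status, MPI_BYTE, &count);
        printf("received %d byte(s)\n", count);   /* prints 80 */
    }

    MPI_Finalize();
    return 0;
}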

Since MPI operations are atomic, you cannot send a message of 160 elements and
then receive it with two separate receives of size 80. This is very important,
and it is often difficult to grasp at first for people who come to MPI from
traditional Unix network programming.

I would recommend that you head to http://www.mpi-forum.org/ and download the
PDF of the latest MPI 2.2 standard from there (or order the printed book).
Unlike many other standards documents, this one is actually readable by normal
people and contains many useful explanations and examples. Read through the
entire Section 3.2 to get a better idea of how messaging works in MPI.

Hope that helps to clarify things,

Hristo


On 21.08.2012, at 10:01, devendra rai <rai.deven...@yahoo.co.uk> wrote:

Hello Jeff and Hristo,
>
>Now I am completely confused:
>
>So, let's say, the complete reception requires 8192 bytes. And, I have:
>
>MPI_Irecv(
>    (void*)this->receivebuffer,    /* the receive buffer */
>    this->receive_packetsize,      /* 80 */
>    MPI_BYTE,                      /* The data type expected */
>    this->transmittingnode,        /* The node from which to receive */
>    this->uniquetag,               /* Tag */
>    MPI_COMM_WORLD,                /* Communicator */
>    &receive_request               /* request handle */
>    );
>
>
>
>
>
>That means that MPI_Test will tell me that the reception is complete when
>I have received the first 80 bytes. Correct?
>
>
>Next, let's say that I have a receive buffer with a capacity of 160 bytes,
>then, will an overflow error occur here? Even if I have decided to receive a
>large payload in chunks of 80 bytes?

Re: [OMPI users] MPI_Irecv: Confusion with <count> input parameter

2012-08-21 Thread Iliev, Hristo
Hello, Devendra,

Sending and receiving messages in MPI are atomic operations - they complete
only when the whole message has been sent or received. MPI_Test only tells you
if the operation has completed - there is no indication like "30% of the message
was sent/received, stay tuned for more".

On the sender side, the message is constructed by taking bytes from various
locations in memory, specified by the type map of the MPI datatype used. Then,
on the receiver side, the message is deconstructed back into memory by placing
the received bytes according to the type map of the MPI datatype provided. The
combination of receive datatype and receive count gives you a certain number of
bytes (that is, the type size obtainable with MPI_Type_size, times "count"). If
the message is shorter, some elements of the receive buffer will not be filled,
which is OK - you can test exactly how many elements were filled with
MPI_Get_count on the status of the receive operation. If the message is longer,
however, there won't be enough room to put all the data that the message is
carrying, and an overflow error will occur.

This works best by example. Imagine that in one process you issue:

MPI_Send(data, 80, MPI_BYTE, ...);

This will send a message containing 80 elements of type byte. Now on the 
receiver side you issue:

MPI_Irecv(data, 160, MPI_BYTE, ..., &request);

What will happen is that the message will be received in its entirety since 80 
times the size of MPI_BYTE is less than or equal to 160 times the size of 
MPI_BYTE. Calling MPI_Test on "request" will produce true in the completion 
flag and you will get back a status variable (unless you provided 
MPI_STATUS_IGNORE) and then you can call:

MPI_Get_count(&status, MPI_BYTE, &count);

Now "count" will contain 80 - the actual number of elements received.

But if the receive operation was instead:

MPI_Irecv(data, 40, MPI_BYTE, ..., &request);

since 40 times the size of MPI_BYTE is less than the size of the message, there
will not be enough space to receive the entire message and an overflow error
will occur. The MPI_Irecv itself only initiates the receive operation and will
not return an error. Rather, you will obtain the overflow error in the MPI_ERROR
field of the status argument returned by MPI_Test (the test call itself will
return MPI_SUCCESS).

Since MPI operations are atomic, you cannot send a message of 160 elements and
then receive it with two separate receives of size 80. This is very important,
and it is often difficult to grasp at first for people who come to MPI from
traditional Unix network programming.

I would recommend that you head to http://www.mpi-forum.org/ and download the
PDF of the latest MPI 2.2 standard from there (or order the printed book).
Unlike many other standards documents, this one is actually readable by normal
people and contains many useful explanations and examples. Read through the
entire Section 3.2 to get a better idea of how messaging works in MPI.

Hope that helps to clarify things,

Hristo

On 21.08.2012, at 10:01, devendra rai <rai.deven...@yahoo.co.uk> wrote:

> Hello Jeff and Hristo,
> 
> Now I am completely confused:
> 
> So, let's say, the complete reception requires 8192 bytes. And, I have:
> 
> MPI_Irecv(
>     (void*)this->receivebuffer,    /* the receive buffer */
>     this->receive_packetsize,      /* 80 */
>     MPI_BYTE,                      /* The data type expected */
>     this->transmittingnode,        /* The node from which to receive */
>     this->uniquetag,               /* Tag */
>     MPI_COMM_WORLD,                /* Communicator */
>     &receive_request               /* request handle */
>     );
> 
> 
> That means that MPI_Test will tell me that the reception is complete when
> I have received the first 80 bytes. Correct?
> 
> Next, let's say that I have a receive buffer with a capacity of 160 bytes,
> then, will an overflow error occur here? Even if I have decided to receive a
> large payload in chunks of 80 bytes?
> 
> I am sorry, the manual and the API reference were too vague for me.
> 
> Thanks a lot
> 
> Devendra
> From: "Iliev, Hristo" <il...@rz.rwth-aachen.de>
> To: Open MPI Users <us...@open-mpi.org> 
> Cc: devendra rai <rai.deven...@yahoo.co.uk> 
> Sent: Tuesday, 21 August 2012, 9:48
> Subject: Re: [OMPI users] MPI_Irecv: Confusion with <count> input parameter
> 
> Jeff,
> 
> >> Or is it the number of elements that are expected to be received, and
> >> hence MPI_Test will tell me that the receive is not complete until
> >> "count" number of elements have been received?
> > 
> > Yes.
> 
> 

Re: [OMPI users] MPI_Irecv: Confusion with <count> input parameter

2012-08-21 Thread jody
Hi Devendra

MPI has no way of knowing how big your receive buffer is -
that's why you have to pass the "count" argument: to tell MPI
how many items of your datatype (in your case, bytes)
it may copy to your receive buffer.

When data arrives that is longer than the number you
specified in the "count" argument, the data will be cut off after
count bytes (and an error will be returned).
Any shorter amount of data will be copied to your receive buffer
and the call to MPI_Recv will terminate successfully.

It is your responsibility to pass the correct value of "count".

If you expect data of 160 bytes, you have to allocate a buffer
with a size greater than or equal to 160, and you have to set your
"count" parameter to the size you allocated.

If you want to receive data in chunks, you have to send it in chunks.
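
(Not from jody's mail - just a sketch of that last point: a 160-byte payload
moved as two 80-byte messages. It assumes MPI has already been initialised and
"rank" obtained with MPI_Comm_rank; ranks 0/1 and tag 1000 are arbitrary.)

enum { CHUNK = 80, NCHUNKS = 2 };
char payload[CHUNK * NCHUNKS] = {0};

if (rank == 0) {
    /* Sender: one send per 80-byte chunk. */
    for (int i = 0; i < NCHUNKS; ++i)
        MPI_Send(payload + i * CHUNK, CHUNK, MPI_BYTE, 1, 1000, MPI_COMM_WORLD);
} else if (rank == 1) {
    /* Receiver: one matching receive per chunk; each message fits exactly. */
    for (int i = 0; i < NCHUNKS; ++i)
        MPI_Recv(payload + i * CHUNK, CHUNK, MPI_BYTE, 0, 1000, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
}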

I hope this helps
  Jody


On Tue, Aug 21, 2012 at 10:01 AM, devendra rai <rai.deven...@yahoo.co.uk> wrote:
> Hello Jeff and Hristo,
>
> Now I am completely confused:
>
> So, let's say, the complete reception requires 8192 bytes. And, I have:
>
> MPI_Irecv(
>     (void*)this->receivebuffer,    /* the receive buffer */
>     this->receive_packetsize,      /* 80 */
>     MPI_BYTE,                      /* The data type expected */
>     this->transmittingnode,        /* The node from which to receive */
>     this->uniquetag,               /* Tag */
>     MPI_COMM_WORLD,                /* Communicator */
>     &receive_request               /* request handle */
>     );
>
>
> That means that MPI_Test will tell me that the reception is complete
> when I have received the first 80 bytes. Correct?
>
> Next, let's say that I have a receive buffer with a capacity of 160 bytes,
> then, will an overflow error occur here? Even if I have decided to receive a
> large payload in chunks of 80 bytes?
>
> I am sorry, the manual and the API reference were too vague for me.
>
> Thanks a lot
>
> Devendra
> 
> From: "Iliev, Hristo" <il...@rz.rwth-aachen.de>
> To: Open MPI Users <us...@open-mpi.org>
> Cc: devendra rai <rai.deven...@yahoo.co.uk>
> Sent: Tuesday, 21 August 2012, 9:48
> Subject: Re: [OMPI users] MPI_Irecv: Confusion with <count> input parameter
>
> Jeff,
>
>>> Or is it the number of elements that are expected to be received, and
>>> hence MPI_Test will tell me that the receive is not complete until "count"
>>> number of elements have been received?
>>
>> Yes.
>
> Answering "Yes" this question might further the confusion there. The "count"
> argument specifies the *capacity* of the receive buffer and the receive
> operation (blocking or not) will complete successfully for any matching
> message with size up to "count", even for an empty message with 0 elements,
> and will produce an overflow error if the received message was longer and
> data truncation has to occur.
>
> On 20.08.2012, at 16:32, Jeff Squyres <jsquy...@cisco.com> wrote:
>
>> On Aug 20, 2012, at 5:51 AM, devendra rai wrote:
>>
>>> Is it the number of elements that have been received *thus far* in the
>>> buffer?
>>
>> No.
>>
>>> Or is it the number of elements that are expected to be received, and
>>> hence MPI_Test will tell me that the receive is not complete until "count"
>>> number of elements have been received?
>>
>> Yes.
>>
>>> Here's the reason why I have a problem (and I think I may be completely
>>> stupid here, I'd appreciate your patience):
>> [snip]
>>> Does anyone see what could be going wrong?
>>
>> Double check that the (sender_rank, tag, communicator) tuple that you
>> issued in the MPI_Irecv matches the (rank, tag, communicator) tuple from the
>> sender (tag and communicator are arguments on the sending side, and rank is
>> the rank of the sender in that communicator).
>>
>> When receives block like this without completing, it usually
>> means a mismatch between the tuples.
>>
>> --
>> Jeff Squyres
>> jsquy...@cisco.com
>> For corporate legal information go to:
>> http://www.cisco.com/web/about/doing_business/legal/cri/
>>
>>
>
> --
> Hristo Iliev, Ph.D. -- High Performance Computing,
> RWTH Aachen University, Center for Computing and Communication
> Seffenter Weg 23,  D 52074  Aachen (Germany)
> Tel: +49 241 80 24367 -- Fax/UMS: +49 241 80 624367
>
>
>
>
>


Re: [OMPI users] MPI_Irecv: Confusion with <count> input parameter

2012-08-21 Thread devendra rai
Hello Jeff and Hristo,

Now I am completely confused:

So, let's say, the complete reception requires 8192 bytes. And, I have:

MPI_Irecv(
    (void*)this->receivebuffer,    /* the receive buffer */
    this->receive_packetsize,      /* 80 */
    MPI_BYTE,                      /* The data type expected */
    this->transmittingnode,        /* The node from which to receive */
    this->uniquetag,               /* Tag */
    MPI_COMM_WORLD,                /* Communicator */
    &receive_request               /* request handle */
    );



That means that MPI_Test will tell me that the reception is complete when I
have received the first 80 bytes. Correct?

Next, let's say that I have a receive buffer with a capacity of 160 bytes,
then, will an overflow error occur here? Even if I have decided to receive a large
payload in chunks of 80 bytes?

I am sorry, the manual and the API reference were too vague for me.

Thanks a lot

Devendra



 From: "Iliev, Hristo" <il...@rz.rwth-aachen.de>
To: Open MPI Users <us...@open-mpi.org> 
Cc: devendra rai <rai.deven...@yahoo.co.uk> 
Sent: Tuesday, 21 August 2012, 9:48
Subject: Re: [OMPI users] MPI_Irecv: Confusion with <count> input parameter
 
Jeff,

>> Or is it the number of elements that are expected to be received, and hence
>> MPI_Test will tell me that the receive is not complete until "count" number
>> of elements have been received?
> 
> Yes.

Answering "Yes" this question might further the confusion there. The "count" 
argument specifies the *capacity* of the receive buffer and the receive 
operation (blocking or not) will complete successfully for any matching message 
with size up to "count", even for an empty message with 0 elements, and will 
produce an overflow error if the received message was longer and data 
truncation has to occur.

On 20.08.2012, at 16:32, Jeff Squyres <jsquy...@cisco.com> wrote:

> On Aug 20, 2012, at 5:51 AM, devendra rai wrote:
> 
>> Is it the number of elements that have been received *thus far* in the 
>> buffer?
> 
> No.
> 
>> Or is it the number of elements that are expected to be received, and hence
>> MPI_Test will tell me that the receive is not complete until "count" number
>> of elements have been received?
> 
> Yes.
> 
>> Here's the reason why I have a problem (and I think I may be completely 
>> stupid here, I'd appreciate your patience):
> [snip]
>> Does anyone see what could be going wrong?
> 
> Double check that the (sender_rank, tag, communicator) tuple that you issued 
> in the MPI_Irecv matches the (rank, tag, communicator) tuple from the sender 
> (tag and communicator are arguments on the sending side, and rank is the rank 
> of the sender in that communicator).
> 
> When receives block like this without completing, it usually means
> a mismatch between the tuples.
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to: 
> http://www.cisco.com/web/about/doing_business/legal/cri/
> 
> 

--
Hristo Iliev, Ph.D. -- High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23,  D 52074  Aachen (Germany)
Tel: +49 241 80 24367 -- Fax/UMS: +49 241 80 624367

Re: [OMPI users] MPI_Irecv: Confusion with <count> input parameter

2012-08-21 Thread Iliev, Hristo
Jeff,

>> Or is it the number of elements that are expected to be received, and hence
>> MPI_Test will tell me that the receive is not complete until "count" number
>> of elements have been received?
> 
> Yes.

Answering "Yes" this question might further the confusion there. The "count" 
argument specifies the *capacity* of the receive buffer and the receive 
operation (blocking or not) will complete successfully for any matching message 
with size up to "count", even for an empty message with 0 elements, and will 
produce an overflow error if the received message was longer and data 
truncation has to occur.
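
(A small hypothetical fragment illustrating the zero-element case mentioned
above; it assumes MPI is initialised and "rank" is known, and tag 42 is
arbitrary.)

char buf[80];
MPI_Status status;
int count;

if (rank == 0) {
    MPI_Send(buf, 0, MPI_BYTE, 1, 42, MPI_COMM_WORLD);           /* empty message */
} else if (rank == 1) {
    /* Completes successfully even though nothing is placed in buf. */
    MPI_Recv(buf, 80, MPI_BYTE, 0, 42, MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_BYTE, &count);                    /* count == 0 */
}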

On 20.08.2012, at 16:32, Jeff Squyres  wrote:

> On Aug 20, 2012, at 5:51 AM, devendra rai wrote:
> 
>> Is it the number of elements that have been received *thus far* in the 
>> buffer?
> 
> No.
> 
>> Or is it the number of elements that are expected to be received, and hence
>> MPI_Test will tell me that the receive is not complete until "count" number
>> of elements have been received?
> 
> Yes.
> 
>> Here's the reason why I have a problem (and I think I may be completely 
>> stupid here, I'd appreciate your patience):
> [snip]
>> Does anyone see what could be going wrong?
> 
> Double check that the (sender_rank, tag, communicator) tuple that you issued 
> in the MPI_Irecv matches the (rank, tag, communicator) tuple from the sender 
> (tag and communicator are arguments on the sending side, and rank is the rank 
> of the sender in that communicator).
> 
> When receives block like this without completing, it usually means
> a mismatch between the tuples.
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to: 
> http://www.cisco.com/web/about/doing_business/legal/cri/
> 
> 

--
Hristo Iliev, Ph.D. -- High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23,  D 52074  Aachen (Germany)
Tel: +49 241 80 24367 -- Fax/UMS: +49 241 80 624367






Re: [OMPI users] MPI_Irecv: Confusion with <count> input parameter

2012-08-20 Thread Jeff Squyres
On Aug 20, 2012, at 5:51 AM, devendra rai wrote:

> Is it the number of elements that have been received *thus far* in the buffer?

No.

> Or is it the number of elements that are expected to be received, and hence
> MPI_Test will tell me that the receive is not complete until "count" number
> of elements have been received?

Yes.

> Here's the reason why I have a problem (and I think I may be completely 
> stupid here, I'd appreciate your patience):
[snip]
> Does anyone see what could be going wrong?

Double check that the (sender_rank, tag, communicator) tuple that you issued in 
the MPI_Irecv matches the (rank, tag, communicator) tuple from the sender (tag 
and communicator are arguments on the sending side, and rank is the rank of the 
sender in that communicator).

When receives block like this without completing, it usually means a mismatch
between the tuples.
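
(A hypothetical fragment of what a matched pair looks like - not taken from the
thread. "rank" is assumed to come from MPI_Comm_rank; the 80-byte buffer and
tag 1000 mirror the numbers used earlier. Change the destination/source rank,
the tag, or the communicator on either side and the receive will never see the
message.)

const int TAG = 1000;
char buf[80] = {0};
MPI_Request req;

if (rank == 0) {
    /* Sender: destination rank 1, tag 1000, MPI_COMM_WORLD. */
    MPI_Issend(buf, 80, MPI_BYTE, 1, TAG, MPI_COMM_WORLD, &req);
} else if (rank == 1) {
    /* Receiver: source rank 0, same tag, same communicator - the tuples match. */
    MPI_Irecv(buf, 80, MPI_BYTE, 0, TAG, MPI_COMM_WORLD, &req);
}
/* Both sides must later complete the request, e.g. MPI_Wait(&req, MPI_STATUS_IGNORE). */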

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/




[OMPI users] MPI_Irecv: Confusion with <count> input parameter

2012-08-20 Thread devendra rai
Hello Community,

I have a problem understanding the API for MPI_Irecv:

int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source,
              int tag, MPI_Comm comm, MPI_Request *request);

Parameters:
  buf       [in]  initial address of receive buffer (choice)
  count     [in]  number of elements in receive buffer (integer)
  datatype  [in]  datatype of each receive buffer element (handle)
  source    [in]  rank of source (integer)
  tag       [in]  message tag (integer)
  comm      [in]  communicator (handle)
  request   [out] communication request (handle)

What exactly does "count" mean here? 

Is it the number of elements that have been received *thus far* in the buffer?
Or is it the number of elements that are expected to be received, and hence
MPI_Test will tell me that the receive is not complete until "count" number of
elements have been received?

Here's the reason why I have a problem (and I think I may be completely stupid 
here, I'd appreciate your patience):

I have node 1 transmit data to node 2, in a packet of 80 bytes:

Mon Aug 20 11:09:04 2012[1,1]:    Finished transmitting 80 bytes to 2 
node with Tag 1000

On the receiving end:

MPI_Irecv(
    (void*)this->receivebuffer,    /* the receive buffer */
    this->receive_packetsize,      /* 80 */
    MPI_BYTE,                      /* The data type expected */
    this->transmittingnode,        /* The node from which to receive */
    this->uniquetag,               /* Tag */
    MPI_COMM_WORLD,                /* Communicator */
    &receive_request               /* request handle */
    );

I see that node 1 tells me that the transmit was successful, using MPI_Test:

MPI_Test(&send_request, &flag, &send_status);

which returns me "true" on Node 1 (sender).

However, I am never able to receive the payload on Node 2:

Mon Aug 20 11:09:04 2012[1,2]:Attemting to receive payload from node 1 
with tag 1000, receivepacketsize: 80


I am using MPI_Issend to send payload between node 1 and node 2.

Does anyone see what could be going wrong?

Thanks a lot

Devendra