Hello, Devendra,

Sending and receiving messages in MPI are atomic operations: they complete only when the whole message has been sent or received. MPI_Test only tells you whether the operation has completed; there is no indication like "30% of the message was sent/received, stay tuned for more".
On the sender side, the message is constructed by taking bytes from various locations in memory, as specified by the type map of the MPI datatype used. On the receiver side, the message is deconstructed back into memory by placing the received bytes according to the type map of the MPI datatype provided. The combination of receive datatype and receive count gives you a certain number of bytes (the type size, obtainable with MPI_Type_size, times "count"). If the message is shorter, some elements of the receive buffer will simply not be filled, which is OK - you can find out exactly how many elements were filled with MPI_Get_count on the status of the receive operation. If the message is longer, however, there will not be enough room for all the data the message carries, and an overflow error will occur.

This works best by example. Imagine that in one process you issue:

MPI_Send(data, 80, MPI_BYTE, ...);

This sends a message containing 80 elements of type byte. Now on the receiver side you issue:

MPI_Irecv(data, 160, MPI_BYTE, ..., &request);

The message will be received in its entirety, since 80 times the size of MPI_BYTE is less than or equal to 160 times the size of MPI_BYTE. Calling MPI_Test on "request" will produce true in the completion flag and give you back a status variable (unless you provided MPI_STATUS_IGNORE), and then you can call:

MPI_Get_count(&status, MPI_BYTE, &count);

Now "count" will contain 80 - the actual number of elements received. But if the receive operation was instead:

MPI_Irecv(data, 40, MPI_BYTE, ..., &request);

then, since 40 times the size of MPI_BYTE is less than the size of the message, there will not be enough space to receive the entire message and an overflow error will occur. MPI_Irecv itself only initiates the receive operation and will not return an error.
Rather, you will obtain the overflow error in the MPI_ERROR field of the status argument returned by MPI_Test (the test call itself will return MPI_SUCCESS). Since MPI operations are atomic, you cannot send one message of 160 elements and then receive it with two separate receives of 80 elements each - this is very important, and it is often difficult to grasp at first for people who come to MPI from traditional Unix network programming.

I would recommend that you head to http://www.mpi-forum.org/ and download the PDF of the latest MPI 2.2 standard (or order the printed book). Unlike many other standards documents, this one is actually readable by normal people and contains many useful explanations and examples. Read through the whole of section 3.2 to get a better idea of how messaging works in MPI.

Hope that helps to clarify things,
Hristo

On 21.08.2012, at 10:01, devendra rai <rai.deven...@yahoo.co.uk> wrote:

> Hello Jeff and Hristo,
>
> Now I am completely confused:
>
> So, let's say, the complete reception requires 8192 bytes. And, I have:
>
> MPI_Irecv(
>     (void*)this->receivebuffer, /* the receive buffer */
>     this->receive_packetsize,   /* 80 */
>     MPI_BYTE,                   /* The data type expected */
>     this->transmittingnode,     /* The node from which to receive */
>     this->uniquetag,            /* Tag */
>     MPI_COMM_WORLD,             /* Communicator */
>     &Irecv_request              /* request handle */
> );
>
> That means, the MPI_Test will tell me that the reception is complete when
> I have received the first 80 bytes. Correct?
>
> Next, let's say that I have a receive buffer with a capacity of 160 bytes,
> then, will overflow error occur here? Even if I have decided to receive a
> large payload in chunks of 80 bytes?
>
> I am sorry, the manual and the API reference was too vague for me.
>
> Thanks a lot
>
> Devendra
>
> From: "Iliev, Hristo" <il...@rz.rwth-aachen.de>
> To: Open MPI Users <us...@open-mpi.org>
> Cc: devendra rai <rai.deven...@yahoo.co.uk>
> Sent: Tuesday, 21 August 2012, 9:48
> Subject: Re: [OMPI users] MPI_Irecv: Confusion with <<int count>> inputy
> parameter
>
> Jeff,
>
> >> Or is it the number of elements that are expected to be received, and
> >> hence MPI_Test will tell me that the receive is not complete untill
> >> "count" number of elements have not been received?
> >
> > Yes.
>
> Answering "Yes" this question might further the confusion there. The "count"
> argument specifies the *capacity* of the receive buffer and the receive
> operation (blocking or not) will complete successfully for any matching
> message with size up to "count", even for an empty message with 0 elements,
> and will produce an overflow error if the received message was longer and
> data truncation has to occur.
>
> On 20.08.2012, at 16:32, Jeff Squyres <jsquy...@cisco.com> wrote:
>
> > On Aug 20, 2012, at 5:51 AM, devendra rai wrote:
> >
> >> Is it the number of elements that have been received *thus far* in the
> >> buffer?
> >
> > No.
> >
> >> Or is it the number of elements that are expected to be received, and
> >> hence MPI_Test will tell me that the receive is not complete untill
> >> "count" number of elements have not been received?
> >
> > Yes.
> >
> >> Here's the reason why I have a problem (and I think I may be completely
> >> stupid here, I'd appreciate your patience):
> > [snip]
> >> Does anyone see what could be going wrong?
> >
> > Double check that the (sender_rank, tag, communicator) tuple that you
> > issued in the MPI_Irecv matches the (rank, tag, communicator) tuple from
> > the sender (tag and communicator are arguments on the sending side, and
> > rank is the rank of the sender in that communicator).
> >
> > When receives block like this without completing, it usually
> > means a mismatch between the tuples.
> >
> > --
> > Jeff Squyres
> > jsquy...@cisco.com
> > For corporate legal information go to:
> > http://www.cisco.com/web/about/doing_business/legal/cri/
> >
> > _______________________________________________
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>
> --
> Hristo Iliev, Ph.D. -- High Performance Computing,
> RWTH Aachen University, Center for Computing and Communication
> Seffenter Weg 23, D 52074 Aachen (Germany)
> Tel: +49 241 80 24367 -- Fax/UMS: +49 241 80 624367

--
Hristo Iliev, Ph.D. -- High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +49 241 80 24367 -- Fax/UMS: +49 241 80 624367