Jeff Squyres wrote:
On Aug 26, 2009, at 10:38 AM, Jeff Squyres (jsquyres) wrote:

Yes, this could cause blocking.  Specifically, the receiver may not
advance any other senders until the matching Irecv is posted and is
able to make progress.

I should clarify something else here -- for long messages where the pipeline protocol is used, OB1 may need to be invoked repeatedly to keep making progress on all the successive fragments. I.e., if a send is long enough to entail many fragments, then OB1 may (read: likely will) not progress *all* of them simultaneously. Hence, if you're calling MPI_Test(), for example, to kick the progress engine, you may have to call it a few times to get *all* the fragments processed.

How many fragments get processed in each call to the progress engine can depend on the speed of your hardware and network, etc.
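For illustration, here's a minimal sketch of that kind of polling loop (the message size, tag, and ranks are arbitrary placeholders, not anything specific to OB1):

/* Minimal sketch: repeatedly call MPI_Test() to drive progress on a
 * long, pipelined message.  Size, tag, and ranks are placeholders. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Large enough to be split into many pipeline fragments. */
    const int count = 32 * 1024 * 1024;
    char *buf = malloc(count);

    if (rank == 0) {
        MPI_Isend(buf, count, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req);
    } else if (rank == 1) {
        MPI_Irecv(buf, count, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &req);
    }

    if (rank < 2) {
        int done = 0;
        while (!done) {
            /* Each MPI_Test() kicks the progress engine; for a long,
             * fragmented message it may take several calls before every
             * fragment has been pushed through and the request completes. */
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
            /* ... do other useful work between polls ... */
        }
    }

    free(buf);
    MPI_Finalize();
    return 0;
}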

Hi Jeff,

If I understand you correctly, this may be relevant to my situation. I've posted a receive with MPI_Irecv, which gives me an MPI_Request, and I'm polling for the incoming packet with MPI_Request_get_status. If there is a queue of messages waiting to be received, and a queue of messages waiting to be sent...

1. is MPI_Request_get_status guaranteed to eventually report the receive as complete (i.e., set its flag to true)?
2. will all the packets in the transmit queue eventually be sent?

I don't fully understand the receive and transmit queues of Open MPI, so I'm using those terms in their most general sense. I'm reading the source of MPI_Request_get_status now...
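For concreteness, here's a minimal sketch of the pattern I'm describing (the buffer size, source, and tag are placeholders):

/* Minimal sketch: poll an MPI_Irecv request with MPI_Request_get_status.
 * Buffer size, source, and tag are placeholders. */
#include <mpi.h>

void poll_for_packet(void *buf, int count, int src, int tag, MPI_Comm comm)
{
    MPI_Request req;
    MPI_Status status;
    int flag = 0;

    MPI_Irecv(buf, count, MPI_BYTE, src, tag, comm, &req);

    while (!flag) {
        /* Unlike MPI_Test(), MPI_Request_get_status() does not free the
         * request when the operation completes, so req stays valid and
         * still has to be completed with MPI_Wait()/MPI_Test() later. */
        MPI_Request_get_status(req, &flag, &status);
        /* ... do other work between polls ... */
    }

    /* The receive has already completed, so this returns immediately
     * and releases the request. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}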

Thanks again,
Shaun
