Shaun Jackman wrote:
2 calls MPI_Test. No message is waiting, so 2 decides to send.
2 sends to 0 and does not block (0 has one MPI_Irecv posted)
3 calls MPI_Test. No message is waiting, so 3 decides to send.
3 sends to 1 and does not block (1 has one MPI_Irecv posted)
0 calls MPI_Test. No message is waiting, so 0 decides to send.
0 receives the message from 2, consuming its MPI_Irecv
1 calls MPI_Test. No message is waiting, so 1 decides to send.
1 receives the message from 3, consuming its MPI_Irecv
0 sends to 1 and blocks (1 has no more MPI_Irecv posted)
1 sends to 0 and blocks (0 has no more MPI_Irecv posted)
and now processes 0 and 1 are deadlocked.
I'm in over my head here, but let me try.
Okay, so the problem is that 0 sends to 1 and 1 sends to 0, so they
both block. The usual way around this is for each process to post an
MPI_Irecv first, but that posted receive might be consumed by some
"third party" send before your peer's message arrives. So you
deadlock anyway.
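To make the failure mode concrete, here is a minimal sketch of the
deadlock-prone pattern described above (the function and buffer names
are hypothetical, not from the original code): each rank posts a single
MPI_Irecv and then does a blocking MPI_Send. If a third party's message
matches the posted receive first, two ranks sending to each other can
both block forever.

```c
#include <mpi.h>

/* Deadlock-prone exchange: one posted receive, then a blocking send.
 * If another rank's message consumes the peer's only posted receive,
 * this MPI_Send may never complete. */
void exchange_deadlock_prone(int peer, int *inbuf, int *outbuf, int n)
{
    MPI_Request recv_req;

    /* Post a single receive; a message from ANY source may match it. */
    MPI_Irecv(inbuf, n, MPI_INT, MPI_ANY_SOURCE, 0,
              MPI_COMM_WORLD, &recv_req);

    /* Blocking send: if the peer's posted receive was already consumed
     * by a third party, this call can block indefinitely. */
    MPI_Send(outbuf, n, MPI_INT, peer, 0, MPI_COMM_WORLD);

    MPI_Wait(&recv_req, MPI_STATUS_IGNORE);
}
```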
Still, why can't you use non-blocking sends? Use MPI_Isend; while
waiting for it to complete, you can keep processing incoming messages.
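A sketch of that suggestion (again with hypothetical names): start the
send with MPI_Isend, then loop, testing the send for completion while
draining any incoming message with MPI_Iprobe/MPI_Recv so that the
peer's send can also make progress.

```c
#include <mpi.h>

/* Non-blocking exchange: the send never blocks the rank, and incoming
 * messages are serviced while waiting for the send to complete. */
void exchange_nonblocking(int peer, int *inbuf, int *outbuf, int n)
{
    MPI_Request send_req;
    MPI_Isend(outbuf, n, MPI_INT, peer, 0, MPI_COMM_WORLD, &send_req);

    int done = 0;
    while (!done) {
        /* Has our send completed yet? */
        MPI_Test(&send_req, &done, MPI_STATUS_IGNORE);

        /* While it is pending, receive anything that has arrived so
         * the other ranks' sends are not starved. */
        int flag;
        MPI_Status st;
        MPI_Iprobe(MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &flag, &st);
        if (flag) {
            MPI_Recv(inbuf, n, MPI_INT, st.MPI_SOURCE, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
    }
}
```

Because no rank ever blocks in a send, the 0-sends-to-1 / 1-sends-to-0
cycle from the trace above cannot wedge: each rank keeps receiving
while its own send is in flight.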