Sudheer,

Locks in MPI are not mutexes; they mark the beginning and end of a passive-mode communication epoch. All RMA operations within an epoch logically occur concurrently and must be non-conflicting. So what you've written below is incorrect: the get is not guaranteed to complete until the call to unlock, and because of this it conflicts with the ensuing call to MPI_Accumulate, which is an error.
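
To make that concrete, the read has to be completed by its own unlock before you can look at the value and write it back. Splitting your snippet into two epochs (sketched below purely for illustration, re-using your variables) is legal, but note it is still not atomic: another process can update the target between the two epochs.

MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
MPI_Get(&out, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
MPI_Win_unlock(0, win);   /* the get is complete only at this point */

if (out % 2 == 0)
      out++;

MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
MPI_Accumulate(&out, 1, MPI_INT, 0, 0, 1, MPI_INT, MPI_REPLACE, win);
MPI_Win_unlock(0, win);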

I don't share your pessimism about MPI-2 RMA asynchronous progress. As Brian hinted, the standard says you should get progress without making MPI calls. I think you might be getting tripped up by the poorly named MPI_Win_lock/MPI_Win_unlock calls. These aren't mutexes and can't by themselves be used to ensure exclusive data access for read-modify-write operations (like in your example). To do that, you need an actual mutex, which can be implemented on top of MPI-2 RMA (I can provide a reference if you need it; I'm sure the code is available somewhere in the MPI tests/examples too).
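
Here is a rough, untested sketch of the kind of mutex I mean: a waitlist of bytes hosted in a window on one rank, with a zero-byte message used to hand the mutex to the next waiter. All the names (mutex_t, mutex_create, mutex_lock, mutex_unlock) and the wakeup tag are mine, and cleanup and error checking are omitted.

#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define MUTEX_TAG 1023                /* assumed-unused tag for wakeups */

typedef struct {
    MPI_Win win;
    MPI_Comm comm;
    int home;                         /* rank hosting the waitlist */
    int nprocs, rank;
    unsigned char *waitlist;          /* allocated only on 'home' */
} mutex_t;

static void mutex_create(MPI_Comm comm, int home, mutex_t *m)
{
    m->comm = comm;
    m->home = home;
    MPI_Comm_size(comm, &m->nprocs);
    MPI_Comm_rank(comm, &m->rank);
    if (m->rank == home) {
        MPI_Alloc_mem(m->nprocs, MPI_INFO_NULL, &m->waitlist);
        memset(m->waitlist, 0, m->nprocs);
        MPI_Win_create(m->waitlist, m->nprocs, 1, MPI_INFO_NULL, comm, &m->win);
    } else {
        m->waitlist = NULL;
        MPI_Win_create(NULL, 0, 1, MPI_INFO_NULL, comm, &m->win);
    }
}

/* In one exclusive epoch, write 'flag' into our own waitlist slot and read
 * everyone else's slot (disjoint locations, so the operations don't conflict). */
static void swap_waitlist(mutex_t *m, unsigned char flag, unsigned char *others)
{
    int n = m->nprocs, r = m->rank;
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, m->home, 0, m->win);
    MPI_Put(&flag, 1, MPI_BYTE, m->home, r, 1, MPI_BYTE, m->win);
    if (r > 0)
        MPI_Get(others, r, MPI_BYTE, m->home, 0, r, MPI_BYTE, m->win);
    if (r < n - 1)
        MPI_Get(others + r, n - r - 1, MPI_BYTE, m->home, r + 1, n - r - 1,
                MPI_BYTE, m->win);
    MPI_Win_unlock(m->home, m->win);
}

static void mutex_lock(mutex_t *m)
{
    unsigned char *others = calloc(m->nprocs, 1);
    int i, busy = 0;

    swap_waitlist(m, 1, others);      /* announce that we want the mutex */
    for (i = 0; i < m->nprocs - 1; i++)
        if (others[i]) busy = 1;
    if (busy)                         /* someone holds it; wait to be woken */
        MPI_Recv(NULL, 0, MPI_BYTE, MPI_ANY_SOURCE, MUTEX_TAG, m->comm,
                 MPI_STATUS_IGNORE);
    free(others);
}

static void mutex_unlock(mutex_t *m)
{
    unsigned char *others = calloc(m->nprocs, 1);
    int i;

    swap_waitlist(m, 0, others);      /* clear our slot, see who is waiting */
    for (i = 0; i < m->nprocs - 1; i++) {
        int next = (m->rank + i + 1) % m->nprocs;
        int idx = (next < m->rank) ? next : next - 1;  /* index into 'others' */
        if (others[idx]) {            /* hand the mutex to the next waiter */
            MPI_Send(NULL, 0, MPI_BYTE, next, MUTEX_TAG, m->comm);
            break;
        }
    }
    free(others);
}

With something like this you would wrap mutex_lock()/mutex_unlock() around your get/modify/accumulate sequence (still doing the get and the accumulate in separate epochs), and the read-modify-write becomes atomic with respect to other processes using the same mutex.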

Best,
 ~Jim.

On 04/13/2011 03:11 PM, Abhishek Kulkarni wrote:
But given the existing behavior, all bets are off and it renders passive
synchronization (MPI_Win_unlock) mostly similar to active synchronization
(MPI_Win_fence). In trying to emulate a distributed shared memory model,
I was hoping to do things like:

MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
MPI_Get(&out, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
if (out % 2 == 0)
      out++;
MPI_Accumulate(&out, 1, MPI_INT, 0, 0, 1, MPI_INT, MPI_REPLACE, win);
MPI_Win_unlock(0, win);

but it is impossible to implement such atomic sections given no semantic
guarantees on the ordering of the RMA calls.
