Just to follow up on Jeff's comments:

I'm a member of the MPI-3 RMA committee and we are working on improving the current state of the RMA spec. Right now it's not possible to ask for local completion of specific RMA operations. Part of the current RMA proposal is an extension that would allow you to ask for per-operation completion. However, this is strictly in the context of passive mode RMA, which is asynchronous.
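
To give a flavor of what's on the table, here's a sketch of per-operation completion in passive mode. This is illustrative only: I'm using MPI_Rput as the name of the request-based put in the proposal, and the name and signature could still change before MPI-3 is ratified; 'win' and 'target' are assumed to be a valid window and rank.

  /* Sketch of the proposed request-based put (illustrative; the final
     MPI-3 interface may differ). */
  double buf[1024];
  MPI_Request req;

  MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);  /* passive mode epoch */
  MPI_Rput(buf, 1024, MPI_DOUBLE, target, 0, 1024, MPI_DOUBLE, win, &req);
  /* ... overlap other work here ... */
  MPI_Wait(&req, MPI_STATUS_IGNORE);  /* local completion of this one put:
                                         buf is now safe to reuse */
  MPI_Win_unlock(target, win);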

Active mode RMA implies an explicit synchronization across all processes involved in the communication before anything can complete. This allows for very efficient implementation and use. If you don't synchronize across all processes in the active communication group then active mode RMA is probably not the right construct for your algorithm; point-to-point send/recv synchronization/completion is still the right choice.
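
For reference, the active mode pattern looks roughly like this (a minimal sketch; window creation and the origin/target groups are assumed to have been set up already, and 'origin', 'target', and 'buf' are placeholders):

  /* Active mode (post/start/complete/wait) sketch. */
  if (rank == origin) {
      MPI_Win_start(target_group, 0, win);    /* open access epoch */
      MPI_Put(buf, 1024, MPI_DOUBLE, target, 0, 1024, MPI_DOUBLE, win);
      MPI_Win_complete(win);   /* blocks until every put in this epoch is
                                  locally complete -- no per-op completion */
  } else {
      MPI_Win_post(origin_group, 0, win);     /* open exposure epoch */
      MPI_Win_wait(win);       /* blocks until all origins call complete */
  }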

Jeff, I respectfully disagree on advising users to avoid MPI-2 RMA. We unfortunately don't get to throw out the existing chapter and rewrite it for MPI-3. Even if we could, I don't think we would, because it's actually not that bad. So, all of MPI-2 RMA will pass unscathed into MPI-3; anything you write now will still work under MPI-3. Our work will add new constructs and improved semantics to RMA so that it is more capable, flexible, and better performing. I do grant that the spec is challenging to read, but there are much better books available to users who are interested in their algorithms rather than MPI nuts and bolts.

Part of the reason RMA hasn't enjoyed as much success as MPI two-sided is that users have been shy about using it, so implementers haven't prioritized it. As a result, implementations aren't that great, so users avoid it, and the cycle continues. So, please do use MPI RMA; I'm all ears if you have any feedback.

All the best,
 ~Jim.

On 02/24/2011 07:36 AM, Jeff Squyres wrote:
I personally find the entire MPI one-sided chapter to be incredibly confusing 
and subject to arbitrary interpretation.  I have consistently advised people to 
not use it since the late '90s.

That being said, the MPI one-sided chapter is being overhauled in the MPI-3 
forum; the standardization process for that chapter is getting pretty close to 
consensus.  The new chapter is promised to be much better.

My $0.02 is that you might be better served staying away from the MPI-2 
one-sided stuff because of exactly the surprises and limitations that you've 
run into, and wait for MPI-3 implementations for real one-sided support.


On Feb 24, 2011, at 8:21 AM, Toon Knapen wrote:

But that is what surprises me. Indeed the scenario I described can be 
implemented using two-sided communication, but it seems not to be possible when 
using one-sided communication.

Additionally, the MPI 2.2 standard describes on page 356 the matching rules for post and 
start, complete and wait, and there it says: "MPI_WIN_COMPLETE(win) initiate a 
nonblocking send with tag tag1 to each process in the group of the preceding start call. 
No need to wait for the completion of these sends."
The wording 'nonblocking send' startles me somehow!?

toon


On Thu, Feb 24, 2011 at 2:05 PM, James Dinan <di...@mcs.anl.gov> wrote:
Hi Toon,

Can you use non-blocking send/recv?  It sounds like this will give you the 
completion semantics you want.
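
Something along these lines (just a sketch; the tag, 'count', 'buf', the slave ranks, and the 'work_remains' loop condition are all placeholders):

  /* One outstanding MPI_Isend per slave; MPI_Test lets the master check
     each buffer independently without blocking on either. */
  MPI_Request req[2] = { MPI_REQUEST_NULL, MPI_REQUEST_NULL };
  int slave[2] = { 1, 2 }, done, i;

  for (i = 0; i < 2; i++)
      MPI_Isend(buf[i], count, MPI_DOUBLE, slave[i], 0,
                MPI_COMM_WORLD, &req[i]);

  while (work_remains) {
      for (i = 0; i < 2; i++) {
          MPI_Test(&req[i], &done, MPI_STATUS_IGNORE);
          if (done) {
              /* buf[i] is reusable: fill in the next task and Isend again */
          }
      }
  }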

Best,
  ~Jim.


On 2/24/11 6:07 AM, Toon Knapen wrote:
In that case, I have a small question concerning design:

Suppose task-based parallelism where one node (master) distributes
work/tasks to 2 other nodes (slaves) by means of an MPI_Put. The master
allocates 2 buffers locally in which it will store all necessary data
that is needed by the slave to perform the task. So I do an MPI_Put on
each of my 2 buffers to send each buffer to a specific slave.

Now I need to know when I can reuse one of my buffers to store the next
task (that I will MPI_Put later on). The only way to know this is to call
MPI_Win_complete. But since that call blocks, if this buffer is not ready
to be reused yet, I cannot check whether the other buffer is already
available to me again (in the same thread).
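
Schematically, what I do now is something like this (simplified; 'slave_group', 'count', and the buffer and rank names are placeholders):

  /* Master side, simplified: one put per slave, then the blocking wait. */
  MPI_Win_start(slave_group, 0, win);
  MPI_Put(buf0, count, MPI_DOUBLE, slave0, 0, count, MPI_DOUBLE, win);
  MPI_Put(buf1, count, MPI_DOUBLE, slave1, 0, count, MPI_DOUBLE, win);
  MPI_Win_complete(win);  /* blocks until BOTH buffers are reusable; I
                             cannot test buf0 and buf1 separately */
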
I would very much appreciate input on how to solve such an issue!

thanks in advance,
toon
On Tue, Feb 22, 2011 at 7:21 PM, Barrett, Brian W <bwba...@sandia.gov> wrote:

    On Feb 18, 2011, at 8:59 AM, Toon Knapen wrote:

     >  (Probably this issue has been discussed at length before, but
    unfortunately I did not find any threads (on this site or anywhere
    else) on this topic; if you are able to provide me with links to
    earlier discussions, please do not hesitate)
     >
     >  Is there an alternative to MPI_Win_complete that does not
    'enforce completion of preceding RMA calls at the origin' (as said
    on page 353 of the MPI-2.2 standard)?
     >
     >  I would like to know if I can reuse the buffer I gave to MPI_Put
    but without blocking on it; if the MPI lib is still using it, I want
    to be able to continue (and use another buffer).


    There is not.  MPI_Win_complete is the only way to finish an
    MPI_Win_start epoch, and it always blocks until local completion
    of all messages started during the epoch.

    Brian

    --
      Brian W. Barrett
      Dept. 1423: Scalable System Software
      Sandia National Laboratories


