I personally find the entire MPI one-sided chapter to be incredibly confusing 
and subject to arbitrary interpretation.  I have consistently advised people 
not to use it since the late '90s.

That being said, the MPI one-sided chapter is being overhauled in the MPI-3 
forum; the standardization process for that chapter is getting pretty close to 
consensus.  The new chapter is promised to be much better.

My $0.02 is that you might be better served staying away from the MPI-2 
one-sided stuff, because of exactly the surprises and limitations that you've 
run into, and waiting for MPI-3 implementations for real one-sided support.
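
For what it's worth, the proposal on the table includes request-based RMA 
(an MPI_Rput that returns an MPI_Request you can MPI_Test), which would 
address exactly the buffer-reuse question below.  A minimal sketch, assuming 
that interface is adopted as drafted; the 3-rank layout and buffer sizes are 
mine, not from the proposal:

#include <mpi.h>

#define N 1024

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* run with 3 ranks */

    double mem[N];                         /* exposed on every rank */
    MPI_Win win;
    MPI_Win_create(mem, sizeof(mem), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* request-based puts live in a passive-target epoch */
    MPI_Win_lock_all(0, win);

    if (rank == 0) {
        double buf_a[N] = {0}, buf_b[N] = {0};   /* task payloads */
        MPI_Request req_a, req_b;
        /* ASSUMPTION: MPI_Rput/MPI_Win_lock_all as in the MPI-3 draft */
        MPI_Rput(buf_a, N, MPI_DOUBLE, 1, 0, N, MPI_DOUBLE, win, &req_a);
        MPI_Rput(buf_b, N, MPI_DOUBLE, 2, 0, N, MPI_DOUBLE, win, &req_b);

        /* test each transfer independently; whichever completes first
         * frees its buffer for the next task */
        int done_a = 0, done_b = 0;
        while (!done_a || !done_b) {
            if (!done_a) MPI_Test(&req_a, &done_a, MPI_STATUS_IGNORE);
            if (!done_b) MPI_Test(&req_b, &done_b, MPI_STATUS_IGNORE);
        }
    }

    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

Completion of the request means the origin buffer is reusable; completion at 
the target would still require the unlock (or a flush).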


On Feb 24, 2011, at 8:21 AM, Toon Knapen wrote:

> But that is what surprises me. Indeed the scenario I described can be 
> implemented using two-sided communication, but it seems not to be possible 
> when using one-sided communication.
>  
> Additionally, the MPI 2.2 standard describes on page 356 the matching rules 
> for post and start, complete and wait, and there it says: 
> "MPI_WIN_COMPLETE(win) initiate a nonblocking send with tag tag1 to each 
> process in the group of the preceding start call. No need to wait for the 
> completion of these sends." 
> The wording 'nonblocking send' startles me somehow!?
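> 
> For reference, here is a minimal sketch of the start/complete and post/wait 
> pairing that passage is modeling (2 ranks; the window size and names are 
> illustrative, not from the standard):
> 
> #include <mpi.h>
> 
> #define N 16
> 
> int main(int argc, char **argv)
> {
>     MPI_Init(&argc, &argv);
>     int rank;
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* run with exactly 2 ranks */
> 
>     double mem[N];
>     MPI_Win win;
>     MPI_Win_create(mem, sizeof(mem), sizeof(double),
>                    MPI_INFO_NULL, MPI_COMM_WORLD, &win);
> 
>     /* each rank puts the other rank in its peer group */
>     MPI_Group world, peer;
>     MPI_Comm_group(MPI_COMM_WORLD, &world);
>     int other = 1 - rank;
>     MPI_Group_incl(world, 1, &other, &peer);
> 
>     if (rank == 0) {                        /* origin: access epoch */
>         double buf[N] = {0};
>         MPI_Win_start(peer, 0, win);
>         MPI_Put(buf, N, MPI_DOUBLE, 1, 0, N, MPI_DOUBLE, win);
>         /* In the model this posts the nonblocking tag1 send to each
>          * target; it also blocks until the put is locally complete,
>          * which is why buf is reusable afterwards. */
>         MPI_Win_complete(win);
>     } else {                                /* target: exposure epoch */
>         MPI_Win_post(peer, 0, win);
>         MPI_Win_wait(win);  /* receives the tag1 send from each origin */
>     }
> 
>     MPI_Group_free(&peer);
>     MPI_Group_free(&world);
>     MPI_Win_free(&win);
>     MPI_Finalize();
>     return 0;
> }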
>  
> toon
> 
>  
> On Thu, Feb 24, 2011 at 2:05 PM, James Dinan <di...@mcs.anl.gov> wrote:
> Hi Toon,
> 
> Can you use non-blocking send/recv?  It sounds like this will give you the 
> completion semantics you want.
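> 
> Something along these lines (a rough, untested sketch; the sizes, tags, and 
> 3-rank layout are made up) would let you test each buffer independently:
> 
> #include <mpi.h>
> 
> #define N      1024
> #define NTASKS 8                 /* tasks per slave */
> 
> int main(int argc, char **argv)
> {
>     MPI_Init(&argc, &argv);
>     int rank;
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* 3 ranks: master + 2 slaves */
> 
>     if (rank == 0) {
>         double buf[2][N];                  /* one buffer per slave */
>         MPI_Request req[2] = { MPI_REQUEST_NULL, MPI_REQUEST_NULL };
>         int sent[2] = { 0, 0 };
>         while (sent[0] < NTASKS || sent[1] < NTASKS) {
>             for (int i = 0; i < 2; ++i) {
>                 if (sent[i] == NTASKS) continue;
>                 int flag;
>                 /* testing a null request yields flag = 1, so an
>                  * unused buffer is available immediately */
>                 MPI_Test(&req[i], &flag, MPI_STATUS_IGNORE);
>                 if (flag) {
>                     /* ... fill buf[i] with the next task here ... */
>                     MPI_Isend(buf[i], N, MPI_DOUBLE, i + 1, 0,
>                               MPI_COMM_WORLD, &req[i]);
>                     ++sent[i];
>                 }
>             }
>         }
>         MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
>     } else {
>         double task[N];
>         for (int t = 0; t < NTASKS; ++t)
>             MPI_Recv(task, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
>                      MPI_STATUS_IGNORE);
>     }
> 
>     MPI_Finalize();
>     return 0;
> }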
> 
> Best,
>  ~Jim.
> 
> 
> On 2/24/11 6:07 AM, Toon Knapen wrote:
> In that case, I have a small question concerning design:
> Suppose task-based parallelism where one node (master) distributes
> work/tasks to 2 other nodes (slaves) by means of an MPI_Put. The master
> allocates 2 buffers locally in which it stores all the data that a
> slave needs to perform its task. So I do an MPI_Put on each of my 2
> buffers to send each buffer to a specific slave. Now I need to know when
> I can reuse one of my buffers to store the next task (which I will
> MPI_Put later on). The only way to know this is to call
> MPI_Win_complete. But since that call is blocking, if this buffer is not
> ready to be reused yet I cannot even check (in the same thread) whether
> the other buffer is already available to me again.
> I would very much appreciate input on how to solve this issue!
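> 
> To make that concrete, this is roughly the master-side pattern (a sketch
> only; window and group setup as in a normal post/start/complete/wait
> exchange, and the names are mine):
> 
> #include <mpi.h>
> 
> /* push one task buffer to each of the two slaves (ranks 1 and 2) */
> void push_tasks(double *buf_a, double *buf_b, int n,
>                 MPI_Group slaves_group, MPI_Win win)
> {
>     /* one window, both slaves in a single access epoch */
>     MPI_Win_start(slaves_group, 0, win);
>     MPI_Put(buf_a, n, MPI_DOUBLE, 1, 0, n, MPI_DOUBLE, win);
>     MPI_Put(buf_b, n, MPI_DOUBLE, 2, 0, n, MPI_DOUBLE, win);
>     /* blocks until BOTH puts are locally complete: there is no call
>      * that tells me buf_a alone is reusable while buf_b is still in
>      * flight */
>     MPI_Win_complete(win);
> }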
> thanks in advance,
> toon
> On Tue, Feb 22, 2011 at 7:21 PM, Barrett, Brian W <bwba...@sandia.gov> wrote:
> 
>    On Feb 18, 2011, at 8:59 AM, Toon Knapen wrote:
> 
>     > (Probably this issue has been discussed at length before, but
>    unfortunately I did not find any threads (on this site or anywhere
>    else) on this topic; if you can provide me with links to earlier
>    discussions, please do not hesitate)
>     >
>     > Is there an alternative to MPI_Win_complete that does not
>    'enforce completion of preceding RMA calls at the origin' (as stated
>    on page 353 of the MPI-2.2 standard)?
>     >
>     > I would like to know if I can reuse the buffer I gave to MPI_Put
>    without blocking on it: if the MPI lib is still using it, I want
>    to be able to continue (and use another buffer).
> 
> 
>    There is not.  MPI_Win_complete is the only way to finish an
>    MPI_Win_start epoch, and it always blocks until local completion
>    of all messages started during the epoch.
> 
>    Brian
> 
>    --
>      Brian W. Barrett
>      Dept. 1423: Scalable System Software
>      Sandia National Laboratories
> 


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/

