Re: [OMPI users] Memory mapped memory

2011-10-17 Thread James Dinan
Sure, this is possible and generally works, although it is not defined by the MPI standard. Regular shared memory rules apply: you may have to add additional memory consistency and/or synchronization calls, depending on your platform, to ensure that MPI sees the intended data updates. Best, ~Jim. On
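
A minimal sketch of the point above, assuming two ranks on the same node map the same file with MAP_SHARED; the file name, buffer size, and the use of a barrier as the synchronization point are illustrative and not taken from the thread.

#include <mpi.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const size_t len = 4096;
    int fd = open("/tmp/shmem_demo", O_RDWR | O_CREAT, 0600);
    (void)ftruncate(fd, len);
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if (rank == 0)
        strcpy(buf, "update written through the shared mapping");

    /* Synchronize so the reader does not race the writer; depending on the
     * platform, a memory fence or msync() may also be needed here. */
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 1)
        printf("rank 1 sees: %s\n", buf);

    munmap(buf, len);
    close(fd);
    MPI_Finalize();
    return 0;
}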

Re: [OMPI users] MPI one-sided passive synchronization.

2011-04-13 Thread James Dinan
Sudheer, Locks in MPI aren't mutexes; they mark the beginning and end of a passive-mode communication epoch. All MPI operations within an epoch logically occur concurrently and must be non-conflicting. So, what you've written below is incorrect: the get is not guaranteed to complete
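
A minimal sketch of the passive-target epoch described above: the MPI_Get is only guaranteed to have completed after MPI_Win_unlock returns, so the result buffer must not be touched inside the epoch. The window setup, target rank, and counts are illustrative, not from the original exchange.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int win_buf = rank;          /* each rank exposes its own rank id */
    int result  = -1;
    MPI_Win win;
    MPI_Win_create(&win_buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (rank == 0) {
        MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
        MPI_Get(&result, 1, MPI_INT, 1 /* target */, 0, 1, MPI_INT, win);
        /* Reading 'result' here would be erroneous: the get may not have
         * completed yet.  It is only valid after the unlock below. */
        MPI_Win_unlock(1, win);
        printf("rank 0 got %d from rank 1\n", result);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}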

Re: [OMPI users] nonblock alternative to MPI_Win_complete

2011-02-24 Thread James Dinan
Hi Toon, Can you use non-blocking send/recv? It sounds like this will give you the completion semantics you want. Best, ~Jim. On 2/24/11 6:07 AM, Toon Knapen wrote: In that case, I have a small question concerning design: Suppose task-based parallelism where one node (master) distributes
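
A sketch of the non-blocking two-sided alternative suggested above: the sender can test for completion without blocking, which is the semantic the poster wanted from a non-blocking MPI_Win_complete. Message size, tag, and the master/worker ranks are illustrative.

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double work[1024] = {0};
    MPI_Request req;

    if (rank == 0) {                     /* master distributes a task */
        MPI_Isend(work, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        int done = 0;
        while (!done) {
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
            /* ... overlap other master-side work here ... */
        }
    } else if (rank == 1) {              /* worker receives its task */
        MPI_Irecv(work, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}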

Re: [OMPI users] Using MPI_Put/Get correctly?

2010-12-16 Thread James Dinan
On 12/16/2010 08:34 AM, Jeff Squyres wrote: > Additionally, since MPI-3 is updating the semantics of the one-sided > stuff, it might be worth waiting for all those clarifications before > venturing into the MPI one-sided realm. One-sided semantics are much > more subtle and complex than two-sided
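
A sketch of active-target (fence) synchronization, the simplest of the one-sided modes the posters refer to: every MPI_Put must be bracketed by fences, and the transferred data is only valid after the closing fence. The ring pattern and buffer layout are illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int exposed = -1;                    /* window memory on every rank */
    MPI_Win win;
    MPI_Win_create(&exposed, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);               /* open the access epoch on all ranks */
    int right = (rank + 1) % size;
    MPI_Put(&rank, 1, MPI_INT, right, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);               /* close it: data is now visible */

    printf("rank %d received %d\n", rank, exposed);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}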

Re: [OMPI users] One-sided datatype errors

2010-12-14 Thread James Dinan
the problem on a single node with Open MPI 1.5 and the trunk. I have submitted a ticket with the information. https://svn.open-mpi.org/trac/ompi/ticket/2656 Rolf On 12/13/10 18:44, James Dinan wrote: Hi, I'm getting strange behavior using datatypes in a one-sided MPI_Accumulate operation. The attached

[OMPI users] One-sided datatype errors

2010-12-13 Thread James Dinan
Test * * Author: James Dinan <di...@mcs.anl.gov> * Date : December, 2010 * * This code performs N accumulates into a 2d patch of a shared array. The * array has dimensions [X, Y] and the subarray has dimensions [SUB_X, SUB_Y] * and begins at index [0, 0]. The input and output b
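
A hedged reconstruction of the kind of test described above (the original attachment is not reproduced here): accumulate a SUB_X x SUB_Y patch at index [0, 0] of an X x Y window using a subarray datatype on the target side. All dimensions and the target rank are illustrative.

#include <mpi.h>
#include <stdlib.h>

#define X 8
#define Y 8
#define SUB_X 4
#define SUB_Y 4

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    double *array = calloc(X * Y, sizeof(double));
    MPI_Win win;
    MPI_Win_create(array, X * Y * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Subarray type describing the [SUB_X, SUB_Y] patch starting at [0, 0]. */
    int sizes[2]    = {X, Y};
    int subsizes[2] = {SUB_X, SUB_Y};
    int starts[2]   = {0, 0};
    MPI_Datatype patch;
    MPI_Type_create_subarray(2, sizes, subsizes, starts,
                             MPI_ORDER_C, MPI_DOUBLE, &patch);
    MPI_Type_commit(&patch);

    double src[SUB_X * SUB_Y];
    for (int i = 0; i < SUB_X * SUB_Y; i++) src[i] = 1.0;

    MPI_Win_fence(0, win);
    /* Sum the contiguous local buffer into rank 0's window through the
     * derived datatype on the target side. */
    MPI_Accumulate(src, SUB_X * SUB_Y, MPI_DOUBLE,
                   0, 0, 1, patch, MPI_SUM, win);
    MPI_Win_fence(0, win);

    MPI_Type_free(&patch);
    MPI_Win_free(&win);
    free(array);
    MPI_Finalize();
    return 0;
}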