The attached version is modified to use passive target, which does not
require collective synchronization for remote access.

Note that I didn't compile or run this, and I don't write MPI in Fortran,
so there may be syntax errors.
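
In outline, the passive-target version looks something like the sketch
below (untested, like the attachment itself; the window setup, the choice
of Y and Z, and the array contents are placeholders for whatever your
application actually does):

-----
program var_access
  use mpi
  implicit none

  integer, parameter :: S = 10            ! local array size on each rank
  integer :: ierr, rank, nprocs, win, intsize
  integer :: Y, Z, remote_val
  integer :: A(S)
  integer(kind=MPI_ADDRESS_KIND) :: winsize, disp

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  A = rank                                ! fill A with something recognizable

  ! expose the local array A to all ranks through an RMA window
  call MPI_Type_size(MPI_INTEGER, intsize, ierr)
  winsize = int(S, MPI_ADDRESS_KIND) * intsize
  call MPI_Win_create(A, winsize, intsize, MPI_INFO_NULL, &
                      MPI_COMM_WORLD, win, ierr)

  ! stand-in for the application's computation of Y and Z
  Y = mod(rank + 1, nprocs)
  Z = mod(rank, S) + 1

  ! passive target: rank Y does not have to participate in this transfer
  disp = Z - 1                            ! window displacements are zero-based
  call MPI_Win_lock(MPI_LOCK_SHARED, Y, 0, win, ierr)
  call MPI_Get(remote_val, 1, MPI_INTEGER, Y, disp, 1, MPI_INTEGER, &
               win, ierr)
  call MPI_Win_unlock(Y, win, ierr)       ! the get is complete after unlock

  print *, 'rank', rank, ': A(', Z, ') on rank', Y, '=', remote_val

  call MPI_Win_free(win, ierr)
  call MPI_Finalize(ierr)
end program var_access
-----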

Jeff

On Thu, Jan 16, 2014 at 11:03 AM, Christoph Niethammer
<nietham...@hlrs.de> wrote:
> Hello,
>
> Find attached a minimal example - hopefully doing what you intended.
>
> Regards
> Christoph
>
> --
>
> Christoph Niethammer
> High Performance Computing Center Stuttgart (HLRS)
> Nobelstrasse 19
> 70569 Stuttgart
>
> Tel: ++49(0)711-685-87203
> email: nietham...@hlrs.de
> http://www.hlrs.de/people/niethammer
>
>
>
> ----- Original Message -----
> From: "Pradeep Jha" <prad...@ccs.engg.nagoya-u.ac.jp>
> To: "Open MPI Users" <us...@open-mpi.org>
> Sent: Friday, January 10, 2014 10:23:40
> Subject: Re: [OMPI users] Calling a variable from another processor
>
> Thanks for your responses. I am still not able to figure it out, so I will
> simplify my problem statement further. Can someone please help me with
> Fortran 90 code for it?
>
>
> 1) I have N processors, each with an array A of size S.
> 2) On any random processor (say rank X), I calculate two integer values,
> Y and Z (0 <= Y < N and 0 < Z <= S).
> 3) On processor X, I want to get the value of A(Z) from processor Y.
>
> This operation will happen in parallel on every processor. Can anyone
> please help me with this?
>
> 2014/1/9 Jeff Hammond <jeff.scie...@gmail.com>
>
>
> One-sided is quite simple to understand. It is like file I/O: you
> read/write (get/put) to a memory object. If you want to make it hard to
> screw up, use passive target and wrap your calls in lock/unlock so every
> operation is globally visible where it's called.
>
> I've never deadlocked RMA, while p2p is easy to hang for nontrivial
> patterns unless you do only nonblocking calls plus waitall.
>
> If one finds MPI too hard to learn, there are both GA/ARMCI and OpenSHMEM
> implementations over MPI-3 already (I wrote both...).
>
> The bigger issue is that Open MPI doesn't support MPI-3 RMA, just the MPI-2
> RMA stuff, and even then, datatypes are broken with RMA. Both ARMCI-MPI3
> and OSHMPI (OpenSHMEM over MPI-3) require a late-model MPICH derivative to
> work, but these are readily available on every platform normal people use
> (BGQ is the only system missing, and that will be resolved soon). I've run
> MPI-3 on my Mac (MPICH), clusters (MVAPICH), Cray (CrayMPI), and SGI (MPICH).
>
> Best,
>
> Jeff
>
> Sent from my iPhone
>
>
>
>> On Jan 9, 2014, at 5:39 AM, "Jeff Squyres (jsquyres)" <jsquy...@cisco.com>
>> wrote:
>>
>> MPI one-sided stuff is actually pretty complicated; I wouldn't suggest it
>> for a beginner (I don't even recommend it for many MPI experts ;-) ).
>>
>> Why not look at the MPI_SOURCE in the status that you got back from the
>> MPI_RECV? In Fortran, it would look something like this (typed off the top
>> of my head; forgive typos):
>>
>> -----
>> integer, dimension(MPI_STATUS_SIZE) :: status
>> ...
>> call MPI_Recv(buffer, count, MPI_INTEGER, MPI_ANY_SOURCE, tag, &
>>               MPI_COMM_WORLD, status, ierr)
>> -----
>>
>> The rank of the sender will be in status(MPI_SOURCE).
>>
>>
>>> On Jan 9, 2014, at 6:29 AM, Christoph Niethammer <nietham...@hlrs.de>
>>> wrote:
>>>
>>> Hello,
>>>
>>> I suggest you have a look at the MPI one-sided functionality (Chapter 11
>>> of the MPI 3.0 specification).
>>> Create a window to allow the other processes to access the arrays A
>>> directly via MPI_Get/MPI_Put.
>>> Be aware of synchronization, which you have to implement via MPI_Win_fence
>>> or manual locking; a fence-based sketch follows.
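>>>
>>> For example (untested sketch; it assumes an integer window win created
>>> over A, a precomputed target rank Y, and disp set to the zero-based
>>> displacement of A(Z) within the window):
>>>
>>> -----
>>> ! active target: every rank in the window's communicator must fence
>>> call MPI_Win_fence(0, win, ierr)
>>> call MPI_Get(remote_val, 1, MPI_INTEGER, Y, disp, 1, MPI_INTEGER, &
>>>              win, ierr)
>>> call MPI_Win_fence(0, win, ierr)   ! remote_val is usable after this fence
>>> -----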
>>>
>>> Regards
>>> Christoph
>>>
>>> --
>>>
>>> Christoph Niethammer
>>> High Performance Computing Center Stuttgart (HLRS)
>>> Nobelstrasse 19
>>> 70569 Stuttgart
>>>
>>> Tel: ++49(0)711-685-87203
>>> email: nietham...@hlrs.de
>>> http://www.hlrs.de/people/niethammer
>>>
>>>
>>>
>>> ----- Original Message -----
>>> From: "Pradeep Jha" <prad...@ccs.engg.nagoya-u.ac.jp>
>>> To: "Open MPI Users" <us...@open-mpi.org>
>>> Sent: Thursday, January 9, 2014 12:10:51
>>> Subject: [OMPI users] Calling a variable from another processor
>>>
>>> I am writing a parallel program in Fortran 77. I have the following
>>> problem:
>>> 1) I have N processors.
>>> 2) Each processor contains an array A of size S.
>>> 3) Using some function, on every processor (say rank X), I calculate the
>>> values of two integers Y and Z, where Z < S (the values of Y and Z are
>>> different on every processor).
>>> 4) I want to get the value of A(Z) from processor Y to processor X.
>>>
>>> I thought of first sending the numerical value X from processor X to
>>> processor Y and then sending A(Z) from processor Y back to processor X.
>>> But that is not possible, as processor Y does not know the numerical
>>> value X, and so it won't know which processor to receive it from.
>>>
>>> I have tried, but I haven't been able to come up with any code that
>>> implements this, so I am not posting any code.
>>>
>>> Any suggestions?
>>>
>>
>>
>> --
>> Jeff Squyres
>> jsquy...@cisco.com
>> For corporate legal information go to: 
>> http://www.cisco.com/web/about/doing_business/legal/cri/
>>



-- 
Jeff Hammond
jeff.scie...@gmail.com

Attachment: var_access.f95
Description: Binary data
