Alfio --

We just released Open MPI v2.0.0, with lots of MPI RMA fixes.  Would you mind 
testing there?


> On Jul 12, 2016, at 1:33 PM, Alfio Lazzaro <alfio.lazz...@gmail.com> wrote:
> 
> Dear OpenMPI developers,
> we found strange behavior when using MPI RMA passive target synchronization 
> with Open MPI (versions 1.8.3 and 1.10.2). We don't see any problem when 
> using MPICH.
> 
> This is a small example of what we want to do:
> 
> ===================
> program rma_openmpi
>   use mpi
>   integer :: nproc, rank, ierr
>   integer :: win, request, size
>   INTEGER(kind=mpi_address_kind) :: size_aint, disp_aint     
>   integer, DIMENSION(:), ALLOCATABLE :: meta, recv
> 
>   call MPI_INIT(ierr)
>   call MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, ierr)
>   call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
> 
>   size=100
>   ALLOCATE(meta(size),recv(size))
>   meta(:) = rank
>   recv(:) = -1
>   size_aint = size*4
> 
>   call MPI_WIN_CREATE(meta,size_aint,4,MPI_INFO_NULL,MPI_COMM_WORLD,win,ierr)
>   call MPI_WIN_LOCK_ALL(MPI_MODE_NOCHECK, win, ierr)
> 
>   disp_aint = 0
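>   ! Request-based get of the neighbor's meta array into recv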
>   CALL MPI_RGET(recv,size,MPI_INTEGER,MOD(rank+1,2),disp_aint,&
>        size,MPI_INTEGER,win,request,ierr)
>   IF (ierr .NE. 0) STOP "error mpi_rget"
> 
>   CALL MPI_WAIT(request,MPI_STATUS_IGNORE,ierr)
>   IF (ierr .NE. 0) STOP "error mpi_wait"
> 
>   ! Uncommenting the following flush makes the example work:
>   ! call MPI_Win_flush_all(win,ierr)
> 
>   print *,rank,"=",recv(1)
> 
>   call MPI_WIN_UNLOCK_ALL(win, ierr)
>   call MPI_WIN_FREE(win,ierr)
> 
>   DEALLOCATE(meta)
>   call MPI_FINALIZE(ierr)
> end program rma_openmpi
> 
> ===================
> 
> You can run it with 2 ranks. 
> As you can see, it is a simple MPI_RGET from the neighbor rank. However, it 
> seems that the communication doesn't complete after the MPI_WAIT; indeed we 
> get:
> 
>           0 =          -1
>           1 =          -1
> while it should be:
> 
>            0 =           1
>            1 =           0
> 
> The code works as we want if we uncomment the flush operation, but we would 
> expect MPI_WAIT on the request alone to give the same behavior.
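> 
> For clarity, this is how the synchronization part looks with the workaround 
> applied (a sketch of only the relevant lines; everything else is as in the 
> program above):
> 
>   CALL MPI_RGET(recv,size,MPI_INTEGER,MOD(rank+1,2),disp_aint,&
>        size,MPI_INTEGER,win,request,ierr)
>   CALL MPI_WAIT(request,MPI_STATUS_IGNORE,ierr)
>   ! Workaround: the explicit flush completes the get at the origin,
>   ! even though MPI_WAIT on the request should already do so.
>   call MPI_Win_flush_all(win,ierr)
>   print *,rank,"=",recv(1)   ! now prints the neighbor's rank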
> 
> Thanks for your help!
> 
> Best regards,
> 
> Alfio
> 


-- 
Jeff Squyres
jsquy...@cisco.com
