Dear all,

I'm trying to use MPI_ACCUMULATE with a user-defined datatype.
Basically, I want to be able to accumulate a subset of a
three-dimensional array, but since the data is non-contiguous in memory,
it requires defining a new type with MPI_TYPE_VECTOR and MPI_TYPE_HVECTOR.
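For concreteness, here is a quick sketch (in plain Python, just to illustrate the index arithmetic; the helper name is mine, not an MPI call) of which flat offsets a vector type with a given count, blocklength, and stride selects from the start of a column-major array:

```python
# Offsets selected by a strided vector type such as
# MPI_TYPE_VECTOR(count, blocklength, stride): `count` blocks of
# `blocklength` consecutive elements, block starts `stride` elements apart.
# Purely illustrative helper; not an MPI API.
def vector_offsets(count, blocklength, stride):
    return [b * stride + i for b in range(count) for i in range(blocklength)]

# A 2x2 corner of a 10x10 Fortran (column-major) array: two blocks of two
# consecutive reals, starting one column (10 reals) apart.
print(vector_offsets(2, 2, 10))  # -> [0, 1, 10, 11], i.e. array(1:2,1:2)
```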

I wrote a simple program that GETs and ACCUMULATEs a subset of a 2D array,
and it fails at the ACCUMULATE step. When I run the following with two
processes

#####################################################################
program test_prog

  implicit none
  include 'mpif.h'

  integer :: ierr, nprocs, rank
  integer :: MPI_new
  integer :: win

  integer(kind=MPI_ADDRESS_KIND) :: lb, size, sizeofreal

  real :: array(10,10)
  real :: data(2,2)

  array = 2.
  data = 1.

  ! Start up MPI
  call MPI_INIT(ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! Create new type: 2 blocks of 2 reals, one column (10 reals) apart
  call MPI_TYPE_VECTOR( 2, 2, 10, MPI_REAL8, MPI_new, ierr )
  call MPI_TYPE_COMMIT( MPI_new, ierr )

  ! Create window spanning the whole 10x10 array
  call MPI_TYPE_GET_EXTENT( MPI_REAL8, lb, sizeofreal, ierr )
  size = 10*10*sizeofreal
  call MPI_WIN_CREATE( array, size, sizeofreal, MPI_INFO_NULL, &
       & MPI_COMM_WORLD, win, ierr )
  call MPI_WIN_FENCE( 0, win, ierr )

  ! Get data (target_disp must have kind MPI_ADDRESS_KIND)
  if ( rank == 1 ) then
    call MPI_WIN_LOCK( MPI_LOCK_SHARED, 0, 0, win, ierr )
    call MPI_GET( data, 4, MPI_REAL8, 0, 0_MPI_ADDRESS_KIND, 1, MPI_new, &
         & win, ierr )
    call MPI_WIN_UNLOCK( 0, win, ierr )
    print *, data
  end if

  ! Accumulate data
  if ( rank == 1 ) then
    call MPI_WIN_LOCK( MPI_LOCK_SHARED, 0, 0, win, ierr )
    call MPI_ACCUMULATE( data, 4, MPI_REAL8, 0, 0_MPI_ADDRESS_KIND, 1, &
         & MPI_new, MPI_SUM, win, ierr )
    call MPI_WIN_UNLOCK( 0, win, ierr )
  end if

  ! Print data
  call MPI_BARRIER( MPI_COMM_WORLD, ierr )
  if ( rank == 0 ) print *, array(1:5,1)

  ! Finalize
  call MPI_TYPE_FREE( MPI_new, ierr )
  call MPI_WIN_FREE( win, ierr )
  call MPI_FINALIZE( ierr )

end program test_prog
#####################################################################

I get the following error:
[yra128:27896] *** An error occurred in MPI_Accumulate
[yra128:27896] *** on win
[yra128:27896] *** MPI_ERR_ARG: invalid argument of some other kind
[yra128:27896] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)

Looking at the Open MPI source code, this error should only be raised when
the primitive datatypes of the origin and target don't match, yet the same
user-defined datatype works fine in MPI_GET.

I'm running on the Yellowrail cluster at LANL with Open MPI 1.3.3 (over
InfiniBand, if I'm not mistaken) and the Intel 10.0.023 Fortran compiler.
I doubt this is a make/build issue, since the HPC folks here set
everything up. I know MPI_ACCUMULATE with a user-defined datatype was not
supported in versions before 1.3, but it should work in 1.3.3.
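In case it's relevant, the fallback I may resort to is decomposing the derived type into its contiguous blocks and issuing one accumulate per block with the predefined MPI_REAL8 type. A rough sketch of the index arithmetic (in plain Python, simulating MPI_SUM on flat buffers; the function and names are mine, purely illustrative):

```python
# Simulate accumulating a small contiguous origin buffer into strided
# blocks of a flattened target array, one contiguous block at a time,
# as one would with per-block MPI_ACCUMULATE calls using MPI_REAL8.
# Purely illustrative; these are not MPI calls.
def accumulate_blocks(origin, target, count, blocklength, stride):
    """Add each contiguous block of `origin` into `target` at strided offsets."""
    for b in range(count):
        for i in range(blocklength):
            target[b * stride + i] += origin[b * blocklength + i]

target = [2.0] * 100           # flattened 10x10 array, all 2.0
origin = [1.0, 1.0, 1.0, 1.0]  # the 2x2 "data" buffer
accumulate_blocks(origin, target, count=2, blocklength=2, stride=10)
print(target[:2], target[10:12])  # -> [3.0, 3.0] [3.0, 3.0]
```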

Any help or insights would be greatly appreciated.

Best regards,
Paul Romano

P.S. I'm compiling with -r8 so that my default reals are 8 bytes wide,
matching MPI_REAL8.
