Hi All,

I would like to gather an array A whose size differs from one processor to
another, combining the pieces into a single array globA (since I use
MPI_Allgatherv, every process, not just the root, should receive the combined
result). I know MPI_Allgatherv can do this, but some implementation details
are still not clear to me, and I would be very grateful for any suggestions
and comments. The piece of code is as follows (I am not sure it is completely
correct):


!...Calculate the total size of globA. PROCASize(i) holds the size of
!   array A on the processor with rank i-1.

        allocate(PROCASize(numprocs))
        PROCASize = 0                       ! zero everywhere so MPI_SUM works
        PROCASize(myidf+1) = Asize          ! assuming myidf is the 0-based rank
        call mpi_allreduce(MPI_IN_PLACE, PROCASize, numprocs, &
                           mpi_integer, mpi_sum, MPI_COMM_WORLD, ierr)
        globASize = sum(PROCASize)

!...Calculate the receive counts (RECSASize) and displacements (DISP)
!   for MPI_Allgatherv.
        allocate(RECSASize(0:numprocs-1))
        allocate(DISP(0:numprocs-1))
        do i = 1, numprocs
           RECSASize(i-1) = PROCASize(i)    ! count received from rank i-1
        enddo
        call mpi_type_extent(mpi_integer, extent, ierr)
        do i = 1, numprocs
           DISP(i-1) = 1 + (i-1)*RECSASize(i-1)*extent   ! not sure, see question 2
        enddo

!...Allocate the receive buffer globA and do the gather.
        allocate(globA(globASize*extent))   ! or just globASize? see question 1
        call mpi_allgatherv(A, Asize, MPI_INTEGER, &
                            globA, RECSASize, DISP, MPI_INTEGER, &
                            MPI_COMM_WORLD, ierr)
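
To make the questions below concrete, here is my current best guess at a
corrected version, reusing RECSASize and DISP as allocated above. It gathers
the per-rank sizes directly with MPI_Allgather instead of the allreduce
trick, and treats DISP as cumulative element offsets; please tell me if this
guess is wrong:

!...collect every rank's element count directly
        call mpi_allgather(Asize, 1, MPI_INTEGER, &
                           RECSASize, 1, MPI_INTEGER, &
                           MPI_COMM_WORLD, ierr)

!...guess: displacements are cumulative element offsets, starting at 0
        DISP(0) = 0
        do i = 1, numprocs-1
           DISP(i) = DISP(i-1) + RECSASize(i-1)
        enddo

!...guess: the receive buffer is sized in elements, with no extent factor
        globASize = sum(RECSASize)
        allocate(globA(globASize))

        call mpi_allgatherv(A, Asize, MPI_INTEGER, &
                            globA, RECSASize, DISP, MPI_INTEGER, &
                            MPI_COMM_WORLD, ierr)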

My Questions:

1. How should I allocate globA, i.e. how large should the receive buffer be?
Should I use globASize*extent or just globASize?
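
For example, if globASize = 9 integers and mpi_type_extent reports 4 bytes,
should globA have 9 elements or 36? My guess is 9, on the assumption that
the receive counts and displacements are in units of elements rather than
bytes, but the extent call in my code suggests otherwise.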


2. What do the displacements DISP(:) stand for? Do they simply follow the
order of the ranks, like 1, 2, 3, ..., which is what my formula
DISP(i-1) = 1 + (i-1)*RECSASize(i-1)*extent assumes, or are they the
positions within globA at which the data from the different processors will
be stored?
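
To make this concrete: suppose three processes contribute 2, 3 and 4
elements. Under the second interpretation I would expect DISP = (/0, 2, 5/),
i.e. cumulative element offsets into globA; under the first, something quite
different. Which of the two does MPI_Allgatherv expect?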

3. Should the count and displacement arrays run from 0 to numprocs-1, or
from 1 to numprocs? This may be important when they are passed as arguments
to the mpi_allgatherv subroutine.
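
My guess is that the bounds themselves do not matter, because Fortran passes
the address of the first element in both cases, e.g.:

        integer, allocatable :: DISP(:)
        allocate(DISP(0:numprocs-1))   ! 0-based bounds
        ! versus
        allocate(DISP(numprocs))       ! 1-based bounds, same contiguous storage

so MPI should see an identical buffer either way, but I would like
confirmation.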


These questions may be too simple for MPI professionals, but I do not have
much experience in this area, so I would sincerely appreciate any comments
and suggestions. Thank you in advance!


regards,
Huangwei
