On Aug 31, 2013, at 14:56, Huangwei <hz...@cam.ac.uk> wrote:

> Hi All,
>  
> I would like to send an array A, which has a different size on each 
> processor. The root then receives these As and puts them into another array, 
> globA. I know MPI_allgatherv can do this. However, some implementation 
> issues are still not clear to me, and I would be grateful for any 
> suggestions and comments. The piece of code is as follows (I am not sure if 
> it is completely correct):
>  
>  
> !...calculate the total size of globA; PROCASize(myidf) is the size of 
> array A on each processor.
>  
>         allocate(PROCASize(numprocs))
>         PROCASize(myidf) = Asize
>         call mpi_allreduce(PROCSize, PROCSize, numprocs, mpi_integer, mpi_sum, MPI_COMM_WORLD, ierr)
>         globAsize = sum(PROCAsize)
>  
> !...calculate the RECS and DISP for MPI_allgatherv
>         allocate(RECSASize(0:numprocs-1))
>         allocate(DISP(0:numprocs-1))
>         do i=1,numprocs
>            RECSASize(i-1) = PROCASize(i)
>         enddo
>         call mpi_type_extent(mpi_integer, extent, ierr)
>         do i=1,numprocs
>              DISP(i-1) = 1 + (i-1)*RECSASIze(i-1)*extent
>         enddo
>  
> !...allocate the size of the array globA
>         allocate(globA(globASize*extent))
>         call mpi_allgatherv(A, ASize, MPI_INTEGER, globA, RECSASIze, DISP, MPI_INTEGER, MPI_COMM_WORLD, ierr)
>  
> My Questions:
>  
> 1. How should I allocate globA, i.e. the receive buffer's size? Should I 
> use globASize*extent or just globASize?

I don't understand what globASize is supposed to be, as you do the reduction 
on PROCSize but then sum PROCAsize.

Anyway, you should always allocate the memory for a collective based on the 
total number of elements to receive times the extent of each element. To be 
even more precise, supposing that you computed the DISP array correctly, you 
should allocate globA as DISP(numprocs-1) + RECSASIze(numprocs-1).
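
Something along these lines should work (a minimal, untested sketch that 
reuses your variable names; it replaces the MPI_ALLREDUCE with an 
MPI_ALLGATHER to exchange the per-rank sizes, and note that the 
displacements given to MPI_ALLGATHERV are counted in elements of the 
receive datatype starting from 0, not in bytes, so there is no 
multiplication by the extent):

    program gatherv_sketch
      use mpi
      implicit none
      integer :: ierr, myid, numprocs, Asize, i
      integer, allocatable :: A(:), globA(:), RECSASize(:), DISP(:)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)

      Asize = myid + 1              ! every rank contributes a different size
      allocate(A(Asize))
      A = myid

      ! every rank learns the contribution size of every other rank
      allocate(RECSASize(0:numprocs-1), DISP(0:numprocs-1))
      call MPI_ALLGATHER(Asize, 1, MPI_INTEGER, RECSASize, 1, MPI_INTEGER, &
                         MPI_COMM_WORLD, ierr)

      ! displacement of rank i = sum of the counts of all ranks before it
      DISP(0) = 0
      do i = 1, numprocs-1
         DISP(i) = DISP(i-1) + RECSASize(i-1)
      end do

      ! total receive size = displacement of the last block + its count
      allocate(globA(DISP(numprocs-1) + RECSASize(numprocs-1)))

      call MPI_ALLGATHERV(A, Asize, MPI_INTEGER, globA, RECSASize, DISP, &
                          MPI_INTEGER, MPI_COMM_WORLD, ierr)

      call MPI_FINALIZE(ierr)
    end program gatherv_sketch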

>  
> 2. Regarding the displacements in globA, i.e. DISP(:): does it stand for 
> the order of an array, like 1, 2, 3, ...? This corresponds to DISP(i-1) = 
> 1 + (i-1)*RECSASIze(i-1)*extent. Or are this array's elements the addresses 
> at which the data from the different processors will be stored in globA?

These are the displacements from the beginning of the array at which the 
data from each peer is stored. The index into this array is the rank of the 
peer process in the communicator.
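
For instance (numbers invented for illustration): with four processes 
contributing 4, 2, 7, and 3 integers, the counts array is (4, 2, 7, 3) and 
the displacements, in units of MPI_INTEGER, are (0, 4, 6, 13); the 
contribution of rank 2 then occupies globA(7:13) in 1-based Fortran 
indexing.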

>  
> 3. Should the arrays run from 0 to numprocs-1, or from 1 to numprocs? This 
> may be important when they are passed as arguments to the mpi_allgatherv 
> subroutine.

It doesn't matter whether you allocate it as (0:numprocs-1) or simply as 
(numprocs); the compiler will do the right thing when setting up the call 
with the array.
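
To illustrate (a trivial sketch; the names are made up): both declarations 
below describe the same contiguous block of numprocs integers, and MPI only 
ever sees the address of the first element, so the lower bound is purely a 
matter of taste. A 0-based array is convenient because the index then 
matches the MPI rank.

    integer, allocatable :: cnt0(:), cnt1(:)
    allocate(cnt0(0:numprocs-1))   ! 0-based: cnt0(rank) is rank's count
    allocate(cnt1(numprocs))       ! 1-based: cnt1(rank+1) is rank's count
    ! Either array can be passed as the counts argument of MPI_ALLGATHERV;
    ! Fortran passes the address of the first element, so MPI cannot tell
    ! the two declarations apart.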

  George.

>  
>  
> These questions may be too simple for MPI professionals, but I do not have 
> much experience on this. Thus I am sincerely eager to get some comments and 
> suggestions from you. Thank you in advance!
> 
> 
> regards,
> Huangwei
> 
>  
> 
>  
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
