Hi Jeff

Thank you so much for taking the time to read and answer my whole blurb.

Yes, the Matlab-ish multidimensional arrays of Fortran90 provide
convenient ways to refer to array sections,
to avoid explicit address arithmetic,
and so on, but they can be quite annoying when all you want is to pass the
array address to a subprogram and forget about dimensionality checks.

As for MPI_STARTALL, MPI_WAITALL, and any function with
an array-of-requests argument (and perhaps an array of statuses as well),
I am certainly happy with the
"pass an assumed-size array" ( req(*) ) solution,
although purists may frown at my code.
This solution also avoids any possible overhead
that looping over MPI_START may incur.
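
For the record, here is a minimal sketch of that workaround (the subroutine
and variable names are just illustrative):

   ! Minimal sketch: an assumed-size, rank-1 dummy argument keeps the
   ! strict Fortran90 interface checking of "use mpi" happy.
   subroutine start_requests(nreq, req)
      use mpi
      implicit none
      integer, intent(in) :: nreq
      integer             :: req(*)   ! assumed-size array of request handles
      integer             :: ierr
      call MPI_STARTALL(nreq, req, ierr)
   end subroutine start_requests

The caller can then hand over, e.g., the first element of its
multi-dimensional request array (plain old sequence association),
and everything compiles.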

Just a note about the MPI documentation and the Fortran90 bindings.
It would be helpful if the MPI function man pages, and
perhaps any new edition or third volume of the "MPI Complete Reference",
or maybe the MPI-3 standard, were clearer about the subroutine
arguments in the Fortran90 bindings.
Currently they seem to deal only with Fortran77:
they merely mention that a certain argument is an array of some
basic/MPI type, and even use the assumed-size array notation,
e.g. "ARRAY_OF_REQUESTS(*)".
Nothing is said about array rank/dimensionality
restrictions or capabilities,
which would be helpful information for those of us
who are chained to Fortran90 like Prometheus to his cliff.

Many thanks again for your help.

Cheers,
Gus Correa

Jeff Squyres wrote:
On Oct 15, 2010, at 12:17 AM, Gus Correa wrote:

I am having trouble compiling code with MPI_STARTALL using
OpenMPI 1.4.2 mpif90 built with gcc (4.1.2) and Intel ifort (10.1.017),
when the array of requests is multi-dimensional.

Right-o -- it's the strict bindings checking in F90 that's biting you.

It gives me this error message:

**************************
fortcom: Error: mpiwrap_mod.F90, line 478: There is no matching specific 
subroutine for this generic subroutine call.   [MPI_STARTALL]
   call MPI_STARTALL(nreq,req,ierr)
---------^
**************************

However, if I replace MPI_STARTALL by a loop that calls
MPI_START repeatedly, the compilation succeeds.
I wonder whether the serialization imposed by the loop may
have some performance impact,
or whether MPI_STARTALL just implements the same type of loop.

MPI_STARTALL does do the same type of loop -- you do gain a bit of performance 
(depending on how big your loop is) just because there's one less function call 
traversal.
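
For reference, the loop replacement looks roughly like this (the array and
index names are just illustrative):

   ! Hypothetical sketch: start the persistent requests one at a time.
   ! MPI_START takes a single request handle, so the rank of the
   ! surrounding array never enters the picture.
   do i = 1, nreq
      call MPI_START(req(i,1), ierr)
   end do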

Another workaround is to declare my array of requests
as a one-dimensional assumed-size array inside my subroutine.

The problem seems to be that MPI_STARTALL doesn't handle multi-dimensional 
arrays of requests.

I can live with either workaround above,
but is this supposed to be so?

Based on my understanding of Fortran, yes.

I poked around in the OpenMPI code in ompi/mpi/f90/scripts
and found that several OpenMPI Fortran90 subroutines
have code to handle arrays up to rank 7 (the Fortran90 maximum),
mostly for the send and receive buffers.

Right. Those are different buffers, though -- those are the choice buffers for sending and receiving. OMPI just gets the starting pointer and iterates through memory according to the associated count and MPI datatype.

For requests, it's an array of structures (i.e., the fortran integers are converted to their C struct counterparts). And there's a defined shape/size/whatever-the-right-fortran-term-is for that.

I guess it would be nice if all OpenMPI
subroutines in the Fortran90 bindings accepted
arrays of rank up to 7 for all of their array dummy arguments.
Assuming this doesn't violate the MPI standard, of course.
This would allow more flexibility when writing MPI programs
in Fortran90.

From my understanding of Fortran, that would violate the MPI spec.

You could, I think, use an array section when you call MPI_STARTALL that would give you a 1D array of integers, right?
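
For example, something along these lines (a sketch, assuming the requests
for one exchange occupy a column of a rank-2 array reqs):

   ! Hypothetical sketch: a rank-1 column section matches the F90 binding.
   call MPI_STARTALL(nreq, reqs(1:nreq, k), ierr)

A column of a Fortran array is contiguous, so the compiler should not even
need to make a temporary copy of the section.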

