Hello,

The benefits of 'using' the MPI module over 'including' MPIF.H are clear because of the sanity checks it performs. However, I recently did some testing with the module that seems to uncover a possible bug or design flaw in Open MPI's handling of arrays in user-defined data types. Attached are two example programs that build an MPI_TYPE_CREATE_HINDEXED data type. In each, the 'array_of_blocklengths' and 'array_of_displacements' arrays are based on a 3-dimensional Cartesian mapping of processor space. In program 'threedarrays.f90' the arrays are declared and constructed as 3-dimensional. In program 'onedarrays.f90' the arrays are 1-dimensional and constructed by explicitly calculating a single index equivalent to the 3 indices. Otherwise the programs are identical.
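
For reference, here is a minimal sketch in the spirit of the two attached programs (the array extents, fill pattern, and displacement values are made up for illustration; only the rank of the two arrays differs between the versions):

  program hindexed_sketch
    use mpi
    implicit none
    integer, parameter :: nx = 2, ny = 2, nz = 2
    ! onedarrays.f90 style: rank-1 arrays, single index computed from (i,j,k)
    integer                        :: array_of_blocklengths(nx*ny*nz)
    integer(kind=mpi_address_kind) :: array_of_displacements(nx*ny*nz)
    ! threedarrays.f90 style would instead declare
    !   integer                        :: array_of_blocklengths(nx,ny,nz)
    !   integer(kind=mpi_address_kind) :: array_of_displacements(nx,ny,nz)
    ! and fill them with three subscripts; that version fails to compile
    ! against Open MPI's 'mpi' module, as shown below.
    integer :: lenidx, newtype, realsize, ierr, i, j, k, n

    call mpi_init(ierr)
    call mpi_type_size(mpi_real, realsize, ierr)

    lenidx = nx*ny*nz
    do k = 1, nz
      do j = 1, ny
        do i = 1, nx
          n = i + (j-1)*nx + (k-1)*nx*ny          ! explicit 1-d index
          array_of_blocklengths(n) = 1
          array_of_displacements(n) = int((n-1)*realsize, mpi_address_kind)
        end do
      end do
    end do

    call mpi_type_create_hindexed(lenidx, array_of_blocklengths, &
                                  array_of_displacements, mpi_real, newtype, ierr)
    call mpi_type_commit(newtype, ierr)
    call mpi_type_free(newtype, ierr)
    call mpi_finalize(ierr)
  end program hindexed_sketch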

Compiling 'threedarrays.f90' yields the familiar error message:

threedarrays.f90(61): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_TYPE_CREATE_HINDEXED]
call mpi_type_create_hindexed(lenidx,array_of_blocklengths,array_of_displacements, &
-----------^
compilation aborted for threedarrays.f90 (code 1)


Compiling 'onedarrays.f90', on the other hand, succeeds.

I don't see anywhere in the MPI documentation that these arrays need to be one-dimensional, but apparently the parameter checking done by the Open MPI 'mpi' module expects this. Is this by design, or an oversight? BTW, the Intel MPI module does not flag this situation, so apparently it accepts multidimensional arrays.

T. Rosmond


Attachment: threedarrays.f90.bz2
Description: application/bzip

Attachment: onedarrays.f90.bz2
Description: application/bzip
