Props for that testio script.  I think you win the award for "easiest-to-reproduce 
test case."  :-)

I noticed that some of the lines went over 72 columns, so I renamed the file to 
x.f90 (free form), changed all the comments from "c" to "!", and joined the two 
&-split lines.  The error about the implicit type of lenr went away, but when I 
then enabled stronger type checking by using "use mpi" instead of 
"include 'mpif.h'", I got the following:

x.f90:99.77:

    call mpi_type_indexed(lenij,ijlena,ijdisp,mpi_real,ij_vector_type,ierr)
                                                                           1  
Error: There is no specific subroutine for the generic 'mpi_type_indexed' at (1)
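
For reference, the conversion looks roughly like the skeleton below.  This is 
only a sketch of the changes (free-form source, "!" comments, "&" 
continuations, and "use mpi" for compile-time argument checking), not the 
actual testio source:

! Sketch of the free-form conversion; not the real program.
program testio_sketch
  use mpi              ! replaces: include 'mpif.h'  (adds explicit interfaces)
  implicit none
  integer :: ierr, rank

  call mpi_init(ierr)
  call mpi_comm_rank(mpi_comm_world, rank, ierr)
  if (rank == 0) print *, 'skeleton only; real work omitted'

  ! Long calls are now continued with "&" instead of a column-6 marker, e.g.:
  ! call mpi_type_indexed(lenij, ijlena, ijdisp, mpi_real, &
  !                       ij_vector_type, ierr)

  call mpi_finalize(ierr)
end program testio_sketch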

I looked at our mpi F90 module and saw the following:

interface MPI_Type_indexed
  subroutine MPI_Type_indexed(count, array_of_blocklengths, &
                              array_of_displacements, oldtype, newtype, ierr)
    integer, intent(in) :: count
    integer, dimension(*), intent(in) :: array_of_blocklengths
    integer, dimension(*), intent(in) :: array_of_displacements
    integer, intent(in) :: oldtype
    integer, intent(out) :: newtype
    integer, intent(out) :: ierr
  end subroutine MPI_Type_indexed
end interface

I don't quite grok the syntax of the "allocatable" declaration used for ijdisp, 
so that might be the problem here...?

Regardless, I'm not entirely sure whether the original problem is the >72 
character lines, and once that is fixed, I'm not sure how the allocatable stuff 
fits in...  (I'm not enough of a Fortran programmer to know.)
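
That said, as far as I know an allocated ALLOCATABLE array is a perfectly 
legal actual argument for a "dimension(*)" dummy, so the allocatable attribute 
by itself shouldn't break the generic match; my guess is that one of the 
actual arguments ends up with a different type, kind, or rank than the dummies 
above.  Here's a minimal sketch (made-up sizes and values, not the testio 
source) of declarations and a call that do match that interface:

! Sketch only: default INTEGER allocatable arrays matching the explicit
! interface for MPI_Type_indexed shown above.
program indexed_sketch
  use mpi
  implicit none
  integer :: lenij, ij_vector_type, ierr, i
  integer, allocatable :: ijlena(:), ijdisp(:)

  call mpi_init(ierr)

  lenij = 4                                 ! number of blocks (made up)
  allocate(ijlena(lenij), ijdisp(lenij))
  ijlena = 1                                ! one MPI_REAL per block
  ijdisp = (/ (2*i, i = 0, lenij-1) /)      ! monotonically increasing displs

  ! Every actual argument is a default INTEGER scalar or array, so this call
  ! resolves against the generic interface; ALLOCATABLE doesn't matter once
  ! the arrays are allocated.
  call mpi_type_indexed(lenij, ijlena, ijdisp, mpi_real, ij_vector_type, ierr)
  call mpi_type_commit(ij_vector_type, ierr)

  call mpi_type_free(ij_vector_type, ierr)
  call mpi_finalize(ierr)
end program indexed_sketch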

On May 10, 2011, at 7:14 PM, Tom Rosmond wrote:

> I would appreciate it if someone with MPI-IO experience would look at the
> simple Fortran program gzipped and attached to this note.  It is
> embedded in a script, so all that is necessary to run it is to execute
> 'testio' from the command line.  The program generates a small 2-D input
> array, sets up an MPI-IO environment, and writes a 2-D output array
> twice, with the only difference being the displacement arrays used to
> construct the indexed datatype.  For the first write, simple
> monotonically increasing displacements are used; for the second, the
> displacements are 'shuffled' in one dimension.  They are printed during
> the run.
> 
> For the first case the file is written properly, but for the second the
> program hangs on MPI_FILE_WRITE_AT_ALL and must be aborted manually.
> Although the program is compiled as an MPI program, I am running it on a
> single processor, which makes the problem more puzzling.
> 
> The program should be relatively self-explanatory, but if more
> information is needed, please ask.  I am on an 8-core Xeon-based Dell
> workstation running Scientific Linux 5.5, Intel Fortran 12.0.3, and
> Open MPI 1.5.3.  I have also attached the output from 'ompi_info'.
> 
> T. Rosmond
> 
> 
> <testio.gz><info_ompi.gz>
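
(For anyone following along: my reading of the attached program is that it 
boils down to roughly the sequence below.  The names, sizes, and file name are 
made up for illustration; this is a paraphrase, not the testio source.)

! Sketch of the MPI-IO pattern described above: build an indexed datatype,
! set it as the file view, then do a collective write.
program mpiio_sketch
  use mpi
  implicit none
  integer, parameter :: nblk = 4
  integer :: fh, ij_vector_type, ierr, i
  integer :: ijlena(nblk), ijdisp(nblk)
  real    :: buf(nblk)
  integer(kind=mpi_offset_kind) :: disp

  call mpi_init(ierr)

  ijlena = 1                               ! one real per block
  ijdisp = (/ (2*i, i = 0, nblk-1) /)      ! case 1: increasing displacements
  buf    = 1.0

  call mpi_type_indexed(nblk, ijlena, ijdisp, mpi_real, ij_vector_type, ierr)
  call mpi_type_commit(ij_vector_type, ierr)

  call mpi_file_open(mpi_comm_world, 'sketch.out', &
                     mpi_mode_wronly + mpi_mode_create, mpi_info_null, fh, ierr)
  disp = 0
  call mpi_file_set_view(fh, disp, mpi_real, ij_vector_type, 'native', &
                         mpi_info_null, ierr)

  ! The report is that with "shuffled" (non-monotonic) ijdisp the program
  ! hangs in the collective write below.
  call mpi_file_write_at_all(fh, disp, buf, nblk, mpi_real, &
                             mpi_status_ignore, ierr)

  call mpi_file_close(fh, ierr)
  call mpi_type_free(ij_vector_type, ierr)
  call mpi_finalize(ierr)
end program mpiio_sketch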


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/

