Thanks for looking at my problem.  It sounds like you did reproduce it.
I have added some comments below.

On Thu, 2011-05-19 at 22:30 -0400, Jeff Squyres wrote:
> Props for that testio script.  I think you win the award for "most easy to 
> reproduce test case."  :-)
> 
> I notice that some of the lines went over 72 columns, so I renamed the file 
> x.f90 and changed all the comments from "c" to "!" and joined the two &-split 
> lines.  The error about implicit type for lenr went away, but then when I 
> enabled better type checking by using "use mpi" instead of "include 
> 'mpif.h'", I got the following:

Which Fortran compiler did you use?

In the original script my Intel compile used the -132 option,
allowing up to that many columns per line.  I still think in
F77 Fortran much of the time, and use 'c' for comments out
of habit.  The change to '!' doesn't make any difference.
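For what it's worth, the compile step in the script amounts to something
like this (exact file name and remaining flags aside):

   ifort -132 -o testio testio.f

where -132 is just shorthand for -extend-source 132, i.e. fixed-form
source lines up to 132 columns.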


> x.f90:99.77:
> 
>     call mpi_type_indexed(lenij,ijlena,ijdisp,mpi_real,ij_vector_type,ierr)
>                                                                            1  
> Error: There is no specific subroutine for the generic 'mpi_type_indexed' at 
> (1)

Hmmm, very strange, since I am looking right at the MPI standard
documents with that routine documented.  I too get this compile failure
when I switch to 'use mpi'.  Could that be a problem with the Open MPI
Fortran library?
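If I understand F90 generic resolution correctly, that error means the
actual arguments don't match the specific interface exactly: each one
must agree in type, kind, and rank.  A call shaped like this should
type-check (a minimal sketch using the names from the error message,
not my exact code):

   program check
   use mpi
   implicit none
   integer :: lenij, ij_vector_type, ierr
   integer, allocatable :: ijlena(:), ijdisp(:)

   call mpi_init(ierr)
   lenij = 4
   allocate(ijlena(lenij), ijdisp(lenij))
   ijlena = 1
   ijdisp = (/ 0, 1, 2, 3 /)
   call mpi_type_indexed(lenij, ijlena, ijdisp, mpi_real, &
                         ij_vector_type, ierr)
   call mpi_finalize(ierr)
   end program check

whereas an ijdisp declared with a different kind (say, integer(8)) or a
different rank would produce exactly this 'no specific subroutine for
the generic' message.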
> 
> I looked at our mpi F90 module and see the following:
> 
> interface MPI_Type_indexed
> subroutine MPI_Type_indexed(count, array_of_blocklengths, &
>                             array_of_displacements, oldtype, newtype, ierr)
>   integer, intent(in) :: count
>   integer, dimension(*), intent(in) :: array_of_blocklengths
>   integer, dimension(*), intent(in) :: array_of_displacements
>   integer, intent(in) :: oldtype
>   integer, intent(out) :: newtype
>   integer, intent(out) :: ierr
> end subroutine MPI_Type_indexed
> end interface
> 
> I don't quite grok the syntax of the "allocatable" type ijdisp, so that might 
> be the problem here...?

Just a standard F90 'allocatable' statement.  I've written thousands
just like it.
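For instance, nothing more exotic than (a two-line sketch):

   integer, allocatable :: ijdisp(:)
   allocate(ijdisp(lenij))

and once allocated, a default-integer rank-1 array like that is a
perfectly legal actual argument for a dummy declared 'integer,
dimension(*)', so the allocatable attribute by itself shouldn't confuse
the generic.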
> 
> Regardless, I'm not entirely sure whether the problem is the >72-character 
> lines, but once that is fixed, I'm not sure how the allocatable stuff fits 
> in...  (I'm not enough of a Fortran programmer to know)
> 
Anyone else out there who can comment?
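
In case it helps anyone reading along, the failing pattern boils down
to something like the following (a condensed sketch of the attached
test, with illustrative sizes and names, not the verbatim program).
The first disps line corresponds to the case that hangs; the commented
monotonic line corresponds to the case that works:

   program testio_sketch
   implicit none
   include 'mpif.h'
   integer, parameter :: n = 4
   integer :: blens(n), disps(n), ftype, fh, ierr
   integer(kind=mpi_offset_kind) :: zero
   real :: buf(n)

   call mpi_init(ierr)
   blens = 1
   disps = (/ 2, 0, 3, 1 /)    ! 'shuffled' displacements: write hangs
!  disps = (/ 0, 1, 2, 3 /)    ! monotonic displacements: write succeeds
   call mpi_type_indexed(n, blens, disps, mpi_real, ftype, ierr)
   call mpi_type_commit(ftype, ierr)
   call mpi_file_open(mpi_comm_world, 'sketch.dat', &
                      mpi_mode_create + mpi_mode_wronly, &
                      mpi_info_null, fh, ierr)
   zero = 0
   call mpi_file_set_view(fh, zero, mpi_real, ftype, 'native', &
                          mpi_info_null, ierr)
   buf = 1.0
   call mpi_file_write_at_all(fh, zero, buf, n, mpi_real, &
                              mpi_status_ignore, ierr)   ! hangs here
   call mpi_file_close(fh, ierr)
   call mpi_type_free(ftype, ierr)
   call mpi_finalize(ierr)
   end program testio_sketch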


T. Rosmond



> 
> On May 10, 2011, at 7:14 PM, Tom Rosmond wrote:
> 
> > I would appreciate someone with experience with MPI-IO looking at the
> > simple Fortran program gzipped and attached to this note.  It is
> > embedded in a script, so all that is necessary to run it is to execute
> > 'testio' from the command line.  The program generates a small 2-D input
> > array, sets up an MPI-IO environment, and writes a 2-D output array
> > twice, with the only difference being the displacement arrays used to
> > construct the indexed datatype.  For the first write, simple
> > monotonically increasing displacements are used; for the second, the
> > displacements are 'shuffled' in one dimension.  They are printed during
> > the run.
> > 
> > For the first case the file is written properly, but for the second the
> > program hangs on MPI_FILE_WRITE_AT_ALL and must be aborted manually.
> > Although the program is compiled as an MPI program, I am running on a
> > single processor, which makes the problem more puzzling.
> > 
> > The program should be relatively self-explanatory, but if more
> > information is needed, please ask.  I am on an 8-core Xeon-based Dell
> > workstation running Scientific Linux 5.5, Intel Fortran 12.0.3, and
> > Open MPI 1.5.3.  I have also attached output from 'ompi_info'.
> > 
> > T. Rosmond
> > 
> > 
> > <testio.gz><info_ompi.gz>
> 
> 
