Hmm. FWIW, I'm unable to replicate your error. I tried with the OMPI SVN trunk and a build of the OMPI 1.3.3 tarball using the GNU compiler suite on RHEL4U5.

I've even compiled your sample code with "mpif90" using the "use mpi" statement -- I did not get an unclassifiable statement. What version of Open MPI are you using? Please send the info listed here:

    http://www.open-mpi.org/community/help/

Can you confirm that you're not accidentally mixing and matching multiple versions of Open MPI?
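
One quick sanity check: print the version constants from whichever mpi.h your wrapper compiler actually finds, and compare against what ompi_info reports. A minimal sketch (file name is just an example; OMPI_MAJOR_VERSION and friends come from Open MPI's mpi.h):

    /* version_check.c -- prints the Open MPI version recorded in
     * the mpi.h that the wrapper compiler actually picks up. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        printf("mpi.h says Open MPI %d.%d.%d\n",
               OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION);
        MPI_Finalize();
        return 0;
    }

If that disagrees with the library you're linking/running against, you almost certainly have a mixed installation.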



On Jul 30, 2009, at 10:41 AM, Ricardo Fonseca wrote:

(I just realized I had the wrong subject line, here it goes again)

Hi Jeff

Yes, I am using the right one. I've installed the freshly compiled Open MPI into /opt/openmpi/1.3.3-g95-32. If I edit the mpif.h file by hand and put "error!" on the first line, I get:

zamblap:sandbox zamb$ edit /opt/openmpi/1.3.3-g95-32/include/mpif.h
zamblap:sandbox zamb$ mpif77 inplace_test.f90
In file mpif.h:1
   Included at inplace_test.f90:7
error!
1
Error: Unclassifiable statement at (1)

(BTW, if I use the F90 bindings instead, I get a similar problem, except that the address of the MPI_IN_PLACE Fortran constant is slightly different from the F77 bindings: instead of 0x50920 I get 0x508e0.)

Thanks for your help,

Ricardo


On Jul 29, 2009, at 17:00 , users-requ...@open-mpi.org wrote:

Message: 2
Date: Wed, 29 Jul 2009 07:54:38 -0500
From: Jeff Squyres <jsquy...@cisco.com>
Subject: Re: [OMPI users] MPI_IN_PLACE in Fortran with MPI_REDUCE / MPI_ALLREDUCE
To: "Open MPI Users" <us...@open-mpi.org>

Can you confirm that you're using the right mpif.h?

Keep in mind that each MPI implementation's mpif.h is different --
it's a common mistake to assume that the mpif.h from one MPI
implementation will work with another (e.g., someone copies mpif.h
from one MPI into your software's source tree, so the compiler always
finds that one instead of the mpif.h provided by the MPI
implementation you're actually using).


On Jul 28, 2009, at 1:17 PM, Ricardo Fonseca wrote:

Hi George

I did some extra digging and found that (for some reason) the
MPI_IN_PLACE parameter is not being recognized as such by
mpi_reduce_f (reduce_f.c:61). I added a couple of printfs:

   printf(" sendbuf = %p \n", sendbuf );

   printf(" MPI_FORTRAN_IN_PLACE = %p \n", &MPI_FORTRAN_IN_PLACE );
   printf(" mpi_fortran_in_place = %p \n", &mpi_fortran_in_place );
printf(" mpi_fortran_in_place_ = %p \n", &mpi_fortran_in_place_ );
   printf(" mpi_fortran_in_place__ = %p \n",
&mpi_fortran_in_place__ );

And this is what I get on node 0:

sendbuf = 0x50920
MPI_FORTRAN_IN_PLACE = 0x17cd30
mpi_fortran_in_place = 0x17cd34
mpi_fortran_in_place_ = 0x17cd38
mpi_fortran_in_place__ = 0x17cd3c
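
As far as I can tell, the in-place check on the C side boils down to comparing the buffer address against each name-mangled variant of the Fortran sentinel. Roughly something like this sketch (my reconstruction of the idea, not the actual macro from the OMPI source):

   extern int MPI_FORTRAN_IN_PLACE;    /* upper case, no underscore */
   extern int mpi_fortran_in_place;    /* lower case, no underscore */
   extern int mpi_fortran_in_place_;   /* one trailing underscore   */
   extern int mpi_fortran_in_place__;  /* two trailing underscores  */

   /* hypothetical shape of the check */
   #define IS_FORTRAN_IN_PLACE(addr)                 \
       ((addr) == (char *) &MPI_FORTRAN_IN_PLACE  || \
        (addr) == (char *) &mpi_fortran_in_place  || \
        (addr) == (char *) &mpi_fortran_in_place_ || \
        (addr) == (char *) &mpi_fortran_in_place__)

Since sendbuf (0x50920) matches none of the four addresses printed above, the comparison can never succeed.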

This makes OMPI_F2C_IN_PLACE(sendbuf) fail. If I replace the line:

sendbuf = OMPI_F2C_IN_PLACE(sendbuf);

with:

   if ( sendbuf == (char *) 0x50920 ) {  /* hard-coded address from the printf above */
     printf("sendbuf is MPI_IN_PLACE!\n");
     sendbuf = MPI_IN_PLACE;
   }

Then the code works and gives the correct result:

sendbuf is MPI_IN_PLACE!
Result:
3. 3. 3. 3.

So my guess is that somehow the MPI_IN_PLACE constant for Fortran is
getting the wrong address. Could this be related to the Fortran
compilers I'm using (ifort / g95)?

Ricardo




--
Jeff Squyres
jsquy...@cisco.com
