Re: [OMPI users] Issue with Profiling Fortran code

2008-12-06 Thread Jeff Squyres

On Dec 5, 2008, at 6:58 PM, Anthony Chan wrote:


AFAIK, all known/popular MPI implementations' fortran binding
layers are implemented with C MPI functions, including
MPICH2 and OpenMPI.   If MPICH2's fortran layer were implemented
the way you said, typical profiling tools, including MPE, would
fail to work with fortran applications.



FWIW, NEC's MPI has its Fortran functions directly call back-end
functionality (vs. calling the C MPI API).  We've considered doing
this for at least some key Fortran MPI API functions in Open MPI as
well: specifically, have MPI_SEND [Fortran] directly call the back-end
sending functionality rather than calling MPI_Send [C] or
PMPI_Send [C].


Hence, I don't think it's a good assumption to rely on (that the MPI  
Fortran API always calls the [P]MPI C API).


I think one of George's middle points is the most relevant here: those  
who are interested in this stuff should definitely participate in the  
MPI Forum's Tools group:


https://svn.mpi-forum.org/trac/mpi-forum-web/wiki/MPI3Tools

--
Jeff Squyres
Cisco Systems



Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Nick Wright
I think this issue is now resolved; thanks everybody for your help. I
certainly learnt a lot!



For the first case you describe, as OPENMPI is now, the call sequence

from fortran is

mpi_comm_rank -> MPI_Comm_rank -> PMPI_Comm_rank

For the second case, as MPICH is now, it's

mpi_comm_rank -> PMPI_Comm_rank



AFAIK, all known/popular MPI implementations' fortran binding
layers are implemented with C MPI functions, including
MPICH2 and OpenMPI.   If MPICH2's fortran layer were implemented
the way you said, typical profiling tools, including MPE, would
fail to work with fortran applications.

e.g. check mpich2-xxx/src/binding/f77/sendf.c.


To answer this specific point, see for example the comment in

src/binding/f77/comm_sizef.c

/* This defines the routine that we call, which must be the PMPI version
   since we're renaming the Fortran entry as the pmpi version */

and the workings of the definition in MPICH

#ifndef MPICH_MPI_FROM_PMPI

This is what makes MPICH's behaviour different from OPENMPI's in this
matter.
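
A rough sketch of that mechanism (illustrative only - the names and
details are simplified from the actual MPICH source):

#include "mpi.h"

#ifndef MPICH_MPI_FROM_PMPI
/* Profiling build: route the Fortran binding's C calls to the PMPI
   entry points, so they never pass through the C MPI hooks. */
#define MPI_Comm_size PMPI_Comm_size
#endif

void mpi_comm_size_(MPI_Fint *comm, MPI_Fint *size, MPI_Fint *ierr)
{
    int c_size;
    /* With the #define above this resolves to PMPI_Comm_size, so a tool
       that intercepts MPI_Comm_size [C] is not invoked a second time. */
    *ierr = MPI_Comm_size(MPI_Comm_f2c(*comm), &c_size);
    *size = (MPI_Fint)c_size;
}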

Regards, Nick.


A.Chan

So for the first case, if I have a pure fortran/C++ code, I have to
profile at the C interface.


So is the patch now retracted?

Nick.

I think you have an incorrect definition of "correctly" :). According
to the MPI standard, an MPI implementation is free to either layer
language bindings (and only allow profiling at the lowest layer) or not
layer the language bindings (and require profiling libraries intercept
each language).  The only requirement is that the implementation
document what it has done.

Since everyone is pretty clear on what Open MPI has done, I don't think
you can claim Open MPI is doing it "incorrectly".  Different from MPICH
is not necessarily incorrect.  (BTW, LAM/MPI handles profiling the same
way as Open MPI).

Brian

On Fri, 5 Dec 2008, Nick Wright wrote:


Hi Anthony

That will work yes, but it's not portable to other MPIs that do
implement the profiling layer correctly, unfortunately.

I guess we will just need to detect that we are using openmpi when our
tool is configured and add some macros to deal with that accordingly.
Is there an easy way to do this built into openmpi?

Thanks

Nick.

Anthony Chan wrote:

Hope I didn't misunderstand your question.  If you implement
your profiling library in C where you do your real instrumentation,
you don't need to implement the fortran layer; you can simply link
with the Fortran to C MPI wrapper library -lmpi_f77, i.e.

/bin/mpif77 -o foo foo.f -L/lib -lmpi_f77 -lYourProfClib

where libYourProfClib.a is your profiling tool written in C. If you
don't want to intercept the MPI call twice for a fortran program,
you need to implement the fortran layer.  In that case, I would think
you can just call the C version of PMPI_xxx directly from your fortran
layer, e.g.

void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
printf("mpi_comm_rank call successfully intercepted\n");
*info = PMPI_Comm_rank(comm,rank);
}

A.Chan

- "Nick Wright"  wrote:


Hi

I am trying to use the PMPI interface with OPENMPI to profile a
fortran program.

I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile switched
on.

The problem seems to be that if one e.g. intercepts the call to
mpi_comm_rank_ (the fortran hook) and then calls pmpi_comm_rank_, this
then calls MPI_Comm_rank (the C hook), not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran
codes at the same time, one ends up intercepting the mpi call twice.
Which is not desirable and not what should happen (and indeed doesn't
happen in other MPI implementations).

A simple example to illustrate is below. If somebody knows of a fix to
avoid this issue that would be great!

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   pmpi_comm_rank_(comm,rank,info);
}
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
   printf("MPI_comm_rank call successfully intercepted\n");
   PMPI_Comm_rank(comm,rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

   program hello
implicit none
include 'mpif.h'
integer ierr
integer myid,nprocs
character*24 fdate,host
call MPI_Init( ierr )
   myid=0
   call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
   call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
   call getenv('HOST',host)
   write (*,*) 'Hello World from proc',myid,' out

of',nprocs,host

   call mpi_finalize(ierr)
   end




Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Anthony Chan
Hi George,

- "George Bosilca"  wrote:

> On Dec 5, 2008, at 03:16 , Anthony Chan wrote:
> 
> > void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
> >printf("mpi_comm_rank call successfully intercepted\n");
> >*info = PMPI_Comm_rank(comm,rank);
> > }
> 
> Unfortunately this example is not correct. The real Fortran prototype
> for the MPI_Comm_rank function is
> void mpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *ierr).

Yes, you are right.  I was being sloppy (it was late, so just cut/paste
from Nick's code), the correct code should be

void mpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *info) {
   int c_rank;
   printf("mpi_comm_rank call successfully intercepted\n");
   *info = PMPI_Comm_rank(MPI_Comm_f2c(*comm), &c_rank);
   *rank = (MPI_Fint)c_rank;
}


A.Chan
> 
> As you might notice, there is no MPI_Comm (and believe me, for Open MPI
> MPI_Comm is different than MPI_Fint), and there is no guarantee that
> the C int is the same as the Fortran int (looks weird but true).
> Therefore, several conversions are required in order to be able to go
> from the Fortran layer into the C one.
> 
> As a result, a tool should never cross the language boundary by
> itself. Instead it should call the pmpi function as provided by the
> MPI library. This doesn't really fix the issue that started this email
> thread, but at least clarifies it a little bit.
> 
> george.
> 
> >
> > A.Chan
> >
> > - "Nick Wright"  wrote:
> >
> >> Hi
> >>
> >> I am trying to use the PMPI interface with OPENMPI to profile a
> >> fortran
> >> program.
> >>
> >> I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile
> switched
> >> on.
> >>
> >> The problem seems to be that if one e.g. intercepts the call to
> >> mpi_comm_rank_ (the fortran hook) then calls pmpi_comm_rank_ this 
> 
> >> then
> >>
> >> calls MPI_Comm_rank (the C hook) not PMPI_Comm_rank as it should.
> >>
> >> So if one wants to create a library that can profile C and Fortran
> >> codes
> >> at the same time one ends up intercepting the mpi call twice. Which
>  
> >> is
> >>
> >> not desirable and not what should happen (and indeed doesn't happen
>  
> >> in
> >>
> >> other MPI implementations).
> >>
> >> A simple example to illustrate is below. If somebody knows of a fix
>  
> >> to
> >>
> >> avoid this issue that would be great !
> >>
> >> Thanks
> >>
> >> Nick.
> >>
> >> pmpi_test.c: mpicc pmpi_test.c -c
> >>
> >> #include <stdio.h>
> >> #include "mpi.h"
> >> void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
> >>   printf("mpi_comm_rank call successfully intercepted\n");
> >>   pmpi_comm_rank_(comm,rank,info);
> >> }
> >> int MPI_Comm_rank(MPI_Comm comm, int *rank) {
> >>   printf("MPI_comm_rank call successfully intercepted\n");
> >>   PMPI_Comm_rank(comm,rank);
> >> }
> >>
> >> hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o
> >>
> >>   program hello
> >>implicit none
> >>include 'mpif.h'
> >>integer ierr
> >>integer myid,nprocs
> >>character*24 fdate,host
> >>call MPI_Init( ierr )
> >>   myid=0
> >>   call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
> >>   call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
> >>   call getenv('HOST',host)
> >>   write (*,*) 'Hello World from proc',myid,' out
> of',nprocs,host
> >>   call mpi_finalize(ierr)
> >>   end
> >>
> >>
> >>


Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Anthony Chan
Hi Nick,

- "Nick Wright"  wrote:

> For the first case you describe, as OPENMPI is now, the call sequence
> 
> from fortran is
> 
> mpi_comm_rank -> MPI_Comm_rank -> PMPI_Comm_rank
> 
> For the second case, as MPICH is now, it's
> 
> mpi_comm_rank -> PMPI_Comm_rank
> 

AFAIK, all known/popular MPI implementations' fortran binding
layers are implemented with C MPI functions, including
MPICH2 and OpenMPI.   If MPICH2's fortran layer were implemented
the way you said, typical profiling tools, including MPE, would
fail to work with fortran applications.

e.g. check mpich2-xxx/src/binding/f77/sendf.c.

A.Chan

> So for the first case, if I have a pure fortran/C++ code, I have to
> profile at the C interface.
> 
> So is the patch now retracted?
> 
> Nick.
> 
> > I think you have an incorrect definition of "correctly" :).
> According 
> > to the MPI standard, an MPI implementation is free to either layer 
> > language bindings (and only allow profiling at the lowest layer)  or
> not
> > layer the language bindings (and require profiling libraries
> intercept 
> > each language).  The only requirement is that the implementation 
> > document what it has done.
> > 
> > Since everyone is pretty clear on what Open MPI has done, I don't
> think 
> > you can claim Open MPI is doing it "incorrectly".  Different from
> MPICH 
> > is not necessarily incorrect.  (BTW, LAM/MPI handles profiling the
> same 
> > way as Open MPI).
> > 
> > Brian
> > 
> > On Fri, 5 Dec 2008, Nick Wright wrote:
> > 
> >> Hi Anthony
> >>
> >> That will work yes, but it's not portable to other MPIs that do
> >> implement the profiling layer correctly, unfortunately.
> >>
> >> I guess we will just need to detect that we are using openmpi when our
> >> tool is configured and add some macros to deal with that accordingly.
> >> Is there an easy way to do this built into openmpi?
> >>
> >> Thanks
> >>
> >> Nick.
> >>
> >> Anthony Chan wrote:
> >>> Hope I didn't misunderstand your question.  If you implement
> >>> your profiling library in C where you do your real instrumentation,
> >>> you don't need to implement the fortran layer; you can simply link
> >>> with the Fortran to C MPI wrapper library -lmpi_f77, i.e.
> >>>
> >>> /bin/mpif77 -o foo foo.f -L/lib -lmpi_f77 -lYourProfClib
> >>>
> >>> where libYourProfClib.a is your profiling tool written in C. If you
> >>> don't want to intercept the MPI call twice for a fortran program,
> >>> you need to implement the fortran layer.  In that case, I would think
> >>> you can just call the C version of PMPI_xxx directly from your fortran
> >>> layer, e.g.
> >>>
> >>> void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
> >>> printf("mpi_comm_rank call successfully intercepted\n");
> >>> *info = PMPI_Comm_rank(comm,rank);
> >>> }
> >>>
> >>> A.Chan
> >>>
> >>> - "Nick Wright"  wrote:
> >>>
>  Hi
> 
>  I am trying to use the PMPI interface with OPENMPI to profile a
>  fortran program.
> 
>  I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile
> switched
>  on.
> 
>  The problem seems to be that if one e.g. intercepts the call to
>  mpi_comm_rank_ (the fortran hook) then calls pmpi_comm_rank_ this
> then
> 
>  calls MPI_Comm_rank (the C hook) not PMPI_Comm_rank as it
> should.
> 
>  So if one wants to create a library that can profile C and
> Fortran
>  codes at the same time one ends up intercepting the mpi call
> twice. 
>  Which is
> 
>  not desirable and not what should happen (and indeed doesn't
> happen in
> 
>  other MPI implementations).
> 
>  A simple example to illustrate is below. If somebody knows of a
> fix to
> 
>  avoid this issue that would be great !
> 
>  Thanks
> 
>  Nick.
> 
>  pmpi_test.c: mpicc pmpi_test.c -c
> 
>  #include <stdio.h>
>  #include "mpi.h"
>  void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
> printf("mpi_comm_rank call successfully intercepted\n");
> pmpi_comm_rank_(comm,rank,info);
>  }
>  int MPI_Comm_rank(MPI_Comm comm, int *rank) {
> printf("MPI_comm_rank call successfully intercepted\n");
> PMPI_Comm_rank(comm,rank);
>  }
> 
>  hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o
> 
> program hello
>  implicit none
>  include 'mpif.h'
>  integer ierr
>  integer myid,nprocs
>  character*24 fdate,host
>  call MPI_Init( ierr )
> myid=0
> call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
> call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
> call getenv('HOST',host)
> write (*,*) 'Hello World from proc',myid,' out
> of',nprocs,host
> call mpi_finalize(ierr)
> end
> 
> 
> 

Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Anthony Chan
Hi Nick,

- "Nick Wright"  wrote:

> Hi Anthony
> 
> That will work yes, but it's not portable to other MPIs that do
> implement the profiling layer correctly, unfortunately.

I guess I must have missed something here.  What is not portable?

> 
> I guess we will just need to detect that we are using openmpi when our
> tool is configured and add some macros to deal with that accordingly.
> Is there an easy way to do this built into openmpi?

MPE by default provides a fortran to C wrapper library; that way the user
does not have to know about the MPI implementation's fortran to C layer.
The MPE user can specify the fortran to C layer that the implementation
has during MPE configure.

Since an MPI implementation's fortran to C library does not change often,
writing a configure test to check for libmpi_f77.*, libfmpich.*,
or libfmpi.* should get you covered on most platforms.

A.Chan
> 
> Thanks
> 
> Nick.
> 
> Anthony Chan wrote:
> > Hope I didn't misunderstand your question.  If you implement
> > your profiling library in C where you do your real instrumentation,
> > you don't need to implement the fortran layer, you can simply link
> > with Fortran to C MPI wrapper library -lmpi_f77. i.e.
> > 
> > /bin/mpif77 -o foo foo.f -L/lib -lmpi_f77
> -lYourProfClib
> > 
> > where libYourProfClib.a is your profiling tool written in C. 
> > If you don't want to intercept the MPI call twice for fortran
> program,
> > you need to implment fortran layer.  In that case, I would think
> you
> > can just call C version of PMPI_xxx directly from your fortran
> layer, e.g.
> > 
> > void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
> > printf("mpi_comm_rank call successfully intercepted\n");
> > *info = PMPI_Comm_rank(comm,rank);
> > }
> > 
> > A.Chan
> > 
> > - "Nick Wright"  wrote:
> > 
> >> Hi
> >>
> >> I am trying to use the PMPI interface with OPENMPI to profile a
> >> fortran 
> >> program.
> >>
> >> I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile
> switched
> >> on.
> >>
> >> The problem seems to be that if one e.g. intercepts the call to
> >> mpi_comm_rank_ (the fortran hook) then calls pmpi_comm_rank_ this
> then
> >>
> >> calls MPI_Comm_rank (the C hook) not PMPI_Comm_rank as it should.
> >>
> >> So if one wants to create a library that can profile C and Fortran
> >> codes 
> >> at the same time one ends up intercepting the mpi call twice. Which
> is
> >>
> >> not desirable and not what should happen (and indeed doesn't happen
> in
> >>
> >> other MPI implementations).
> >>
> >> A simple example to illustrate is below. If somebody knows of a fix
> to
> >>
> >> avoid this issue that would be great !
> >>
> >> Thanks
> >>
> >> Nick.
> >>
> >> pmpi_test.c: mpicc pmpi_test.c -c
> >>
> >> #include <stdio.h>
> >> #include "mpi.h"
> >> void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
> >>printf("mpi_comm_rank call successfully intercepted\n");
> >>pmpi_comm_rank_(comm,rank,info);
> >> }
> >> int MPI_Comm_rank(MPI_Comm comm, int *rank) {
> >>printf("MPI_comm_rank call successfully intercepted\n");
> >>PMPI_Comm_rank(comm,rank);
> >> }
> >>
> >> hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o
> >>
> >>program hello
> >> implicit none
> >> include 'mpif.h'
> >> integer ierr
> >> integer myid,nprocs
> >> character*24 fdate,host
> >> call MPI_Init( ierr )
> >>myid=0
> >>call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
> >>call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
> >>call getenv('HOST',host)
> >>write (*,*) 'Hello World from proc',myid,' out
> of',nprocs,host
> >>call mpi_finalize(ierr)
> >>end
> >>
> >>
> >>


Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread George Bosilca
After spending a few hours pondering this problem, we came to the
conclusion that the best approach is to keep what we had before (i.e.
the original approach). This means I'll undo my patch in the trunk,
and not change the behavior in the next releases (1.3 and 1.2.9). This
approach, while different from other MPI implementations, is as legal
as possible from the MPI standard's point of view. Any suggestions on
this topic, or about the inconsistent behavior between MPI
implementations, should be directed to the MPI Forum Tools group for
further evaluation.


The main reason for this is being nice to tool developers. In the
current incarnation, they can either catch the Fortran calls or the C
calls. If they provide both, then they will have to figure out how to
cope with the double calls (as your example highlights).


Here is the behavior Open MPI will stick to:
Fortran MPI  -> C MPI
Fortran PMPI -> C MPI

  george.

PS: There was another possible approach, which could avoid the double
calls while preserving the tool-writer friendliness. This possible
approach would do:

Fortran MPI  -> C MPI
Fortran PMPI -> C PMPI

Unfortunately, we would have to heavily modify all files in the Fortran
interface layer in order to support this approach. We're too close to
a major release to start such time-consuming work.
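
As a concrete C sketch of the two layerings (purely illustrative, not
the actual Open MPI source), using MPI_Comm_rank:

#include "mpi.h"

/* Behavior Open MPI keeps: both Fortran entry points call the C MPI
   layer, so a C-level tool also sees Fortran traffic. */
void mpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *ierr)
{
    int c_rank;
    *ierr = MPI_Comm_rank(MPI_Comm_f2c(*comm), &c_rank); /* Fortran MPI  -> C MPI */
    *rank = (MPI_Fint)c_rank;
}

void pmpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *ierr)
{
    int c_rank;
    *ierr = MPI_Comm_rank(MPI_Comm_f2c(*comm), &c_rank); /* Fortran PMPI -> C MPI */
    *rank = (MPI_Fint)c_rank;
}

/* The approach in the PS would change only the PMPI entry to
       *ierr = PMPI_Comm_rank(MPI_Comm_f2c(*comm), &c_rank);
   i.e. Fortran PMPI -> C PMPI, which avoids the double interception. */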


  george.

On Dec 5, 2008, at 13:27 , Nick Wright wrote:


Brian

Sorry I picked the wrong word there. I guess this is more  
complicated than I thought it was.


For the first case you describe, as OPENMPI is now, the call  
sequence from fortran is


mpi_comm_rank -> MPI_Comm_rank -> PMPI_Comm_rank

For the second case, as MPICH is now, it's

mpi_comm_rank -> PMPI_Comm_rank

So for the first case, if I have a pure fortran/C++ code, I have to
profile at the C interface.


So is the patch now retracted?

Nick.

I think you have an incorrect definition of "correctly" :).
According to the MPI standard, an MPI implementation is free to
either layer language bindings (and only allow profiling at the
lowest layer) or not layer the language bindings (and require
profiling libraries intercept each language).  The only requirement
is that the implementation document what it has done.

Since everyone is pretty clear on what Open MPI has done, I don't
think you can claim Open MPI is doing it "incorrectly".  Different
from MPICH is not necessarily incorrect.  (BTW, LAM/MPI handles
profiling the same way as Open MPI).

Brian
On Fri, 5 Dec 2008, Nick Wright wrote:

Hi Anthony

That will work yes, but it's not portable to other MPIs that do
implement the profiling layer correctly, unfortunately.


I guess we will just need to detect that we are using openmpi when  
our tool is configured and add some macros to deal with that  
accordingly. Is there an easy way to do this built into openmpi?


Thanks

Nick.

Anthony Chan wrote:

Hope I didn't misunderstand your question.  If you implement
your profiling library in C where you do your real instrumentation,
you don't need to implement the fortran layer, you can simply link
with Fortran to C MPI wrapper library -lmpi_f77. i.e.

/bin/mpif77 -o foo foo.f -L/lib -lmpi_f77 - 
lYourProfClib


where libYourProfClib.a is your profiling tool written in C. If  
you don't want to intercept the MPI call twice for fortran program,
you need to implement the fortran layer.  In that case, I would think
you
can just call C version of PMPI_xxx directly from your fortran  
layer, e.g.


void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   *info = PMPI_Comm_rank(comm,rank);
}

A.Chan

- "Nick Wright"  wrote:


Hi

I am trying to use the PMPI interface with OPENMPI to profile a
fortran program.

I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile
switched

on.

The problem seems to be that if one e.g. intercepts the call to
mpi_comm_rank_ (the fortran hook) then calls pmpi_comm_rank_  
this then


calls MPI_Comm_rank (the C hook) not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran
codes at the same time one ends up intercepting the mpi call  
twice. Which is


not desirable and not what should happen (and indeed doesn't  
happen in


other MPI implementations).

A simple example to illustrate is below. If somebody knows of a  
fix to


avoid this issue that would be great !

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
  printf("mpi_comm_rank call successfully intercepted\n");
  pmpi_comm_rank_(comm,rank,info);
}
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
  printf("MPI_comm_rank call successfully intercepted\n");
  PMPI_Comm_rank(comm,rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

  program hello
   implicit none
   include 'mpif.h'
   integer ierr
   integer

Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread George Bosilca

On Dec 5, 2008, at 03:16 , Anthony Chan wrote:


void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   *info = PMPI_Comm_rank(comm,rank);
}


Unfortunately this example is not correct. The real Fortran prototype  
for the MPI_Comm_rank function is

void mpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *ierr).

As you might notice, there is no MPI_Comm (and believe me, for Open MPI
MPI_Comm is different than MPI_Fint), and there is no guarantee that
the C int is the same as the Fortran int (looks weird but true).
Therefore, several conversions are required in order to be able to go
from the Fortran layer into the C one.


As a result, a tool should never cross the language boundary by
itself. Instead it should call the pmpi function as provided by the
MPI library. This doesn't really fix the issue that started this email
thread, but at least clarifies it a little bit.
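
For example, a Fortran-layer wrapper in that style might look like the
sketch below (the trailing-underscore name mangling is compiler
dependent, and the pmpi_comm_rank_ prototype is declared here only for
illustration):

#include <stdio.h>
#include "mpi.h"

/* The MPI library's own Fortran PMPI entry point. */
void pmpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *ierr);

void mpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *ierr)
{
    printf("mpi_comm_rank intercepted in the Fortran layer\n");
    /* Stay in the Fortran layer: the library, not the tool, performs
       any handle or integer conversions. */
    pmpi_comm_rank_(comm, rank, ierr);
}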


  george.



A.Chan

- "Nick Wright"  wrote:


Hi

I am trying to use the PMPI interface with OPENMPI to profile a
fortran
program.

I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile switched
on.

The problem seems to be that if one e.g. intercepts the call to
mpi_comm_rank_ (the fortran hook) then calls pmpi_comm_rank_ this  
then


calls MPI_Comm_rank (the C hook) not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran
codes
at the same time one ends up intercepting the mpi call twice. Which  
is


not desirable and not what should happen (and indeed doesn't happen  
in


other MPI implementations).

A simple example to illustrate is below. If somebody knows of a fix  
to


avoid this issue that would be great !

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
  printf("mpi_comm_rank call successfully intercepted\n");
  pmpi_comm_rank_(comm,rank,info);
}
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
  printf("MPI_comm_rank call successfully intercepted\n");
  PMPI_Comm_rank(comm,rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

  program hello
   implicit none
   include 'mpif.h'
   integer ierr
   integer myid,nprocs
   character*24 fdate,host
   call MPI_Init( ierr )
  myid=0
  call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
  call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
  call getenv('HOST',host)
  write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
  call mpi_finalize(ierr)
  end







Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Nick Wright

Brian

Sorry I picked the wrong word there. I guess this is more complicated 
than I thought it was.


For the first case you describe, as OPENMPI is now, the call sequence 
from fortran is


mpi_comm_rank -> MPI_Comm_rank -> PMPI_Comm_rank

For the second case, as MPICH is now, it's

mpi_comm_rank -> PMPI_Comm_rank

So for the first case, if I have a pure fortran/C++ code, I have to
profile at the C interface.


So is the patch now retracted?

Nick.

I think you have an incorrect definition of "correctly" :).  According
to the MPI standard, an MPI implementation is free to either layer 
language bindings (and only allow profiling at the lowest layer)  or not
layer the language bindings (and require profiling libraries intercept 
each language).  The only requirement is that the implementation 
document what it has done.


Since everyone is pretty clear on what Open MPI has done, I don't think 
you can claim Open MPI is doing it "incorrectly".  Different from MPICH 
is not necessarily incorrect.  (BTW, LAM/MPI handles profiling the same 
way as Open MPI).


Brian

On Fri, 5 Dec 2008, Nick Wright wrote:


Hi Anthony

That will work yes, but it's not portable to other MPIs that do
implement the profiling layer correctly, unfortunately.


I guess we will just need to detect that we are using openmpi when our 
tool is configured and add some macros to deal with that accordingly. 
Is there an easy way to do this built into openmpi?


Thanks

Nick.

Anthony Chan wrote:

Hope I didn't misunderstand your question.  If you implement
your profiling library in C where you do your real instrumentation,
you don't need to implement the fortran layer, you can simply link
with Fortran to C MPI wrapper library -lmpi_f77. i.e.

/bin/mpif77 -o foo foo.f -L/lib -lmpi_f77 -lYourProfClib

where libYourProfClib.a is your profiling tool written in C. If you 
don't want to intercept the MPI call twice for fortran program,

you need to implement the fortran layer.  In that case, I would think you
can just call C version of PMPI_xxx directly from your fortran layer, 
e.g.


void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
printf("mpi_comm_rank call successfully intercepted\n");
*info = PMPI_Comm_rank(comm,rank);
}

A.Chan

- "Nick Wright"  wrote:


Hi

I am trying to use the PMPI interface with OPENMPI to profile a
fortran program.

I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile switched
on.

The problem seems to be that if one e.g. intercepts the call to
mpi_comm_rank_ (the fortran hook) then calls pmpi_comm_rank_ this then


calls MPI_Comm_rank (the C hook) not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran
codes at the same time one ends up intercepting the mpi call twice. 
Which is


not desirable and not what should happen (and indeed doesn't happen in

other MPI implementations).

A simple example to illustrate is below. If somebody knows of a fix to

avoid this issue that would be great !

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   pmpi_comm_rank_(comm,rank,info);
}
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
   printf("MPI_comm_rank call successfully intercepted\n");
   PMPI_Comm_rank(comm,rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

   program hello
implicit none
include 'mpif.h'
integer ierr
integer myid,nprocs
character*24 fdate,host
call MPI_Init( ierr )
   myid=0
   call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
   call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
   call getenv('HOST',host)
   write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
   call mpi_finalize(ierr)
   end





Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Brian W. Barrett

Nick -

I think you have an incorrect definition of "correctly" :).  According to
the MPI standard, an MPI implementation is free to either layer language 
bindings (and only allow profiling at the lowest layer) or not layer the 
language bindings (and require profiling libraries intercept each 
language).  The only requirement is that the implementation document what 
it has done.


Since everyone is pretty clear on what Open MPI has done, I don't think 
you can claim Open MPI is doing it "incorrectly".  Different from MPICH is 
not necessarily incorrect.  (BTW, LAM/MPI handles profiling the same way 
as Open MPI).


Brian

On Fri, 5 Dec 2008, Nick Wright wrote:


Hi Anthony

That will work yes, but it's not portable to other MPIs that do implement the
profiling layer correctly, unfortunately.


I guess we will just need to detect that we are using openmpi when our tool 
is configured and add some macros to deal with that accordingly. Is there an 
easy way to do this built into openmpi?


Thanks

Nick.

Anthony Chan wrote:

Hope I didn't misunderstand your question.  If you implement
your profiling library in C where you do your real instrumentation,
you don't need to implement the fortran layer, you can simply link
with Fortran to C MPI wrapper library -lmpi_f77. i.e.

/bin/mpif77 -o foo foo.f -L/lib -lmpi_f77 -lYourProfClib

where libYourProfClib.a is your profiling tool written in C. If you don't 
want to intercept the MPI call twice for fortran program,

you need to implement the fortran layer.  In that case, I would think you
can just call C version of PMPI_xxx directly from your fortran layer, e.g.

void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
printf("mpi_comm_rank call successfully intercepted\n");
*info = PMPI_Comm_rank(comm,rank);
}

A.Chan

- "Nick Wright"  wrote:


Hi

I am trying to use the PMPI interface with OPENMPI to profile a
fortran program.

I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile switched
on.

The problem seems to be that if one e.g. intercepts the call to
mpi_comm_rank_ (the fortran hook) then calls pmpi_comm_rank_ this then


calls MPI_Comm_rank (the C hook) not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran
codes at the same time one ends up intercepting the mpi call twice. Which 
is


not desirable and not what should happen (and indeed doesn't happen in

other MPI implementations).

A simple example to illustrate is below. If somebody knows of a fix to

avoid this issue that would be great !

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   pmpi_comm_rank_(comm,rank,info);
}
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
   printf("MPI_comm_rank call successfully intercepted\n");
   PMPI_Comm_rank(comm,rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

   program hello
implicit none
include 'mpif.h'
integer ierr
integer myid,nprocs
character*24 fdate,host
call MPI_Init( ierr )
   myid=0
   call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
   call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
   call getenv('HOST',host)
   write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
   call mpi_finalize(ierr)
   end







Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Nick Wright
I hope you are aware that *many* tools and applications actually profile
the fortran MPI layer by intercepting the C function calls. This allows 
them to not have to deal with f2c translation of MPI objects and not 
worry about the name mangling issue. Would there be a way to have both 
options, e.g. as a configure flag? The current commit basically breaks
all of these applications...


Edgar,

I haven't seen the fix so I can't comment on that.

Anyway, in general though this can't be true. Such a profiling tool 
would *only* work with openmpi if it were written that way today. I 
guess such a fix will break openmpi-specific tools (are there any?).


For MPICH, for example, one must provide a hook into e.g. mpi_comm_rank_, as
that calls PMPI_Comm_rank (as it should), and thus if one were only
intercepting C calls one would not see any fortran profiling information.


Nick.



George Bosilca wrote:

Nick,

Thanks for noticing this. It's unbelievable that nobody noticed that 
over the last 5 years. Anyway, I think we have a one line fix for this 
problem. I'll test it asap, and then push it in the 1.3.


  Thanks,
george.

On Dec 5, 2008, at 10:14 , Nick Wright wrote:


Hi Anthony

That will work yes, but it's not portable to other MPIs that do
implement the profiling layer correctly, unfortunately.


I guess we will just need to detect that we are using openmpi when 
our tool is configured and add some macros to deal with that 
accordingly. Is there an easy way to do this built into openmpi?


Thanks

Nick.

Anthony Chan wrote:

Hope I didn't misunderstand your question.  If you implement
your profiling library in C where you do your real instrumentation,
you don't need to implement the fortran layer, you can simply link
with Fortran to C MPI wrapper library -lmpi_f77. i.e.
/bin/mpif77 -o foo foo.f -L/lib -lmpi_f77 -lYourProfClib
where libYourProfClib.a is your profiling tool written in C. If you 
don't want to intercept the MPI call twice for fortran program,

you need to implement the fortran layer.  In that case, I would think you
can just call C version of PMPI_xxx directly from your fortran 
layer, e.g.

void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   *info = PMPI_Comm_rank(comm,rank);
}
A.Chan
- "Nick Wright"  wrote:

Hi

I am trying to use the PMPI interface with OPENMPI to profile a
fortran program.

I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile switched
on.

The problem seems to be that if one e.g. intercepts the call to
mpi_comm_rank_ (the fortran hook) then calls pmpi_comm_rank_ this then


calls MPI_Comm_rank (the C hook) not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran
codes at the same time one ends up intercepting the mpi call twice. 
Which is


not desirable and not what should happen (and indeed doesn't happen in

other MPI implementations).

A simple example to illustrate is below. If somebody knows of a fix to

avoid this issue that would be great !

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
  printf("mpi_comm_rank call successfully intercepted\n");
  pmpi_comm_rank_(comm,rank,info);
}
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
  printf("MPI_comm_rank call successfully intercepted\n");
  PMPI_Comm_rank(comm,rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

  program hello
   implicit none
   include 'mpif.h'
   integer ierr
   integer myid,nprocs
   character*24 fdate,host
   call MPI_Init( ierr )
  myid=0
  call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
  call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
  call getenv('HOST',host)
  write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
  call mpi_finalize(ierr)
  end







Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Edgar Gabriel
Actually, I am wondering whether my previous statement was correct. If
you do not intercept the fortran MPI call, then it still goes to the C
MPI call, which you can intercept. Only if you intercept the fortran MPI
call do we call the C PMPI call rather than the C MPI call, correct? So
in theory, it could still work...


Jeff Squyres wrote:

On Dec 5, 2008, at 12:22 PM, Edgar Gabriel wrote:

I hope you are aware that *many* tools and applications actually
profile the fortran MPI layer by intercepting the C function calls. 
This allows them to not have to deal with f2c translation of MPI 
objects and not worry about the name mangling issue. Would there be a 
way to have both options, e.g. as a configure flag? The current commit
basically breaks all of these applications...


I was unaware of this, actually.

So it'd be pretty easy to have a configure switch for this (it would be 
a bunch more work for a run-time switch; I don't know if it's really 
worth it?).  Should we default to the current behavior, and have the 
configure switch enable call stacks like this:


  MPI_Comm_rank_f
  PMPI_Comm_rank_f
  PMPI_Comm_rank

?



--
Edgar Gabriel
Assistant Professor
Parallel Software Technologies Lab  http://pstl.cs.uh.edu
Department of Computer Science  University of Houston
Philip G. Hoffman Hall, Room 524Houston, TX-77204, USA
Tel: +1 (713) 743-3857  Fax: +1 (713) 743-3335


Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Jeff Squyres

On Dec 5, 2008, at 12:22 PM, Edgar Gabriel wrote:

I hope you are aware that *many* tools and applications actually
profile the fortran MPI layer by intercepting the C function calls.  
This allows them to not have to deal with f2c translation of MPI  
objects and not worry about the name mangling issue. Would there be  
a way to have both options, e.g. as a configure flag? The current
commit basically breaks all of these applications...


I was unaware of this, actually.

So it'd be pretty easy to have a configure switch for this (it would  
be a bunch more work for a run-time switch; I don't know if it's  
really worth it?).  Should we default to the current behavior, and  
have the configure switch enable call stacks like this:


  MPI_Comm_rank_f
  PMPI_Comm_rank_f
  PMPI_Comm_rank

?

--
Jeff Squyres
Cisco Systems



Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Jeff Squyres

On Dec 5, 2008, at 11:29 AM, Nick Wright wrote:


I think we can just look at OPEN_MPI as you say and then

OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION & OMPI_RELEASE_VERSION

from mpi.h and if version is less than 1.2.9 implement a work around  
as Antony suggested. Its not the most elegant solution but it will  
work I think?


Ya, that should work.

--
Jeff Squyres
Cisco Systems



Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Edgar Gabriel

George,

I hope you are aware that *many* tools and applications actually profile
the fortran MPI layer by intercepting the C function calls. This allows 
them to not have to deal with f2c translation of MPI objects and not 
worry about the name mangling issue. Would there be a way to have both 
options, e.g. as a configure flag? The current commit basically breaks
all of these applications...
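
For instance, such a tool's entire interception for a given routine can
be a single C wrapper (a minimal sketch, assuming an MPI whose Fortran
bindings are layered on top of the C bindings):

#include <stdio.h>
#include "mpi.h"

/* One C wrapper catches both C and Fortran callers when the Fortran
   binding calls down into the C binding: no f2c handle conversion and
   no name-mangling handling is needed in the tool itself. */
int MPI_Comm_rank(MPI_Comm comm, int *rank)
{
    printf("MPI_Comm_rank intercepted (C or Fortran caller)\n");
    return PMPI_Comm_rank(comm, rank);
}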


Thanks
Edgar

George Bosilca wrote:

Nick,

Thanks for noticing this. It's unbelievable that nobody noticed that 
over the last 5 years. Anyway, I think we have a one line fix for this 
problem. I'll test it asap, and then push it in the 1.3.


  Thanks,
george.

On Dec 5, 2008, at 10:14 , Nick Wright wrote:


Hi Anthony

That will work yes, but it's not portable to other MPIs that do
implement the profiling layer correctly, unfortunately.


I guess we will just need to detect that we are using openmpi when our 
tool is configured and add some macros to deal with that accordingly. 
Is there an easy way to do this built into openmpi?


Thanks

Nick.

Anthony Chan wrote:

Hope I didn't misunderstand your question.  If you implement
your profiling library in C where you do your real instrumentation,
you don't need to implement the fortran layer, you can simply link
with Fortran to C MPI wrapper library -lmpi_f77. i.e.
/bin/mpif77 -o foo foo.f -L/lib -lmpi_f77 -lYourProfClib
where libYourProfClib.a is your profiling tool written in C. If you 
don't want to intercept the MPI call twice for fortran program,

you need to implement the fortran layer.  In that case, I would think you
can just call C version of PMPI_xxx directly from your fortran layer, 
e.g.

void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   *info = PMPI_Comm_rank(comm,rank);
}
A.Chan
- "Nick Wright"  wrote:

Hi

I am trying to use the PMPI interface with OPENMPI to profile a
fortran program.

I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile switched
on.

The problem seems to be that if one e.g. intercepts the call to
mpi_comm_rank_ (the fortran hook) then calls pmpi_comm_rank_ this then


calls MPI_Comm_rank (the C hook) not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran
codes at the same time one ends up intercepting the mpi call twice. 
Which is


not desirable and not what should happen (and indeed doesn't happen in

other MPI implementations).

A simple example to illustrate is below. If somebody knows of a fix to

avoid this issue that would be great !

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
  printf("mpi_comm_rank call successfully intercepted\n");
  pmpi_comm_rank_(comm,rank,info);
}
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
  printf("MPI_comm_rank call successfully intercepted\n");
  PMPI_Comm_rank(comm,rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

  program hello
   implicit none
   include 'mpif.h'
   integer ierr
   integer myid,nprocs
   character*24 fdate,host
   call MPI_Init( ierr )
  myid=0
  call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
  call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
  call getenv('HOST',host)
  write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
  call mpi_finalize(ierr)
  end





--
Edgar Gabriel
Assistant Professor
Parallel Software Technologies Lab  http://pstl.cs.uh.edu
Department of Computer Science  University of Houston
Philip G. Hoffman Hall, Room 524Houston, TX-77204, USA
Tel: +1 (713) 743-3857  Fax: +1 (713) 743-3335


Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Nick Wright

I think we can just look at OPEN_MPI as you say and then

OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION & OMPI_RELEASE_VERSION

from mpi.h, and if the version is less than 1.2.9, implement a workaround as
Anthony suggested. It's not the most elegant solution but it will work I
think?
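
Something along these lines, for example (a sketch: the version macros
are the ones Open MPI's mpi.h defines, 1.2.9 is the fix level discussed
in this thread, and PROFILE_FORTRAN_AT_C_LAYER_ONLY is a made-up
tool-side switch):

#include "mpi.h"

#if defined(OPEN_MPI) && \
    ((OMPI_MAJOR_VERSION < 1) || \
     (OMPI_MAJOR_VERSION == 1 && OMPI_MINOR_VERSION < 2) || \
     (OMPI_MAJOR_VERSION == 1 && OMPI_MINOR_VERSION == 2 && \
      OMPI_RELEASE_VERSION < 9))
/* Older Open MPI: the Fortran PMPI entry re-enters the C MPI layer,
   so profile only at the C interface to avoid counting calls twice. */
#define PROFILE_FORTRAN_AT_C_LAYER_ONLY 1
#endif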


Nick.

Jeff Squyres wrote:

On Dec 5, 2008, at 10:55 AM, David Skinner wrote:


FWIW, if that one-liner fix works (George and I just chatted about this
on the phone), we can probably also push it into v1.2.9.


great! thanks.



It occurs to me that this is likely not going to be enough for you, 
though.  :-\


Like it or not, there are still installed OMPIs out there that will show
this old behavior.  Do you need to know / adapt for those?  If so, I can 
see two ways of you figuring it out:


1. At run time, do a simple call to (Fortran) MPI_INITIALIZED and see if 
you intercept it twice (both in Fortran and in C).


2. If that's not attractive, we can probably add a line into the 
ompi_info output that you can grep for when using OMPI (you can look for 
the OPEN_MPI macro from our  to know if it's Open MPI or not).  
Specifically, this line can be there for the "fixed" versions, and it 
simply won't be there for non-fixed versions.




Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Jeff Squyres

On Dec 5, 2008, at 10:55 AM, David Skinner wrote:

FWIW, if that one-liner fix works (George and I just chatted about
this on the phone), we can probably also push it into v1.2.9.


great! thanks.



It occurs to me that this is likely not going to be enough for you,  
though.  :-\


Like it or not, there are still installed OMPIs out there that will
show this old behavior.  Do you need to know / adapt for those?  If  
so, I can see two ways of you figuring it out:


1. At run time, do a simple call to (Fortran) MPI_INITIALIZED and see
if you intercept it twice (both in Fortran and in C); a sketch of such
a probe follows after these two options.


2. If that's not attractive, we can probably add a line into the  
ompi_info output that you can grep for when using OMPI (you can look  
for the OPEN_MPI macro from our  to know if it's Open MPI or  
not).  Specifically, this line can be there for the "fixed" versions,  
and it simply won't be there for non-fixed versions.
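
For option 1, the probe could look something like this sketch (the
trailing-underscore mangling and the pmpi_initialized_ prototype are
assumptions about the tool's target platform):

#include "mpi.h"

static int probe_hits = 0;

/* The MPI library's own Fortran PMPI entry point. */
void pmpi_initialized_(MPI_Fint *flag, MPI_Fint *ierr);

void mpi_initialized_(MPI_Fint *flag, MPI_Fint *ierr)
{
    probe_hits++;                    /* Fortran wrapper fired */
    pmpi_initialized_(flag, ierr);
}

int MPI_Initialized(int *flag)
{
    probe_hits++;                    /* C wrapper fired */
    return PMPI_Initialized(flag);
}

/* After one MPI_INITIALIZED call from Fortran:
   probe_hits == 2 -> the Fortran PMPI entry re-entered the C MPI layer
                      (old Open MPI behavior: double interception);
   probe_hits == 1 -> the layers are independent (MPICH-style). */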


--
Jeff Squyres
Cisco Systems



Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Jeff Squyres
FWIW, if that one-liner fix works (George and I just chatted about  
this on the phone), we can probably also push it into v1.2.9.



On Dec 5, 2008, at 10:49 AM, George Bosilca wrote:


Nick,

Thanks for noticing this. It's unbelievable that nobody noticed that  
over the last 5 years. Anyway, I think we have a one line fix for  
this problem. I'll test it asap, and then push it in the 1.3.


 Thanks,
   george.

On Dec 5, 2008, at 10:14 , Nick Wright wrote:


Hi Anthony

That will work yes, but it's not portable to other MPIs that do
implement the profiling layer correctly, unfortunately.


I guess we will just need to detect that we are using openmpi when  
our tool is configured and add some macros to deal with that  
accordingly. Is there an easy way to do this built into openmpi?


Thanks

Nick.

Anthony Chan wrote:

Hope I didn't misunderstand your question.  If you implement
your profiling library in C where you do your real instrumentation,
you don't need to implement the fortran layer, you can simply link
with Fortran to C MPI wrapper library -lmpi_f77. i.e.
/bin/mpif77 -o foo foo.f -L/lib -lmpi_f77 -lYourProfClib
where libYourProfClib.a is your profiling tool written in C. If  
you don't want to intercept the MPI call twice for fortran program,

you need to implement the fortran layer.  In that case, I would think you
can just call C version of PMPI_xxx directly from your fortran  
layer, e.g.

void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
  printf("mpi_comm_rank call successfully intercepted\n");
  *info = PMPI_Comm_rank(comm,rank);
}
A.Chan
- "Nick Wright"  wrote:

Hi

I am trying to use the PMPI interface with OPENMPI to profile a
fortran program.

I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile
switched

on.

The problem seems to be that if one e.g. intercepts the call to
mpi_comm_rank_ (the fortran hook) then calls pmpi_comm_rank_ this  
then


calls MPI_Comm_rank (the C hook) not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran
codes at the same time one ends up intercepting the mpi call  
twice. Which is


not desirable and not what should happen (and indeed doesn't  
happen in


other MPI implementations).

A simple example to illustrate is below. If somebody knows of a  
fix to


avoid this issue that would be great !

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
 printf("mpi_comm_rank call successfully intercepted\n");
 pmpi_comm_rank_(comm,rank,info);
}
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
 printf("MPI_comm_rank call successfully intercepted\n");
 PMPI_Comm_rank(comm,rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

 program hello
  implicit none
  include 'mpif.h'
  integer ierr
  integer myid,nprocs
  character*24 fdate,host
  call MPI_Init( ierr )
 myid=0
 call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
 call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
 call getenv('HOST',host)
 write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
 call mpi_finalize(ierr)
 end






--
Jeff Squyres
Cisco Systems



Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread George Bosilca

Nick,

Thanks for noticing this. It's unbelievable that nobody noticed that  
over the last 5 years. Anyway, I think we have a one line fix for this  
problem. I'll test it asap, and then push it in the 1.3.


  Thanks,
george.

On Dec 5, 2008, at 10:14 , Nick Wright wrote:


Hi Anthony

That will work yes, but it's not portable to other MPIs that do
implement the profiling layer correctly, unfortunately.


I guess we will just need to detect that we are using openmpi when  
our tool is configured and add some macros to deal with that  
accordingly. Is there an easy way to do this built into openmpi?


Thanks

Nick.

Anthony Chan wrote:

Hope I didn't misunderstand your question.  If you implement
your profiling library in C where you do your real instrumentation,
you don't need to implement the fortran layer, you can simply link
with Fortran to C MPI wrapper library -lmpi_f77. i.e.
/bin/mpif77 -o foo foo.f -L/lib -lmpi_f77 -lYourProfClib
where libYourProfClib.a is your profiling tool written in C. If you  
don't want to intercept the MPI call twice for fortran program,

you need to implement the fortran layer.  In that case, I would think you
can just call C version of PMPI_xxx directly from your fortran  
layer, e.g.

void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   *info = PMPI_Comm_rank(comm,rank);
}
A.Chan
- "Nick Wright"  wrote:

Hi

I am trying to use the PMPI interface with OPENMPI to profile a
fortran program.

I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile switched
on.

The problem seems to be that if one e.g. intercepts the call to
mpi_comm_rank_ (the fortran hook) then calls pmpi_comm_rank_ this  
then


calls MPI_Comm_rank (the C hook) not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran
codes at the same time one ends up intercepting the mpi call  
twice. Which is


not desirable and not what should happen (and indeed doesn't  
happen in


other MPI implementations).

A simple example to illustrate is below. If somebody knows of a  
fix to


avoid this issue that would be great !

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
  printf("mpi_comm_rank call successfully intercepted\n");
  pmpi_comm_rank_(comm,rank,info);
}
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
  printf("MPI_comm_rank call successfully intercepted\n");
  PMPI_Comm_rank(comm,rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

  program hello
   implicit none
   include 'mpif.h'
   integer ierr
   integer myid,nprocs
   character*24 fdate,host
   call MPI_Init( ierr )
  myid=0
  call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
  call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
  call getenv('HOST',host)
  write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
  call mpi_finalize(ierr)
  end







Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Nick Wright

Hi Anthony

That will work yes, but it's not portable to other MPIs that do
implement the profiling layer correctly, unfortunately.


I guess we will just need to detect that we are using openmpi when our 
tool is configured and add some macros to deal with that accordingly. Is 
there an easy way to do this built into openmpi?


Thanks

Nick.

Anthony Chan wrote:

Hope I didn't misunderstand your question.  If you implement
your profiling library in C where you do your real instrumentation,
you don't need to implement the fortran layer, you can simply link
with Fortran to C MPI wrapper library -lmpi_f77. i.e.

/bin/mpif77 -o foo foo.f -L/lib -lmpi_f77 -lYourProfClib

where libYourProfClib.a is your profiling tool written in C. 
If you don't want to intercept the MPI call twice for fortran program,

you need to implement the fortran layer.  In that case, I would think you
can just call C version of PMPI_xxx directly from your fortran layer, e.g.

void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
printf("mpi_comm_rank call successfully intercepted\n");
*info = PMPI_Comm_rank(comm,rank);
}

A.Chan

- "Nick Wright"  wrote:


Hi

I am trying to use the PMPI interface with OPENMPI to profile a
fortran 
program.


I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile switched
on.

The problem seems to be that if one e.g. intercepts the call to
mpi_comm_rank_ (the fortran hook) then calls pmpi_comm_rank_ this then


calls MPI_Comm_rank (the C hook) not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran
codes 
at the same time one ends up intercepting the mpi call twice. Which is


not desirable and not what should happen (and indeed doesn't happen in

other MPI implementations).

A simple example to illustrate is below. If somebody knows of a fix to

avoid this issue that would be great !

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   pmpi_comm_rank_(comm,rank,info);
}
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
   printf("MPI_comm_rank call successfully intercepted\n");
   PMPI_Comm_rank(comm,rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

   program hello
implicit none
include 'mpif.h'
integer ierr
integer myid,nprocs
character*24 fdate,host
call MPI_Init( ierr )
   myid=0
   call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
   call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
   call getenv('HOST',host)
   write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
   call mpi_finalize(ierr)
   end





Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Anthony Chan

Hope I didn't misunderstand your question.  If you implement
your profiling library in C where you do your real instrumentation,
you don't need to implement the fortran layer; you can simply link
with the Fortran to C MPI wrapper library -lmpi_f77, i.e.

/bin/mpif77 -o foo foo.f -L/lib -lmpi_f77 -lYourProfClib

where libYourProfClib.a is your profiling tool written in C. 
If you don't want to intercept the MPI call twice for a fortran program,
you need to implement the fortran layer.  In that case, I would think you
can just call the C version of PMPI_xxx directly from your fortran layer, e.g.

void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
printf("mpi_comm_rank call successfully intercepted\n");
*info = PMPI_Comm_rank(comm,rank);
}

A.Chan

- "Nick Wright"  wrote:

> Hi
> 
> I am trying to use the PMPI interface with OPENMPI to profile a
> fortran 
> program.
> 
> I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile switched
> on.
> 
> The problem seems to be that if one e.g. intercepts the call to
> mpi_comm_rank_ (the fortran hook) then calls pmpi_comm_rank_ this then
> 
> calls MPI_Comm_rank (the C hook) not PMPI_Comm_rank as it should.
> 
> So if one wants to create a library that can profile C and Fortran
> codes 
> at the same time one ends up intercepting the mpi call twice. Which is
> 
> not desirable and not what should happen (and indeed doesn't happen in
> 
> other MPI implementations).
> 
> A simple example to illustrate is below. If somebody knows of a fix to
> 
> avoid this issue that would be great !
> 
> Thanks
> 
> Nick.
> 
> pmpi_test.c: mpicc pmpi_test.c -c
> 
> #include <stdio.h>
> #include "mpi.h"
> void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
>printf("mpi_comm_rank call successfully intercepted\n");
>pmpi_comm_rank_(comm,rank,info);
> }
> int MPI_Comm_rank(MPI_Comm comm, int *rank) {
>printf("MPI_comm_rank call successfully intercepted\n");
>PMPI_Comm_rank(comm,rank);
> }
> 
> hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o
> 
>program hello
> implicit none
> include 'mpif.h'
> integer ierr
> integer myid,nprocs
> character*24 fdate,host
> call MPI_Init( ierr )
>myid=0
>call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
>call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
>call getenv('HOST',host)
>write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
>call mpi_finalize(ierr)
>end
> 
> 
> 


[OMPI users] Issue with Profiling Fortran code

2008-12-04 Thread Nick Wright

Hi

I am trying to use the PMPI interface with OPENMPI to profile a fortran 
program.


I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile switched on.

The problem seems to be that if one e.g. intercepts the call to
mpi_comm_rank_ (the fortran hook) and then calls pmpi_comm_rank_, this then
calls MPI_Comm_rank (the C hook), not PMPI_Comm_rank as it should.


So if one wants to create a library that can profile C and Fortran codes
at the same time, one ends up intercepting the mpi call twice, which is
not desirable and not what should happen (and indeed doesn't happen in
other MPI implementations).


A simple example to illustrate is below. If somebody knows of a fix to
avoid this issue, that would be great!


Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
  printf("mpi_comm_rank call successfully intercepted\n");
  pmpi_comm_rank_(comm,rank,info);
}
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
  printf("MPI_comm_rank call successfully intercepted\n");
  PMPI_Comm_rank(comm,rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

  program hello
   implicit none
   include 'mpif.h'
   integer ierr
   integer myid,nprocs
   character*24 fdate,host
   call MPI_Init( ierr )
  myid=0
  call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
  call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
  call getenv('HOST',host)
  write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
  call mpi_finalize(ierr)
  end