Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Nick Wright
I think this issue is now resolved, and thanks everybody for your help. I 
certainly learnt a lot!



For the first case you describe, as Open MPI is now, the call sequence
from Fortran is

mpi_comm_rank -> MPI_Comm_rank -> PMPI_Comm_rank

For the second case, as MPICH is now, it's

mpi_comm_rank -> PMPI_Comm_rank



AFAIK, all known/popular MPI implementations' Fortran binding
layers are implemented with the C MPI functions, including
MPICH2 and Open MPI.  If MPICH2's Fortran layer were implemented
the way you said, typical profiling tools, including MPE, would
fail to work with Fortran applications.

e.g. check mpich2-xxx/src/binding/f77/sendf.c.


To answer this specific point, see for example the comment in

src/binding/f77/comm_sizef.c

/* This defines the routine that we call, which must be the PMPI version
   since we're renaming the Fortran entry as the pmpi version */

and the workings of the definition in MPICH:

#ifndef MPICH_MPI_FROM_PMPI

This is what makes MPICH's behaviour different from Open MPI's in this matter.
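
In outline, the pattern in MPICH's f77 binding looks like the sketch
below (a simplification for illustration, assuming trailing-underscore
name mangling; it is not the verbatim MPICH source):

#include "mpi.h"

/* In a profiling-enabled build, MPICH_MPI_FROM_PMPI is left undefined,
   so the Fortran wrapper is compiled to call the PMPI_ routine
   directly, leaving the C MPI_ symbol free for profiling tools. */
#ifndef MPICH_MPI_FROM_PMPI
#define MPI_Comm_size PMPI_Comm_size
#endif

void mpi_comm_size_(MPI_Fint *comm, MPI_Fint *size, MPI_Fint *ierr)
{
    int csize;
    /* With the #define above, this call compiles to PMPI_Comm_size,
       so the C-level MPI_Comm_size hook is never entered. */
    *ierr = (MPI_Fint) MPI_Comm_size(MPI_Comm_f2c(*comm), &csize);
    *size = (MPI_Fint) csize;
}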

Regards, Nick.


A.Chan

So for the first case, if I have a pure Fortran/C++ code, I have to 
profile at the C interface.


So is the patch now retracted?

Nick.

I think you have an incorrect definition of "correctly" :).  According
to the MPI standard, an MPI implementation is free to either layer
language bindings (and only allow profiling at the lowest layer) or not
layer the language bindings (and require profiling libraries to
intercept each language).  The only requirement is that the
implementation document what it has done.


Since everyone is pretty clear on what Open MPI has done, I don't think
you can claim Open MPI is doing it "incorrectly".  Different from MPICH
is not necessarily incorrect.  (BTW, LAM/MPI handles profiling the same
way as Open MPI.)

Brian

On Fri, 5 Dec 2008, Nick Wright wrote:


Hi Anthony

That will work, yes, but it's not portable to other MPIs that do
implement the profiling layer correctly, unfortunately.


I guess we will just need to detect that we are using Open MPI when our
tool is configured and add some macros to deal with that accordingly.

Is there an easy way to do this built into Open MPI?

Thanks

Nick.

Anthony Chan wrote:

Hope I didn't misunderstand your question.  If you implement
your profiling library in C, where you do your real instrumentation,
you don't need to implement the Fortran layer; you can simply link
with the Fortran-to-C MPI wrapper library, -lmpi_f77, i.e.

/bin/mpif77 -o foo foo.f -L/lib -lmpi_f77 -lYourProfClib

where libYourProfClib.a is your profiling tool written in C. If you
don't want to intercept the MPI call twice for Fortran programs,
you need to implement the Fortran layer.  In that case, I would think
you can just call the C version of PMPI_xxx directly from your Fortran
layer, e.g.

void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
    printf("mpi_comm_rank call successfully intercepted\n");
    *info = PMPI_Comm_rank(*comm, rank);
}

A.Chan

- "Nick Wright" <nwri...@sdsc.edu> wrote:


Hi

I am trying to use the PMPI interface with Open MPI to profile a
Fortran program.

I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile switched
on.

The problem seems to be that if one e.g. intercepts the call to
mpi_comm_rank_ (the Fortran hook) and then calls pmpi_comm_rank_, this
then calls MPI_Comm_rank (the C hook), not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran
codes at the same time, one ends up intercepting the MPI call twice,
which is not desirable and not what should happen (and indeed doesn't
happen in other MPI implementations).

A simple example to illustrate is below. If somebody knows of a fix to
avoid this issue, that would be great!

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"

/* Declaration of the Fortran PMPI entry point we forward to. */
void pmpi_comm_rank_(MPI_Comm *comm, int *rank, int *info);

/* Fortran hook */
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   pmpi_comm_rank_(comm, rank, info);
}

/* C hook */
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
   printf("MPI_Comm_rank call successfully intercepted\n");
   return PMPI_Comm_rank(comm, rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

      program hello
      implicit none
      include 'mpif.h'
      integer ierr
      integer myid,nprocs
      character*24 fdate,host
      call MPI_Init( ierr )
      myid=0
      call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
      call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
      call getenv('HOST',host)
      write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
      call mpi_finalize(ierr)
      end




Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Nick Wright

Brian

Sorry, I picked the wrong word there. I guess this is more complicated 
than I thought it was.


For the first case you describe, as Open MPI is now, the call sequence
from Fortran is

mpi_comm_rank -> MPI_Comm_rank -> PMPI_Comm_rank

For the second case, as MPICH is now, it's

mpi_comm_rank -> PMPI_Comm_rank

So for the first case, if I have a pure Fortran/C++ code, I have to
profile at the C interface.


So is the patch now retracted?

Nick.

I think you have an incorrect definition of "correctly" :).  According
to the MPI standard, an MPI implementation is free to either layer
language bindings (and only allow profiling at the lowest layer) or not
layer the language bindings (and require profiling libraries to
intercept each language).  The only requirement is that the
implementation document what it has done.


Since everyone is pretty clear on what Open MPI has done, I don't think
you can claim Open MPI is doing it "incorrectly".  Different from MPICH
is not necessarily incorrect.  (BTW, LAM/MPI handles profiling the same
way as Open MPI.)


Brian

On Fri, 5 Dec 2008, Nick Wright wrote:


Hi Anthony

That will work, yes, but it's not portable to other MPIs that do
implement the profiling layer correctly, unfortunately.


I guess we will just need to detect that we are using Open MPI when our
tool is configured and add some macros to deal with that accordingly.
Is there an easy way to do this built into Open MPI?


Thanks

Nick.

Anthony Chan wrote:

Hope I didn't misunderstand your question.  If you implement
your profiling library in C, where you do your real instrumentation,
you don't need to implement the Fortran layer; you can simply link
with the Fortran-to-C MPI wrapper library, -lmpi_f77, i.e.

/bin/mpif77 -o foo foo.f -L/lib -lmpi_f77 -lYourProfClib

where libYourProfClib.a is your profiling tool written in C. If you
don't want to intercept the MPI call twice for Fortran programs,
you need to implement the Fortran layer.  In that case, I would think
you can just call the C version of PMPI_xxx directly from your Fortran
layer, e.g.

void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
    printf("mpi_comm_rank call successfully intercepted\n");
    *info = PMPI_Comm_rank(*comm, rank);
}

A.Chan

- "Nick Wright" <nwri...@sdsc.edu> wrote:


Hi

I am trying to use the PMPI interface with Open MPI to profile a
Fortran program.

I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile switched
on.

The problem seems to be that if one e.g. intercepts the call to
mpi_comm_rank_ (the Fortran hook) and then calls pmpi_comm_rank_, this
then calls MPI_Comm_rank (the C hook), not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran
codes at the same time, one ends up intercepting the MPI call twice,
which is not desirable and not what should happen (and indeed doesn't
happen in other MPI implementations).

A simple example to illustrate is below. If somebody knows of a fix to
avoid this issue, that would be great!

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"

/* Declaration of the Fortran PMPI entry point we forward to. */
void pmpi_comm_rank_(MPI_Comm *comm, int *rank, int *info);

/* Fortran hook */
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   pmpi_comm_rank_(comm, rank, info);
}

/* C hook */
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
   printf("MPI_Comm_rank call successfully intercepted\n");
   return PMPI_Comm_rank(comm, rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

      program hello
      implicit none
      include 'mpif.h'
      integer ierr
      integer myid,nprocs
      character*24 fdate,host
      call MPI_Init( ierr )
      myid=0
      call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
      call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
      call getenv('HOST',host)
      write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
      call mpi_finalize(ierr)
      end








Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Nick Wright
I hope you are aware that *many* tools and applications actually profile 
the Fortran MPI layer by intercepting the C function calls. This allows 
them to avoid dealing with the f2c translation of MPI objects and to not 
worry about the name-mangling issue. Would there be a way to have both 
options, e.g. as a configure flag? The current commit basically breaks 
all of these applications...
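
For example, such a tool defines only the C hook, along these lines (a
minimal sketch):

#include <stdio.h>
#include "mpi.h"

/* Only the C hook is defined: no Fortran wrapper, no f2c handle
   conversion, no name-mangling concerns.  With layered bindings
   (Open MPI's behavior so far), Fortran mpi_comm_rank calls funnel
   through here as well. */
int MPI_Comm_rank(MPI_Comm comm, int *rank)
{
    int ret = PMPI_Comm_rank(comm, rank);
    printf("MPI_Comm_rank intercepted (rank %d)\n", *rank);
    return ret;
}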


Edgar,

I haven't seen the fix, so I can't comment on that.

Anyway, in general this can't be true. Such a profiling tool would
*only* work with Open MPI if it were written that way today. I guess
such a fix will break Open MPI-specific tools (are there any?).


For MPICH, for example, one must provide a hook into e.g.
mpi_comm_rank_, as that calls PMPI_Comm_rank (as it should), and thus
if one were only intercepting C calls one would not see any Fortran
profiling information.


Nick.



George Bosilca wrote:

Nick,

Thanks for noticing this. It's unbelievable that nobody noticed that 
over the last 5 years. Anyway, I think we have a one-line fix for this 
problem. I'll test it asap, and then push it into 1.3.


  Thanks,
george.

On Dec 5, 2008, at 10:14 , Nick Wright wrote:


Hi Anthony

That will work, yes, but it's not portable to other MPIs that do
implement the profiling layer correctly, unfortunately.


I guess we will just need to detect that we are using Open MPI when
our tool is configured and add some macros to deal with that
accordingly. Is there an easy way to do this built into Open MPI?


Thanks

Nick.

Anthony Chan wrote:

Hope I didn't misunderstand your question.  If you implement
your profiling library in C, where you do your real instrumentation,
you don't need to implement the Fortran layer; you can simply link
with the Fortran-to-C MPI wrapper library, -lmpi_f77, i.e.
/bin/mpif77 -o foo foo.f -L/lib -lmpi_f77 -lYourProfClib
where libYourProfClib.a is your profiling tool written in C. If you
don't want to intercept the MPI call twice for Fortran programs,
you need to implement the Fortran layer.  In that case, I would think
you can just call the C version of PMPI_xxx directly from your Fortran
layer, e.g.

void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   *info = PMPI_Comm_rank(*comm, rank);
}
A.Chan
----- "Nick Wright" <nwri...@sdsc.edu> wrote:

Hi

I am trying to use the PMPI interface with Open MPI to profile a
Fortran program.

I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile switched
on.

The problem seems to be that if one e.g. intercepts the call to
mpi_comm_rank_ (the Fortran hook) and then calls pmpi_comm_rank_, this
then calls MPI_Comm_rank (the C hook), not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran
codes at the same time, one ends up intercepting the MPI call twice,
which is not desirable and not what should happen (and indeed doesn't
happen in other MPI implementations).

A simple example to illustrate is below. If somebody knows of a fix to
avoid this issue, that would be great!

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"

/* Declaration of the Fortran PMPI entry point we forward to. */
void pmpi_comm_rank_(MPI_Comm *comm, int *rank, int *info);

/* Fortran hook */
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   pmpi_comm_rank_(comm, rank, info);
}

/* C hook */
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
   printf("MPI_Comm_rank call successfully intercepted\n");
   return PMPI_Comm_rank(comm, rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

      program hello
      implicit none
      include 'mpif.h'
      integer ierr
      integer myid,nprocs
      character*24 fdate,host
      call MPI_Init( ierr )
      myid=0
      call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
      call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
      call getenv('HOST',host)
      write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
      call mpi_finalize(ierr)
      end









Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Nick Wright

I think we can just look at OPEN_MPI, as you say, and then

OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION & OMPI_RELEASE_VERSION

from mpi.h, and if the version is less than 1.2.9, implement a
workaround as Anthony suggested. It's not the most elegant solution,
but it will work, I think?
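
Something like the following compile-time probe would do it (a sketch,
using only the OPEN_MPI and OMPI_*_VERSION macros from Open MPI's
mpi.h; the NEED_OMPI_FORTRAN_WORKAROUND name is made up here):

#include "mpi.h"

/* Open MPI releases before 1.2.9 route the Fortran PMPI entry points
   back through the C MPI_ symbols, so the workaround is needed there. */
#if defined(OPEN_MPI)
#  if (OMPI_MAJOR_VERSION < 1) || \
      (OMPI_MAJOR_VERSION == 1 && OMPI_MINOR_VERSION < 2) || \
      (OMPI_MAJOR_VERSION == 1 && OMPI_MINOR_VERSION == 2 && \
       OMPI_RELEASE_VERSION < 9)
#    define NEED_OMPI_FORTRAN_WORKAROUND 1
#  endif
#endif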


Nick.

Jeff Squyres wrote:

On Dec 5, 2008, at 10:55 AM, David Skinner wrote:


FWIW, if that one-liner fix works (George and I just chatted about this
on the phone), we can probably also push it into v1.2.9.


great! thanks.



It occurs to me that this is likely not going to be enough for you,
though.  :-\


Like it or not, there are still installed OMPIs out there that will show
this old behavior.  Do you need to know / adapt for those?  If so, I can
see two ways of you figuring it out:


1. At run time, do a simple call to (Fortran) MPI_INITIALIZED and see if
you intercept it twice (both in Fortran and in C); see the sketch after
this list.


2. If that's not attractive, we can probably add a line into the
ompi_info output that you can grep for when using OMPI (you can look for
the OPEN_MPI macro from our mpi.h to know if it's Open MPI or not).
Specifically, this line can be there for the "fixed" versions, and it
simply won't be there for non-fixed versions.
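
A sketch of option 1, assuming the usual trailing-underscore Fortran
name mangling (the probe function name is made up here):

#include <stdio.h>
#include "mpi.h"

static int c_hook_count = 0;

/* C-level intercept: counts how often the C hook fires. */
int MPI_Initialized(int *flag)
{
    c_hook_count++;
    return PMPI_Initialized(flag);
}

/* Fortran PMPI entry point provided by the MPI library. */
extern void pmpi_initialized_(int *flag, int *ierr);

/* Returns nonzero if the Fortran bindings are layered on the C MPI_
   symbols (the old Open MPI behavior), i.e. the C hook fired while the
   Fortran PMPI entry was being driven.  MPI_INITIALIZED may be called
   before MPI_Init, so the probe can run at tool start-up. */
int fortran_layer_calls_c_hooks(void)
{
    int flag, ierr;
    c_hook_count = 0;
    pmpi_initialized_(&flag, &ierr);
    return c_hook_count > 0;
}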




Re: [OMPI users] Issue with Profiling Fortran code

2008-12-05 Thread Nick Wright

Hi Anthony

That will work, yes, but it's not portable to other MPIs that do
implement the profiling layer correctly, unfortunately.


I guess we will just need to detect that we are using Open MPI when our
tool is configured and add some macros to deal with that accordingly. Is
there an easy way to do this built into Open MPI?


Thanks

Nick.

Anthony Chan wrote:

Hope I didn't misunderstand your question.  If you implement
your profiling library in C, where you do your real instrumentation,
you don't need to implement the Fortran layer; you can simply link
with the Fortran-to-C MPI wrapper library, -lmpi_f77, i.e.

/bin/mpif77 -o foo foo.f -L/lib -lmpi_f77 -lYourProfClib

where libYourProfClib.a is your profiling tool written in C.
If you don't want to intercept the MPI call twice for Fortran programs,
you need to implement the Fortran layer.  In that case, I would think
you can just call the C version of PMPI_xxx directly from your Fortran
layer, e.g.

void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
    printf("mpi_comm_rank call successfully intercepted\n");
    *info = PMPI_Comm_rank(*comm, rank);
}

A.Chan

----- "Nick Wright" <nwri...@sdsc.edu> wrote:


Hi

I am trying to use the PMPI interface with Open MPI to profile a
Fortran program.

I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile switched
on.

The problem seems to be that if one e.g. intercepts the call to
mpi_comm_rank_ (the Fortran hook) and then calls pmpi_comm_rank_, this
then calls MPI_Comm_rank (the C hook), not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran
codes at the same time, one ends up intercepting the MPI call twice,
which is not desirable and not what should happen (and indeed doesn't
happen in other MPI implementations).

A simple example to illustrate is below. If somebody knows of a fix to
avoid this issue, that would be great!

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"

/* Declaration of the Fortran PMPI entry point we forward to. */
void pmpi_comm_rank_(MPI_Comm *comm, int *rank, int *info);

/* Fortran hook */
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   pmpi_comm_rank_(comm, rank, info);
}

/* C hook */
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
   printf("MPI_Comm_rank call successfully intercepted\n");
   return PMPI_Comm_rank(comm, rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

      program hello
      implicit none
      include 'mpif.h'
      integer ierr
      integer myid,nprocs
      character*24 fdate,host
      call MPI_Init( ierr )
      myid=0
      call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
      call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
      call getenv('HOST',host)
      write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
      call mpi_finalize(ierr)
      end





[OMPI users] Issue with Profiling Fortran code

2008-12-04 Thread Nick Wright

Hi

I am trying to use the PMPI interface with Open MPI to profile a
Fortran program.

I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile switched on.

The problem seems to be that if one e.g. intercepts the call to
mpi_comm_rank_ (the Fortran hook) and then calls pmpi_comm_rank_, this
then calls MPI_Comm_rank (the C hook), not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran codes
at the same time, one ends up intercepting the MPI call twice, which is
not desirable and not what should happen (and indeed doesn't happen in
other MPI implementations).

A simple example to illustrate is below. If somebody knows of a fix to
avoid this issue, that would be great!


Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"

/* Declaration of the Fortran PMPI entry point we forward to. */
void pmpi_comm_rank_(MPI_Comm *comm, int *rank, int *info);

/* Fortran hook */
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   pmpi_comm_rank_(comm, rank, info);
}

/* C hook */
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
   printf("MPI_Comm_rank call successfully intercepted\n");
   return PMPI_Comm_rank(comm, rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

      program hello
      implicit none
      include 'mpif.h'
      integer ierr
      integer myid,nprocs
      character*24 fdate,host
      call MPI_Init( ierr )
      myid=0
      call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
      call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
      call getenv('HOST',host)
      write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
      call mpi_finalize(ierr)
      end