On Dec 5, 2008, at 6:58 PM, Anthony Chan wrote:
AFAIK, all known/popular MPI implementations' Fortran binding
layers are implemented with C MPI functions, including
MPICH2 and OpenMPI. If MPICH2's Fortran layer were implemented
the way you said, typical profiling tools, including MPE, would
fail to work.
I think this issue is now resolved and thanks everybody for your help. I
certainly learnt a lot!
For the first case you describe, as OPENMPI is now, the call sequence
from fortran is
mpi_comm_rank -> MPI_Comm_rank -> PMPI_Comm_rank
For the second case, as MPICH is now, it's
mpi_comm_rank -> PMPI_Comm_rank
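To illustrate why the chain matters, here is a minimal sketch (not from the thread) of a profiling wrapper written only at the C layer: with the first chain it also intercepts calls coming from Fortran, while with the second chain Fortran calls bypass it entirely.

#include <stdio.h>
#include <mpi.h>

/* Sketch: intercept MPI_Comm_rank at the C layer only. */
int MPI_Comm_rank(MPI_Comm comm, int *rank)
{
    printf("MPI_Comm_rank intercepted at the C layer\n");
    return PMPI_Comm_rank(comm, rank);  /* forward to the real routine */
}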
Hi George,
- "George Bosilca" wrote:
> On Dec 5, 2008, at 03:16, Anthony Chan wrote:
>
> > void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
> >     printf("mpi_comm_rank call successfully intercepted\n");
> >     *info = PMPI_Comm_rank(comm,rank);
> > }
>
> Unfortunately this example is not correct.
Hi Nick,
- "Nick Wright" wrote:
> For the first case you describe, as OPENMPI is now, the call sequence
>
> from fortran is
>
> mpi_comm_rank -> MPI_Comm_rank -> PMPI_Comm_rank
>
> For the second case, as MPICH is now, it's
>
> mpi_comm_rank -> PMPI_Comm_rank
>
AFAIK, all known/popular MPI implementations' Fortran binding layers are
implemented with C MPI functions, including MPICH2 and OpenMPI.
Hi Nick,
- "Nick Wright" wrote:
> Hi Antony
>
> That will work yes, but it's not portable to other MPIs that do
> implement the profiling layer correctly, unfortunately.
I guess I must have missed something here. What is not portable?
>
> I guess we will just need to detect that we are using openmpi when our
> tool is configured and add some macros to deal with that accordingly.
After spending a few hours pondering this problem, we came to the
conclusion that the best approach is to keep what we had before (i.e.
the original approach). This means I'll undo my patch in the trunk,
and not change the behavior on the next releases (1.3 and 1.2.9). This
approach, wh
On Dec 5, 2008, at 03:16, Anthony Chan wrote:
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
    printf("mpi_comm_rank call successfully intercepted\n");
    *info = PMPI_Comm_rank(comm,rank);
}
Unfortunately this example is not correct. The real Fortran prototype
for the MPI_Comm_rank
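As a rough sketch of the point being made here, a Fortran-level wrapper has to use the handle types the Fortran binding actually passes (MPI_Fint) and convert them before calling the C PMPI routine; the trailing-underscore symbol name below is an assumption about a typical compiler's name mangling, not text from the thread.

#include <stdio.h>
#include <mpi.h>

/* Sketch of a Fortran-level wrapper using MPI_Fint handles. */
void mpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *ierr)
{
    MPI_Comm c_comm = MPI_Comm_f2c(*comm);  /* convert the Fortran handle */
    int c_rank;

    printf("mpi_comm_rank call successfully intercepted\n");
    *ierr = (MPI_Fint) PMPI_Comm_rank(c_comm, &c_rank);
    *rank = (MPI_Fint) c_rank;
}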
Brian
Sorry I picked the wrong word there. I guess this is more complicated
than I thought it was.
For the first case you describe, as OPENMPI is now, the call sequence
from fortran is
mpi_comm_rank -> MPI_Comm_rank -> PMPI_Comm_rank
For the second case, as MPICH is now, it's
mpi_comm_rank -> PMPI_Comm_rank
Nick -
I think you have an incorrect definition of "correctly" :). According to
the MPI standard, an MPI implementation is free to either layer language
bindings (and only allow profiling at the lowest layer) or not layer the
language bindings (and require profiling libraries intercept each
I hope you are aware that *many* tools and applications actually profile
the Fortran MPI layer by intercepting the C function calls. This allows
them to not have to deal with f2c translation of MPI objects and not
worry about the name mangling issue. Would there be a way to have both
options e
Actually I am wondering whether my previous statement was correct. If
you do not intercept the Fortran MPI call, then it still goes to the C
MPI call, which you can intercept. Only if you intercept the Fortran MPI
call do we not call the C MPI but the C PMPI call, correct? So in
theory, it coul
On Dec 5, 2008, at 12:22 PM, Edgar Gabriel wrote:
I hope you are aware that *many* tools and applications actually
profile the Fortran MPI layer by intercepting the C function calls.
This allows them to not have to deal with f2c translation of MPI
objects and not worry about the name mangling issue.
On Dec 5, 2008, at 11:29 AM, Nick Wright wrote:
I think we can just look at OPEN_MPI as you say and then
OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION & OMPI_RELEASE_VERSION
from mpi.h, and if the version is less than 1.2.9 implement a workaround
as Antony suggested. It's not the most elegant solution but it will work
I think?
George,
I hope you are aware that *many* tools and applications actually profile
the Fortran MPI layer by intercepting the C function calls. This allows
them to not have to deal with f2c translation of MPI objects and not
worry about the name mangling issue. Would there be a way to have both
I think we can just look at OPEN_MPI as you say and then
OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION & OMPI_RELEASE_VERSION
from mpi.h, and if the version is less than 1.2.9 implement a workaround as
Antony suggested. It's not the most elegant solution but it will work I
think?
Nick.
Jeff Squyres wrote:
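A compile-time sketch of the check Nick describes, assuming Open MPI's mpi.h is the one being included (it defines OPEN_MPI and the OMPI_*_VERSION macros); the workaround macro name below is hypothetical.

#include <mpi.h>

#if defined(OPEN_MPI)
#  if (OMPI_MAJOR_VERSION < 1) || \
      (OMPI_MAJOR_VERSION == 1 && OMPI_MINOR_VERSION < 2) || \
      (OMPI_MAJOR_VERSION == 1 && OMPI_MINOR_VERSION == 2 && \
       OMPI_RELEASE_VERSION < 9)
#    define NEED_OMPI_FORTRAN_WORKAROUND 1  /* hypothetical macro */
#  endif
#endif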
On Dec 5, 2008, at 10:55 AM, David Skinner wrote:
FWIW, if that one-liner fix works (George and I just chatted about this
on the phone), we can probably also push it into v1.2.9.
great! thanks.
It occurs to me that this is likely not going to be enough for you,
though. :-\
Like it or
FWIW, if that one-liner fix works (George and I just chatted about
this on the phone), we can probably also push it into v1.2.9.
On Dec 5, 2008, at 10:49 AM, George Bosilca wrote:
Nick,
Thanks for noticing this. It's unbelievable that nobody noticed that
over the last 5 years. Anyway, I think we have a one line fix for this
problem.
Nick,
Thanks for noticing this. It's unbelievable that nobody noticed that
over the last 5 years. Anyway, I think we have a one line fix for this
problem. I'll test it asap, and then push it in the 1.3.
Thanks,
george.
On Dec 5, 2008, at 10:14, Nick Wright wrote:
Hi Antony
That will work yes, but it's not portable to other MPIs that do
implement the profiling layer correctly, unfortunately.
Hi Antony
That will work yes, but it's not portable to other MPIs that do
implement the profiling layer correctly, unfortunately.
I guess we will just need to detect that we are using openmpi when our
tool is configured and add some macros to deal with that accordingly. Is
there an easy way t
Hope I didn't misunderstand your question. If you implement
your profiling library in C, where you do your real instrumentation,
you don't need to implement the Fortran layer; you can simply link
with the Fortran-to-C MPI wrapper library, -lmpi_f77, i.e.
/bin/mpif77 -o foo foo.f -L/lib -lmpi_f77 -lYou
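To make the suggestion concrete, here is a sketch of such a C-only profiling library and the kind of link line being described; the file and library names (myprof.c, -lmyprof) are placeholders, not from the thread.

/* myprof.c -- C-only profiling library, built e.g. with
 *   mpicc -c myprof.c && ar rcs libmyprof.a myprof.o
 * and linked along the lines of the command above, with -lmyprof after
 * -lmpi_f77.  Because the Fortran wrappers in -lmpi_f77 call the C MPI_*
 * entry points, the wrapper below also sees calls made from Fortran, with
 * no name mangling or f2c handle conversion needed in the tool itself. */
#include <stdio.h>
#include <mpi.h>

int MPI_Init(int *argc, char ***argv)
{
    printf("MPI_Init intercepted\n");
    return PMPI_Init(argc, argv);  /* forward to the real MPI_Init */
}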