On Tue 2008-03-25 12:11, Lisandro Dalcin wrote:
> Well, then that would mean that I was using ML through PETSc in
> PARALLEL runs with no MPI support !!! Do you believe that scenario is
> possible?
No, ML was built correctly. The build output has -DHAVE_CONFIG_H on every build
line. What *is* happening [...]
On Tue 2008-03-25 10:21, Lisandro Dalcin wrote:
> However, I still do not understand why I never had this problem. Jed,
> did you build ML yourself, or were you letting PETSc automatically
> download and build it? Or perhaps I did not notice the problem
> because of MPICH2?
I let PETSc build it for [...]
OK. Now all is clear to me. Sorry about my confusion. So I have to
conclude that the ML machinery for including headers is a bit broken, I
think. Many thanks for your explanation.
On 3/25/08, Jed Brown wrote:
> No, ML was built correctly. The build output has -DHAVE_CONFIG_H on every
> build line. [...]
Well, then that would mean that I was using ML through PETSc in
PARALLEL runs with no MPI support !!! Do you believe that scenario is
possible?
Looking at the ML configure script and the generated makefiles, there
is a line in them saying:
DEFS = -DHAVE_CONFIG_H
Do you have that line? Next, this $(DEFS) should end up on every
compile line [...]
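If it helps, my understanding of why that -DHAVE_CONFIG_H matters is
roughly the sketch below. The guard layout is my guess at what ML's
headers do, not the literal sources; ml_config.h and HAVE_MPI are the
names that come up elsewhere in this thread.

  /* rough sketch, not the literal ML sources */
  #ifdef HAVE_CONFIG_H
  #include "ml_config.h"   /* generated by ML's configure; defines HAVE_MPI */
  #endif

  #ifdef HAVE_MPI
  #define ML_MPI           /* enables the MPI code paths in ML */
  #endif

So a translation unit compiled without -DHAVE_CONFIG_H (and without
defining it itself) never sees ml_config.h and silently falls back to
the serial configuration.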
I think our usage of ML is broken. We use '#include "ml_config.h"' in
ml.c. However, it looks like ML requires HAVE_CONFIG_H defined for
other things as well.
So the correct fix is to change the above line to:
#define HAVE_CONFIG_H
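With that change, the relevant part of ml.c would look roughly like the
sketch below; I am assuming the ML umbrella header (ml_include.h) is
what gets included next, so adjust to the actual includes in the file.

  #define HAVE_CONFIG_H      /* was: #include "ml_config.h" */
  #include <ml_include.h>    /* ML's headers now pull in ml_config.h themselves,
                                so everything gated on HAVE_CONFIG_H is enabled */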
Satish
On Tue, 25 Mar 2008, Lisandro Dalcin wrote:
> OK. Now all is clear to me. Sorry about my confusion. [...]
However, I still do not understand why I never had this problem. Jed,
did you build ML yourself, or were you letting PETSc automatically
download and build it? Or perhaps I did not notice the problem
because of MPICH2?
On 3/25/08, Barry Smith wrote:
>
> I have pushed this fix to petsc-dev
>
>
On Sat 2008-03-22 10:19, Lisandro Dalcin wrote:
> Give it a try. When using MPICH2, PETSc just passes
> "--with-mpi=PATH_TO_MPI" and ML gets it right. Perhaps ML has some
> trouble with OpenMPI; I've never tried. If you built OpenMPI yourself
> and with shared libs, do not forget to define LD_LIBRARY_PATH [...]
I have pushed this fix to petsc-dev
Thank you for figuring this out,
Barry
On Mar 25, 2008, at 2:38 AM, Jed Brown wrote:
> On Sat 2008-03-22 10:19, Lisandro Dalcin wrote:
>> Give it a try. When using MPICH2, PETSc just passes
>> "--with-mpi=PATH_TO_MPI" and ML gets it right. Perhaps ML [...]
Give it a try. When using MPICH2, PETSc just passes
"--with-mpi=PATH_TO_MPI" and ML gets it right. Perhaps ML has some
trouble with OpenMPI; I've never tried. If you built OpenMPI yourself
and with shared libs, do not forget to define LD_LIBRARY_PATH to point
to the dir with the OpenMPI libs. If not, [...]
On Fri 2008-03-21 19:31, Lisandro Dalcin wrote:
> Mmm... I believe this is a configuration issue... if ML_MPI were
> defined, then ML_USR_COMM should be MPI_Comm. But the problem is
> perhaps on the ML side, not the PETSc side.
>
> "ml_common.h" #defines ML_MPI if the macro HAVE_MPI is defined. In turn [...]
The MPI standard does not specify that MPI_Comm = int, and in fact
OpenMPI uses a pointer value, which lets the compiler do slightly more
type checking. This type checking recently caused me trouble when
building with --download-ml.
There is a line in ml_comm.h which defines their communicator to be [...]
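Roughly, the pattern is the one sketched below; the exact spelling in
ml_comm.h is my reconstruction from memory, using the ML_MPI and
ML_USR_COMM names that come up in this thread.

  #ifdef ML_MPI
  #include <mpi.h>
  #define ML_USR_COMM MPI_Comm   /* a real MPI communicator */
  #else
  #define ML_USR_COMM int        /* serial stand-in */
  #endif

With MPICH2, MPI_Comm happens to be an int, so code built against the
serial fallback still compiles; with OpenMPI, MPI_Comm is a pointer
type, and the mismatch becomes a compile-time error, which is how I ran
into it.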
Mmm... I believe this is a configuration issue... if ML_MPI were
defined, then ML_USR_COMM should be MPI_Comm. But the problem is
perhaps on the ML side, not the PETSc side.
"ml_common.h" #defines ML_MPI if the macro HAVE_MPI is defined. In turn,
HAVE_MPI is defined in "ml_config.h", and that file is surely generated [...]
Jed,
You can take a look at config/PETSc/packages/ml.py; essentially we
call their configure with a given set of compilers (and MPI
information).
So I would say you have to report the bug to those folks; their
configure should handle that issue, shouldn't it?
Barry
On Mar 21, [...]