Well, then that would mean that I was using ML through PETSc in PARALLEL runs with no MPI support!!! Do you believe that scenario is possible?
Looking at the ML configure script and the generated makefiles, there is a line saying:

DEFS = -DHAVE_CONFIG_H

Do you have that line? Next, this $(DEFS) is included in the compiler command definition. Additionally, I ran

$ nm -C libml.a | grep MPI

and undefined references to the MPI functions appeared, as expected. Sorry about my insistence, but I believe we need to figure out exactly what's going on.

On 3/25/08, Jed Brown <jed at 59a2.org> wrote:
> I let PETSc build it for me. I think you did not notice the problem because
> MPICH2 defines MPI_Comm to be an int, which happens to be the same as used by
> ML in their dummy MPI, so there is no type mismatch. From the contents of
> ml_common.h, it looks like you would still run into trouble if you were using
> optional features of ML. The reason I like OpenMPI is exactly this stronger
> type checking, and that it seems to crash sooner when I have a bug.
>
> Jed

-- 
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594