Dear libmesh users,

I am trying to link MATLAB and libMesh. I wrote a very simple mex file,
"mex_loadlibmesh.cxx" (see below), to test the initial call of libMesh
from MATLAB. The problem is that my mex file never returns! I suspect of
course a problem related to MPI, but I have no idea how to fix it. Some
additional information: I am using a standard libMesh installation
(+MPICH2+PETSC+SLEPC) on 64-bit Linux. All the libMesh examples run fine
with one or more processors. You will find below the console output with
some details on my installation.

I compiled this file on the command line (outside of MATLAB) with the
following instructions, which are the standard ones for building mex
files:
Any suggestions are welcome,

best regards,

Edouard.

#-----------------------------------------------------------------------------
# Compile
#-----------------------------------------------------------------------------
HOMEMATLAB=/usr/local/matlab
MATLABCOMPILE="-I${HOMEMATLAB}/extern/include -DMATLAB_MEX_FILE -ansi -D_GNU_SOURCE -fPIC -fno-omit-frame-pointer -pthread -DTETLIBRARY -DMX_COMPAT_32 -DNDEBUG -O -w"
MATLABLINK="-O -pthread -shared -Wl,--version-script,${HOMEMATLAB}/extern/lib/glnxa64/mexFunction.map -Wl,--no-undefined mexversion.o -Wl,-rpath-link,${HOMEMATLAB}/bin/glnxa64 -L${HOMEMATLAB}/bin/glnxa64 -lmx -lmex -lmat -lm"
FILETOCOMPILE=mex_loadlibmesh
${LIBMESH_CXX} -c ${HOMEMATLAB}/extern/src/mexversion.c ${MATLABCOMPILE}
${LIBMESH_CXX} -c ${FILETOCOMPILE}.cxx ${MATLABCOMPILE} ${LIBMESH_CXXFLAGS} ${LIBMESH_INCLUDEPERSO}
${LIBMESH_CXX} ${FILETOCOMPILE}.o ${MATLABLINK} ${LIBMESH_CXXFLAGS} ${LIBMESH_LIBS} ${LIBMESH_LDFLAGS} ${LIBMESH_INCLUDEPERSO} -o ${FILETOCOMPILE}.mexa64
#-----------------------------------------------------------------------------


//-----------------------------------------------------------------------------
// mex_loadlibmesh.cxx
//-----------------------------------------------------------------------------

#include <cstring>
#include <iostream>
// Functions to initialize the library.
#include "libmesh.h"
// MATLAB include
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
  // Build a fake argc/argv pair for libMesh; the literals are copied
  // into heap buffers so that argv holds writable strings.
  int argc    = 2;
  char **argv = new char*[argc];
  argv[0] = new char[20];
  std::strcpy(argv[0], "test");
  argv[1] = new char[20];
  std::strcpy(argv[1], "-log_summary");

  // Initialize libMesh and the dependent libraries.
  LibMeshInit init(argc, argv);
  std::cout << "LibMeshInit done" << std::endl;
  // init goes out of scope here, which triggers MPI_Finalize.
}
//-----------------------------------------------------------------------------

#-----------------------------------------------------------------------------
# OUTPUT
#-----------------------------------------------------------------------------

>> mex_loadlibmesh
LibMeshInit done
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

test on a linux-gnu named lama.univ-savoie.fr with 1 processor, by oudet Tue Mar 16 11:55:07 2010
Using Petsc Release Version 3.0.0, Patch 11, Mon Feb  1 11:01:51 CST 2010

                         Max       Max/Min        Avg      Total
Time (sec):           2.091e-04      1.00000   2.091e-04
Objects:              0.000e+00      0.00000   0.000e+00
Flops:                0.000e+00      0.00000   0.000e+00  0.000e+00
Flops/sec:            0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       0.000e+00      0.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 2.0695e-04  99.0%  0.0000e+00   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions   Memory  Descendants' Mem.

--- Event Stage 0: Main Stage

========================================================================================================================
Average time to get PetscTime(): 0
#PETSc Option Table entries:
-log_summary
#End o PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8
Configure run at: Mon Mar 15 08:51:22 2010
Configure options: --with-cc=gcc --with-fc=gfortran --with-mpi-compilers=0 
--with-shared=1 --with-debugging=0 --with-superlu=1 --download-superlu=1 
--with-superlu_dist=1 --download-superlu_dist=1 --with-umfpack=1 
--download-umfpack=1 --with-spooles=1 --download-spooles=1 --with-mpi 
--download-mpich=1 --download-f-blas-lapack=yes --with-parmetis 
--download-parmetis=1 --with-hypre --download-hypre=1
-----------------------------------------
Libraries compiled on Mon Mar 15 08:51:23 CET 2010 on lama.univ-savoie.fr
Machine characteristics: Linux lama.univ-savoie.fr 2.6.31-20-generic #57-Ubuntu SMP Mon Feb 8 09:02:26 UTC 2010 x86_64 GNU/Linux
Using PETSc directory:  ~/C++/Petsc/petsc-3.0.0-p11
Using PETSc arch: linux-gnu-opt
-----------------------------------------
Using C compiler: ~/C++/Petsc/petsc-3.0.0-p11/linux-gnu-opt/bin/mpicc -fPIC 
-Wall -Wwrite-strings -Wno-strict-aliasing -O  
Using Fortran compiler: ~/C++/Petsc/petsc-3.0.0-p11/linux-gnu-opt/bin/mpif90 
-fPIC  -Wall -Wno-unused-variable -O  
-----------------------------------------
Using include paths: -I/~/C++/Petsc/petsc-3.0.0-p11/linux-gnu-opt/include 
-I~/C++/Petsc/petsc-3.0.0-p11/include 
-I~/C++/Petsc/petsc-3.0.0-p11/linux-gnu-opt/include  
------------------------------------------
Using C linker: ~/C++/Petsc/petsc-3.0.0-p11/linux-gnu-opt/bin/mpicc -fPIC -Wall 
-Wwrite-strings -Wno-strict-aliasing -O
Using Fortran linker: ~/C++/Petsc/petsc-3.0.0-p11/linux-gnu-opt/bin/mpif90 
-fPIC  -Wall -Wno-unused-variable -O
Using libraries: -Wl,-rpath,~/C++/Petsc/petsc-3.0.0-p11/linux-gnu-opt/lib 
-L~/C++/Petsc/petsc-3.0.0-p11/linux-gnu-opt/lib -lpetscts -lpetscsnes 
-lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc        -lX11 
-Wl,-rpath,~/C++/Petsc/petsc-3.0.0-p11/linux-gnu-opt/lib 
-L~/C++/Petsc/petsc-3.0.0-p11/linux-gnu-opt/lib -lspooles -lHYPRE 
-Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.4.1 
-Wl,-rpath,/usr/lib/x86_64-linux-gnu -lmpichcxx -lstdc++ -lsuperlu_dist_2.3 
-lsuperlu_3.1 -lumfpack -lamd -lflapack -lfblas -lparmetis -lmetis -lnsl -lrt 
-Wl,-rpath,~/C++/Petsc/petsc-3.0.0-p11/linux-gnu-opt/lib 
-L~/C++/Petsc/petsc-3.0.0-p11/linux-gnu-opt/lib 
-Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.4.1 
-L/usr/lib/gcc/x86_64-linux-gnu/4.4.1 -Wl,-rpath,/usr/lib/x86_64-linux-gnu 
-L/usr/lib/x86_64-linux-gnu -ldl -lmpich -lpthread -lrt -lgcc_s -lmpichf90 
-lgfortranbegin -lgfortran -lm -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu 
-L/usr/lib/gcc/x86_64-linux-gnu -lm -lmpichcxx -lstdc++ -lmpichcxx -lstdc++ 
-ldl -lmpich -lpthread -lrt -lgcc_s -ldl
------------------------------------------






-- 
Edouard Oudet : http://www.lama.univ-savoie.fr/~oudet/
Université de Savoie
Laboratoire de Mathématiques (LAMA) UMR 5127
Campus Scientifique
73 376 Le-Bourget-Du-Lac
+33 (0)4 79 75 87 65  (office)
+33 (0)4 79 68 82 06  (home)




_______________________________________________
Libmesh-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/libmesh-users
