On 07/09/11 04:24, Vijay S. Mahadevan wrote:

Hi Christophe,

On a slightly related topic: where do you use PETSc specifically inside Gmsh? I haven't looked at the sources in a while but was curious. I will configure the latest source with PETSc soon to understand the details. Any help/pointers will be much appreciated.

Hi Vijay,

Gmsh uses PETSc, e.g., for STL remeshing; see
https://geuz.org/trac/gmsh/wiki/STLRemeshing

Regards,

Dave

--
David Colignon, Ph.D.
Collaborateur Logistique du F.R.S.-FNRS
CÉCI - Consortium des Équipements de Calcul Intensif
ACE - Applied & Computational Electromagnetics
Sart-Tilman B28
Université de Liège
4000 Liège - BELGIQUE
Tél: +32 (0)4 366 37 32
Fax: +32 (0)4 366 29 10
WWW:    http://www.ceci-hpc.be/
Agenda: http://www.google.com/calendar/embed?src=david.colignon%40gmail.com

Thanks,
Vijay

On Aug 23, 2011 7:08 AM, "Christophe Geuzaine" <[email protected]> wrote:
 >
 > On 23 Aug 2011, at 09:47, Mikhail Artemiev wrote:
 >
 >> Hi all!
 >> When I run the following easiest program:
 >>
 >> #include <mpi.h>
 >> #include "Gmsh.h"
 >>
 >> int main(int argc, char **argv)
 >> {
 >>   MPI_Init(&argc, &argv);
 >>   GmshInitialize(argc, argv);
 >>   GmshFinalize();
 >>   MPI_Finalize();
 >>   return 0;
 >> }
 >>
 >> I have the following error:
 >>
 >> [artemiev-desktop:19676] *** An error occurred in MPI_Init
 >> [artemiev-desktop:19676] *** on communicator MPI_COMM_WORLD
 >> [artemiev-desktop:19676] *** MPI_ERR_OTHER: known error not in list
 >> [artemiev-desktop:19676] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
 >> --------------------------------------------------------------------------
 >> mpirun has exited due to process rank 1 with PID 19676 on
 >> node artemiev-desktop exiting without calling "finalize". This may
 >> have caused other processes in the application to be
 >> terminated by signals sent by mpirun (as reported here).
 >> --------------------------------------------------------------------------
 >> --------------------------------------------------------------------------
 >> Calling MPI_Init or MPI_Init_thread twice is erroneous.
 >> --------------------------------------------------------------------------
 >> [artemiev-desktop:19674] 1 more process has sent help message help-mpi-errors.txt / mpi_errors_are_fatal
 >> [artemiev-desktop:19674] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
 >>
 >> I thought that GmshInitialize calls MPI_Init, so I recompiled Gmsh with the -DENABLE_MPI=0 option.
 >> But the result was the same.
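[Editor's note: for readers unfamiliar with the Gmsh build, the rebuild step described above looks roughly like the following. This is a sketch assuming Gmsh's usual out-of-source CMake build; only the -DENABLE_MPI=0 flag is taken from the message itself. Note that, as the reply below explains, this flag alone may not remove the internal MPI_Init if Gmsh was also built with PETSc support.]

```shell
# Sketch: out-of-source CMake build of Gmsh with its own MPI support disabled
mkdir build && cd build
cmake -DENABLE_MPI=0 ..
make
```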
 >>
 >> Could you give a hint about how to use Gmsh API in parallel program?
 >
 > Hi Mikhail - MPI_Init might also be called (indirectly) in GmshInitialize if Gmsh is compiled with PETSc support. Is this the case in your setup?
 >
 >
 >
 >>
 >> Thanks
 >> Mikhail Artemiev
 >> _______________________________________________
 >> gmsh mailing list
 >> [email protected] <mailto:[email protected]>
 >> http://www.geuz.org/mailman/listinfo/gmsh
 >
 > --
 > Prof. Christophe Geuzaine
 > University of Liege, Electrical Engineering and Computer Science
 > http://www.montefiore.ulg.ac.be/~geuzaine
 >
 >
 >
 >
 > _______________________________________________
 > gmsh mailing list
 > [email protected] <mailto:[email protected]>
 > http://www.geuz.org/mailman/listinfo/gmsh


_______________________________________________
gmsh mailing list
[email protected]
http://www.geuz.org/mailman/listinfo/gmsh

