On Thu, 24 Jul, 2014 at 3:58 PM, Eugenio Gianniti <[email protected]> wrote:
Dear all,

I am trying to exploit the MPI features of FEniCS in the fem-fenics package for Octave, which was introduced on the list a couple of months ago. In this application the C++ code using DOLFIN is dynamically loaded into Octave. Unfortunately, it seems that this way the communication among threads is not established correctly. I am looking for advice on how to properly handle the initialisation of MPI and then pass the communicator on to DOLFIN.

DOLFIN checks whether MPI has already been initialised. If it has, DOLFIN will not initialise (or finalise) MPI itself.

Many DOLFIN objects accept an MPI communicator, which you can pass in. This is still a work in progress. Elsewhere, DOLFIN will use MPI_COMM_WORLD.
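As a hedged sketch of the communicator-passing route mentioned above: in DOLFIN of this era, several mesh classes take an MPI_Comm as the first constructor argument (the exact signature and the mesh sizes below are assumptions, not taken from the original message), so one can choose between a per-process and a distributed object:

```cpp
#include <dolfin.h>

int main()
{
  // Passing MPI_COMM_SELF keeps the mesh local to the calling process.
  dolfin::UnitSquareMesh local_mesh(MPI_COMM_SELF, 32, 32);

  // Passing MPI_COMM_WORLD distributes the mesh across all processes,
  // which is what DOLFIN does where no communicator argument exists.
  dolfin::UnitSquareMesh shared_mesh(MPI_COMM_WORLD, 32, 32);

  return 0;
}
```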

Garth


Kind regards,
Eugenio Gianniti
_______________________________________________
fenics mailing list
[email protected]
http://fenicsproject.org/mailman/listinfo/fenics
