On Fri, 06 Dec 2013 08:07:47 -0700
Jed Brown <[email protected]> wrote:

> Jan Blechta <[email protected]> writes:
> 
> > I'd like to introduce support for a communicator supplied by the
> > user, stored within the mesh and used by other objects based on
> > the mesh. There are a few applications of this. Mikael has one
> > that is rather non-trivial, I think.
> >
> > Achieving this will require some changes in the interface,
> > roughly sketched:
> >
> >  - MPICommunicator::MPICommunicator()
> >  + MPICommunicator::MPICommunicator(const MPI_Comm&
> > comm=MPI_COMM_WORLD)
> 
> Just a comment that wrapping communicators is a crappy model for
> library interoperability.  What happens when a user creates two
> objects with the same communicator?  Do you end up with two distinct
> MPICommunicator wrappers actually sharing the same MPI_Comm?  (Or you
> MPI_Dup the communicator, leading to potentially many duplications,
> which uses a lot of memory at scale with some MPI implementations.)
> What if you pass an MPI_Comm to another library and it passes that
> MPI_Comm back to you? How do you find the wrapper?

I think the wrapper was originally created to take care of duplicating
COMM_WORLD in routines which may not guarantee that transfers are
entirely finished before returning. The programmer could, of course,
call MPI_Comm_dup directly, but the wrapper takes care of freeing the
MPI_Comm when it goes out of scope.

Where appropriate, MPI_COMM_WORLD or PETSC_COMM_WORLD is used directly
instead of creating a duplicate through the wrapper.

With the transition to user-supplied communicators (most often
COMM_WORLD or COMM_SELF), these things must be thought over.

> 
> A better model is to pass around a plain MPI_Comm and use attribute
> caching for whatever internal stuff you need.  See how PETSc uses an
> inner and outer comm for one possible implementation.

Ok, I'll check this.

Jan
_______________________________________________
fenics mailing list
[email protected]
http://fenicsproject.org/mailman/listinfo/fenics
