On Fri, 21 Nov, 2014 at 11:52 AM, Mikael Mortensen <[email protected]> wrote:

On 20 Nov 2014, at 13:08, Jan Blechta <[email protected]> wrote:

On Thu, 20 Nov 2014 11:33:26 +0100
Mikael Mortensen <[email protected]> wrote:


On 20 Nov 2014, at 10:54, Chris Richardson <[email protected]>
wrote:


There has been some discussion on bitbucket about how to best
support evaluation and interpolation in parallel.

At present, user code like this:

  mesh = UnitSquareMesh(10, 10)
  Q = FunctionSpace(mesh, "CG", 1)
  F = Function(Q)
  F.interpolate(Expression("x[0]"))
  print F(0.2, 0.2)

crashes in parallel, e.g. with mpirun -n 2:

*** Error:   Unable to evaluate function at point.
*** Reason:  The point is not inside the domain. Consider setting
             "allow_extrapolation" to allow extrapolation.
*** Where:   This error was encountered inside Function.cpp.

Clearly "allow_extrapolation" is not needed, and this is actually
confusing to users.

Agreed, allow_extrapolation is very confusing. It should not be
suggested as an option, at least not in parallel.

Please do not remove it. allow_extrapolation is useful for
circumventing the failing floating-point-based evaluation of
Cell::inside(Point& p) when p is an interior point lying close to a
facet of the cell. The problem is covered here:
https://bitbucket.org/fenics-project/dolfin/issue/296
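
For reference, a minimal sketch of how the workaround is used today (in
serial; the point on an edge, x = 0.5, is only illustrative, and whether
the inside test actually fails there depends on floating-point round-off):

  from dolfin import *

  mesh = UnitSquareMesh(10, 10)
  Q = FunctionSpace(mesh, "CG", 1)
  F = interpolate(Expression("x[0]"), Q)

  # Evaluation at a point sitting (numerically) on a facet can fail the
  # floating-point inside-cell test; the current escape hatch is the
  # global parameter:
  parameters["allow_extrapolation"] = True
  print F(0.5, 0.5)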

Floating-point robustness should probably be handled inside the collision routines and not by flags in Function::eval. Nevertheless, it is very misleading to suggest setting allow_extrapolation in parallel. I think we should add something like

if (allow_extrapolation && MPI::size(_function_space->mesh()->mpi_comm()) == 1)

or simply not allow setting allow_extrapolation to true in parallel.

Chris, I think I’m leaning towards a brand new global_eval rather than modifying the existing eval. But I have no strong opinion on this.


The business of 'extrapolation' does need fixing, see also https://bitbucket.org/fenics-project/dolfin/issue/198.

I would also lean towards separate functions for collective vs local eval operations. We can mark clearly in the documentation which is collective.

Garth


M


Jan



Also, something like:

  mesh = refine(mesh)
  Q2 = FunctionSpace(mesh, "CG", 1)
  F2 = Function(Q2)
  F2.interpolate(F)

It should be mentioned that this latter case can already be handled
using LagrangeInterpolator and the following bit of code:

lp = LagrangeInterpolator()
lp.interpolate(F2, F)

For a smoother interface, a LagrangeInterpolator could be created and
called inside the regular interpolate function, e.g. along the lines
sketched below.
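
A rough sketch at the Python level (smooth_interpolate is a hypothetical
helper name, not existing DOLFIN API):

from dolfin import Function, LagrangeInterpolator

def smooth_interpolate(target, source):
    # Hypothetical helper: use LagrangeInterpolator whenever the source is
    # a Function (possibly living on a different, redistributed mesh), and
    # fall back to the regular interpolate for Expressions etc.
    if isinstance(source, Function):
        LagrangeInterpolator().interpolate(target, source)
    else:
        target.interpolate(source)

With something like this folded into the regular interpolate,
F2.interpolate(F) in the example above would just work in parallel.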

Btw, the above code with refine:

  mesh = refine(mesh)
  Q2 = FunctionSpace(mesh, "CG", 1)
  F2 = Function(Q2)
  F2.interpolate(F)

works for me, presumably because refine operates on the local mesh? The
code does break if the second mesh is created independently, e.g.
mesh = UnitSquareMesh(21, 21).


will crash in the same way, because the Mesh gets redistributed
during refine.

It should be possible to write the code for Function::eval to send
the results to the processes that need them. Clearly, however, it
cannot always be a collective call, as local evaluation is an
equally common task.

One idea is to have two separate "Function::eval" methods, one
collective, and one local, or possibly differentiate using an
MPI_Comm. Alternatively, perhaps "Function::eval" should just
return std::pair<bool, Array<double> > to indicate whether
evaluation is valid on this process, and delegate
communication/extrapolation elsewhere.
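
For what it is worth, a collective eval can already be sketched at the
Python level using mpi4py and the bounding box tree. This is only one
possible shape, not a proposed API; it assumes mpi4py is available, a 2D
mesh and a scalar Function, and eval_global is a made-up name:

from mpi4py import MPI as pyMPI
from dolfin import Point

def eval_global(u, x, y, comm=pyMPI.COMM_WORLD):
    # Collective call: evaluate u at (x, y) on whichever process owns
    # the point and make the value available on every rank.
    mesh = u.function_space().mesh()
    cell = mesh.bounding_box_tree().compute_first_entity_collision(Point(x, y))
    value = u(x, y) if cell < mesh.num_cells() else None
    # The point may be found on several processes (partition boundaries);
    # take the first non-empty answer.
    hits = [v for v in comm.allgather(value) if v is not None]
    if not hits:
        raise RuntimeError("point is not inside the (global) domain")
    return hits[0]

With this, print F(0.2, 0.2) from the first example becomes
print eval_global(F, 0.2, 0.2) and returns the same value on every rank.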

It should also be mentioned that we are discussing using a different
function name for parallel eval.

M


It would be good to get a wide range of ideas on how to fix
this problem in the best possible way, hopefully enhancing user
code, and not breaking it(!)

Thanks,

Chris R.




_______________________________________________
fenics mailing list
[email protected]
http://fenicsproject.org/mailman/listinfo/fenics
