I'd like to coarse-grain a finite element field, where the coarse-graining procedure is just convolving the field with a Gaussian function. More concretely, I'd like to do the following:
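Assuming the standard Gaussian kernel of width a_0 (the exact normalization here is my guess), the coarse-grained field would be something like

```latex
\langle X \rangle(\mathbf{x})
  = \frac{1}{(2\pi a_0^2)^{d/2}}
    \int_\Omega X(\mathbf{x}')\,
    e^{-|\mathbf{x}-\mathbf{x}'|^2/(2a_0^2)}\,
    \mathrm{d}\mathbf{x}' .
```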


For a regular Triangulation, we can calculate <X> at every quadrature point by iterating over the entire Triangulation and evaluating the integrand at all of the quadrature points. In fact, because the integrand contains a Gaussian in the distance, we only have to look at cells within a distance of ~3a_0.
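To illustrate the cutoff idea in isolation, here is a self-contained 1d sketch (plain C++ with made-up quadrature data, not deal.II code): evaluating the truncated, renormalized kernel at one point only requires quadrature points within 3a_0.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A quadrature point carrying its position, weight, and field value.
struct QPoint { double x, weight, value; };

// Coarse-grained value <X>(x0) with Gaussian width a0, truncating the
// kernel at 3*a0 and renormalizing by the truncated kernel's mass.
double coarse_grain(const std::vector<QPoint> &qpoints,
                    const double x0, const double a0)
{
  double num = 0.0, den = 0.0;
  for (const auto &q : qpoints)
    {
      const double r = std::abs(q.x - x0);
      if (r > 3.0 * a0)      // Gaussian is negligible beyond ~3 a_0
        continue;
      const double g = std::exp(-r * r / (2.0 * a0 * a0));
      num += q.weight * g * q.value;
      den += q.weight * g;   // normalization of the truncated kernel
    }
  return num / den;
}
```

For a constant field the renormalized average reproduces the constant exactly, which is a convenient sanity check for the truncation.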

However, for a distributed triangulation, cells within 3a_0 may be owned by a different MPI process. My first question is: is there a way to extend the ghost layer along the boundary so that the ghost cells cover some prescribed distance away from the locally-owned cells on the edge of the domain?

p4est (our parallel mesh engine) actually allows for "thicker" ghost layers, but we have never tried that in deal.II and I doubt that it would work without considerable trouble. Either way, p4est allows a ghost layer that is N cells wide, whereas you need one that extends a specific Euclidean distance from the locally owned cells -- so these are different concepts anyway.



And if not, is there a nice utility that would let me access neighboring cells on other subdomains (albeit at the cost of some MPI messages)?

Check out the FEPointEvaluation and Utilities::MPI::RemotePointEvaluation classes. I think they will do what you need.
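A rough sketch of how these classes could be combined (assuming a scalar field in `solution` on a `dof_handler`, and a recent-enough deal.II that provides `VectorTools::point_values()`; the variable names are placeholders):

```cpp
#include <deal.II/base/mpi_remote_point_evaluation.h>
#include <deal.II/numerics/vector_tools.h>

// Collect the (possibly non-locally-owned) points at which the Gaussian
// kernel needs the solution, e.g., quadrature points of nearby cells:
std::vector<dealii::Point<dim>> evaluation_points = /* ... */;

// Set up the communication pattern once; this determines which process
// owns the cell surrounding each point:
dealii::Utilities::MPI::RemotePointEvaluation<dim> rpe;
rpe.reinit(evaluation_points, triangulation, mapping);

// Evaluate the finite element field at all points; values at points
// owned by other processes are fetched via MPI behind the scenes:
const std::vector<double> values =
  dealii::VectorTools::point_values<1>(rpe, dof_handler, solution);
```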


Otherwise, is there a good way to do this? My guess would be to do all of the local integrations first, keeping a list of the cells that end up abutting the subdomain boundary. One could then use the `ghost_owners` function to send information about these edge cells to the adjacent subdomains, which would then send information back. However, that seems rather deep in the weeds of raw MPI and would also require some error-prone bookkeeping, which I would like to avoid if possible.

Yes, and there's a bug in your algorithm as well: just because the ghost cells are owned by a specific set of processes doesn't mean that the cell on which you need to evaluate the solution is actually owned by one of those neighboring processes.

Secondly, what you *really* want to do is first determine which points you need information about from other processes, send those points around, and, while you are waiting for these messages to be delivered and answered, work on your local cells -- not the other way around: you want to hide communication latency behind local work.
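In outline, that pattern would look something like the following (a hypothetical sketch; the helper functions are placeholders, not deal.II or MPI API):

```cpp
// 1. Determine which evaluation points lie outside the local subdomain
//    and post non-blocking requests for them.
std::vector<MPI_Request> requests =
  post_point_requests(remote_points, mpi_communicator);

// 2. While those messages are in flight, do all of the work that only
//    needs locally owned cells.
accumulate_local_contributions();

// 3. Only now block on the replies and add the remote contributions.
MPI_Waitall(requests.size(), requests.data(), MPI_STATUSES_IGNORE);
accumulate_remote_contributions();
```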

Best
 W.

--
------------------------------------------------------------------------
Wolfgang Bangerth          email:                 bange...@colostate.edu
                           www: http://www.math.colostate.edu/~bangerth/


--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/dcb160d5-9bf8-f896-c548-2c4b16657339%40colostate.edu.
