On Thu, Oct 17, 2013 at 3:06 PM, Daniel Wheeler
<daniel.wheel...@gmail.com> wrote:

> On Thu, Oct 17, 2013 at 1:00 PM, James Snyder <jbsny...@fanplastic.org>
> wrote:
> > I guess I have part of my answer from an older question I asked
> > last year:
> > http://permalink.gmane.org/gmane.comp.python.fipy/2353
> >
> > In there Jonathan mentioned:
> >>
> >> The issue is that flux is a FaceVariable and although we define a
> >> globalValue for FaceVariables, it doesn't look like it has a clear
> >> meaning. The issue is that there are cells and faces that overlap
> >> multiple processors. The globalValue for a CellVariable properly
> >> accounts for the ghost cells, but FaceVariables do not. There hasn't
> >> been any internal need to and we suspect that clearing it up will be
> >> a mess (there is an ambiguity with faces that doesn't exist with
> >> cells; some faces are shared between processors even for
> >> non-overlapping cells). We do need to examine whether globalValue
> >> should even be defined for anything besides CellVariables.
>
> Thanks for bringing this up again. We should move the "globalValue"
> and "_getGlobalValue" methods to "CellVariable" and out of
> "_MeshVariable". As explained, a global face variable is hard to
> calculate in the general case.
>
> > Since what I'm trying to do is just get face centers so I can make
> > a convex hull around what is essentially an internal "hole" in the
> > mesh. If I just want to merge the face centers from each process,
> > should I use MPI comm primitives to pull them together, or is there
> > a cleaner way to get this within the existing framework?
>
> I'm not really sure what you are trying to do here. What is the
> purpose? Maybe there is another way.
>

I'll try to describe this using a 2-D case (I'm also doing this in 3-D,
which is why I'd like the parallel solvers).

In 2-D in Gmsh, I have a rectangular line loop and, inside it, a
pill-shaped line loop (two semicircular arcs joined by a rectangular
section), and I make a surface out of the region between the inner
pill-shaped object and the rectangle.  On that surface I'm solving a
diffusion problem (electrostatics) in which some of the faces on the
pill are constrained.

In the end I want to plot a contour of the cell variable being solved
for (the potential) and a quiver plot of the corresponding field (the
negative gradient of the potential).  Since the mesh is unstructured
(and I also want planar cross-sections in 3-D), I interpolate both onto
a Grid2D.
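
Roughly, the interpolation step looks like the sketch below (variable
names and grid extents are placeholders, and I'm paraphrasing with
scipy's griddata; in the actual script the target points come from a
FiPy Grid2D, but the idea is the same):

    # Sketch only -- names and grid extents are made up.
    # Interpolate unstructured cell-centered values onto a regular grid
    # so contour/quiver can use them directly.
    import numpy as np
    from scipy.interpolate import griddata

    x, y = mesh.cellCenters.value          # Gmsh2D cell centers
    xi, yi = np.mgrid[0:4:400j, 0:2:200j]  # regular plotting grid

    phi_i = griddata((x, y), potential.value, (xi, yi),
                     method='linear')

    ex, ey = (-potential.grad).value       # field = -grad(potential)
    ex_i = griddata((x, y), ex, (xi, yi), method='linear')
    ey_i = griddata((x, y), ey, (xi, yi), method='linear')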

The result is that the interpolation also produces values inside the
pill-shaped region, which should be empty, so what I've been doing is
getting the faceCenters of the faces on that pill, computing a convex
hull from them, and then using the hull to 1) mask out values that were
interpolated where there are no cells and 2) plot a nice shaded region
covering the hole.
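
In the serial case the hull/masking part is roughly the following
sketch (it assumes the pill faces carry a Gmsh physical tag named
"pill"; any boolean face mask would do):

    # Sketch only -- assumes a physical face tag named "pill".
    import numpy as np
    from scipy.spatial import ConvexHull
    from matplotlib.path import Path

    fx, fy = mesh.faceCenters.value
    pill = mesh.physicalFaces["pill"].value
    pts = np.column_stack((fx[pill], fy[pill]))

    hull = ConvexHull(pts)
    hull_path = Path(pts[hull.vertices])

    # 1) mask values interpolated where there are no cells
    inside = hull_path.contains_points(
        np.column_stack((xi.ravel(), yi.ravel()))
    ).reshape(xi.shape)
    phi_i = np.ma.masked_where(inside, phi_i)

    # 2) the same hull vertices can also be drawn as a filled patch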

Here are some example animations of this where I "move" an "object" through
the mesh by setting the coefficient on the diffusion term to high/low
conductivity in a given region:
https://www.dropbox.com/s/1l17mood7og32n3/podisoplot_insulator_animation.mp4
https://www.dropbox.com/s/v2244k1onljizv9/podisoplot_conductor_animation.mp4
(They look better downloaded than in Dropbox's transcoded web view.)

I'm sure I could do something like cache the data for the convex hull, but
it would be nice to be able to do it on the fly based on the Gmsh mesh data.
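
If MPI primitives are the way to go, I was imagining something like the
following mpi4py sketch to pull the relevant face centers from every
process (duplicated shared faces shouldn't matter for a convex hull):

    # Sketch only -- gather the pill face centers from all processes.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    fx, fy = mesh.faceCenters.value
    pill = mesh.physicalFaces["pill"].value
    local_pts = np.column_stack((fx[pill], fy[pill]))

    # allgather pickles the per-process arrays; faces shared between
    # processes appear more than once, which is harmless for the hull
    all_pts = np.vstack(comm.allgather(local_pts))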

Maybe I should break this into a parallel solving stage and an
analysis/plotting stage, where I pickle the results of the solve and
then do the final plotting in a non-parallel script? As it stands I'm
using parallel.procID and globalValue to get results onto one of the
processes for plotting.
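
The split I have in mind would look something like this at the end of
the parallel script (just a sketch; the file name and variable names
are placeholders):

    # Sketch only -- gather ghost-corrected values and dump them from
    # one process for a separate, serial plotting script.
    import pickle
    from fipy.tools import parallel

    results = {
        "cellCenters": mesh.cellCenters.globalValue,
        "potential": potential.globalValue,
    }

    if parallel.procID == 0:
        with open("results.pkl", "wb") as f:
            pickle.dump(results, f)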

If that's not clear I can provide some example code.


> > Also, congrats on getting 3.1 out, especially in terms of timing :-)
>
> Thanks very much.
>
> How are you finding using FiPy in parallel? What sort of speedups
> are you getting?
>

Qualitatively, the solver "feels" like it scales well with added cores, but
I haven't run numbers yet.  I'll try to do a little instrumentation on it
for upcoming solver runs.
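
Probably nothing fancier than bracketing the solve with wall-clock
timers on each process, e.g. (a sketch; the solve call stands in for
whatever the script actually does):

    # Sketch only -- per-process wall-clock timing of the solve.
    import time
    from fipy.tools import parallel

    t0 = time.time()
    eq.solve(var=potential)
    elapsed = time.time() - t0

    print("proc %d of %d: solve took %.2f s"
          % (parallel.procID, parallel.Nproc, elapsed))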

One thing I have noticed is that the Python processes still seem to be
pegging the CPUs at times when I'd expect them to just be waiting on
Gmsh to generate the mesh.


>
> --
> Daniel Wheeler
>



-- 
James Snyder
Biomedical Engineering
Northwestern University
ph: (847) 448-0386
_______________________________________________
fipy mailing list
fipy@nist.gov
http://www.ctcms.nist.gov/fipy
  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]
