Hi

Thanks for the response. I still have a bit of confusion here. To make sure
I follow, I have set up a code block (at the end) with options for the
distribution overlap and for DMPlexConstructGhostCells.

Using a 2x2x2 box mesh, I've seen the following behavior.

When I use overlap=1 I get approximately what I would expect.
If I run on 1 processor there are only 8 cells and everything is normal.
If I run on 2 processors each processor has 8 cells (4 from its half of the
split domain and an extra 4 extending into the neighboring partition).
My question here is: how do I then differentiate the overlap cells from the
locally owned cells so that I can synchronize the extra cells as I iterate?
Both DMPlexGetHeightStratum and DMPlexGetSimplexOrBoxCells identify all 8
cells with no differentiation between the owned partition and the overlap.
I believe this is the strategy that will get me the result I want, where I
can sync the overlaps so that when I iterate through the "owned" cells the
stencil can reach into the overlap cells to get the information it needs.
Is that the right approach?
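
To make the question concrete, here is the kind of ownership check I have in
mind (a minimal sketch; I am assuming the point SF is the right way to tell
owned points from overlap points, and that its leaf array, when non-NULL, is
sorted so PetscFindInt applies):

    /* Sketch: a cell that appears as a leaf in the point SF is a copy of a
       cell owned by another rank, i.e. part of the overlap. */
    PetscSF         pointSF;
    const PetscInt *leaves;
    PetscInt        nLeaves, cStart, cEnd, c;

    ierr = DMGetPointSF(dm, &pointSF);
    CHKERRQ(ierr);
    ierr = PetscSFGetGraph(pointSF, NULL, &nLeaves, &leaves, NULL);
    CHKERRQ(ierr);
    ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);
    CHKERRQ(ierr);
    for (c = cStart; c < cEnd; ++c)
    {
        PetscInt loc = -1;
        if (!leaves)
            loc = (c < nLeaves) ? c : -1; /* NULL means leaves are 0..nLeaves-1 */
        else
        {
            ierr = PetscFindInt(c, nLeaves, leaves, &loc);
            CHKERRQ(ierr);
        }
        if (loc >= 0)
        {
            /* c is an overlap cell: its values come from another rank */
        }
        else
        {
            /* c is locally owned: compute here, with the stencil free to
               read from overlap cells */
        }
    }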


When I use DMPlexConstructGhostCells I also get what I would expect.
If I run on 1 processor, each of the outside faces of the cube gets an extra
cell attached, so I get box cells [0, 8) and then ghost cells [8, 32).
If I run on 2 processors, only the outside faces of the cube get the extra
cells, so each rank gets box cells [0, 4) and then ghost cells [4, 16).
So I don't think this is what I want for my intended use, is it?
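
Following up on the suggestion in your reply to check the type by hand, I
assume that check would look roughly like this (a sketch; I'm assuming the
boundary ghosts are tagged with the DM_POLYTOPE_FV_GHOST cell type):

    /* Sketch: skip FV boundary ghost cells by checking the polytope type. */
    DMPolytopeType ct;
    PetscInt       cStart, cEnd, c;

    ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);
    CHKERRQ(ierr);
    for (c = cStart; c < cEnd; ++c)
    {
        ierr = DMPlexGetCellType(dm, c, &ct);
        CHKERRQ(ierr);
        if (ct == DM_POLYTOPE_FV_GHOST)
            continue; /* boundary ghost: only the attached face is meaningful */
        /* ... interior cell work ... */
    }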

Thanks Again

Sincerely
Nicholas

    PetscInt faces[3];
    PetscInt facecount = 10;
    PetscInt i, dim = 3, overlap = 0;
    ierr = PetscOptionsGetInt(NULL, NULL, "-facecount", &facecount, NULL);
    CHKERRQ(ierr);
    ierr = PetscOptionsGetInt(NULL, NULL, "-overlap", &overlap, NULL);
    CHKERRQ(ierr);
    for (i = 0; i < dim; i++)
        faces[i] = facecount;
    PetscViewerASCIIPrintf(viewerstdout, "facecount is %" PetscInt_FMT "\n",
                           facecount);
    PetscReal lower[3] = {0.0, 0.0, 0.0};
    PetscReal upper[3] = {2.0, 2.0, 2.0};
    ierr = DMPlexCreateBoxMesh(comm, dim, simplex, faces, lower, upper,
                               /* periodicity */ NULL, dmInterped, &dm);
    CHKERRQ(ierr);
    ierr = DMPlexDistribute(dm, overlap, &distributionSF, &dmDist);
    CHKERRQ(ierr);
    if (dmDist)
    {
        ierr = DMDestroy(&dm);
        CHKERRQ(ierr);
        dm = dmDist;
    }

    PetscInt ghostCells;

    PetscBool use_ghostcells = PETSC_TRUE;
    ierr = PetscOptionsGetBool(NULL, NULL, "-use_ghostcells", &use_ghostcells,
                               NULL);
    CHKERRQ(ierr);
    DM dmGhost;
    if (use_ghostcells)
    {
        ierr = DMPlexConstructGhostCells(dm, NULL, &ghostCells, &dmGhost);
        CHKERRQ(ierr);
        if (dmGhost)
        {
            PetscViewerASCIIPrintf(viewerstdout,
                                   "%" PetscInt_FMT " ghost cells generated\n",
                                   ghostCells);
            ierr = DMDestroy(&dm);
            CHKERRQ(ierr);
            dm = dmGhost;
        }
        else
            PetscViewerASCIIPrintf(viewerstdout, "ghost cells not generated\n");
    }
    else
        PetscViewerASCIIPrintf(viewerstdout, "ghost cells not generated\n");

    PetscInt c0, c1, f0, f1, e0, e1, v0, v1, s0, s1, g0, g1;

    DMPlexGetHeightStratum(dm, 0, &c0, &c1);     /* cells    */
    DMPlexGetHeightStratum(dm, 1, &f0, &f1);     /* faces    */
    DMPlexGetHeightStratum(dm, 2, &e0, &e1);     /* edges    */
    DMPlexGetHeightStratum(dm, 3, &v0, &v1);     /* vertices */
    DMPlexGetSimplexOrBoxCells(dm, 0, &s0, &s1); /* cells excluding FV ghosts */
    DMPlexGetGhostCellStratum(dm, &g0, &g1);     /* FV ghost cells only */
    PetscViewerASCIIPrintf(viewerstdout, "Cells:       p0:%" PetscInt_FMT " ,p1:%" PetscInt_FMT "\n", c0, c1);
    PetscViewerASCIIPrintf(viewerstdout, "Faces:       p0:%" PetscInt_FMT " ,p1:%" PetscInt_FMT "\n", f0, f1);
    PetscViewerASCIIPrintf(viewerstdout, "Edges:       p0:%" PetscInt_FMT " ,p1:%" PetscInt_FMT "\n", e0, e1);
    PetscViewerASCIIPrintf(viewerstdout, "Vertices:    p0:%" PetscInt_FMT " ,p1:%" PetscInt_FMT "\n", v0, v1);
    PetscViewerASCIIPrintf(viewerstdout, "Box cells:   p0:%" PetscInt_FMT " ,p1:%" PetscInt_FMT "\n", s0, s1);
    PetscViewerASCIIPrintf(viewerstdout, "Ghost cells: p0:%" PetscInt_FMT " ,p1:%" PetscInt_FMT "\n", g0, g1);
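
For the synchronization step itself, this is roughly the loop structure I am
hoping to end up with (a sketch, assuming DMGlobalToLocalBegin/End is the
mechanism that fills the overlap cells from their owning ranks; globX stands
for the global vector I get from DMCreateGlobalVector):

    /* Sketch of the intended iteration: update the halo, then compute on
       owned cells while reading neighbor data from the overlap. */
    Vec locX;

    ierr = DMGetLocalVector(dm, &locX);
    CHKERRQ(ierr);
    ierr = DMGlobalToLocalBegin(dm, globX, INSERT_VALUES, locX);
    CHKERRQ(ierr);
    ierr = DMGlobalToLocalEnd(dm, globX, INSERT_VALUES, locX);
    CHKERRQ(ierr);
    /* ... loop over owned cells; the stencil reads overlap values from locX ... */
    ierr = DMRestoreLocalVector(dm, &locX);
    CHKERRQ(ierr);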



On Sun, Jul 3, 2022 at 12:42 AM Matthew Knepley <knep...@gmail.com> wrote:

> On Sat, Jul 2, 2022 at 6:20 PM Nicholas Arnold-Medabalimi <
> narno...@umich.edu> wrote:
>
>> Dear PETSc users,
>>
>> I have some general novice questions regarding how to properly access a
>> distributed DMPLEX vector that has ghost cells for a finite volume use case.
>>
>> My process for setup
>> 1) generated a simple box mesh using DMPlexCreateBoxMesh
>> 2) distribute the mesh using DMPlexDistribute (at this point everything
>> looks fine when I View (as paraview file) and I can see the partitions)
>> 3) I generate a section with 5 variables located at the cells. (I don't
>> think it matters if I use 5 fields or 1 field with 5 components?) I can
>> alternatively do this using the PetscFV variable with 5 components.
>> 4)  I then have the dm setup properly (with no ghost cells at this point)
>> and I use DMCreateGlobalVector and DMCreateLocalVector to get the Vectors
>> to work with
>> 5) I'm just setting a quadratic IC. The overall loop over the cells is
>> constructed via getting the cell stratum which has the Cell point list and
>> then using the SectionOffset to assign each of the components at each cell
>> using DMPlexComputeCellGeometryFVM to get the cell centroid.
>>
>> As far as I can tell this is an adequate solution but now I'm moving into
>> learning how to access neighboring cells for flux calculations which isn't
>> just the local cell but requires access to neighboring cells which at some
>> point will be located on neighboring processors and synched to be
>> accessible. When I did this for a DMDA it was pretty straightforward
>> insofar as I just needed to call DMDAGetCorners and the LocalVector
>> indexing would allow the stencil to extend gracefully into the ghost
>> values.
>>
>> My general question is how do I achieve a similar loop structure for
>> DMPlex as far as the indexing over the owned cells while being able to
>> access the neighbor halo cells. I have been looking at example 52 and 11 in
>> the ts tutorial examples but I'm struggling to extract exactly what I need.
>>
>> To boil it down into 3 starting questions
>> 1) What is the preferred way to setup the cells and then access the
>> internal cells for say N cell halos for the boundaries between processors
>> in a DMPlex?
>>
>
> If you want a halo during distribution, you can give the "overlap"
> argument to DMPlexDistribute().
>
>
>> 2) What is the distinction between the overlap argument in
>> DMPlexDistribute and DMPlexConstructGhostCells (which I see used in the
>> examples)? (at least based on my tests it seems GhostCells is generating
>> cells at the overall boundaries not the internal partition boundaries?)
>>
>
> The examples use both. DMPlexConstructGhostCells() puts a layer of cells
> around the domain boundary that are special. They have no cell boundary,
> just the face they are attached to. The overlap adds true cells to each
> domain across the parallel boundary.
>
>
>> 3) Once I do have a DMPlex object that has the appropriate halos, how
>> would I get the access range for those cells? Based on my current
>> assumption the Stratum range for the Cells would include the ghost cells as
>> well which I would like to avoid?
>>
>
> I use
> https://petsc.org/main/docs/manualpages/DMPLEX/DMPlexGetSimplexOrBoxCells
> to avoid the FV ghost cells. However, you can also do it by hand by
> checking the type, or use
> https://petsc.org/main/docs/manualpages/DMPLEX/DMPlexGetGhostCellStratum
>
>   Thanks,
>
>      Matt
>
>
>> (Ultimately in the examples I see cases where there is the synch step for
>> Local to Global and vice versa but I'm trying to just get to the point
>> where I have the DM setup properly to handle the internal mesh boundaries
>> and have the correct indexing from the StratumGet commands.)
>>
>> Any advice and corrections would be welcome.
>>
>> Sincerely
>> Nicholas
>>
>> --
>> Nicholas Arnold-Medabalimi
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>


-- 
Nicholas Arnold-Medabalimi

Ph.D. Candidate
Computational Aeroscience Lab
University of Michigan
