Thank you! That's exactly what I need.

------- Original Message -------
On Wednesday, October 11th, 2023 at 4:17 PM, Matthew Knepley 
<knep...@gmail.com> wrote:

> On Wed, Oct 11, 2023 at 4:42 AM erdemguer <erdemg...@proton.me> wrote:
>
>> Hi again,
>
> I see the problem. FV ghosts are the extra boundary cells added in FV methods 
> using DMPlexConstructGhostCells() in order to impose boundary conditions. They 
> are not the "ghost" cells of overlapping parallel decompositions. I have 
> changed your code to give you what you want. It is attached.
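>
> To make the distinction concrete, here is a minimal sketch of where each
> kind of "ghost" comes from (the function names are real PETSc API; the
> surrounding code is an illustration, not the attached file):
>
> /* Parallel "ghosts": cells shared between ranks, created by
>    distributing with a nonzero overlap */
> DM dmDist;
> PetscCall(DMPlexDistribute(dm, 1, NULL, &dmDist)); /* overlap = 1 */
>
> /* FV ghosts: extra cells outside the boundary, used to impose
>    boundary conditions; they get cell type DM_POLYTOPE_FV_GHOST */
> DM       gdm;
> PetscInt nGhost, gcStart, gcEnd;
> PetscCall(DMPlexConstructGhostCells(dm, NULL, &nGhost, &gdm));
> PetscCall(DMPlexGetCellTypeStratum(gdm, DM_POLYTOPE_FV_GHOST, &gcStart, &gcEnd));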
>
> Thanks,
>
> Matt
>
>> Here is my code:
>> #include <petsc.h>
>> static char help[] = "dmplex";
>>
>> int main(int argc, char **argv)
>> {
>>   DM           dm, dm_dist;
>>   PetscSection section;
>>   PetscInt     cStart, cEndInterior, cEnd;
>>   PetscMPIInt  rank;
>>   PetscInt     nc[3]    = {3, 3, 3};
>>   PetscReal    upper[3] = {1, 1, 1};
>>
>>   PetscCall(PetscInitialize(&argc, &argv, NULL, help));
>>   PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
>>
>>   /* Serial 3x3x3 hexahedral mesh on [0,1]^3 */
>>   PetscCall(DMPlexCreateBoxMesh(PETSC_COMM_WORLD, 3, PETSC_FALSE, nc, NULL, upper, NULL, PETSC_TRUE, &dm));
>>   PetscCall(DMViewFromOptions(dm, NULL, "-dm1_view"));
>>   PetscCall(DMSetFromOptions(dm));
>>   PetscCall(DMViewFromOptions(dm, NULL, "-dm2_view"));
>>
>>   PetscCall(DMPlexGetDepthStratum(dm, 3, &cStart, &cEnd)); /* depth 3 = cells */
>>   PetscCall(DMPlexComputeCellTypes(dm));
>>   PetscCall(DMPlexGetCellTypeStratum(dm, DM_POLYTOPE_INTERIOR_GHOST, &cEndInterior, NULL));
>>   PetscCall(PetscPrintf(PETSC_COMM_SELF, "Before Distribution Rank: %d, cStart: %" PetscInt_FMT ", cEndInterior: %" PetscInt_FMT ", cEnd: %" PetscInt_FMT "\n", rank, cStart, cEndInterior, cEnd));
>>
>>   /* One field with 3 dofs on every cell */
>>   PetscInt nField = 1, nDof = 3, field = 0;
>>   PetscCall(PetscSectionCreate(PETSC_COMM_WORLD, &section));
>>   PetscCall(PetscSectionSetNumFields(section, nField));
>>   PetscCall(PetscSectionSetChart(section, cStart, cEnd));
>>   for (PetscInt p = cStart; p < cEnd; p++) {
>>     PetscCall(PetscSectionSetFieldDof(section, p, field, nDof));
>>     PetscCall(PetscSectionSetDof(section, p, nDof));
>>   }
>>   PetscCall(PetscSectionSetUp(section));
>>
>>   PetscCall(DMSetLocalSection(dm, section));
>>   PetscCall(PetscSectionDestroy(&section));
>>   PetscCall(DMViewFromOptions(dm, NULL, "-dm3_view"));
>>
>>   PetscCall(DMSetAdjacency(dm, field, PETSC_TRUE, PETSC_TRUE));
>>   PetscCall(DMViewFromOptions(dm, NULL, "-dm4_view"));
>>   /* Distribute with overlap = 1 so each rank also gets a layer of shared cells */
>>   PetscCall(DMPlexDistribute(dm, 1, NULL, &dm_dist));
>>   if (dm_dist) {
>>     PetscCall(DMDestroy(&dm));
>>     dm = dm_dist;
>>   }
>>   PetscCall(DMViewFromOptions(dm, NULL, "-dm5_view"));
>>
>>   PetscCall(DMPlexGetDepthStratum(dm, 3, &cStart, &cEnd));
>>   PetscCall(DMPlexComputeCellTypes(dm));
>>   PetscCall(DMPlexGetCellTypeStratum(dm, DM_POLYTOPE_FV_GHOST, &cEndInterior, NULL));
>>   PetscCall(PetscPrintf(PETSC_COMM_SELF, "After Distribution Rank: %d, cStart: %" PetscInt_FMT ", cEndInterior: %" PetscInt_FMT ", cEnd: %" PetscInt_FMT "\n", rank, cStart, cEndInterior, cEnd));
>>
>>   PetscCall(DMDestroy(&dm));
>>   PetscCall(PetscFinalize());
>>   return 0;
>> }
>>
>> This code's output on 2 processors is currently:
>> Before Distribution Rank: 1, cStart: 0, cEndInterior: -1, cEnd: 14
>> Before Distribution Rank: 0, cStart: 0, cEndInterior: -1, cEnd: 13
>> After Distribution Rank: 0, cStart: 0, cEndInterior: -1, cEnd: 27
>> After Distribution Rank: 1, cStart: 0, cEndInterior: -1, cEnd: 24
>>
>> DMView outputs:
>> dm1_view (after creation):
>> DM Object: 2 MPI processes
>> type: plex
>> DM_0x84000004_0 in 3 dimensions:
>> Number of 0-cells per rank: 64 0
>> Number of 1-cells per rank: 144 0
>> Number of 2-cells per rank: 108 0
>> Number of 3-cells per rank: 27 0
>> Labels:
>> marker: 1 strata with value/size (1 (218))
>> Face Sets: 6 strata with value/size (6 (9), 5 (9), 3 (9), 4 (9), 1 (9), 2 (9))
>> depth: 4 strata with value/size (0 (64), 1 (144), 2 (108), 3 (27))
>> celltype: 4 strata with value/size (7 (27), 0 (64), 4 (108), 1 (144))
>>
>> dm2_view (after setfromoptions):
>> DM Object: 2 MPI processes
>> type: plex
>> DM_0x84000004_0 in 3 dimensions:
>> Number of 0-cells per rank: 40 46
>> Number of 1-cells per rank: 83 95
>> Number of 2-cells per rank: 57 64
>> Number of 3-cells per rank: 13 14
>> Labels:
>> depth: 4 strata with value/size (0 (40), 1 (83), 2 (57), 3 (13))
>> marker: 1 strata with value/size (1 (109))
>> Face Sets: 5 strata with value/size (1 (6), 2 (1), 3 (7), 5 (5), 6 (4)) 
>> celltype: 4 strata with value/size (0 (40), 1 (83), 4 (57), 7 (13))
>>
>> dm3_view (after setting local section):
>> DM Object: 2 MPI processes
>> type: plex
>> DM_0x84000004_0 in 3 dimensions:
>> Number of 0-cells per rank: 40 46
>> Number of 1-cells per rank: 83 95
>> Number of 2-cells per rank: 57 64
>> Number of 3-cells per rank: 13 14
>> Labels:
>> depth: 4 strata with value/size (0 (40), 1 (83), 2 (57), 3 (13))
>> marker: 1 strata with value/size (1 (109))
>> Face Sets: 5 strata with value/size (1 (6), 2 (1), 3 (7), 5 (5), 6 (4))
>> celltype: 4 strata with value/size (0 (40), 1 (83), 4 (57), 7 (13))
>> Field Field_0: adjacency FEM
>>
>> dm4_view (after setting adjacency):
>> DM Object: 2 MPI processes
>> type: plex
>> DM_0x84000004_0 in 3 dimensions:
>> Number of 0-cells per rank: 40 46
>> Number of 1-cells per rank: 83 95
>> Number of 2-cells per rank: 57 64
>> Number of 3-cells per rank: 13 14
>> Labels:
>> depth: 4 strata with value/size (0 (40), 1 (83), 2 (57), 3 (13))
>> marker: 1 strata with value/size (1 (109))
>> Face Sets: 5 strata with value/size (1 (6), 2 (1), 3 (7), 5 (5), 6 (4))
>> celltype: 4 strata with value/size (0 (40), 1 (83), 4 (57), 7 (13))
>> Field Field_0: adjacency FVM++
>>
>> dm5_view (after distribution):
>> DM Object: Parallel Mesh 2 MPI processes
>> type: plex
>> Parallel Mesh in 3 dimensions:
>> Number of 0-cells per rank: 64 60
>> Number of 1-cells per rank: 144 133
>> Number of 2-cells per rank: 108 98
>> Number of 3-cells per rank: 27 24
>> Labels:
>> depth: 4 strata with value/size (0 (64), 1 (144), 2 (108), 3 (27))
>> marker: 1 strata with value/size (1 (218))
>> Face Sets: 6 strata with value/size (1 (9), 2 (9), 3 (9), 4 (9), 5 (9), 6 (9))
>> celltype: 4 strata with value/size (0 (64), 1 (144), 4 (108), 7 (27))
>> Field Field_0: adjacency FVM++
>>
>> Thanks,
>> Guer.
>>
>> ------- Original Message -------
>> On Wednesday, October 11th, 2023 at 3:33 AM, Matthew Knepley 
>> <knep...@gmail.com> wrote:
>>
>>> On Tue, Oct 10, 2023 at 7:01 PM erdemguer <erdemg...@proton.me> wrote:
>>>
>>>> Hi,
>>>> Sorry for my late response. I tried your suggestions and I think I made
>>>> progress, but I still have some issues. Let me explain my latest mesh
>>>> routine:
>>>>
>>>> - DMPlexCreateBoxMesh
>>>> - DMSetFromOptions
>>>> - PetscSectionCreate
>>>> - PetscSectionSetNumFields
>>>> - PetscSectionSetFieldDof
>>>> - PetscSectionSetDof
>>>> - PetscSectionSetUp
>>>> - DMSetLocalSection
>>>> - DMSetAdjacency
>>>> - DMPlexDistribute
>>>>
>>>> It's still not working, but it looks promising: if I call
>>>> DMPlexGetDepthStratum for cells, I can see that after distribution the
>>>> processors have more cells.
>>>
>>> Please send the output of DMView() for each incarnation of the mesh.
>>> What I do is put
>>>
>>> DMViewFromOptions(dm, NULL, "-dm1_view");
>>>
>>> with a different option string after each call.
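>>>
>>> For instance (the executable name here is just a placeholder):
>>>
>>> PetscCall(DMViewFromOptions(dm, NULL, "-dm1_view")); /* after creation */
>>> PetscCall(DMSetFromOptions(dm));
>>> PetscCall(DMViewFromOptions(dm, NULL, "-dm2_view")); /* after options */
>>>
>>> and then run with, e.g.,
>>>
>>> mpiexec -n 2 ./yourprog -dm1_view -dm2_view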
>>>
>>>> But I couldn't figure out how to decide where the ghost/processor boundary 
>>>> cells start.
>>>
>>> Please send the actual code because the above is not specific enough. For 
>>> example, you will not have
>>> "ghost cells" unless you partition with overlap. This is because by default 
>>> cells are the partitioned quantity,
>>> so each process gets a unique set.
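>>>
>>> If you do distribute with overlap, one way to identify the local cells
>>> that are owned by another rank is the point SF: its leaves are exactly
>>> the points this rank has copies of but does not own. A minimal sketch
>>> (real PETSc calls, but the loop is only an illustration):
>>>
>>> PetscSF         sf;
>>> PetscInt        nleaves, cStart, cEnd;
>>> const PetscInt *leaves;
>>>
>>> PetscCall(DMGetPointSF(dm, &sf));
>>> PetscCall(PetscSFGetGraph(sf, NULL, &nleaves, &leaves, NULL));
>>> PetscCall(DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd)); /* height 0 = cells */
>>> for (PetscInt l = 0; l < nleaves; ++l) {
>>>   /* NULL leaves means the leaf points are 0..nleaves-1 */
>>>   PetscInt p = leaves ? leaves[l] : l;
>>>   if (p >= cStart && p < cEnd) {
>>>     /* p is a cell copied from another rank */
>>>   }
>>> }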
>>>
>>> Thanks,
>>>
>>> Matt
>>>
>>>> In older mails I saw there is a function DMPlexGetHybridBounds, but I
>>>> think that function is deprecated. I tried to use DMPlexGetCellTypeStratum
>>>> as in ts/tutorials/ex11_sa.c, but I'm getting -1 as cEndInterior before
>>>> and after distribution. I tried it for the DM_POLYTOPE_FV_GHOST and
>>>> DM_POLYTOPE_INTERIOR_GHOST polytope types. I also tried calling
>>>> DMPlexComputeCellTypes before DMPlexGetCellTypeStratum, but nothing
>>>> changed. I think I can calculate the ghost cell indices using cStart/cEnd
>>>> before and after distribution, but I suspect there is a better way that
>>>> I'm currently missing.
>>>>
>>>> Thanks again,
>>>> Guer.
>>>>
>>>> ------- Original Message -------
>>>> On Thursday, September 28th, 2023 at 10:42 PM, Matthew Knepley 
>>>> <knep...@gmail.com> wrote:
>>>>
>>>>> On Thu, Sep 28, 2023 at 3:38 PM erdemguer via petsc-users 
>>>>> <petsc-users@mcs.anl.gov> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I am currently using DMPlex in my code. It runs serially at the moment, 
>>>>>> but I'm interested in adding parallel options. Here is my workflow:
>>>>>>
>>>>>> - Create a DMPlex mesh from GMSH.
>>>>>> - Reorder it with DMPlexPermute.
>>>>>> - Create necessary pre-processing arrays related to the mesh/problem.
>>>>>> - Create field(s) with multiple dofs.
>>>>>> - Create residual vectors.
>>>>>> - Define a function to calculate the residual for each cell, and use SNES.
>>>>>>
>>>>>> As you can see, I'm not using FV or FE structures (as most examples do).
>>>>>> Now, I'm trying to implement this in parallel using a similar approach.
>>>>>> However, I'm struggling to understand how to create the corresponding
>>>>>> vectors and how to obtain index sets for each processor. Is there a
>>>>>> tutorial or paper that covers this topic?
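>>>>>>
>>>>>> For reference, a rough sketch of the first two steps (the file name
>>>>>> "mesh.msh" and the RCM ordering are placeholders, not my actual setup):
>>>>>>
>>>>>> DM dm, pdm;
>>>>>> IS perm;
>>>>>> PetscCall(DMPlexCreateFromFile(PETSC_COMM_WORLD, "mesh.msh", NULL, PETSC_TRUE, &dm));
>>>>>> PetscCall(DMPlexGetOrdering(dm, MATORDERINGRCM, NULL, &perm));
>>>>>> PetscCall(DMPlexPermute(dm, perm, &pdm));
>>>>>> PetscCall(ISDestroy(&perm));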
>>>>>
>>>>> The intention was that there is enough information in the manual to do 
>>>>> this.
>>>>>
>>>>> Using PetscFE/PetscFV is not required. However, I strongly encourage you 
>>>>> to use PetscSection. Without this, it would be incredibly hard to do what 
>>>>> you want. Once the DM has a Section, it can do things like automatically 
>>>>> create vectors and matrices for you. It can redistribute them, subset 
>>>>> them, etc. The Section describes how dofs are assigned to pieces of the 
>>>>> mesh (mesh points). This is in the manual, and there are a few examples 
>>>>> that do it by hand.
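>>>>>
>>>>> Concretely, once the DM has a local Section, calls like these work
>>>>> (a sketch; all the functions are real PETSc API):
>>>>>
>>>>> Vec gv, lv;
>>>>> Mat A;
>>>>> PetscCall(DMSetLocalSection(dm, section));
>>>>> PetscCall(DMCreateGlobalVector(dm, &gv)); /* owned dofs only */
>>>>> PetscCall(DMCreateLocalVector(dm, &lv));  /* includes shared/ghost dofs */
>>>>> PetscCall(DMGlobalToLocalBegin(dm, gv, INSERT_VALUES, lv));
>>>>> PetscCall(DMGlobalToLocalEnd(dm, gv, INSERT_VALUES, lv));
>>>>> PetscCall(DMCreateMatrix(dm, &A));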
>>>>>
>>>>> So I suggest changing your code to use PetscSection, and then letting us 
>>>>> know if things still do not work.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Matt
>>>>>
>>>>>> Thank you.
>>>>>> Guer.
>>>>>>
>>>>>
>>>
>
> --
>
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
>
> [https://www.cse.buffalo.edu/~knepley/](https://www.cse.buffalo.edu/~knepley/)
