Dear Matt,

Please find attached a test that writes a DMPlex with hanging nodes, based on a refined DMForest. I have linked the code against the current main git version of PETSc.

When the DMPlex is written to disk, the code crashes with

[0]PETSC ERROR: Unknown discretization type for field 0

even though I explicitly set the discretization on the DMPlex.

The DMPlex based on a DMForest has "double" faces wherever there is a jump in cell size: the larger cell has one large face towards the refined cells, and the four adjacent smaller cells each have a face of their own. I have written a function that removes the large face in such instances and rebuilds the DM, and this seems to work. But I can only do this on one process, and I therefore lose the connectivity between the DM and the locations of the data points of the vector.
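
To illustrate, here is a minimal sketch (not the removal function itself) of how such hanging child faces could be identified, assuming the converted DMPlex keeps the tree information from the forest so that DMPlexGetTreeParent() reports a parent different from the face itself; the helper name is purely illustrative:

#include <petscdmplex.h>

/* Illustrative sketch: count the small (child) faces that sit under a coarse face */
static PetscErrorCode CountHangingFaces(DM plex, PetscInt *nHanging)
{
  PetscInt fStart, fEnd;

  PetscFunctionBegin;
  *nHanging = 0;
  /* Faces are the height-1 points of the DMPlex (cells are height 0) */
  PetscCall(DMPlexGetHeightStratum(plex, 1, &fStart, &fEnd));
  for (PetscInt f = fStart; f < fEnd; f++) {
    PetscInt parent, childID;
    /* For a hanging face, the tree parent is the large face of the coarse cell */
    PetscCall(DMPlexGetTreeParent(plex, f, &parent, &childID));
    if (parent != f) (*nHanging)++;
  }
  PetscFunctionReturn(PETSC_SUCCESS);
}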

I can open an issue on the PETSc GitLab, if you prefer?

Thanks and best regards,

Berend.



On 1/22/24 20:30, Matthew Knepley wrote:
On Mon, Jan 22, 2024 at 2:26 PM Berend van Wachem <berend.vanwac...@ovgu.de> wrote:

    Dear Matt,

    The problem is that I haven't figured out how to write a polyhedral
    DMPlex in parallel. So, currently, I can write the Vec data in parallel,
    but I can only write the cones for the cells/faces/edges/nodes of the
    mesh to a file from one process (after gathering the DMPlex onto a
    single process).


Ah shoot. Can you send me a polyhedral mesh (or code to generate one) so I can fix the parallel write problem? Or maybe it is already an issue and I forgot?

    From the restart, I can then read the cone information from one process
    from the file, recreate the DMPlex, and then redistribute it. In this
    scenario, the Vec data I read in (in parallel) will not match the
    correct cells of the DMPlex. Hence, I need to put it in the right place
    afterwards.


Yes, then searching makes sense. You could call DMLocatePoints(), but maybe you are doing that.
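
For reference, a minimal sketch of how DMLocatePoints() might be used to map stored cell-centre coordinates back to local cells; the helper name and its arguments are illustrative only, not taken from the code discussed here:

#include <petscdmplex.h>

/* Illustrative sketch: find, for each stored coordinate, the local cell containing it */
static PetscErrorCode LocateStoredCellCentres(DM dm, PetscInt npoints, PetscInt dim, const PetscScalar *coords, PetscInt *cells)
{
  Vec                pos;
  PetscSF            cellSF = NULL;
  const PetscSFNode *found;
  PetscInt           nfound;

  PetscFunctionBegin;
  /* Wrap the stored coordinates in a Vec with block size dim, as DMLocatePoints() expects */
  PetscCall(VecCreateSeqWithArray(PETSC_COMM_SELF, dim, npoints * dim, coords, &pos));
  PetscCall(DMLocatePoints(dm, pos, DM_POINTLOCATION_NONE, &cellSF));
  PetscCall(PetscSFGetGraph(cellSF, NULL, &nfound, NULL, &found));
  /* found[i].index is the local cell containing point i, or -1 if it is not on this rank */
  for (PetscInt i = 0; i < nfound; i++) cells[i] = found[i].index;
  PetscCall(PetscSFDestroy(&cellSF));
  PetscCall(VecDestroy(&pos));
  PetscFunctionReturn(PETSC_SUCCESS);
}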

   Thanks,

      Matt

    Best, Berend.

    On 1/22/24 20:03, Matthew Knepley wrote:
     > On Mon, Jan 22, 2024 at 1:57 PM Berend van Wachem <berend.vanwac...@ovgu.de> wrote:
     >
     >     Dear Matt,
     >
     >     Thanks for your quick response.
     >     I have a DMPlex with a polyhedral mesh, and have defined a number
     >     of vectors with data at the cell center. I have generated data
     >     for a number of timesteps, and I write the data for each point to
     >     a file together with the (x,y,z) co-ordinate of the cell center.
     >
     >     When I want to do a restart from the DMPlex, I recreate the
     >     DMPlex with the polyhedral mesh, redistribute it, and for each
     >     cell center find the corresponding (x,y,z) co-ordinate and insert
     >     the data that corresponds to it. This is quite expensive, as it
     >     means I need to compare doubles very often.
     >
     >     But reading your response, this may not be a bad way of doing it?
     >
     >
     > It always seems to be a game of "what do you want to assume?". I tend
     > to assume that I wrote the DM and Vec in the same order, so when I
     > load them they match. This is how Firedrake I/O works, so that you
     > can load up on a different number of processes
     > (https://arxiv.org/abs/2401.05868).
     >
     > So, are you writing a Vec, and then redistributing and writing
     > another Vec? In the scheme above, you would have to write both DMs.
     > Are you trying to avoid this?
     >
     >    Thanks,
     >
     >       Matt
     >
     >     Thanks,
     >
     >     Berend.
     >
     >     On 1/22/24 18:58, Matthew Knepley wrote:
     >      > On Mon, Jan 22, 2024 at 10:49 AM Berend van Wachem <berend.vanwac...@ovgu.de> wrote:
     >      >
     >      >     Dear Petsc-Team,
     >      >
     >      >     Is there a good way to define a unique integer number in
     >      >     each element (e.g. a cell) of a DMPlex mesh, which is in
     >      >     the same location, regardless of the number of processors
     >      >     or the distribution of the mesh over the processors?
     >      >
     >      >     So, for instance, if I have a DMPlex box mesh, the
     >      >     top-right-front corner element (e.g. cell) will always have
     >      >     the same unique number, regardless of the number of
     >      >     processors the mesh is distributed over?
     >      >
     >      >     I want to be able to link the results I have achieved with
     >      >     a mesh from DMPlex on a certain number of cores to the same
     >      >     mesh from a DMPlex on a different number of cores.
     >      >
     >      >     Of course, I could make a tree based on the distance of
     >      >     each element to a certain point (based on the X,Y,Z
     >      >     co-ordinates of the element), and go through this tree in
     >      >     the same way and define an integer based on this, but that
     >      >     seems rather cumbersome.
     >      >
     >      >
     >      > I think this is harder than it sounds. The distance will not
     >      > work because it can be very degenerate. You could
     >      > lexicographically sort the coordinates, but this is hard in
     >      > parallel. It is fine if you are willing to gather everything on
     >      > one process. You could put down a p4est, use the Morton order
     >      > to number them since this is stable for a given refinement. And
     >      > then within each box lexicographically sort the centroids. This
     >      > is definitely cumbersome, but I cannot think of anything else.
     >      > This also might have parallel problems since you need to know
     >      > how much overlap you need to fill each box.
     >      >
     >      >    Thanks,
     >      >
     >      >        Matt
     >      >
     >      >     Thanks and best regards, Berend.
     >      >
     >      > --
     >      > What most experimenters take for granted before they begin
    their experiments is infinitely more interesting than any
     >     results to
     >      > which their experiments lead.
     >      > -- Norbert Wiener
     >      >
     >      > https://www.cse.buffalo.edu/~knepley/
     >
     >
     >
     > --
     > What most experimenters take for granted before they begin their
    experiments is infinitely more interesting than any results to
     > which their experiments lead.
     > -- Norbert Wiener
     >
     > https://www.cse.buffalo.edu/~knepley/



--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
const char help[] = "Write test of DMPlex based upon refined DMP8EST";

#include <petscdmplex.h>
#include <petscdmforest.h>
#include <petscviewerhdf5.h>
#include <petsc.h>
#include <petscsys.h>
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/stat.h>

PetscErrorCode IntArraySetValue(PetscInt *A, PetscInt size, PetscInt value)
{
  PetscFunctionBegin;

  for (PetscInt i = 0; i < size; i++)
  {
    A[i] = value;
  }

  PetscFunctionReturn(PETSC_SUCCESS);
}

PetscErrorCode CreateSectiononDM(DM *dm)
{
  PetscSection section;
  const PetscInt dim = 3;
  PetscInt i;
  const PetscInt numFields = 1;
  PetscInt *numComp;
  PetscInt *numDof;

  PetscFunctionBegin;

  PetscCall(PetscCalloc2(numFields, &numComp, numFields * (dim + 1), &numDof));
  PetscCall(IntArraySetValue(numDof, numFields * (dim + 1), 0));

  /* Set all primary variables as cell center */
  for (i = 0; i < numFields; i++)
  {
    numComp[i] = 1;
    numDof[i * (dim + 1) + dim] = 1;
  }
  PetscCall(DMSetNumFields(*dm, numFields));
  PetscCall(DMPlexCreateSection(*dm, NULL, numComp, numDof, 0, NULL, NULL, NULL, NULL, &section));

  /* Set the field names on the section */
  for (i = 0; i < numFields; i++)
  {
    PetscCall(PetscSectionSetFieldName(section, i, "TestField"));
  }

  /* Push section onto the DM */
  PetscCall(DMSetLocalSection(*dm, section));

  PetscCall(PetscFree2(numComp, numDof));
  PetscCall(PetscSectionDestroy(&section));

  /* Set a simple discretisation on the field */
  PetscFE fe;
  PetscCall(PetscFECreateDefault(PETSC_COMM_WORLD, dim, 1, PETSC_FALSE, NULL, PETSC_DEFAULT, &fe));
  for (i = 0; i < numFields; i++)
  {
    PetscCall(DMSetField(*dm, i, NULL, (PetscObject) fe));
  }
  PetscCall(PetscFEDestroy(&fe));
  PetscCall(DMCreateDS(*dm));
  PetscFunctionReturn(PETSC_SUCCESS);
}

/* Write the DM and a DataVector to an HDF5 file */
PetscErrorCode WriteDMToHDF5File(DM *dm, const char *FileName, Vec *DataVector)
{
  PetscViewer HDF5Viewer;

  PetscFunctionBegin;

  PetscCall(PetscViewerHDF5Open(PETSC_COMM_WORLD, FileName, FILE_MODE_WRITE, &HDF5Viewer));
  /* Set the name of the DM explicitly, as this is the name under which it occurs in the HDF5 file */
  PetscCall(PetscObjectSetName((PetscObject) *dm, "plexA"));
  PetscCall(PetscViewerPushFormat(HDF5Viewer, PETSC_VIEWER_HDF5_PETSC));
  //   PetscCall(DMView(*dm, HDF5Viewer));
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "Writing DM details to HDF5 file.\n"));
  PetscCall(DMPlexTopologyView(*dm, HDF5Viewer));
  PetscCall(DMPlexLabelsView(*dm, HDF5Viewer));
  PetscCall(DMPlexCoordinatesView(*dm, HDF5Viewer));
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "Writing data vector to HDF5 file.\n"));
  PetscCall(PetscObjectSetName((PetscObject) *DataVector, "UnknownVectorA"));
  PetscCall(DMPlexGlobalVectorView(*dm, HDF5Viewer, *dm, *DataVector));
  PetscCall(PetscViewerDestroy(&HDF5Viewer));
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "Finished writing to HDF5 file.\n"));
  PetscFunctionReturn(PETSC_SUCCESS);
}
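
/*
   For reference only: a hypothetical sketch of a matching load/restart path,
   not part of the original test. It assumes the section has also been stored
   (via DMPlexSectionView()) before the global vector; the object names
   "plexA" and "UnknownVectorA" match the writer above.
*/
PetscErrorCode ReadDMFromHDF5File(MPI_Comm comm, const char *FileName, DM *dm, Vec *DataVector)
{
  PetscViewer HDF5Viewer;
  PetscSF     sfXC, gsf, lsf;

  PetscFunctionBegin;

  PetscCall(PetscViewerHDF5Open(comm, FileName, FILE_MODE_READ, &HDF5Viewer));
  PetscCall(DMCreate(comm, dm));
  PetscCall(DMSetType(*dm, DMPLEX));
  PetscCall(PetscObjectSetName((PetscObject) *dm, "plexA"));
  PetscCall(PetscViewerPushFormat(HDF5Viewer, PETSC_VIEWER_HDF5_PETSC));
  PetscCall(DMPlexTopologyLoad(*dm, HDF5Viewer, &sfXC));
  PetscCall(DMPlexCoordinatesLoad(*dm, HDF5Viewer, sfXC));
  PetscCall(DMPlexLabelsLoad(*dm, HDF5Viewer, sfXC));
  /* Load the stored section onto this DM and migrate the global vector data */
  PetscCall(DMPlexSectionLoad(*dm, HDF5Viewer, *dm, sfXC, &gsf, &lsf));
  PetscCall(DMGetGlobalVector(*dm, DataVector));
  PetscCall(PetscObjectSetName((PetscObject) *DataVector, "UnknownVectorA"));
  PetscCall(DMPlexGlobalVectorLoad(*dm, HDF5Viewer, *dm, gsf, *DataVector));
  PetscCall(PetscSFDestroy(&gsf));
  PetscCall(PetscSFDestroy(&lsf));
  PetscCall(PetscSFDestroy(&sfXC));
  PetscCall(PetscViewerDestroy(&HDF5Viewer));

  PetscFunctionReturn(PETSC_SUCCESS);
}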

int main(int argc, char **argv)
{
  PetscCall(PetscInitialize(&argc, &argv, NULL, help));
  MPI_Comm comm = PETSC_COMM_WORLD;
  PetscInt dim = 3;
  PetscInt overlap = 1;
  PetscInt cells_per_dir[] = {5, 5, 5};
  PetscReal dir_min[] = {0.0, 0.0, 0.0};
  PetscReal dir_max[] = {1.0, 1.0, 1.0};
  DMBoundaryType bcs[] = {DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE};
  DM forest, plex;
  Vec DataVector;

  PetscCall(DMCreate(comm, &forest));
  PetscCall(DMSetType(forest, DMP8EST));

  /* Set up dm_base, the base DM for the DMForest */
  {
    DM dm_base;
    PetscCall(DMPlexCreateBoxMesh(comm, dim, PETSC_FALSE, cells_per_dir, dir_min, dir_max, bcs, PETSC_TRUE, &dm_base));

    PetscCall(DMSetBasicAdjacency(dm_base, PETSC_TRUE, PETSC_TRUE));
    {
      DM pdm = NULL;
      PetscCall(DMPlexDistribute(dm_base, overlap, NULL, &pdm));
      if (pdm)
      {
        PetscCall(DMDestroy(&dm_base));
        dm_base = pdm;
      }
    }

    PetscCall(DMSetFromOptions(dm_base));
    PetscCall(DMViewFromOptions(dm_base, NULL, "-dm_base_view"));
    PetscCall(DMCopyFields(dm_base, forest));

    PetscCall(DMForestSetBaseDM(forest, dm_base));
    PetscCall(DMDestroy(&dm_base));
  }
  PetscCall(DMSetFromOptions(forest));
  PetscCall(DMSetUp(forest));

  PetscCall(DMViewFromOptions(forest, NULL, "-dm_forest_view"));

  /* Convert the DMForest into a DMPlex */
  PetscCall(DMConvert(forest, DMPLEX, &plex));
  PetscCall(CreateSectiononDM(&plex));
  PetscCall(DMSetFromOptions(plex));
  PetscCall(DMSetUp(plex));

  /* Refine a point in the DMForest and convert again to Plex */
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "Refining a point in the DMForest.\n"));
  {
    DMLabel adapt_label;
    DM dmForest_adapted;
    PetscInt CellHeight, CellStart, CellEnd;
    PetscInt CellRefine;
    PetscBool adapted = PETSC_FALSE;

    PetscCall(DMLabelCreate(PETSC_COMM_SELF, "adapt", &adapt_label));
    PetscCall(DMLabelSetDefaultValue(adapt_label, DM_ADAPT_KEEP));
    PetscCall(DMPlexGetVTKCellHeight(plex, &CellHeight));
    PetscCall(DMPlexGetHeightStratum(plex, CellHeight, &CellStart, &CellEnd));
    CellRefine = CellStart + (CellEnd - CellStart) / 2; /* pick a cell roughly in the middle of the local cell range */
    PetscCall(DMLabelSetValue(adapt_label, CellRefine, DM_ADAPT_REFINE));

    PetscCall(DMForestTemplate(forest, PETSC_COMM_WORLD, &dmForest_adapted));
    PetscCall(DMForestSetAdaptivityLabel(dmForest_adapted, adapt_label));
    PetscCall(DMLabelDestroy(&adapt_label));
    PetscCall(DMSetUp(dmForest_adapted));
    PetscCall(DMForestGetAdaptivitySuccess(dmForest_adapted, &adapted));
    if (!adapted)
    {
      PetscCall(DMDestroy(&dmForest_adapted));
    }
    else
    {
      PetscCall(DMDestroy(&forest));
      PetscCall(DMDestroy(&plex));
      forest = dmForest_adapted;
      PetscCall(DMConvert(forest, DMPLEX, &plex));
    }
  }
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "Done refining a point in the DMForest.\n"));

  /* Create a section and discretisation on the DMPlex */
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "Creating a section on the DMPlex\n"));
  PetscCall(CreateSectiononDM(&plex));
  PetscCall(DMSetFromOptions(plex));
  PetscCall(DMSetUp(plex));

  /* Create a simple DataVector on the plex */
  PetscCall(DMGetGlobalVector(plex, &DataVector));
  PetscCall(VecSet(DataVector, 1.0));

  /* Write the DMPlex to an HDF5 file */
  PetscCall(WriteDMToHDF5File(&plex, "meshdatadmplex.h5", &DataVector));

  /* Destroy the vector, the DMPlex and the DMForest */
  PetscCall(DMRestoreGlobalVector(plex, &DataVector));
  PetscCall(DMDestroy(&plex));
  PetscCall(DMDestroy(&forest));

  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "Done with writing DMPLEX object to HDF5 file.\n"));

  PetscCall(PetscFinalize());

  return 0;
}
