On Wed, 7 Aug 2019 at 13:52, Matthew Knepley <knep...@gmail.com> wrote:
>
> On Wed, Aug 7, 2019 at 7:13 AM Lawrence Mitchell via petsc-dev 
> <petsc-dev@mcs.anl.gov> wrote:
>>
>> Dear petsc-dev,
>>
>> I would like to run with a geometric multigrid hierarchy built out of DMPlex 
>> objects. On the coarse grids, I would like to reduce the size of the 
>> communicator being used. Since I build the hierarchy by regularly refining 
>> the coarse grid problem, this means I need a way of redistributing a DMPlex 
>> object from commA to commB.
>>
>> (I note that DMRefine has signature DMRefine(oldDM, newcomm, newDM), but as 
>> far as I can tell no concrete implementation of DMRefine has ever taken any 
>> notice of newcomm, and the newDM always ends up on the same communicator as 
>> the oldDM).
>>
>> OK, so I think what I want to use is DMPlexDistribute. This does 
>> many-to-many redistribution of DMPlex objects, but has a rather constrained 
>> interface:
>>
>> DMPlexDistribute(oldDM, overlap, *migrationSF, *newDM)
>>
>> It works by taking the oldDM, assuming that the communicator of that thing 
>> defines the number of partitions I want, partitioning the mesh dual graph, 
>> and migrating onto a newDM, with the same number of processes as the oldDM.
>>
>> So what are my options if I want to distribute from oldDM onto newDM where 
>> the communicators of the oldDM and the putative target newDM /don't/ match?
>>
>> Given my use case above, right now I can assume that 
>> MPI_Group_translate_ranks(oldcomm, newcomm) will work (although I can think 
>> of future use cases where it doesn't).
>>
>> In an ideal world, I think the interface for DMPlexDistribute should be 
>> something like:
>>
>> DMPlexDistribute(oldDM, overlap, newComm, *migrationSF, *newDM)
>>
>> where oldDM->comm and newComm are possibly disjoint sets of ranks, and the 
>> call is collective over the union of the groups of the two communicators.
>>
>> This would work by then asking the partitioner to construct a partition 
>> based on the size of newComm (rather than oldDM->comm) and doing all of the 
>> migration.
>>
>> I started to look at this, and swiftly threw my hands in the air, because 
>> the implicit assumption of the communicators matching is everywhere.
>>
>> So my stopgap solution (given that I know oldDM->comm \subset newComm) would 
>> be to take my oldDM and move it onto newComm before calling the existing 
>> DMPlexDistribute.
>>
>> Does this sound like a reasonable approach? Effectively I need to create 
>> "empty" DM objects on all of the participating ranks in newComm that are not 
>> in oldComm: what information do they need to contain to align with oldDM?
>>
>>
>> Does anyone else just have code for doing this lying around? Am I completely 
>> barking up the wrong tree?
>
>
> That is how you do it. We are solidifying this pattern, as you can see from 
> Junchao's new example for pushing a Vec onto a subcomm.

OK, so I think I am getting there. Presently I am abusing
DMPlexCreateFromDAG to migrate a DM on oldComm onto newComm, but this
is very fragile. I attach what I have right now. You have to run it
with PT-Scotch, because ParMETIS refuses to partition graphs with no
vertices on a process: again, this would be avoided if the
partitioning were done on the source communicator with the number of
partitions given by the target communicator.
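
For reference, the attached code forces PT-Scotch on the DM's
partitioner with the two calls below (dm standing for whichever DM is
about to be distributed):

  PetscPartitioner part;
  ierr = DMPlexGetPartitioner(dm, &part);CHKERRQ(ierr);
  ierr = PetscPartitionerSetType(part, PETSCPARTITIONERPTSCOTCH);CHKERRQ(ierr);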

This example builds a simple DMPlex on a subcommunicator (made up of
the processes of COMM_WORLD with rank % 3 == 0), refines it, and then
redistributes it onto COMM_WORLD.

Running with:

$ mpiexec -n 4 ./dmplex-redist -input_dm_view -refined_dm_view
-copied_dm_view -redistributed_dm_view

builds a box mesh on 1 rank, redistributes it onto 2 ranks, refines
that DM, then expands it to live on 4 ranks, and finally redistributes
that. The output is:

DM Object: 2 MPI processes
  type: plex
DM_0x84000007_0 in 3 dimensions:
  0-cells: 27 0
  1-cells: 54 0
  2-cells: 36 0
  3-cells: 8 0
Labels:
  depth: 4 strata with value/size (0 (27), 1 (54), 2 (36), 3 (8))
  Face Sets: 6 strata with value/size (6 (4), 5 (4), 3 (4), 4 (4), 1 (4), 2 (4))
  marker: 1 strata with value/size (1 (72))
DM Object: 2 MPI processes
  type: plex
DM_0x84000007_1 in 3 dimensions:
  0-cells: 75 75
  1-cells: 170 170
  2-cells: 128 128
  3-cells: 32 32
Labels:
  marker: 1 strata with value/size (1 (192))
  Face Sets: 5 strata with value/size (1 (36), 3 (18), 4 (18), 5 (18), 6 (18))
  depth: 4 strata with value/size (0 (75), 1 (170), 2 (128), 3 (32))
DM Object: 4 MPI processes
  type: plex
DM_0xc4000003_0 in 3 dimensions:
  0-cells: 75 0 0 75
  1-cells: 170 0 0 170
  2-cells: 128 0 0 128
  3-cells: 32 0 0 32
Labels:
  depth: 4 strata with value/size (0 (75), 1 (170), 2 (128), 3 (32))
Field PetscContainer_0xc4000003_1:
  adjacency FEM
DM Object: Parallel Mesh 4 MPI processes
  type: plex
Parallel Mesh in 3 dimensions:
  0-cells: 45 45 45 45
  1-cells: 96 96 96 96
  2-cells: 68 68 68 68
  3-cells: 16 16 16 16
Labels:
  depth: 4 strata with value/size (0 (45), 1 (96), 2 (68), 3 (16))
Field PetscContainer_0xc4000003_2:
  adjacency FEM

I would ideally like some help in figuring out a better way of
"expanding" the DM onto the new communicator (rather than using
DMPlexCreateFromDAG as I do now).

I think the interface for these redistribution functions needs some
thought in general. It seems that they should broadly match
MPI_Intercomm_create. So something like:

DMPlexDistribute(oldDM, peerCommunicator, newCommunicator, overlap,
*migrationSF, *newDM)

This is collective over peerCommunicator and returns a newDM on
newCommunicator, and a migrationSF on peerCommunicator.

Note that I do not think it is safe to assume that PETSC_COMM_WORLD is
always suitable as peerCommunicator.
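
For concreteness, a rough caller-side sketch of that hypothetical
signature (this is a proposal, not existing PETSc API):

  PetscSF migrationSF;
  DM      newDM;
  /* Collective over peerComm, which contains the union of the ranks of
     oldDM's communicator and of newComm; newDM ends up on newComm */
  ierr = DMPlexDistribute(oldDM, peerComm, newComm, overlap, &migrationSF, &newDM);CHKERRQ(ierr);

Presumably newDM would come back as NULL (or an empty DM) on the ranks
of peerComm that are not in newComm.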

> I think the right way to do this would be to implement the hooks in 
> PCTELESCOPE for DMPlex. Dave and I have talked about this and
> it should be exactly the same work as you propose above, but it would allow 
> you to use the command line, do this recursively, interact nicely
> with the solvers, etc. I can help.

The telescope side of things now exists to some degree (cf. commit
8d9f7141f511), but to do that, one needs the redistribution.

Cheers,

Lawrence
#include <petsc.h>
#include <petsc/private/isimpl.h>   /*I  "petscis.h"   I*/
#include <petsc/private/sfimpl.h>

/*@
  PetscSFCopyToComm - Copy a PetscSF onto a new communicator

  Collective on comm

  Translates the ranks in iremote into ranks in the new communicator. On
  ranks of comm that do not hold the original SF, pass sfA = NULL.

  Input Parameters:
+ sfA  - The old PetscSF (or NULL on ranks outside its communicator)
- comm - The new communicator

  Output Parameter:
. sfB - The new PetscSF on the provided communicator

  Level: developer
@*/
PetscErrorCode PetscSFCopyToComm(PetscSF sfA, MPI_Comm comm, PetscSF *sfB)
{
  MPI_Comm           ocomm       = MPI_COMM_NULL;
  MPI_Group          old_group   = MPI_GROUP_NULL, new_group = MPI_GROUP_NULL;
  PetscInt           nroots      = 0, nleaves = 0, i;
  const PetscInt    *ilocal      = NULL;
  const PetscSFNode *iremote     = NULL;
  PetscMPIInt       *old_ranks   = NULL, *new_ranks = NULL;
  PetscSFNode       *new_iremote = NULL;
  PetscInt          *new_ilocal  = NULL;
  PetscMPIInt        bcastRank = -1;
  PetscBool          graphSet = PETSC_FALSE;
  PetscErrorCode     ierr;

  PetscFunctionBegin;

  ierr = PetscSFCreate(comm, sfB);CHKERRQ(ierr);
  ierr = MPI_Comm_group(comm, &new_group);CHKERRQ(ierr);
  if (sfA) {
    graphSet = sfA->graphset;
    ierr = MPI_Comm_rank(comm, &bcastRank);CHKERRQ(ierr);
    if (sfA->graphset) {
      ocomm = PetscObjectComm((PetscObject)sfA);
      ierr = MPI_Comm_group(ocomm, &old_group);CHKERRQ(ierr);
      ierr = PetscSFGetGraph(sfA, &nroots, &nleaves, &ilocal, &iremote);CHKERRQ(ierr);
      /* Translate the remote (root) ranks from the old communicator into the new one */
      ierr = PetscMalloc1(nleaves, &old_ranks);CHKERRQ(ierr);
      for (i = 0; i < nleaves; i++) {
        old_ranks[i] = (PetscMPIInt)iremote[i].rank;
      }
      ierr = PetscMalloc1(nleaves, &new_iremote);CHKERRQ(ierr);
      ierr = PetscMalloc1(nleaves, &new_ranks);CHKERRQ(ierr);
      ierr = MPI_Group_translate_ranks(old_group, nleaves, old_ranks, new_group, new_ranks);CHKERRQ(ierr);
      /* ilocal may be NULL (contiguous leaf storage); only copy it if set */
      if (ilocal) {ierr = PetscMalloc1(nleaves, &new_ilocal);CHKERRQ(ierr);}
      for (i = 0; i < nleaves; i++) {
        if (ilocal) new_ilocal[i] = ilocal[i];
        new_iremote[i].rank  = (PetscInt)new_ranks[i];
        new_iremote[i].index = iremote[i].index;
      }
      ierr = PetscFree(new_ranks);CHKERRQ(ierr);
      ierr = PetscFree(old_ranks);CHKERRQ(ierr);
      ierr = MPI_Group_free(&old_group);CHKERRQ(ierr);
    }
  }
  /* Agree on whether a graph has been set: the highest-ranked process in
     comm that holds sfA broadcasts its graphset flag to everyone */
  ierr = MPI_Allreduce(MPI_IN_PLACE, &bcastRank, 1, MPI_INT, MPI_MAX, comm);CHKERRQ(ierr);
  ierr = MPI_Bcast(&graphSet, 1, MPIU_BOOL, bcastRank, comm);CHKERRQ(ierr);
  if (graphSet) {
    ierr = PetscSFSetGraph(*sfB, nroots, nleaves, new_ilocal, PETSC_OWN_POINTER, new_iremote, PETSC_OWN_POINTER);CHKERRQ(ierr);
  }
  ierr = MPI_Group_free(&new_group);CHKERRQ(ierr);

  PetscFunctionReturn(0);
}

PetscErrorCode DMPlexCopyToComm(DM old, MPI_Comm comm, DM *new)
{
  PetscErrorCode ierr;
  PetscSF        origsf = NULL, newsf = NULL;
  PetscInt *numPoints = NULL;
  PetscInt *coneSize = NULL;
  PetscInt *cones = NULL;
  PetscInt *coneOrientations = NULL;
  const PetscScalar *vertexCoords = NULL;
  PetscInt depth;

  PetscFunctionBegin;

  ierr = DMPlexCreate(comm, new);CHKERRQ(ierr);

  /* Hardcode dimension and depth to 3 for now: this assumes the input DM
     is an interpolated 3D mesh */
  ierr = DMSetDimension(*new, 3);CHKERRQ(ierr);
  ierr = DMSetCoordinateDim(*new, 3);CHKERRQ(ierr);
  depth = 3;
  ierr = PetscCalloc1(depth+1, &numPoints);CHKERRQ(ierr);
  if (old) {
    PetscInt i;
    Vec coords;
    PetscSection coneSizes;
    ierr = DMPlexGetDepth(old, &depth);CHKERRQ(ierr);
    for (i = 0; i < depth+1; i++) {
      PetscInt pStart, pEnd;
      ierr = DMPlexGetDepthStratum(old, i, &pStart, &pEnd);CHKERRQ(ierr);
      numPoints[i] = pEnd - pStart;
    }
    ierr = DMPlexGetCones(old, &cones);CHKERRQ(ierr);
    ierr = DMPlexGetConeOrientations(old, &coneOrientations);CHKERRQ(ierr);
    ierr = DMGetCoordinatesLocal(old, &coords);CHKERRQ(ierr);
    ierr = VecGetArrayRead(coords, &vertexCoords);CHKERRQ(ierr);
    
    /* Reach directly into the cone PetscSection for the cone sizes; this
       needs the private headers and is part of why this approach is fragile */
    ierr = DMPlexGetConeSection(old, &coneSizes);CHKERRQ(ierr);
    coneSize = coneSizes->atlasDof;
    ierr = DMGetPointSF(old, &origsf);CHKERRQ(ierr);
  }

  ierr = DMPlexCreateFromDAG(*new, depth, numPoints, coneSize, cones, coneOrientations, vertexCoords);CHKERRQ(ierr);

  ierr = PetscFree(numPoints);CHKERRQ(ierr);
  if (old) {
    Vec coords;
    ierr = DMGetCoordinatesLocal(old, &coords);CHKERRQ(ierr);
    ierr = VecRestoreArrayRead(coords, &vertexCoords);CHKERRQ(ierr);
  }
  ierr = PetscSFCopyToComm(origsf, comm, &newsf);CHKERRQ(ierr);
  ierr = DMSetPointSF(*new, newsf);CHKERRQ(ierr);
  ierr = PetscSFDestroy(&newsf);CHKERRQ(ierr); /* the DM holds its own reference */
  PetscFunctionReturn(0);
}

int main(int argc, char **argv)
{
  PetscErrorCode   ierr;
  DM               origdm = NULL, origonnewdm = NULL, newdm = NULL;
  MPI_Comm         origcomm, newcomm;
  PetscPartitioner part;
  PetscMPIInt      rank;
  const PetscInt   faces[3] = {2,2,2};

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
  ierr = MPI_Comm_split(PETSC_COMM_WORLD, rank % 3, rank, &origcomm);CHKERRQ(ierr);

  newcomm = PETSC_COMM_WORLD;
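
  /* Build the original DM only on the subcommunicator (the ranks of
     COMM_WORLD with rank % 3 == 0); the remaining ranks have no DM */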
  if (!(rank%3)) {
    PetscMPIInt orank;
    PetscSF sf;
    DM pdm;
    ierr = MPI_Comm_rank(origcomm, &orank); CHKERRQ(ierr);
    
    if (!orank) {
      ierr = DMPlexCreateBoxMesh(origcomm, 3, PETSC_FALSE, faces, NULL, NULL, NULL, PETSC_TRUE, &origdm); CHKERRQ(ierr);
    } else {
      ierr = DMPlexCreateFromCellList(origcomm, 3, 0, 0, 0, PETSC_FALSE, NULL, 3, NULL, &origdm);CHKERRQ(ierr);
    }
    ierr = DMPlexGetPartitioner(origdm, &part);CHKERRQ(ierr);
    ierr = PetscPartitionerSetType(part, PETSCPARTITIONERPTSCOTCH);CHKERRQ(ierr);
    ierr = DMViewFromOptions(origdm, NULL, "-input_dm_view");CHKERRQ(ierr);
    ierr = DMPlexDistribute(origdm, 0, &sf, &pdm); CHKERRQ(ierr);
    if (pdm) {
      ierr = DMDestroy(&origdm); CHKERRQ(ierr);
      origdm = pdm;
    }
    if (sf) {
      ierr = PetscSFDestroy(&sf); CHKERRQ(ierr);
    }
    ierr = DMPlexSetRefinementUniform(origdm, PETSC_TRUE);CHKERRQ(ierr);
    ierr = DMRefine(origdm, origcomm, &pdm);CHKERRQ(ierr);
    ierr = DMDestroy(&origdm);CHKERRQ(ierr);
    origdm = pdm;
    ierr = DMGetPointSF(origdm, &sf);CHKERRQ(ierr);
  } else {
    origdm = NULL;
  }
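
  /* Expand the DM from the subcommunicator onto newcomm: ranks that are
     not in origcomm pass a NULL DM and contribute an empty local DAG,
     but still participate in the collective calls */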

  ierr = DMPlexCopyToComm(origdm, newcomm, &origonnewdm);CHKERRQ(ierr);
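
  /* Redistribute the expanded DM over all of newcomm and migrate a simple
     one-dof-per-cell section along the resulting migration SF */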
  
  if (origonnewdm) {
    PetscSF migrationSF, fieldSF;
    PetscSection osec, nsec;
    PetscInt *remoteOffsets = NULL;
    const PetscInt nComp[] = {1};
    const PetscInt nDof[] = {0, 0, 0, 1};
    ierr = DMPlexGetPartitioner(origonnewdm, &part);CHKERRQ(ierr);
    ierr = PetscPartitionerSetType(part, PETSCPARTITIONERPTSCOTCH);CHKERRQ(ierr);
    ierr = DMPlexDistribute(origonnewdm, 0, &migrationSF, &newdm);CHKERRQ(ierr);
    ierr = PetscSFViewFromOptions(migrationSF, NULL, "-copied2redistributed_migration_sf_view");CHKERRQ(ierr);

    ierr = DMSetNumFields(origonnewdm, 1);CHKERRQ(ierr);
    ierr = DMSetNumFields(newdm, 1);CHKERRQ(ierr);
    ierr = DMPlexCreateSection(origonnewdm, NULL, nComp, nDof, 0, NULL, NULL, NULL, NULL, &osec);CHKERRQ(ierr);
    ierr = PetscSectionCreate(newcomm, &nsec);CHKERRQ(ierr);

    ierr = PetscSFDistributeSection(migrationSF, osec, &remoteOffsets, nsec);CHKERRQ(ierr);
    ierr = PetscSFCreateSectionSF(migrationSF, osec, remoteOffsets, nsec, &fieldSF);CHKERRQ(ierr);
    ierr = PetscSectionViewFromOptions(osec, NULL, "-osec_view");CHKERRQ(ierr);
    ierr = PetscSectionViewFromOptions(nsec, NULL, "-nsec_view");CHKERRQ(ierr);
    ierr = PetscSFViewFromOptions(fieldSF, NULL, "-copied2redistributed_field_sf_view");CHKERRQ(ierr);
    ierr = PetscSFDestroy(&fieldSF);CHKERRQ(ierr);
    ierr = PetscSFDestroy(&migrationSF);CHKERRQ(ierr);
    ierr = PetscSectionDestroy(&osec);CHKERRQ(ierr);
    ierr = PetscSectionDestroy(&nsec);CHKERRQ(ierr);
  }

  if (origdm) {
    ierr = DMViewFromOptions(origdm, NULL, "-refined_dm_view");CHKERRQ(ierr);
  }
  if (origonnewdm) {
    ierr = DMViewFromOptions(origonnewdm, NULL, "-copied_dm_view");CHKERRQ(ierr);
  }
  if (newdm) {
    ierr = DMViewFromOptions(newdm, NULL, "-redistributed_dm_view");CHKERRQ(ierr);
  }
  ierr = DMDestroy(&origdm); CHKERRQ(ierr);
  ierr = DMDestroy(&origonnewdm); CHKERRQ(ierr);
  ierr = DMDestroy(&newdm); CHKERRQ(ierr);
  
  ierr = MPI_Comm_free(&origcomm);CHKERRQ(ierr);

  ierr = PetscFinalize();
  return ierr;
}
