Brilliant, thanks a lot.

Michael

On 18/11/13 16:08, Matthew Knepley wrote:
On Mon, Nov 18, 2013 at 9:47 AM, Matthew Knepley <knep...@gmail.com> wrote:

    On Mon, Nov 18, 2013 at 3:24 AM, Michael Lange <michael.la...@imperial.ac.uk> wrote:

        Hi Matt,

        I think there is a misunderstanding here. I am referring to
        the case where DMPlexDistribute() is run with overlap=1 (which
        is not the case in SNES ex12) and vertices in the overlap/halo
        region are assigned to the wrong rank. This can lead to a
        situation where a process owns a vertex that is not in its
        original (non-overlapping) partition, even though the attached
        cell is not owned and will be marked as "ghost" by
        DMPlexConstructGhostCells().
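
        For concreteness, the call sequence I mean is roughly the
        following. This is only a sketch: the helper name, signatures,
        and error macros follow current PETSc and may differ between
        versions, and cleanup of the intermediate DMs is omitted.

        #include <petscdmplex.h>

        /* Distribute a Plex with a one-cell overlap, inspect the resulting
           point SF, then add ghost cells along the boundary faces. */
        static PetscErrorCode DistributeWithOverlap(DM dm, DM *dmGhosted)
        {
          DM       dmDist = NULL, dmGhost = NULL;
          PetscSF  pointSF;
          PetscInt nGhost;

          PetscFunctionBeginUser;
          /* overlap = 1: each rank also receives the cells adjacent to its partition */
          PetscCall(DMPlexDistribute(dm, 1, NULL, &dmDist));
          if (dmDist) dm = dmDist;
          /* the point SF records which local points are copies of remotely owned points */
          PetscCall(DMGetPointSF(dm, &pointSF));
          PetscCall(PetscSFView(pointSF, PETSC_VIEWER_STDOUT_WORLD));
          /* NULL selects the default face label; the mesh's own boundary
             label could be passed here instead */
          PetscCall(DMPlexConstructGhostCells(dm, NULL, &nGhost, &dmGhost));
          *dmGhosted = dmGhost;
          PetscFunctionReturn(PETSC_SUCCESS);
        }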


Your fix is now merged to next:

https://bitbucket.org/petsc/petsc/branch/knepley/fix-plex-partition-overlap

    Matt

        To illustrate this, I have attached an example consisting of a
        unit square with 3 faces in each dimension and a section with
        only vertex dofs. If run with two ranks, rank 1 will own all
        its vertices (13 roots), whereas rank 0 only owns vertices not
        in the overlap/halo of rank 1 (3 roots). My understanding is
        that, since the original partition splits the square along its
        diagonal, the vertex distribution should be 10 to 6 with the 4
        diagonal vertices assigned to rank 1 and all other vertices
        assigned according to the original partition. Is this correct,
        or am I missing something here?
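
        The check itself is simple. Roughly (a reconstruction with
        current PETSc signatures, not the attached code): count, on
        each rank, the vertices that are not leaves of the point SF,
        i.e. the vertices that rank actually owns.

        /* Count locally owned vertices: vertices in the local chart that do
           not appear as leaves of the point SF (leaves are remote copies). */
        static PetscErrorCode CountOwnedVertices(DM dm, PetscInt *nOwned)
        {
          PetscSF         pointSF;
          const PetscInt *ilocal;
          PetscInt        nleaves, vStart, vEnd, v, l, owned = 0;

          PetscFunctionBeginUser;
          PetscCall(DMGetPointSF(dm, &pointSF));
          PetscCall(PetscSFGetGraph(pointSF, NULL, &nleaves, &ilocal, NULL));
          PetscCall(DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd)); /* depth 0 = vertices */
          for (v = vStart; v < vEnd; ++v) {
            PetscBool isLeaf = PETSC_FALSE;
            for (l = 0; l < nleaves; ++l) {
              if ((ilocal ? ilocal[l] : l) == v) {isLeaf = PETSC_TRUE; break;}
            }
            if (!isLeaf) ++owned;
          }
          *nOwned = owned;
          PetscFunctionReturn(PETSC_SUCCESS);
        }

        With the current behaviour this should report the 13-vs-3 split
        described above; the expected split for this mesh is 10-vs-6.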


    I have simplified the example so that I can easily do things in my
    head. Now we just have 2 faces per side. Here is the run with
    overlap = 0:

    next *$:/PETSc3/petsc/petsc-dev$ /PETSc3/petsc/petsc-dev/arch-c-exodus-next/bin/mpiexec -host localhost -n 2 /PETSc3/petsc/petsc-dev/arch-c-exodus-next/lib/plex_overlap-obj/plex_overlap -dm_view -overlap 0
    Parallel Mesh in 2 dimensions:
      0-cells: 6 6
      1-cells: 9 9
      2-cells: 4 4
    Labels:
      depth: 3 strata of sizes (6, 9, 4)
      exterior_facets: 1 strata of sizes (4)
      marker: 2 strata of sizes (9, 3)
    PetscSF Object: 2 MPI processes
      type: basic
        sort=rank-order
      [0] Number of roots=19, leaves=5, remote ranks=1
      [0] 4 <- (1,6)
      [0] 5 <- (1,8)
      [0] 7 <- (1,9)
      [0] 10 <- (1,13)
      [0] 11 <- (1,17)
      [1] Number of roots=19, leaves=0, remote ranks=0

    Each partition gets 4 cells and 6 vertices since it is split along
    the diagonal. The overlap
    region contains the 3 vertices and 2 faces that lie on the
    diagonal, and they are all owned by proc 1.
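
    (Those counts can also be read off programmatically by classifying
    the SF leaves by depth stratum. The helper below is only a sketch
    with current signatures, not part of the example program.)

    /* Report, per depth stratum (vertices, edges, cells), how many local
       points are leaves of the point SF, i.e. owned by another rank. */
    static PetscErrorCode CountLeavesByDepth(DM dm)
    {
      PetscSF         pointSF;
      const PetscInt *ilocal;
      PetscInt        nleaves, depth, d, l;

      PetscFunctionBeginUser;
      PetscCall(DMGetPointSF(dm, &pointSF));
      PetscCall(PetscSFGetGraph(pointSF, NULL, &nleaves, &ilocal, NULL));
      PetscCall(DMPlexGetDepth(dm, &depth));
      for (d = 0; d <= depth; ++d) {
        PetscInt dStart, dEnd, count = 0;

        PetscCall(DMPlexGetDepthStratum(dm, d, &dStart, &dEnd));
        for (l = 0; l < nleaves; ++l) {
          const PetscInt p = ilocal ? ilocal[l] : l;
          if (p >= dStart && p < dEnd) ++count;
        }
        PetscCall(PetscPrintf(PETSC_COMM_SELF, "depth %" PetscInt_FMT ": %" PetscInt_FMT " leaves\n", d, count));
      }
      PetscFunctionReturn(PETSC_SUCCESS);
    }

    For the overlap = 0 run this should print 3 leaves at depth 0 and
    2 at depth 1 on proc 0, matching the dump above.
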
    Now if we run with an overlap of 1:

    next *$:/PETSc3/petsc/petsc-dev$ /PETSc3/petsc/petsc-dev/arch-c-exodus-next/bin/mpiexec -host localhost -n 2 /PETSc3/petsc/petsc-dev/arch-c-exodus-next/lib/plex_overlap-obj/plex_overlap -dm_view -overlap 1
    Parallel Mesh in 2 dimensions:
      0-cells: 8 8
      1-cells: 13 13
      2-cells: 6 6
    Labels:
      depth: 3 strata of sizes (8, 13, 6)
      exterior_facets: 1 strata of sizes (6)
      marker: 2 strata of sizes (13, 5)
    PetscSF Object: 2 MPI processes
      type: basic
        sort=rank-order
      [0] Number of roots=27, leaves=19, remote ranks=1
      [0] 0 <- (1,1)
      [0] 2 <- (1,4)
      [0] 6 <- (1,7)
      [0] 7 <- (1,8)
      [0] 8 <- (1,9)
      [0] 9 <- (1,10)
      [0] 10 <- (1,11)
      [0] 11 <- (1,12)
      [0] 12 <- (1,13)
      [0] 14 <- (1,17)
      [0] 15 <- (1,18)
      [0] 16 <- (1,19)
      [0] 17 <- (1,20)
      [0] 18 <- (1,21)
      [0] 19 <- (1,22)
      [0] 20 <- (1,23)
      [0] 21 <- (1,24)
      [0] 25 <- (1,25)
      [0] 26 <- (1,26)
      [1] Number of roots=27, leaves=2, remote ranks=1
      [1] 3 <- (0,1)
      [1] 5 <- (0,5)

    Each process gets 2 more cells (those whose faces lie on the
    diagonal), 2 more vertices, and 4 more edges. This is correct. The
    two overlap cells are ghost for proc 1, but the 4 edges and 2
    vertices are owned. So you are correct: I need to mark all those
    overlap points as "unownable" by the original process.
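
    The rule is essentially what your patch enforces: a rank may only
    own points lying in the closure of the cells of its original
    (non-overlapping) partition. A conceptual sketch of that check
    (illustrative only, with current signatures, not the actual
    DMPlexDistribute code):

    /* Mark the points this rank may own: everything in the closure of its
       locally owned cells. Since cells are assigned according to the
       original partition, the non-leaf cells are exactly the original ones;
       points received only through the overlap stay unmarked. */
    static PetscErrorCode MarkOwnablePoints(DM dm, PetscBool **ownable)
    {
      PetscSF         pointSF;
      const PetscInt *ilocal;
      PetscInt        nleaves, pStart, pEnd, cStart, cEnd, c, l;
      PetscBool      *flag;

      PetscFunctionBeginUser;
      PetscCall(DMGetPointSF(dm, &pointSF));
      PetscCall(PetscSFGetGraph(pointSF, NULL, &nleaves, &ilocal, NULL));
      PetscCall(DMPlexGetChart(dm, &pStart, &pEnd));
      PetscCall(DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd)); /* height 0 = cells */
      PetscCall(PetscCalloc1(pEnd - pStart, &flag));
      for (c = cStart; c < cEnd; ++c) {
        PetscInt *closure = NULL, nclosure, i;
        PetscBool remote  = PETSC_FALSE;

        for (l = 0; l < nleaves; ++l) {
          if ((ilocal ? ilocal[l] : l) == c) {remote = PETSC_TRUE; break;}
        }
        if (remote) continue; /* an overlap cell owned elsewhere: skip its closure */
        PetscCall(DMPlexGetTransitiveClosure(dm, c, PETSC_TRUE, &nclosure, &closure));
        for (i = 0; i < nclosure; ++i) flag[closure[2*i] - pStart] = PETSC_TRUE;
        PetscCall(DMPlexRestoreTransitiveClosure(dm, c, PETSC_TRUE, &nclosure, &closure));
      }
      *ownable = flag;
      PetscFunctionReturn(PETSC_SUCCESS);
    }

    Any point a rank claims to own without being flagged here is
    exactly the kind of wrongly owned overlap vertex/edge above.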

      Thanks for finding this,

          Matt

        Many thanks for all your help
        Michael

        On 16/11/13 13:54, Matthew Knepley wrote:
        On Sat, Nov 16, 2013 at 7:22 AM, Michael Lange <michael.la...@imperial.ac.uk> wrote:

            Hi,

            I notice that, when creating the point SF for the
            parallel partition in DMPlexDistribute, cells are
            assigned to procs according to the original partition but
            vertices aren't. Was this done by design or is this a bug?


        If this were true, there would be no communication for the P1
        test of SNES ex12. Here it is run with -interpolate 1 and
        -dm_view ::ascii_info_detail:

        PetscSF Object: 2 MPI processes
          type: basic
            sort=rank-order
          [0] Number of roots=19, leaves=5, remote ranks=1
          [0] 4 <- (1,6)
          [0] 5 <- (1,8)
          [0] 7 <- (1,9)
          [0] 10 <- (1,13)
          [0] 11 <- (1,17)
          [1] Number of roots=19, leaves=0, remote ranks=0
          [0] Roots referenced by my leaves, by rank
          [0] 1: 5 edges
          [0]    4 <- 6
          [0]    5 <- 8
          [0]    7 <- 9
          [0]    10 <- 13
          [0]    11 <- 17
          [1] Roots referenced by my leaves, by rank

        So there are 3 vertices and 2 edges in the point SF.

           Matt

            In case it is a bug, I have attached a patch that fixes
            this by using the closure of the original partition instead.

            Thanks and kind regards
            Michael




--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
