Hi Matthew,

I think I need the whole G2L, not just the sizes: build my own matrix, create a “fake” DMDA, extract the G2L mapping, and apply it to my preallocation/assembly routines (where I would basically replace my natural ordering with the DMDA ordering from the G2L mapping).

(cf the mail I sent to the user list approx. 1 hour ago)
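
In C, that would look something like the sketch below (untested; the grid sizes M, N and dof = 1 are placeholders for my actual problem, and PETSc hands the map out as the local-to-global side):

  #include <petscdmda.h>

  int main(int argc,char **argv)
  {
    DM                     da;
    ISLocalToGlobalMapping ltog;
    const PetscInt         *l2g;
    PetscInt               nghost,M = 64,N = 64;   /* placeholder grid sizes */
    PetscErrorCode         ierr;

    ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
    /* "fake" DMDA, used only to recover its parallel layout/ordering */
    ierr = DMDACreate2d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
                        DMDA_STENCIL_STAR,M,N,PETSC_DECIDE,PETSC_DECIDE,
                        1,1,NULL,NULL,&da);CHKERRQ(ierr);
    ierr = DMSetUp(da);CHKERRQ(ierr);
    /* the mapping gives local (ghosted) index -> global PETSc index */
    ierr = DMGetLocalToGlobalMapping(da,&ltog);CHKERRQ(ierr);
    ierr = ISLocalToGlobalMappingGetSize(ltog,&nghost);CHKERRQ(ierr);
    ierr = ISLocalToGlobalMappingGetIndices(ltog,&l2g);CHKERRQ(ierr);
    /* ... loop over l2g[0..nghost-1] in the preallocation/assembly
       routines instead of the natural ordering ... */
    ierr = ISLocalToGlobalMappingRestoreIndices(ltog,&l2g);CHKERRQ(ierr);
    ierr = DMDestroy(&da);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }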

Thibaut

On 22 Feb 2019, at 14:24, Matthew Knepley <knep...@gmail.com> wrote:

On Thu, Feb 21, 2019 at 1:19 PM Thibaut Appel <t.appe...@imperial.ac.uk> wrote:

Hi Matthew,

Is the first part of your answer (using DMDASetBlockFills) valid only in the 
case where I create a DMDA object?

Yes, I think that is the kind of stencil I am using. I could determine exactly 
what the stencil looks like, but I preallocate by looping, on each process, 
over all the elements of the stencil, grid node by grid node (which is not 
that costly, and is "exact").

If I do NOT use a DMDA object and create my MPIAIJ matrix myself, how do I get 
the global row indices owned by the process (the "DMDA-like" ones you 
mentioned)?

I think you want

  
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMDA/DMDAGetLocalInfo.html

This tells you exactly which patch is owned by the process, and also how big the 
overlap is. It's not the same as G2L, but do you need
the whole G2L, or just the sizes?
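
For instance (sketch, assuming a 2d DMDA named da):

  DMDALocalInfo info;

  ierr = DMDAGetLocalInfo(da,&info);CHKERRQ(ierr);
  /* owned patch:   i = info.xs  .. info.xs +info.xm -1,
                    j = info.ys  .. info.ys +info.ym -1  */
  /* ghosted patch: i = info.gxs .. info.gxs+info.gxm-1,
                    j = info.gys .. info.gys+info.gym-1  */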

  Thanks,

    Matt


The problem is that MatGetOwnershipRange cannot be called if the matrix hasn't 
been preallocated, and I need the global indices to preallocate.
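
The only workaround I see is computing the ownership range myself before 
creating the matrix, e.g. (sketch; Ntot is a placeholder for my global 
problem size):

  PetscInt n = PETSC_DECIDE,Ntot = 100000;   /* placeholder global size */
  PetscInt rstart,rend;

  ierr = PetscSplitOwnership(PETSC_COMM_WORLD,&n,&Ntot);CHKERRQ(ierr);
  ierr = MPI_Scan(&n,&rend,1,MPIU_INT,MPI_SUM,PETSC_COMM_WORLD);CHKERRQ(ierr);
  rstart = rend - n;   /* this process owns global rows [rstart, rend) */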

Thibaut


On 21/02/2019 17:49, Matthew Knepley wrote:
On Thu, Feb 21, 2019 at 11:16 AM Thibaut Appel via petsc-users 
<petsc-users@mcs.anl.gov> wrote:
Dear PETSc developers/users,

I’m solving linear PDEs on a regular grid with high-order finite differences, 
assembling an MPIAIJ matrix to solve linear systems or eigenvalue problems. 
I’ve been using a vertex-major, natural ordering for the parallelism with 
PetscSplitOwnership (yielding rectangular slices of the physical domain) and 
wanted to move to DMDA to get a more square-ish domain decomposition and 
minimize communication between processes.

However, my application is memory-critical, and I have finely-tuned matrix 
preallocation routines for allocating memory “optimally”. It seems the memory 
of a DMDA matrix is allocated according to the stencil width passed to 
DMDACreate, and the manual says about it:

“These DMDA stencils have nothing directly to do with any finite difference 
stencils one might choose to use for a discretization”

Despite reading the manual pages, there must be something I do not understand 
in the DM topology: what is that "stencil width" for, then? I will not use 
ghost values for my FD method, right?

What this is saying is, "You might be using some stencil that is not STAR or 
BOX, but we are preallocating according to one of those".
If you really care about how much memory is preallocated, which it seems you 
do, then you might be able to use

  
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMDA/DMDASetBlockFills.html

to tell us exactly how to preallocate.
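
For example, with dof = 3, something like the sketch below; the particular 
fill patterns are made up for illustration, and the call must come before 
DMCreateMatrix():

  Mat      A;
  /* dfill/ofill are dof x dof 0/1 tables (row-major): which components
     couple within a grid point (dfill) and between neighboring grid
     points (ofill) */
  PetscInt dfill[9] = {1,1,0,
                       1,1,1,
                       0,1,1};
  PetscInt ofill[9] = {1,0,0,
                       0,1,0,
                       0,0,1};

  ierr = DMDASetBlockFills(da,dfill,ofill);CHKERRQ(ierr);
  ierr = DMCreateMatrix(da,&A);CHKERRQ(ierr);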

I was then wondering if I could just create an MPIAIJ matrix and, with a PETSc 
routine, get the global indices of the domain for each process: in other words, 
an equivalent of PetscSplitOwnership that gives me the DMDA unknown ordering, 
so I can feed and loop on that in my preallocation and assembly routines.

You can make an MPIAIJ matrix yourself, of course. It should have the same 
division of rows as the DMDA division of dofs. Note, however, that 
MatSetValuesStencil() will not work for a custom matrix.
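
A sketch of that (d_nnz and o_nnz stand in for whatever your preallocation 
routine computes, one entry per owned row):

  Mat            A;
  DMDALocalInfo  info;
  PetscInt       nown;
  const PetscInt *d_nnz,*o_nnz;   /* filled by your preallocation routine */

  /* give the matrix the same row division as the DMDA's division of dofs */
  ierr = DMDAGetLocalInfo(da,&info);CHKERRQ(ierr);
  nown = info.xm*info.ym*info.dof;   /* dofs owned by this process */

  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);
  ierr = MatSetSizes(A,nown,nown,PETSC_DETERMINE,PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A,0,d_nnz,0,o_nnz);CHKERRQ(ierr);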

  Thanks,

     Matt

Thanks very much,

Thibaut


--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
