Re: [petsc-users] flux vector

2021-06-13 Thread Adrian Croucher via petsc-users

hi, thanks for the suggestions!

On 12/06/21 12:19 am, Matthew Knepley wrote:
However, using overlap = 1 put in a bunch of new faces. We do not care 
about the ones on the process boundary. They will be handled by the other 
process. We do care about faces between two ghost cells, since they will be 
a false positive. Luckily, these are labeled by the "ghost" label.


I think we *do* have to care about the faces on the process boundary 
(assuming by that you mean the faces with support size 1 on the outside 
edge of the partition ghost cells), because they can be ghosts of flux 
faces on other processes. If we ignore them they will have no dofs on 
the current process, but they could have dofs on another one. That 
inconsistency is the problem that causes the error when you create the 
global section.


Also, I don't think we want to exclude faces between partition ghost 
cells. Those faces will definitely be ghosts of a flux face on another 
process. Again, if we exclude them the dofs will not be consistent 
across processes. We might not actually compute a flux on those faces 
locally, but there has to be a space for them anyway.


I have found a simple algorithm now which seems to work in all my test 
cases, though I still wonder if there is a better way. The algorithm is:


1) label the global boundary faces before distribution (all faces with 
support size 1), using DMPlexMarkBoundaryFaces()


2) label any open boundary faces (on which Dirichlet BCs are applied) - 
I didn't mention these before, but they need to be included as flux faces


3) after distribution, loop over all faces on current process:

    if face on open boundary: label face as a flux face
    else:
        if face not on global boundary: label face as a flux face

Here the only test of support size is in step 1), which ensures that the 
dofs are always consistent between processes. Ultimately, the support 
size on the local process is not really relevant or reliable.
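
In case it helps to see it spelled out, here is a rough sketch of this in code 
(the "open_bc" and "flux" label names are just placeholders, dm/dmDist are the 
serial and distributed DMs, and error checking is trimmed):

  PetscErrorCode ierr;
  DMLabel        boundary, openbc, flux;
  PetscInt       fStart, fEnd, f;

  /* step 1 (serial, before distribution): mark all global boundary faces */
  ierr = DMCreateLabel(dm, "boundary");CHKERRQ(ierr);
  ierr = DMGetLabel(dm, "boundary", &boundary);CHKERRQ(ierr);
  ierr = DMPlexMarkBoundaryFaces(dm, 1, boundary);CHKERRQ(ierr);

  /* step 2: the "open_bc" label is assumed to be set from the BC input */

  /* ... DMPlexDistribute() with overlap = 1 happens here; labels travel with the mesh ... */

  /* step 3 (parallel, after distribution): mark the flux faces */
  ierr = DMCreateLabel(dmDist, "flux");CHKERRQ(ierr);
  ierr = DMGetLabel(dmDist, "flux", &flux);CHKERRQ(ierr);
  ierr = DMGetLabel(dmDist, "boundary", &boundary);CHKERRQ(ierr);
  ierr = DMGetLabel(dmDist, "open_bc", &openbc);CHKERRQ(ierr);
  ierr = DMPlexGetHeightStratum(dmDist, 1, &fStart, &fEnd);CHKERRQ(ierr); /* faces */
  for (f = fStart; f < fEnd; ++f) {
    PetscInt open = -1, global = -1;  /* DMLabelGetValue returns -1 for unlabelled points */
    if (openbc) {ierr = DMLabelGetValue(openbc, f, &open);CHKERRQ(ierr);}
    ierr = DMLabelGetValue(boundary, f, &global);CHKERRQ(ierr);
    if (open >= 0 || global < 0) {ierr = DMLabelSetValue(flux, f, 1);CHKERRQ(ierr);}
  }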


The main thing I don't like about this algorithm is that it relies on 
looping over all mesh faces in serial during step 1). I would rather not 
add to the serial part of the code and would prefer if it could all be 
done in parallel, after distribution. Maybe I'm worrying unnecessarily, 
and just getting the support size of each face is cheap enough that this 
won't be a significant issue?


- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-users] Change Amat in FormJacobian

2021-06-13 Thread Barry Smith

  Anton,

  -pc_mg_galerkin pmat

  Though it seems simple, there is some subtlety in swapping out matrices with 
SNES.

  When using multigrid with SNES there are at least five distinct uses of the 
Jacobian operator:

1) the matrix-vector product in the line search, to check the Wolfe sufficient 
decrease convergence criterion
2) the matrix-vector product for the Krylov accelerator of the system
3) smoothing on the finest level of MG
4) the matrix-vector product needed on the finest level of MG to compute the 
residual that will be restricted to the coarser level of multigrid
5) when using Galerkin products to compute the coarser grid operator, the 
Galerkin matrix triple product

When one swaps out the mat, which of these does one wish to change? The first 
two seem to naturally go together, as do the last three. In your case I presume 
you want to swap for the first two, but always use pmat for the last three? To 
achieve this you will also need -pc_use_amat false
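
For example, an options sketch of that combination (ex19 here is just a 
stand-in for your own application) would be something like

  ./ex19 -snes_monitor -pc_type mg -pc_mg_galerkin pmat -pc_use_amat false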

If you run with -info and -snes_view it will print out some of the information 
about which operator it is using for each case, but probably not all of them.

Note: if the pmat is actually an accurate computation of the Jacobian then it 
is likely best not to ever use a matrix-free product. It is only when pmat is 
approximated in some specific way that using the matrix-free product would be 
useful to ensure the "Newton" method actually computes a Newton step.

Barry



> On Jun 13, 2021, at 11:21 AM, Matthew Knepley  wrote:
> 
> On Sun, Jun 13, 2021 at 10:55 AM Anton Popov  wrote:
> Hi,
> 
> I want a simple(?) thing. I want to decide and be able to assign the 
> Jacobian matrix (Amat) on the fly within the FormJacobian function (i.e. 
> during Newton iteration) to one of the following options:
> 
> 1) built-in MFFD approximation
> 2) assembled preconditioner matrix (Pmat)
> 
> I have not found this scenario demonstrated in any example, therefore 
> I’m asking how to do that.
> 
> Currently I do the following:
> 
> 1) setup Amat as a shell matrix with a MATOP_MULT operation that simply 
> retrieves a matrix object from its context and calls MatMult on it.
> 
> 2) if I need MFFD, I put a matrix generated with MatCreateSNESMF in the 
> Amat context (of course I also call MatMFFDComputeJacobian before that).
> 
> 3) if I need Pmat, I simply put Pmat in the Amat context.
> 
> 4) call MatAssemblyBegin/End on Amat
> 
> So far so good.
> 
> However, shell Amat and assembled Pmat generate a problem if Galerkin 
> multigrid is requested as a preconditioner (I just test on 1 CPU):
> 
> [0]PETSC ERROR: MatPtAP requires A, shell, to be compatible with P, 
> seqaij (Misses composed function MatPtAP_shell_seqaij_C)
> [0]PETSC ERROR: #1 MatPtAP()
> [0]PETSC ERROR: #2 MatGalerkin()
> [0]PETSC ERROR: #3 PCSetUp_MG()
> [0]PETSC ERROR: #4 PCSetUp()
> [0]PETSC ERROR: #5 KSPSetUp()
> [0]PETSC ERROR: #6 KSPSolve()
> [0]PETSC ERROR: #7 SNESSolve_NEWTONLS()
> [0]PETSC ERROR: #8 SNESSolve()
> 
> It seems like PETSc tries to apply Galerkin coarsening to the shell Amat 
> matrix instead of the assembled Pmat. Why?
> 
> Pmat is internally generated by SNES using a DMDA attached to SNES, so 
> it has everything to define Galerkin coarsening. And it actually works 
> if I just get rid of the shell Amat.
> 
> I can get around this problem by wrapping a PC object in a shell matrix 
> and passing it as Pmat. This is a rather ugly approach, so I wonder how to 
> resolve the situation in a better way, if it is possible.
> 
> Hi Anton,
> 
> You are correct that there is no specific test, so I can believe that a bug 
> might be lurking here.
> I believe the way it is supposed to work is that you set the type of Galerkin 
> coarsening that you want:
> 
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCMGSetGalerkin.html
> 
> 
> So I am thinking you want 'pmat' only, and you would be in charge of making 
> the coarse MFFD operator
> and telling PCMG about it. I could imagine that you might want us to 
> automatically do that, but we would
> need some way to know that it is MFFD, and with the scheme above we do not 
> have that.
> 
> What is the advantage of switching representations during the Newton 
> iteration?
> 
>   Thanks,
> 
>  Matt
>  
> Thanks.
> Anton
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ 



Re: [petsc-users] Change Amat in FormJacobian

2021-06-13 Thread Matthew Knepley
On Sun, Jun 13, 2021 at 10:55 AM Anton Popov  wrote:

> Hi,
>
> I want a simple(?) thing. I want to decide and be able to assign the
> Jacobian matrix (Amat) on the fly within the FormJacobian function (i.e.
> during Newton iteration) to one of the following options:
>
> 1) built-in MFFD approximation
> 2) assembled preconditioner matrix (Pmat)
>
> I have not found this scenario demonstrated in any example, therefore
> I’m asking how to do that.
>
> Currently I do the following:
>
> 1) setup Amat as a shell matrix with a MATOP_MULT operation that simply
> retrieves a matrix object from its context and calls MatMult on it.
>
> 2) if I need MFFD, I put a matrix generated with MatCreateSNESMF in the
> Amat context (of course I also call MatMFFDComputeJacobian before that).
>
> 3) if I need Pmat, I simply put Pmat in the Amat context.
>
> 4) call MatAssemblyBegin/End on Amat
>
> So far so good.
>
> However, shell Amat and assembled Pmat generate a problem if Galerkin
> multigrid is requested as a preconditioner (I just test on 1 CPU):
>
> [0]PETSC ERROR: MatPtAP requires A, shell, to be compatible with P,
> seqaij (Misses composed function MatPtAP_shell_seqaij_C)
> [0]PETSC ERROR: #1 MatPtAP()
> [0]PETSC ERROR: #2 MatGalerkin()
> [0]PETSC ERROR: #3 PCSetUp_MG()
> [0]PETSC ERROR: #4 PCSetUp()
> [0]PETSC ERROR: #5 KSPSetUp()
> [0]PETSC ERROR: #6 KSPSolve()
> [0]PETSC ERROR: #7 SNESSolve_NEWTONLS()
> [0]PETSC ERROR: #8 SNESSolve()
>
> It seems like PETSc tries to apply Galerkin coarsening to the shell Amat
> matrix instead of the assembled Pmat. Why?
>
> Pmat is internally generated by SNES using a DMDA attached to SNES, so
> it has everything to define Galerkin coarsening. And it actually works
> if I just get rid of the shell Amat.
>
> I can get around this problem by wrapping a PC object in a shell matrix
> and passing it as Pmat. This is a rather ugly approach, so I wonder how to
> resolve the situation in a better way, if it is possible.
>

Hi Anton,

You are correct that there is no specific test, so I can believe that a bug
might be lurking here.
I believe the way it is supposed to work is that you set the type of
Galerkin coarsening that you want:

https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCMGSetGalerkin.html

So I am thinking you want 'pmat' only, and you would be in charge of making
the coarse MFFD operator
and telling PCMG about it. I could imagine that you might want us to
automatically do that, but we would
need some way to know that it is MFFD, and with the scheme above we do not
have that.
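
For reference, a rough programmatic equivalent (just a sketch; it assumes you 
can get at the PC from the SNES's KSP):

  KSP ksp;
  PC  pc;

  ierr = SNESGetKSP(snes, &ksp);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  /* build the coarser grid operators by Galerkin products of Pmat only */
  ierr = PCMGSetGalerkin(pc, PC_MG_GALERKIN_PMAT);CHKERRQ(ierr);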

What is the advantage of switching representations during the Newton
iteration?

  Thanks,

 Matt


> Thanks.
> Anton
>
-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


[petsc-users] Change Amat in FormJacobian

2021-06-13 Thread Anton Popov

Hi,

I want a simple(?) thing. I want to decide and be able to assign the 
Jacobian matrix (Amat) on the fly within the FormJacobian function (i.e. 
during Newton iteration) to one of the following options:


1) built-in MFFD approximation
2) assembled preconditioner matrix (Pmat)

I have not found this scenario demonstrated in any example, therefore 
I’m asking how to do that.


Currently I do the following:

1) setup Amat as a shell matrix with a MATOP_MULT operation that simply 
retrieves a matrix object from its context and calls MatMult on it.


2) if I need MFFD, I put a matrix generated with MatCreateSNESMF in the 
Amat context (of course I also call MatMFFDComputeJacobian before that).


3) if I need Pmat, I simply put Pmat in the Amat context.

4) call MatAssemblyBegin/End on Amat

So far so good.
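
To be concrete, the shell setup is essentially the following (a simplified 
sketch; names are made up and error checking is omitted):

  typedef struct { Mat J; } JacCtx;   /* J points at either the MFFD matrix or Pmat */

  static PetscErrorCode ShellMult(Mat A, Vec x, Vec y)
  {
    JacCtx         *ctx;
    PetscErrorCode  ierr;

    PetscFunctionBeginUser;
    ierr = MatShellGetContext(A, &ctx);CHKERRQ(ierr);
    ierr = MatMult(ctx->J, x, y);CHKERRQ(ierr);   /* forward to the wrapped matrix */
    PetscFunctionReturn(0);
  }

  /* setup, once: */
  ierr = MatCreateShell(comm, n, n, N, N, &jacctx, &Amat);CHKERRQ(ierr);
  ierr = MatShellSetOperation(Amat, MATOP_MULT, (void (*)(void))ShellMult);CHKERRQ(ierr);

  /* in FormJacobian: point jacctx.J at the MFFD matrix or at Pmat, then */
  ierr = MatAssemblyBegin(Amat, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(Amat, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);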

However, shell Amat and assembled Pmat generate a problem if Galerkin 
multigrid is requested as a preconditioner (I just test on 1 CPU):


[0]PETSC ERROR: MatPtAP requires A, shell, to be compatible with P, 
seqaij (Misses composed function MatPtAP_shell_seqaij_C)

[0]PETSC ERROR: #1 MatPtAP()
[0]PETSC ERROR: #2 MatGalerkin()
[0]PETSC ERROR: #3 PCSetUp_MG()
[0]PETSC ERROR: #4 PCSetUp()
[0]PETSC ERROR: #5 KSPSetUp()
[0]PETSC ERROR: #6 KSPSolve()
[0]PETSC ERROR: #7 SNESSolve_NEWTONLS()
[0]PETSC ERROR: #8 SNESSolve()

It seems like PETSc tries to apply Galerkin coarsening to the shell Amat 
matrix instead of the assembled Pmat. Why?


Pmat is internally generated by SNES using a DMDA attached to SNES, so 
it has everything to define Galerkin coarsening. And it actually works 
if I just get rid of the shell Amat.


I can get around this problem by wrapping a PC object in a shell matrix 
and passing it as Pmat. This is a rather ugly approach, so I wonder how to 
resolve the situation in a better way, if it is possible.


Thanks.
Anton