You may be able to use PCSetApplicationContext() 
(https://petsc.org/main/manualpages/PC/PCSetApplicationContext/#pcsetapplicationcontext)
or KSPSetApplicationContext(), or PetscObjectCompose() (the latter may be better 
since the resulting code could eventually be merged into PETSc).
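
For example, a rough sketch of the PetscObjectCompose() route (the key "locked dofs" 
and the variable names are just placeholders):

  /* in the application code: attach the indicator Vec to the PC object */
  Vec lockedDofs;   /* your Vec marking locked/free dofs */
  PC  pc;
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PetscObjectCompose((PetscObject)pc, "locked dofs", (PetscObject)lockedDofs));

  /* later, anywhere the PC is available (for instance inside redistribute.c) */
  Vec lockedDofsInPC = NULL;
  PetscCall(PetscObjectQuery((PetscObject)pc, "locked dofs", (PetscObject *)&lockedDofsInPC));

PetscObjectCompose() takes a reference on the Vec, so you can destroy it as usual 
on the application side.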

  Barry


> On Jul 29, 2023, at 7:54 AM, Carl-Johan Thore <carl-johan.th...@liu.se> wrote:
> 
> Hi again,
>  
> I’ve now managed to get PCMG with PCREDISTRIBUTE to work on multiple ranks, 
> and convergence looks good on my
> examples. (The modification to redistribute.c was trivial: just skip the 
> “if (size > 1) {” branch in PCSetUp_Redistribute.)
>  
> The final step is your suggestion
>  
> “2) the only problem with 1 is it is likely to be poorly load balanced (but 
> you can make some runs to see how imbalanced it is, that will depend exactly 
> on what parts are locked and what MPI processes they are on).  So if it is 
> poorly balanced then you would need to get out of redistribute.c a mapping 
> for each kept dof to what MPI rank it is moved to and use that to move the 
> entries in the reduced interpolation you have created. “
>  
> My idea for this is to call “VecScatterBegin(red->scatter, …” on my vector 
> indicating fixed or free DOFs, so that it is redistributed in the same way as 
> the global matrix.
> It is tempting to do something like “KSPGetPC(ksp, &pc); PC_Redistribute 
> *red = (PC_Redistribute *)pc->data;” in my main code, and then do the scatter 
> with the red object. But this is not directly possible, since PC_Redistribute 
> is defined only inside redistribute.c and is therefore not visible to my code. 
> Another alternative is perhaps to pass my vector into the PC via the “optional 
> user-defined context” in the _p_PC context.
> Do you have any suggestions or hints for a good (efficient and non-intrusive) 
> way to “get out of redistribute.c a mapping …”?
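>  
> For concreteness, this is roughly the kind of thing I have in mind (only a 
> sketch; the names are placeholders, and A_red and the scatter direction are 
> guesses on my part since PC_Redistribute and its scatter are private to 
> redistribute.c):
>  
> Vec freeDofs;     /* my indicator vector on the original layout */
> Vec freeDofsRed;  /* the same information on the redistributed layout */
> PetscCall(MatCreateVecs(A_red, &freeDofsRed, NULL)); /* A_red: the reduced matrix */
> PetscCall(VecScatterBegin(red->scatter, freeDofs, freeDofsRed, INSERT_VALUES, SCATTER_FORWARD));
> PetscCall(VecScatterEnd(red->scatter, freeDofs, freeDofsRed, INSERT_VALUES, SCATTER_FORWARD));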
>  
> /Carl-Johan
>  
>  
> From: Carl-Johan Thore <carl-johan.th...@liu.se> 
> Sent: Tuesday, July 25, 2023 4:49 PM
> To: Barry Smith <bsm...@petsc.dev>
> Subject: Re: [petsc-users] PCMG with PCREDISTRIBUTE
>  
> Ok thanks! Good to know I'm not doing anything wrong on the PETSc side, so to 
> speak. It seems like it could be related to WSL then.
> From: Barry Smith <bsm...@petsc.dev>
> Sent: 25 July 2023 16:39:37
> To: Carl-Johan Thore <carl-johan.th...@liu.se>
> Subject: Re: [petsc-users] PCMG with PCREDISTRIBUTE
>  
>  
>   No idea why it would be particularly slow. Probably some issue with making 
> the PETSc shared library; normally that takes a few seconds on Linux or 
> a Mac.
>  
>  
> 
> 
> On Jul 25, 2023, at 6:12 AM, Carl-Johan Thore <carl-johan.th...@liu.se> wrote:
>  
> Hi again,
>  
> I’ve now gotten PCMG with PCREDISTRIBUTE to work nicely on simple examples on 
> one core
> (attached ppt). I therefore plan to test with multiple cores by first 
> modifying redistribute.c such
> that the dofs are not moved around, as you suggested. For this I wonder: what 
> is the recommended way
> of rebuilding after modifying a single file in my PETSc installation? I’ve 
> tried running make in the folder containing redistribute.c:
>  
> <image001.png>
>  
> This works but is terribly slow on my computer (Google suggests that this 
> could be related
> to my use of Windows Subsystem for Linux, but I didn’t succeed with the 
> suggestions).
>  
> Kind regards,
> Carl-Johan
>  
>  
>  
> From: Carl-Johan Thore <carl-johan.th...@liu.se> 
> Sent: Sunday, July 2, 2023 8:07 PM
> To: Barry Smith <bsm...@petsc.dev>
> Subject: Re: [petsc-users] PCMG with PCREDISTRIBUTE
>  
> "NN" is used to construct the row indices passed to matzerorowscolumns. That 
> works fine, so at least in that sense NN is correct. But I should probably 
> compare my IS:es with those row indices as well.
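> Something like this, perhaps (only a sketch, and the names are placeholders; 
> nnRows/numRows are the arrays I pass to MatZeroRowsColumns() and isLocked is 
> my (sorted) IS of locked dofs):
> 
> IS        isZeroed;
> PetscBool same;
> PetscCall(ISCreateGeneral(PETSC_COMM_WORLD, numRows, nnRows, PETSC_COPY_VALUES, &isZeroed));
> PetscCall(ISSort(isZeroed));
> PetscCall(ISEqual(isZeroed, isLocked, &same)); /* same == PETSC_FALSE means the index sets differ */
> PetscCall(ISDestroy(&isZeroed));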
> 
> Yes, making a tiny problem is a good idea.
> 
> Thanks for taking time to look at this
> From: Barry Smith <bsm...@petsc.dev>
> Sent: 02 July 2023 18:58:50
> To: Carl-Johan Thore <carl-johan.th...@liu.se>
> Cc: petsc-users <petsc-users@mcs.anl.gov>
> Subject: Re: [petsc-users] PCMG with PCREDISTRIBUTE
>  
>  
>    Are you sure the NN is correct? I cannot see how you set that, so how do 
> you know that it exactly matches the way PCREDISTRIBUTE selects rows?
>  
>    I suggest making a tiny problem with artificial matrix values that you 
> select to “slice” off parts of the grid, so you can see exactly on the grid 
> that the selected rows and columns are correct, as you expect.
>  
> 
> On Jul 2, 2023, at 2:16 AM, Carl-Johan Thore <carl-johan.th...@liu.se> wrote:
>  
> Hi,
>  
> I tried your suggestion
>  
> “1) Use PCREDISTRIBUTE but hack the code in redistribute.c to not move dof 
> between MPI ranks, just have it remove the locked rows/columns (to start just 
> run on one MPI rank since then nothing is moved) Then in your  code you just 
> need to pull out the appropriate rows and columns of the interpolation that 
> correspond to the dof you have kept and pass this smaller interpolation to 
> the inner KSP PCMG. This is straightforward and like what is in DMSetVI.  The 
> MG convergence should be just as good as on the full system.”
>  
> from below and got the size of the interpolation matrix correct. But the 
> convergence doesn’t seem right. In the attached .txt-file the code without 
> redistribute converges in 8
> FGMRES iterations whereas with redistribute it takes 25 (I’ve tested this on 
> various meshes and the redistribute code consistently performs much worse in 
> terms of number of iterations). The code without redistribute is very well 
> tested and always performs very well, so I’m fairly certain the error is in 
> my new code.
>  
> Would you be willing to take a quick look at the attached code snippet to see 
> if I’m doing some obvious mistake?
>  
> Kind regards,
> Carl-Johan
>  
> From: Barry Smith <bsm...@petsc.dev> 
> Sent: Friday, June 30, 2023 5:21 PM
> To: Matthew Knepley <knep...@gmail.com>
> Cc: Carl-Johan Thore <carl-johan.th...@liu.se>; petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] PCMG with PCREDISTRIBUTE
>  
>  
>  
> 
> On Jun 30, 2023, at 10:22 AM, Matthew Knepley <knep...@gmail.com> wrote:
>  
> On Fri, Jun 30, 2023 at 10:16 AM Carl-Johan Thore via petsc-users 
> <petsc-users@mcs.anl.gov> wrote:
> Thanks for the quick reply and the suggestions!
>  
> “ … you should first check that the PCMG works quite well “
>  
> Yes, the PCMG works very well for the full system.
>  
> “I am guessing that your code is slightly different than ex42.c because you 
> take the interpolation matrix provided by the DM 
> and give it to the inner KSP PCMG? So you solve problem 2 but not problem 1.”
>  
> Yes, it’s slightly different so problem 2 should be solved.
>  
> It looked somewhat complicated to get PCMG to work with redistribute, so I’ll 
> try with PCGAMG first
> (it ran immediately with redistribute, but was slower than PCMG on my very 
> small test problem. I’ll try
> to tune the settings).
>  
> A related question: I’m here using a DMDA for a structured grid but I’m 
> locking so many DOFs that for many of the elements
> all DOFs are locked. In such a case could it make sense to switch/convert the 
> DMDA to a DMPlex containing only those
> elements that actually have DOFs?
>  
> Possibly, but if you are doing FD, then there is built-in topology in DMDA 
> that is not present in Plex, so
> finding the neighbors in the right order is harder (possible, but harder; we 
> address this in some new work that is not yet merged). There is also 
> structured adaptive support with DMForest, but this also does not preserve 
> the stencil.
>  
>    The efficiency of active set VI solvers in PETSc demonstrates to me that 
> solving reduced systems can be done efficiently with geometric multigrid 
> using a structured grid, so I would not suggest giving up on what you started. 
>  
>     You can do it in two steps
>  
> 1) Use PCREDISTRIBUTE but hack the code in redistribute.c to not move dof 
> between MPI ranks, just have it remove the locked rows/columns (to start just 
> run on one MPI rank since then nothing is moved) Then in your  code you just 
> need to pull out the appropriate rows and columns of the interpolation that 
> correspond to the dof you have kept and pass this smaller interpolation to 
> the inner KSP PCMG. This is straightforward and like what is in DMSetVI.  The 
> MG convergence should be just as good as on the full system.
>  
> 2) the only problem with 1 is it is likely to be poorly load balanced (but 
> you can make some runs to see how imbalanced it is, that will depend exactly 
> on what parts are locked and what MPI processes they are on).  So if it is 
> poorly balanced then you would need to get out of redistribute.c a mapping 
> for each kept dof to what MPI rank it is moved to and use that to move the 
> entries in the reduced interpolation you have created. 
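>  
> Roughly, step 1 could look something like the following (only a sketch; 
> is_kept_fine/is_kept_coarse are index sets of the kept fine- and coarse-level 
> dofs that you build yourself, and inner_pc is the PCMG inside the 
> redistribute KSP):
>  
> Mat P, P_red;
> PetscCall(DMCreateInterpolation(da_coarse, da_fine, &P, NULL));
> /* keep only the rows of the kept fine dofs and the columns of the kept coarse dofs */
> PetscCall(MatCreateSubMatrix(P, is_kept_fine, is_kept_coarse, MAT_INITIAL_MATRIX, &P_red));
> PetscCall(PCMGSetInterpolation(inner_pc, level, P_red));
> PetscCall(MatDestroy(&P_red));
> PetscCall(MatDestroy(&P));
>  
> For step 2 the extra ingredient is only the knowledge of which MPI rank each 
> kept row ends up on after the redistribution; with that you would move the 
> rows of P_red (and the entries of is_kept_fine) accordingly.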
>  
>   If you do succeed it would actually be useful code that we could add to 
> PCREDISTRIBUTE for more general use by others.
>  
>   Barry
>  
>  
>  
> 
>  
>   Thanks,
>  
>     Matt
>  
> From: Barry Smith <bsm...@petsc.dev> 
> Sent: Friday, June 30, 2023 3:57 PM
> To: Carl-Johan Thore <carl-johan.th...@liu.se>
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] PCMG with PCREDISTRIBUTE
>  
>  
>    Oh, I forgot to mention that you should first check that the PCMG works quite 
> well for the full system (without the PCREDISTRIBUTE); the convergence
> on the redistributed system (assuming you did all the work to get PCMG to 
> work for you) should be very similar to (but not measurably better than) the 
> convergence on the full system.
>  
>  
>  
> 
> On Jun 30, 2023, at 9:17 AM, Barry Smith <bsm...@petsc.dev> wrote:
>  
>  
>    ex42.c provides directly the interpolation/restriction needed to move 
> between levels in the loop
>  
> for (k = 1; k < nlevels; k++) {
>   PetscCall(DMCreateInterpolation(da_list[k - 1], da_list[k], &R, NULL));
>   PetscCall(PCMGSetInterpolation(pc, k, R));
>   PetscCall(MatDestroy(&R));
> }
>  
> The more standard alternative to this is to call KSPSetDM() and have the PCMG 
> setup use the DM
> to construct the interpolations (I don't know why ex42.c does this 
> construction itself instead of having the KSPSetDM() process handle it but 
> that doesn't matter). The end result is the same in both cases.
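>  
> For reference, the KSPSetDM() route looks roughly as follows (only a sketch; 
> da_fine, A, and nlevels stand for whatever you already have, and the DM is 
> used to build the interpolations while you still supply the operator yourself):
>  
> PetscCall(KSPSetDM(ksp, da_fine));
> PetscCall(KSPSetDMActive(ksp, PETSC_FALSE)); /* DM provides interpolations, not the operator */
> PetscCall(KSPSetOperators(ksp, A, A));
> PetscCall(KSPGetPC(ksp, &pc));
> PetscCall(PCSetType(pc, PCMG));
> PetscCall(PCMGSetLevels(pc, nlevels, NULL));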
>  
> Since PCREDISTRIBUTE builds its own new matrix (by using only certain rows 
> and columns of the original matrix) the original interpolation
> cannot be used, for two reasons:
>  
> 1) It is for the wrong problem (since it is for the full system). 
>  
> 2) In addition, if you ran with ex42.c the inner KSP does not have access to 
> the interpolation that was constructed, so you could not get PCMG to work 
> as indicated below.
>  
> I am guessing that your code is slightly different than ex42.c because you 
> take the interpolation matrix provided by the DM 
> and give it to the inner KSP PCMG? So you solve problem 2 but not problem 1.
>  
> So the short answer is that there is no "canned" way to use the PCMG process 
> trivially with PCREDISTRIBUTE. 
>  
> To do what you want requires two additional steps
>  
> 1) after you construct the full interpolation matrix (by using the DM) you 
> need to remove the rows associated with the dofs that have been removed as 
> "locked" variables (and the columns that are associated with coarse grid 
> points that live on the removed points) so that the interpolation is the 
> correct "size" for the smaller problem.
>  
> 2) since PCREDISTRIBUTE actually moves degrees of freedom between MPI processes 
> for load balancing after it has removed the locked variables, you would need 
> to do the exact same movement for the rows of the interpolation matrix that 
> you have constructed (after you have removed the "locked" rows of the 
> interpolation).
>  
> Lots of bookkeeping to achieve 1 and 2, but conceptually simple.
>  
> As an experiment you can try using PCGAMG on the redistributed matrix with 
> -redistribute_pc_type gamg to use algebraic multigrid, just to see the time 
> and convergence rates. Since GAMG creates its own interpolation based on the 
> matrix, and it will be built on the smaller redistributed matrix, there will 
> be no issue with the wrong "sized" interpolation. Of course you have the 
> overhead of algebraic multigrid and cannot take advantage of geometric 
> multigrid. The GAMG approach may be satisfactory for your needs.
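>  
> The option can be given on the command line or, if you prefer, set in code 
> before the options are processed (a rough sketch):
>  
> PetscCall(PetscOptionsSetValue(NULL, "-pc_type", "redistribute"));
> PetscCall(PetscOptionsSetValue(NULL, "-redistribute_pc_type", "gamg"));
> PetscCall(KSPSetFromOptions(ksp));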
>  
> If you are game for looking more closely at using redistribute with geometric 
> multigrid and PETSc (which will require digging into the PETSc source code and 
> using internal information from it) you can start by looking 
> at how we solve variational inequality problems with SNES using reduced space 
> active set methods: SNESVINEWTONRSLS, src/snes/impls/vi/rs/virs.c. This code 
> solves problem 1: it builds the entire interpolation and then pulls out the 
> required non-locked part. Reduced space active set methods essentially lock 
> the constrained dofs and solve a smaller system without those dofs at each 
> iteration.
>  
> But it does not solve problem 2: moving the rows of the "smaller" 
> interpolation to the correct MPI process based on where PCREDISTRIBUTE moved 
> the rows. To do this would require looking at the PCREDISTRIBUTE code 
> (src/ksp/pc/impls/redistribute/redistribute.c), extracting the information 
> about where each row is moved, and performing the same movement for the 
> interpolation matrix.
>  
>   Barry
>  
>  
>  
>  
>  
>  
>  
>  
>  
> 
> On Jun 30, 2023, at 8:21 AM, Carl-Johan Thore via petsc-users 
> <petsc-users@mcs.anl.gov> wrote:
>  
> Hi,
>  
> I'm trying to run an iterative solver (FGMRES for example) with PCMG as 
> preconditioner. The setup of PCMG
> is done roughly as in ex42 of the PETSc tutorials 
> (https://petsc.org/main/src/ksp/ksp/tutorials/ex42.c.html).
> Since I have many locked degrees of freedom I would like to use 
> PCREDISTRIBUTE. However, this
> results in the following (30039 is the number of DOFs after redistribute and 
> 55539 the number before):
>  
> [0]PETSC ERROR: --------------------- Error Message 
> --------------------------------------------------------------
> [0]PETSC ERROR: Nonconforming object sizes
> [0]PETSC ERROR: Matrix dimensions of A and P are incompatible for 
> MatProductType PtAP: A 30039x30039, P 55539x7803
> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.19.0-238-g512d1ae6db4  GIT 
> Date: 2023-04-24 16:37:00 +0200
> [0]PETSC ERROR: topopt on a arch-linux-c-opt Fri Jun 30 13:28:41 2023
> [0]PETSC ERROR: Configure options COPTFLAGS="-O3 -march=native" 
> CXXOPTFLAGS="-O3 -march=native" FOPTFLAGS="-O3 -march=native" 
> CUDAOPTFLAGS=-O3 --with-cuda --with-cusp --with-debugging=0 
> --download-scalapack --download-hdf5 --download-zlib --download-mumps 
> --download-parmetis --download-metis --download-ptscotch --download-hypre 
> --download-spai
> [0]PETSC ERROR: #1 MatProductSetFromOptions_Private() at 
> /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:420
> [0]PETSC ERROR: #2 MatProductSetFromOptions() at 
> /mnt/c/mathware/petsc/src/mat/interface/matproduct.c:541
> [0]PETSC ERROR: #3 MatPtAP() at 
> /mnt/c/mathware/petsc/src/mat/interface/matrix.c:9868
> [0]PETSC ERROR: #4 MatGalerkin() at 
> /mnt/c/mathware/petsc/src/mat/interface/matrix.c:10899
> [0]PETSC ERROR: #5 PCSetUp_MG() at 
> /mnt/c/mathware/petsc/src/ksp/pc/impls/mg/mg.c:1029
> [0]PETSC ERROR: #6 PCSetUp() at 
> /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994
> [0]PETSC ERROR: #7 KSPSetUp() at 
> /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406
> [0]PETSC ERROR: #8 PCSetUp_Redistribute() at 
> /mnt/c/mathware/petsc/src/ksp/pc/impls/redistribute/redistribute.c:327
> [0]PETSC ERROR: #9 PCSetUp() at 
> /mnt/c/mathware/petsc/src/ksp/pc/interface/precon.c:994
> [0]PETSC ERROR: #10 KSPSetUp() at 
> /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:406
> [0]PETSC ERROR: #11 KSPSolve_Private() at 
> /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:824
> [0]PETSC ERROR: #12 KSPSolve() at 
> /mnt/c/mathware/petsc/src/ksp/ksp/interface/itfunc.c:1070
>  
> It’s clear what happens, I think, and it kind of makes sense since not all 
> levels are redistributed as they should be (?).
> Is it possible to use PCMG with PCREDISTRIBUTE in an easy way?
>  
> Kind regards,
> Carl-Johan
>  
>  
> 
>  
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
>  
> https://www.cse.buffalo.edu/~knepley/
>  
>  
> <pcmg_redistribute_experiment_cantilever.pptx>
