Re: [petsc-users] ASM vs GASM

2019-04-30 Thread Smith, Barry F. via petsc-users
1) Run valgrind. 2) Confirm the IS are identical for ASM and GASM (this is easy on one process). Confirm the KSP/PC for the subdomains are the same. 3) Run with convergence monitoring for two subdomains (with ASM and GASM); turn on Richardson on the inner solvers so it will plot the convergence history.
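Barry's step 3 is driven entirely by runtime options. A hedged sketch of what the invocation might look like, assuming the default `sub_` options prefix for ASM/GASM subdomain solvers and a placeholder executable `./ex` standing in for the application:

```sh
# Sketch only: option names assume the default "sub_" prefix for the
# ASM/GASM subdomain KSPs; ./ex is a placeholder for your application.
mpiexec -n 2 ./ex \
  -pc_type asm \
  -sub_ksp_type richardson \
  -sub_ksp_monitor \
  -ksp_monitor_true_residual
```

Replacing `-pc_type asm` with `-pc_type gasm` runs the same experiment for GASM, so the two inner convergence histories can be compared directly.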

Re: [petsc-users] ASM vs GASM

2019-04-30 Thread Smith, Barry F. via petsc-users
A preconditioned residual falling nicely while the true residual gets stuck usually indicates 1) a null space not properly handled, 2) (unintentional) nonlinearity inside the PC, or 3) very small pivots in the factorization. What happens if you use -ksp_type fgmres? Same behavior? -ksp_pc_side right (with
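Both experiments Barry suggests can be tried from the command line; a sketch, with `./ex` again as a hypothetical application binary:

```sh
# Flexible GMRES (FGMRES only supports right preconditioning, so the
# monitored residual is the true, unpreconditioned one).
mpiexec -n 2 ./ex -ksp_type fgmres -ksp_monitor_true_residual

# Plain GMRES, but with right preconditioning for the same effect.
mpiexec -n 2 ./ex -ksp_type gmres -ksp_pc_side right \
  -ksp_monitor_true_residual
```

If the "preconditioned" and true residual histories now agree, the stagnation seen with left preconditioning was an artifact of monitoring the preconditioned norm.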

Re: [petsc-users] ASM vs GASM

2019-04-30 Thread Smith, Barry F. via petsc-users
Boyce, I noticed that in the KSPView you sent, the solver inside GASM was fgmres; I don't know why! This would explain why the outer GMRES had inconsistent residuals. If you switch the inner solver to preonly + LU for GASM, what happens? > On Apr 30, 2019, at 11:36 AM, Boyce Griffith
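Switching the GASM inner solve to preonly + LU, as suggested, could be done with options along these lines (a sketch, assuming the default `sub_` prefix for the subdomain solvers and a placeholder executable `./ex`):

```sh
# Sketch: exact (direct) subdomain solves inside GASM.
mpiexec -n 2 ./ex \
  -pc_type gasm \
  -sub_ksp_type preonly \
  -sub_pc_type lu \
  -ksp_monitor_true_residual
```

With preonly the subdomain KSP applies the LU preconditioner exactly once, making the overall preconditioner a fixed linear operator, which is what the outer GMRES requires for consistent residuals.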

Re: [petsc-users] Quick question about ISCreateGeneral

2019-04-30 Thread Smith, Barry F. via petsc-users
Sajid, The comm in the IS isn't really important. It is the comm in the Vecs that matters. In the parallel-vector-to-parallel-vector case, each process just provides "some" original locations (from) and their corresponding new locations (all in the global numbering of the vector). Each

Re: [petsc-users] ASM vs GASM

2019-04-30 Thread Mark Adams via petsc-users
My question about the quality of the solution was to check whether the model (e.g., the mesh) was messed up, not whether the algebraic error was acceptable. So: does an exact solution look OK? Use LU if you need to. If there is, say, a singularity, it will mess up the solver as well as give you a bad solution. On

Re: [petsc-users] GlobalToLocal between DM and SubDM

2019-04-30 Thread Josh L via petsc-users
Thank you. I just have a coupled problem, and I want to solve it beginning staggered and then coupled. Matthew Knepley wrote on Tue, Apr 30, 2019 at 12:57 PM: > On Tue, Apr 30, 2019 at 1:20 PM Josh L via petsc-users < > petsc-users@mcs.anl.gov> wrote: > >> Hi, >> >> I have a DM that has 2 sections: >>

Re: [petsc-users] Quick question about ISCreateGeneral

2019-04-30 Thread Zhang, Junchao via petsc-users
On Tue, Apr 30, 2019 at 11:42 AM Sajid Ali via petsc-users <petsc-users@mcs.anl.gov> wrote: Hi PETSc Developers, I see that in the examples for ISCreateGeneral, the index sets are created by copying values from int arrays (which were created by PetscMalloc1, which is not collective).

[petsc-users] Quick question about ISCreateGeneral

2019-04-30 Thread Sajid Ali via petsc-users
Hi PETSc Developers, I see that in the examples for ISCreateGeneral, the index sets are created by copying values from int arrays (which were created by PetscMalloc1, which is not collective). If ISCreateGeneral is called with PETSC_COMM_WORLD and the int arrays on each rank are
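A minimal sketch of the usage being asked about: each rank fills its own, non-collectively allocated array, and only the ISCreateGeneral call is made on the parallel communicator. The sizes and index values here are invented purely for illustration:

```c
#include <petscis.h>

int main(int argc, char **argv)
{
  PetscErrorCode ierr;
  IS             is;
  PetscInt      *idx, n = 4, i;
  PetscMPIInt    rank;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
  ierr = PetscMalloc1(n, &idx);CHKERRQ(ierr);        /* not collective */
  for (i = 0; i < n; i++) idx[i] = rank*n + i;       /* global numbering */
  ierr = ISCreateGeneral(PETSC_COMM_WORLD, n, idx,
                         PETSC_COPY_VALUES, &is);CHKERRQ(ierr);
  ierr = PetscFree(idx);CHKERRQ(ierr); /* safe: values were copied */
  ierr = ISDestroy(&is);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```

With PETSC_COPY_VALUES the IS takes its own copy of the array, so each rank may free its buffer immediately after the call.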

Re: [petsc-users] ASM vs GASM

2019-04-30 Thread Mark Adams via petsc-users
When I said it was singular I was looking at "preconditioned residual norm to an rtol of 1e-12. If I look at the true residual norm, however, it stagnates around 1e-4." This is not what I am seeing in this output. It is just a poor PC. The big drop in the residual at the beginning is suspicious.

Re: [petsc-users] ASM vs GASM

2019-04-30 Thread Mark Adams via petsc-users
> > > > > Allowing GASM to construct the "outer" subdomains from the non-overlapping > "inner" subdomains, and using "exact" subdomain solvers (subdomain KSPs are > using FGMRES+ILU with an rtol of 1e-12), I get convergence in ~2 iterations > in the preconditioned residual norm to an rtol of

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-04-30 Thread Mark Adams via petsc-users
On Tue, Mar 5, 2019 at 8:06 AM Matthew Knepley wrote: > On Tue, Mar 5, 2019 at 7:14 AM Myriam Peyrounette < > myriam.peyroune...@idris.fr> wrote: > >> Hi Matt, >> >> I plotted the memory scalings using different threshold values. The two >> scalings are slightly translated (from -22 to -88 mB)

Re: [petsc-users] Bad memory scaling with PETSc 3.10

2019-04-30 Thread Myriam Peyrounette via petsc-users
Hi, that's really good news for us, thanks! I will plot the memory scaling again using these new options and let you know, next week I hope. Before that, I just need to clarify the situation. Throughout our discussions we mentioned a number of options concerning scalability: -matptatp_via