> On 13 Oct 2019, at 3:48 PM, Mark Adams <[email protected]> wrote:
> 
> Oh, I see you have established that it is the overlap and got something that 
> looks like the load on a homogeneous BC and the processor boundary ... very 
> odd.
> 
> Does the bad version converge to the correct solution if you iterate? It 
> looks like your BC is wrong.

Yeah, of course: I switched back from P_3 to P_1 and now the first iterate is 
OK.
I am indeed most definitely messing up the RHS at some point before the 
KSPSolve.

Thanks!
Pierre

> It sure looks like a bug. Can you valgrind this?
> 
> 
> On Sun, Oct 13, 2019 at 9:34 AM Mark Adams <[email protected] 
> <mailto:[email protected]>> wrote:
> Try -pc_asm_overlap 0 with ASM.
> 
> And I trust the KO run works with 1 processor (direct solve)
> 
> On Sun, Oct 13, 2019 at 3:41 AM Pierre Jolivet via petsc-dev 
> <[email protected] <mailto:[email protected]>> wrote:
> Hello,
> I’m struggling to understand the following weirdness with PCASM with exact 
> subdomain solvers.
> I’m dealing with a very simple Poisson problem with Dirichlet + Neumann BCs.
> If I use PCBJACOBI + KSPPREONLY or 1 iteration of GMRES either preconditioned 
> on the right or on the left, I get the expected result, cf. attached 
> screenshot.
> If I use PCASM + KSPPREONLY or 1 iteration of GMRES preconditioned on the 
> left, I get the expected result as well.
> However, with PCASM + 1 iteration of GMRES preconditioned on the right, I 
> don’t get what I should (I believe).
> Furthermore, this problem is specific to -pc_asm_type restrict,none (I get 
> the expected result with basic,interpolate).
> 
> Any hint?
> 
> Thanks,
> Pierre
> 
> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type bjacobi -ksp_pc_side 
> right -> bjacobi_OK
> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type asm -ksp_pc_side 
> left -> asm_OK
> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type asm -ksp_pc_side 
> right -> asm_KO
> 
> <bjacobi_OK.png><asm_OK.png><asm_KO.png>
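For reference, the `-pc_asm_type basic` vs. `restrict` distinction the thread turns on can be sketched on a toy problem. The sketch below is a made-up illustration (a 1D Poisson matrix with n=8, two subdomains, one point of overlap), not PETSc's actual implementation: basic ASM prolongs the whole overlapped subdomain correction, while restricted ASM prolongs only each subdomain's owned part, which makes the preconditioner nonsymmetric even when A is symmetric.

```python
import numpy as np

# Toy 1D Poisson matrix with Dirichlet BCs (illustrative, n chosen small).
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Two owned (non-overlapping) index sets, each extended by one point of overlap.
owned = [np.arange(0, n // 2), np.arange(n // 2, n)]
extended = [np.arange(0, n // 2 + 1), np.arange(n // 2 - 1, n)]

def apply_asm(r, restricted):
    """One application of additive Schwarz with exact subdomain solves.

    restricted=False mimics -pc_asm_type basic (prolong the whole overlapped
    correction); restricted=True mimics -pc_asm_type restrict (prolong only
    the owned part of each subdomain correction).
    """
    z = np.zeros_like(r)
    for own, ext in zip(owned, extended):
        # Exact solve on the overlapped subdomain block (the "lu" subsolver).
        zi = np.linalg.solve(A[np.ix_(ext, ext)], r[ext])
        if restricted:
            keep = np.isin(ext, own)
            z[ext[keep]] += zi[keep]
        else:
            z[ext] += zi
    return z

# Assemble both preconditioners column by column so we can inspect them.
I = np.eye(n)
M_basic = np.column_stack([apply_asm(I[:, j], restricted=False) for j in range(n)])
M_ras = np.column_stack([apply_asm(I[:, j], restricted=True) for j in range(n)])

print(np.allclose(M_basic, M_basic.T))  # basic ASM: symmetric for symmetric A
print(np.allclose(M_ras, M_ras.T))      # restricted ASM: nonsymmetric
```

Since M differs between the two types, a single left- or right-preconditioned GMRES step produces different first iterates, even though all variants should converge to the same solution if iterated; a first iterate that is wrong and stays wrong points at the RHS/BC setup rather than the preconditioner, as suggested above.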
