If you want a processor-independent solve with MUMPS, use '-pc_type lu'; if
your PETSc build is configured with MUMPS, this gives you a parallel LU
solve. And don't use any overlap in the DM. If you want a local LU inside a
global 'asm' or 'bjacobi' preconditioner, then you have an iterative solver
and should use something like '-ksp_type gmres'.
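As a sketch of the two option sets described above (here `./myapp` is a placeholder for your executable, and the MUMPS selector option name is as in recent PETSc releases):

```shell
# Direct solve: parallel LU via MUMPS, result independent of process count
# (requires a PETSc build configured with MUMPS support).
mpiexec -n 4 ./myapp -ksp_type preonly -pc_type lu \
    -pc_factor_mat_solver_type mumps

# Iterative alternative: GMRES with block Jacobi, local LU on each block.
mpiexec -n 4 ./myapp -ksp_type gmres -pc_type bjacobi \
    -sub_pc_type lu
```

With the direct variant, '-ksp_type preonly' makes the KSP apply the factorization once instead of iterating; with the iterative variant, the answer can differ slightly with the number of processes because the block structure changes.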
Hi,
I used PETSc to assemble a FEM stiffness matrix on an overlapped (overlap=2)
DMPlex and used the MUMPS solver to solve it. But I got different solutions
with 1 CPU versus an MPI parallel computation. I am wondering if I missed
some necessary step or setting in my implementation.
My calling p