Good morning,

I would like to ask about the importance of the initial choice of ordering the 
unknowns when feeding a matrix to PETSc. 

I have a regular grid and use high-order finite differences. I simply divide 
the rows of the matrix with PetscSplitOwnership, using a vertex-major, natural 
ordering for the parallel distribution (I am not using DMDA).
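
For reference, this is roughly my setup (a minimal sketch; the global size N 
is a placeholder and the actual stencil assembly is omitted):

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat      A;
    PetscInt n = PETSC_DECIDE, N = 1000000;  /* N: placeholder global size */

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    /* contiguous block-row distribution in the natural ordering */
    PetscCall(PetscSplitOwnership(PETSC_COMM_WORLD, &n, &N));
    PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
    PetscCall(MatSetSizes(A, n, n, N, N));
    PetscCall(MatSetFromOptions(A));
    PetscCall(MatSetUp(A));
    /* ... MatSetValues with the finite-difference stencil ... */
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
  }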

My understanding is that with LU via MUMPS this does not matter, because 
either a serial or a parallel analysis is performed and all the rows are 
reordered ‘optimally’ before the LU factorization, although the quality of the 
reordering might suffer when the analysis is done in parallel.
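
Concretely, I run with options along these lines (the ICNTL meanings are as I 
read them in the MUMPS documentation, so please correct me if I have them 
wrong):

  -pc_type lu -pc_factor_mat_solver_type mumps
  -mat_mumps_icntl_28 1   # 1 = sequential analysis, 2 = parallel analysis
  -mat_mumps_icntl_7 5    # sequential ordering choice, 5 = METIS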

But if I use the default block Jacobi with ILU, with one block per process, 
the initial ordering does seem to have an influence: tightly coupled degrees 
of freedom may lie on different processes, so their couplings fall outside the 
blocks and the ILU becomes less powerful. You can change the ordering on each 
block, but this won't necessarily make things better.
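
By changing the ordering on each block I mean something like the following, 
with RCM only as an example:

  -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu
  -sub_pc_factor_mat_ordering_type rcm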

Are my observations accurate? Is there a recommended ordering type for a block 
Jacobi approach in my case? Could I expect a natural improvement in fill-in, 
or better GMRES robustness, by opting for the parallel decomposition offered 
by DMDA?
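
For clarity, by the DMDA route I mean something like this sketch (the grid 
sizes, dof, and stencil width 2 are placeholders for my high-order stencil):

  #include <petscdmda.h>

  int main(int argc, char **argv)
  {
    DM  da;
    Mat A;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    /* 2D structured grid, star stencil of width 2, one dof per vertex */
    PetscCall(DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE,
                           DM_BOUNDARY_NONE, DMDA_STENCIL_STAR, 128, 128,
                           PETSC_DECIDE, PETSC_DECIDE, 1, 2, NULL, NULL, &da));
    PetscCall(DMSetFromOptions(da));
    PetscCall(DMSetUp(da));
    PetscCall(DMCreateMatrix(da, &A));  /* preallocated in the DMDA ordering */
    PetscCall(MatDestroy(&A));
    PetscCall(DMDestroy(&da));
    PetscCall(PetscFinalize());
    return 0;
  }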

Thank you,

Thibaut
