Hello,

I am trying to use petsc4py and slepc4py for parallel sparse matrix
diagonalization.
However, I am a bit confused about the increase in matrix memory usage when I
switch from a single process to multiple processes. For example, a 100 x 100
matrix with 298 nonzero elements consumes 8820 bytes of memory
(mat.getInfo()["memory"]) on one process, but 20552 bytes on two processes and
33528 bytes on four. My matrix is taken from slepc4py/demo/ex1.py, where the
nonzero elements lie on three diagonals.
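For reference, this is roughly how the matrix is assembled and the memory
queried (a minimal sketch along the lines of ex1.py; details may differ
slightly from the actual demo):

  from petsc4py import PETSc

  n = 100  # matrix dimension, as in the numbers above

  # tridiagonal test matrix; each rank fills only the rows it owns
  A = PETSc.Mat().create()
  A.setSizes([n, n])
  A.setFromOptions()
  A.setUp()

  rstart, rend = A.getOwnershipRange()
  for i in range(rstart, rend):
      if i > 0:
          A.setValue(i, i - 1, -1.0)
      A.setValue(i, i, 2.0)
      if i < n - 1:
          A.setValue(i, i + 1, -1.0)
  A.assemble()

  # memory figure quoted above
  PETSc.Sys.Print("matrix memory: %g bytes" % A.getInfo()["memory"])

I run this with mpiexec -n 1 / 2 / 4 to get the numbers quoted above.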

Why does the memory usage increase with the number of MPI processes?
I thought that each process stores only its own rows, so the total should stay
roughly the same. Or are some elements stored globally?

Lukas
