Dear Dave,

Yes, I observe this in parallel runs. How can I change the parallel layout of 
the matrix? In my implementation, I read the mesh file and then split the 
domain so that the first rank gets the first N elements, the second rank gets 
the next N elements, and so on. Should I use METIS to distribute the elements? 
Note that I use continuous finite elements, so some values at the sub-domain 
interfaces will still end up cached in a temporary buffer.
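
To make sure I understand, by "use METIS" I would try something along the lines 
of the sketch below, i.e. PETSc's MatPartitioning interface with ParMETIS. This 
is only my guess at the relevant calls; it assumes PETSc was configured with 
ParMETIS, and nelem_local, nelem_global, eptr and eadj are placeholders for the 
element counts and the element-to-element adjacency (CSR form, 0-based indices) 
that I would build from the mesh file:

      subroutine repartition_elements(nelem_local, nelem_global, eptr, eadj, is_owner, ier)
#include <petsc/finclude/petscmat.h>
      use petscmat
      implicit none
      PetscInt, intent(in)        :: nelem_local      ! elements currently stored on this rank
      PetscInt, intent(in)        :: nelem_global     ! total number of elements
      PetscInt, intent(in)        :: eptr(*), eadj(*) ! element adjacency graph, CSR, 0-based
      IS, intent(out)             :: is_owner         ! entry i = new owner rank of local element i
      PetscErrorCode, intent(out) :: ier

      Mat             :: adj
      MatPartitioning :: part

      ! Wrap the element (not DOF) adjacency graph in a matrix of type MPIADJ.
      CALL MatCreateMPIAdj(PETSC_COMM_WORLD, nelem_local, nelem_global, eptr, eadj, &
                           PETSC_NULL_INTEGER, adj, ier)
      CALL MatPartitioningCreate(PETSC_COMM_WORLD, part, ier)
      CALL MatPartitioningSetAdjacency(part, adj, ier)
      CALL MatPartitioningSetType(part, MATPARTITIONINGPARMETIS, ier)
      CALL MatPartitioningApply(part, is_owner, ier)
      CALL MatPartitioningDestroy(part, ier)
      CALL MatDestroy(adj, ier)
      end subroutine repartition_elements

I would then redistribute the elements according to is_owner before creating the 
matrix, so that the rows owned by each rank correspond to the DOFs of its own 
sub-domain. Is that the right direction, or is there a simpler way?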

Thank you very much,
Pantelis
________________________________
From: Dave May <dave.mayhe...@gmail.com>
Sent: Tuesday, March 14, 2023 4:40 PM
To: Pantelis Moschopoulos <pmoschopou...@outlook.com>
Cc: petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Memory Usage in Matrix Assembly.



On Tue 14. Mar 2023 at 07:15, Pantelis Moschopoulos 
<pmoschopou...@outlook.com> wrote:
Hi everyone,

I am a new PETSc user incorporating PETSc for FEM in a Fortran code.
My question concerns the sudden increase in the memory that PETSc needs during 
the assembly of the Jacobian matrix. After this point, the memory is freed. It 
looks as if PETSc performs allocations and then deallocations during assembly.
I have used the following commands with no success:
CALL MatSetOption(petsc_A, MAT_NEW_NONZERO_LOCATIONS, PETSC_FALSE, ier)
CALL MatSetOption(petsc_A, MAT_NEW_NONZERO_LOCATION_ERR, PETSC_TRUE, ier)
CALL MatSetOption(petsc_A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_TRUE, ier)
CALL MatSetOption(petsc_A, MAT_KEEP_NONZERO_PATTERN, PETSC_TRUE, ier)

The structure of the matrix does not change during my simulation, only the 
values. I would expect this behavior the first time I assemble the matrix, 
because the preallocation that I provide is not very accurate, but it happens 
every time I assemble the matrix. My preallocation is roughly of the form 
sketched below.
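
(Names like nrows_local, d_nnz and o_nnz are placeholders; the per-row counts 
are estimates I build from the mesh connectivity, which is what I mean by "not 
very accurate". petsc_A is the same matrix as above.)

      ! d_nnz(i)/o_nnz(i): estimated nonzeros of local row i in the diagonal /
      ! off-diagonal block; these are only estimates.
      CALL MatCreate(PETSC_COMM_WORLD, petsc_A, ier)
      CALL MatSetSizes(petsc_A, nrows_local, nrows_local, PETSC_DETERMINE, PETSC_DETERMINE, ier)
      CALL MatSetType(petsc_A, MATMPIAIJ, ier)
      CALL MatMPIAIJSetPreallocation(petsc_A, PETSC_DEFAULT_INTEGER, d_nnz, PETSC_DEFAULT_INTEGER, o_nnz, ier)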
What am I missing here?

I am guessing this observation is seen when you run a parallel job.

MatSetValues() will cache values in a temporary memory buffer if the values are 
to be sent to a different MPI rank.
Hence, if the parallel layout of your matrix doesn't closely match the layout of 
the DOFs on each mesh sub-domain, then a huge number of values can potentially 
be cached. After you call MatAssemblyBegin() / MatAssemblyEnd(), this cache will 
be freed.
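
Schematically the situation is the following. This is just a sketch, not your 
code: nelem_local, nen, idx and Ke are placeholders for whatever your assembly 
loop uses, and it assumes the usual "#include <petsc/finclude/petscmat.h>" plus 
"use petscmat".

      PetscInt, parameter :: maxnen = 27          ! placeholder bound on DOFs per element
      PetscInt            :: e, nen
      PetscInt            :: idx(maxnen)          ! global (0-based) DOF indices of the element
      PetscScalar         :: Ke(maxnen*maxnen)    ! element matrix, flattened
      PetscErrorCode      :: ier

      do e = 1, nelem_local
         ! ... build Ke and idx for element e ...
         ! Rows owned by this rank are inserted directly; rows owned by another
         ! rank are copied into a temporary "stash" buffer instead.
         CALL MatSetValues(petsc_A, nen, idx, nen, idx, Ke, ADD_VALUES, ier)
      end do
      ! The stashed entries are sent to their owning ranks, and the buffer is
      ! freed, inside these two calls: this is the spike and release you observe.
      CALL MatAssemblyBegin(petsc_A, MAT_FINAL_ASSEMBLY, ier)
      CALL MatAssemblyEnd(petsc_A, MAT_FINAL_ASSEMBLY, ier)

If I remember correctly, running with -info will report how many entries went 
into the stash during assembly, which would let you confirm this is the cause. 
You can also call MatAssemblyBegin()/MatAssemblyEnd() with MAT_FLUSH_ASSEMBLY 
part-way through the insertion to bound the size of the buffer, but the real 
cure is to make the matrix row layout match your sub-domains.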

Thanks,
Dave



Thank you very much,
Pantelis
