Hi all,
I have a problem with memory scaling in parallel.

I'm running a Navier-Stokes solver based on a dG-cG velocity-pressure
space pair on a 180k tetrahedral mesh.
I have two transient systems: the first uses monomial basis functions
for the velocity approximation, and the second uses Lagrangian basis
functions for the pressure approximation.

Memory with 1 process  = 3.6 GB
Memory with 4 processes = 1.5 GB * 4 = 6 GB

This is my configure line:
  $ ./configure --disable-shared --disable-amr --enable-ghosted
--with-cc=icc --with-cxx=icpc --with-f77=ifort
I have also tried enabling the parallel mesh, but the result doesn't change.

Is this expected, or could the poor scaling be related to my assembly code?
What can be done to improve memory efficiency in parallel?

Another question: using monomials, I estimate the matrix memory
footprint with the following formula:

(number of unknowns * dofs per unknown per element)^2 * number of
elements * (number of neighbors + 1) * 8 bytes (double precision)
* 2 (preconditioner copy)

Is this correct?

For my 180k mesh this gives a velocity matrix of approximately 2 GB.
The pressure matrix is considerably smaller.
Is there a way to estimate the memory footprint of the libMesh data
structures themselves?

Thank you very much
Lorenzo
_______________________________________________
Libmesh-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/libmesh-users
