It may be a good idea to combine this study with looking at the output
of -log_summary (assuming PETSc is your solver engine) to see if there
is a correlation with the growth in communication or other components
of matrix assembly.
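For example (the launcher, rank count, and executable name here are just placeholders):

    mpirun -np 64 ./my_app -log_summary

PETSc will then print its own per-event timing breakdown (vector scatters, matrix assembly, solver stages) at the end of the run, which you can compare against the libMesh log.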
Dmitry.
On Mon, Jun 10, 2013 at 10:16 AM, Ataollah Mesgarnejad wrote:
Cody,
I'm not sure if you saw the graph, so I uploaded it again here:
https://dl.dropboxusercontent.com/u/19391830/scaling.jpg.
In all these runs the NDOFs/processor is less than 1. What is bothering me
is that enforce_constraints_exactly is taking up more and more time as the
number of processors increases.
Ata,
You might be scaling past the reasonable limit for libMesh. I don't know
what solver you are using, but for a strong scaling study we generally
don't go below 10,000 local DOFs. This is the recommended floor for PETSc
too:
http://www.mcs.anl.gov/petsc/documentation/faq.html#slowerparallel
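As a rough illustration (numbers made up for the example): a problem with 1,000,000 total DOFs hits that 10,000-DOF floor at about 100 processes, so adding ranks beyond that mostly adds communication rather than useful work.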
Dear all,
I've been doing some scaling tests on my code. When I look at the time (or % of
time) spent at each stage in the libMesh log, I see that the
enforce_constraints_exactly stage in DofMap is scaling very badly. I was
wondering if anyone could comment.
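In case it helps, here is a minimal sketch of how that stage could be timed in isolation (the free-standing helper, the System reference, and the PerfLog label are placeholders for illustration, not taken from my actual code):

    #include "libmesh/dof_map.h"
    #include "libmesh/perf_log.h"
    #include "libmesh/system.h"

    using namespace libMesh;

    // Hypothetical helper: time one call to enforce_constraints_exactly()
    // on a system's current solution vector and print the result.
    void time_constraint_enforcement (System & system)
    {
      PerfLog perf_log ("constraint timing");

      perf_log.push ("enforce_constraints_exactly");
      system.get_dof_map().enforce_constraints_exactly (system,
                                                        system.solution.get());
      perf_log.pop ("enforce_constraints_exactly");

      perf_log.print_log ();
    }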
Here is my EquationSystems.print_info():
Equat