(I am forking part of this to the devel list)

On 9/9/08 10:28 AM, "John Peterson" <[EMAIL PROTECTED]> wrote:
> 
> What about this parallel_for stuff?  Even if you're not using threads,
> this still introduces an O(N_elem) step in the ConstElemRange
> constructor.  This might be more noticeable when there are multiple
> vectors to project...?
> 

Funny you say that.  Remember the conversation about how you can be smart
with multiple variables when they are the same?  Look at line 264 of
system_projection.C.

We loop over variables, then elements.  If you know the FE types are the
same, this would be much more efficient as an element loop with a variable
loop inside, since the element-specific work would be done once for all
variables instead of once per variable.

Similarly, he is projecting a whole mess of vectors.  Now I still think
systems should always be projected individually, but in the case of a
TransientImplicitSystem there are 3 vectors (soln, old_soln, older_soln)
that get projected.  These should all be done at once instead of
piece-by-piece, since the element-specific work is the same in all cases.

> Did you do many comparisons of the pre- and post-TBB code for the 1
> thread case?

Yeah, and I found it negligible.  At any rate, it is the same overhead
regardless of the number of processors (for SerialMesh), so it should not
cause this scaling degradation.

WAAAY back on the thread I saw that his mesh size is ~99,000 elements with
~23,000 nodes at the end of the simulation.  So at 40 processors that is
only about 575 degrees of freedom per processor, assuming linear Lagrange.
That is way too small a problem size.

I bet I can replicate this with ex10, even on InfiniBand, and will try that
later.

-Ben


_______________________________________________
Libmesh-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/libmesh-devel
