Hi all,

On 27.01.2015 17:26, Kenny, Jason L wrote:
I think it is that Parts tries to use the DB data more intelligently than SCons
does. For me, understanding the logic needed to validate whether something is
good or bad with MD5 was the thing I had to learn from SCons.
[...]
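
For anyone following along: the MD5 side of that validation boils down to
"hash the file contents and compare against the signature recorded in the
.sconsign DB after the last build". A stripped-down sketch of the idea (my
own code, not SCons internals):

  import hashlib

  def content_signature(path, blocksize=65536):
      # MD5 of the file contents, read in chunks to bound memory use.
      md5 = hashlib.md5()
      with open(path, 'rb') as f:
          for block in iter(lambda: f.read(blocksize), b''):
              md5.update(block)
      return md5.hexdigest()

  def is_up_to_date(path, stored_csig):
      # A node is "good" if its current content signature still
      # matches the one stored after the previous build.
      return content_signature(path) == stored_csig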


During the last few days I ran and profiled SCons on the 100k-file benchmark example, created by genbench.py from Aqualid's "example/benchmark" folder. All single runs were performed on a COTS quad-core machine (4*AMD A8-6600K @ 1.9GHz) with 8GB of RAM.
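
For reference, the profiling itself needs nothing exotic: SCons can dump
cProfile data via its --profile=FILE option, and the result can be read
with the stdlib pstats module, e.g.:

  # Inspect the stats written by "scons --profile=scons.prof" and
  # print the 25 hottest calls by cumulative time (file name is mine).
  import pstats

  stats = pstats.Stats('scons.prof')
  stats.strip_dirs().sort_stats('cumulative').print_stats(25)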

For Aqualid 0.5.2 I get (wall-clock times as h:mm:ss):

  Clean build (-j 1)   = 12:37:53
  Clean build (-j 4)   =  4:41:33
  Update (w/o changes) =  0:00:59.77

For the standard SCons 2.3.4:

  Clean build (-j 1)   = 14:09:54
  Clean build (-j 4)   =  5:38:46
  Update (w/o changes) =  0:17:20.25
  Update (max-drift=1) =  0:17:00.07
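
For anyone who hasn't used it: max-drift sets the window within which SCons
distrusts file timestamps. A file whose mtime is older than the window gets
its MD5 content signature from the .sconsign cache instead of being
re-hashed, so --max-drift=1 avoids re-reading nearly every file on a no-op
update. The SConstruct equivalent of the command line flag:

  # Same effect as running "scons --max-drift=1":
  SetOption('max_drift', 1)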

For the experimental SCons version (Node class switched to __slots__ + stubprocess.py wrapper), including some hacks that exploit the uniform structure of the benchmark sources, I could reduce this to (see the __slots__ sketch after the numbers):

  Clean build (-j 1)   = 10:58:50
  Clean build (-j 4)   =  3:44:20
  Update (w/o changes) =  0:08:38.38
  Update (max-drift=1) =  0:04:27.77
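
The __slots__ change is the usual CPython memory trick, nothing
SCons-specific. A minimal sketch of the idea (attribute names invented,
not the actual patch):

  class Node(object):
      # __slots__ suppresses the per-instance __dict__; with ~100k
      # Node objects alive at once this saves a substantial amount
      # of memory and makes attribute access slightly faster.
      __slots__ = ('path', 'depends', 'csig')

      def __init__(self, path):
          self.path = path
          self.depends = []
          self.csig = None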

The much higher times for Aqualid in the parallel build (-j 4) come from its processing order, which queues all library archiving to the end of the build. This sent my machine into thrashing/swapping, even though it has 8GB of RAM. SCons is better balanced in this case: it creates each library as soon as all of its sources are compiled.
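
For reference, the dependency structure that makes this ordering matter
boils down to something like the following SConstruct fragment (module
counts and names are invented; genbench.py's actual layout may differ):

  env = Environment()
  for i in range(200):                        # e.g. 200 generated modules
      sources = Glob('mod_%03d/*.cpp' % i)    # ~500 sources per module
      objects = [env.Object(src) for src in sources]
      env.StaticLibrary('lib_%03d' % i, objects)
  # SCons archives each lib as soon as its objects exist; a scheduler
  # that defers all archiving to the end of the build piles up the
  # state for every pending archive step at once.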

I also ran a "clean/remove targets" step on a smaller project with 10k files (times as m:ss):

  Aqualid  (aql --clean) =  3:51.93
  SCons    (scons -c)    =  0:19.94

This looks like a bug in Aqualid to me.


Please find attached the memory consumption for a "parallel clean build" (-j 4) run with SCons...
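
For anyone who wants to reproduce such a curve: sample the resident set
size of the running scons process periodically. A sketch (psutil assumed
available; this is my code, not how the attachment was produced):

  # sample_rss.py <pid> -- print a timestamped RSS trace, once per second.
  import sys
  import time

  import psutil  # third-party: pip install psutil

  def sample(pid, interval=1.0):
      proc = psutil.Process(pid)
      try:
          while True:
              rss_mb = proc.memory_info().rss / (1024.0 * 1024.0)
              sys.stdout.write('%.1f %.1f\n' % (time.time(), rss_mb))
              sys.stdout.flush()
              time.sleep(interval)
      except psutil.NoSuchProcess:
          pass  # the build finished

  if __name__ == '__main__':
      sample(int(sys.argv[1]))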

Best regards,

Dirk

