On 12 Apr 2012, at 23:56, Darren Hart wrote:
> Get back to us with times, and we'll build up a wiki page.

Some initial results / comments:

I'm running on:
 - i7-3820 (quad core, hyper-threading, 3.6GHz)
 - 16GB RAM (1600MHz XMP profile)
 - Asus P9X79 Pro motherboard
 - Ubuntu 11.10 x86_64 server installed on a 60GB OCZ Vertex 3 SSD on a 3Gb/s 
interface
 - Two 60GB OCZ Vertex 3s as RAID-0 on 6Gb/s interfaces.

The following results use a DL_DIR on the OS SSD (pre-populated) - I'm not 
interested in the speed of the internet, especially as I've only got a 
relatively slow connection ;-)
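
The pre-population just means pointing DL_DIR in conf/local.conf at an existing 
downloads directory and fetching everything up front, roughly like this (the 
path is just an example, and I think the fetchall task name can vary between 
releases):

   DL_DIR = "/media/ssd/downloads"

   bitbake -c fetchall core-image-minimal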

Poky-6.0.1 is also installed on the OS SSD.

I've done a few builds of core-image-minimal:

1) Build dir on the OS SSD
2) Build dir on the SSD RAID + various bits of tuning.

The results are basically the same, so it seems as if the SSD RAID makes no 
difference. Benchmarking it does show twice the read/write performance of the 
OS SSD, as expected. Disabling journalling and increasing the commit time to 
6000 also made no significant difference to the build times, which were (to the 
nearest minute):

Real   : 42m
User   : 133m
System : 19m
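
For reference, the journal/commit tuning on the RAID array amounted to 
something like this (ext4; the device and mount point are just examples, and 
the journal has to be removed with the filesystem unmounted):

   # /dev/md0 and /mnt/build are examples only
   tune2fs -O ^has_journal /dev/md0
   mount -o commit=6000 /dev/md0 /mnt/build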

These times were starting from nothing, and seem to fit with your 30 minutes 
with 3 times as many cores! BTW, BB_NUMBER_THREADS was set to 16 and 
PARALLEL_MAKE to 12.
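
In conf/local.conf terms that was just:

   BB_NUMBER_THREADS = "16"
   # PARALLEL_MAKE takes the actual make flags, hence the -j
   PARALLEL_MAKE = "-j 12"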

I also tried rebuilding the kernel:
   bitbake -c clean linux-yocto
   rm -rf the sstate bits for the above
   bitbake linux-yocto

and got the following times:

Real   : 39m
User   : 105m
System : 16m

That kind of fits with an observation. The minimal build had something like 
1530 tasks to complete. The first 750 to 800 of these flew past with all 8 
'cores' running at just about 100% all the time; the short-term load average 
was about 19, so there was plenty ready to run. However, around the time 
python-native, the kernel, libxslt and gettext kicked in, the CPU usage 
dropped right off - to the point that the short-term load average fell below 
3. It did pick up again later on (after the kernel was completed) before 
slowing down again towards the end (when it seems reasonable to expect that 
less can run in parallel).

It seems as if some of these bits (or others around this time) aren't making 
use of parallel make or there is a queue of dependent tasks that needs to be 
serialized.
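
If it does turn out that particular recipes aren't passing the parallel flags 
to make, I believe the setting can be overridden per recipe from local.conf, 
e.g. (the recipe name below is just an example):

   PARALLEL_MAKE_pn-gettext = "-j 12"

though presumably any recipe that has parallel make turned off has it turned 
off for a reason.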

The kernel build is a much bigger part of the build than I was expecting, but 
this is only a small image. However, it looks as if the main compilation phase 
completes very early on and a lot of time is then spent building the modules 
(in a single thread, it seems) and in packaging - which leads me to ask if RPM 
is the best option speed-wise? I don't use the packages myself (though I 
understand they are needed internally), so I can use the fastest backend (if 
there is one).
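
If one of the other backends does turn out to be quicker, I guess switching is 
just a matter of changing the package classes in local.conf, e.g.:

   PACKAGE_CLASSES = "package_ipk"

(assuming the image only needs one packaging format).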

Is there anything else I should be considering to improve build times? As I 
said above, this is just a rough cut at some benchmarking and I plan to do some 
more, especially if there are other things to try and/or any other information 
that would be useful.

Still, it's looking much, much faster than my old build system :-)

Chris Tapp

opensou...@keylevel.com
www.keylevel.com