On Mon, Aug 06, 2007 at 06:33:55PM -0500, Bruce Dubbs wrote:
> 
> Your values don't seem too much out of line.  Your speeds seem to
> generally (not always) run a bit faster and use more disk space.  It
> could be a difference in the way you measure, the amount of memory you
> have, the amount of cache, the disk sector sizes, disk cache, disk
> speed, etc.
> 
 I measure time in whole seconds  (from $SECONDS) from configure to
the end of install (so, not including untarring and removing, even
though everybody has to do that).  Space is measured by df -k with a
grep for the rootfs (the build is logged, but to a different
filesystem) - before untarring, and after the install.  The builds
were on an athlon64 'winchester' at 2GHz using the ondemand cpufreq
governor - it has 1GiB of memory, but only around 900MiB is usable
because I'm not using CONFIG_HIGHMEM.  The disk is a common-or-garden
SATA 7200rpm drive from 2 or 3 years ago, with an "ordinary" cache
size - I think the disk sector size is the same as everybody else's.
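 For clarity, the measurement method boils down to something like the
sketch below (bash; the variable names are mine, and the actual
configure/make/make install run is stood in by a placeholder):

```shell
#!/bin/bash
# Sketch of the measurement described above.  Names are illustrative;
# the real build commands replace the 'sleep' placeholder.

kb_before=$(df -k / | awk 'NR==2 {print $3}')   # used KiB before untarring

start=$SECONDS                 # bash's whole-seconds counter
sleep 2                        # placeholder for configure .. make install
elapsed=$(( SECONDS - start )) # whole seconds, configure to end of install

kb_after=$(df -k / | awk 'NR==2 {print $3}')    # used KiB after the install

echo "time: ${elapsed}s  space: $(( kb_after - kb_before )) KiB"
```

Note that untarring and removing the source tree happen outside the
timed region, and the build log is written to a different filesystem so
it doesn't inflate the df numbers.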

 My experience shows that the biggest influence on SBUs is the host -
if you can find a host with an old (fast) compiler (even gcc-3.3
probably counts as a fast compiler now), the initial SBU will be a
lot smaller, and therefore everything else will take more SBUs.  In this
case, the host was LFS-svn from 2006-12-09 using a 2.6.22.1 kernel,
with gcc-4.1.1.  The SBU was 178 seconds.  The filesystem was ext3,
created with the defaults of 'mke2fs -j'.
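 With a 178-second SBU, converting a measured build time to SBUs is
just division; e.g. (the 890s figure is hypothetical, purely to show
the arithmetic):

```shell
sbu=178    # seconds for the first binutils build on this host
build=890  # hypothetical measured build time, in seconds
# awk gives us one decimal place; plain shell arithmetic is integer-only
awk -v t="$build" -v s="$sbu" 'BEGIN { printf "%.1f SBU\n", t / s }'
# prints "5.0 SBU"
```

This is why a fast host shrinks the unit: the same wall-clock build time
divided by a smaller SBU comes out as a larger SBU count.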

 If you don't want me to change these figures, then as release
manager all you have to do is NAK them.

ĸen
-- 
the first time as tragedy, the second time as farce
-- 
http://linuxfromscratch.org/mailman/listinfo/lfs-dev
FAQ: http://www.linuxfromscratch.org/faq/
Unsubscribe: See the above information page
