On Wed Nov 24 2010 at 12:42:44 -0500, Thor Lancelot Simon wrote:
> On Wed, Nov 24, 2010 at 04:52:38PM +0200, Antti Kantee wrote:
> > Thanks, I'll use your list as a starting point.  One question though:
> >
> > On Wed Nov 24 2010 at 00:16:37 +0000, Andrew Doran wrote:
> > > - build.sh on a static, unchanging source tree.
> >
> > >From the SSP discussion I have a recollection that build.sh can be
> > very jittery, up to the order of 1% per build.  I've never confirmed it
> > myself, though.  Did you notice anything like that?
>
> There are other issues associated with build.sh as a benchmark.
>
> * What are you trying to test?  If you're trying to test the
>   efficiency of cache algorithms or the I/O subsystem (including
>   disk sort), for example, you need to test pairs of runs with
>   a cold boot of *ALL INVOLVED HARDWARE* (this includes disk
>   arrays etc) between each.
>
> * If SSDs, hybrid disks, or other potentially self-reorganizing
>   media are involved, forget it, you just basically lose.
>
> * If you're trying to test everything *but* the cache and I/O
>   subsystem, then you need to use a "warm up" procedure you can
>   have reasonable confidence works, for example always measuring
>   the Nth of N consecutive builds.
Indeed.  Let's start with the low-hanging fruit first -- having some
figures which at least make some sense (e.g. measuring the second of
two builds in a row) is better than no figures.

> * It can be hard to construct a system configuration where NetBSD
>   kernel performance is actually the bottleneck and some other
>   hardware limitation is not.  Or where there's only a single
>   bottleneck.

Dunno about NetBSD specifically, but this suggests great differences:
http://www.netbsd.org/~ad/50/img15.html

At least I doubt we got dramatically better drivers between 4 and 5.
No idea about other OS performance there.
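
To make the "second of two builds" idea concrete, here is a minimal
/bin/sh sketch of such a warm-up procedure: run N consecutive builds of
the same tree and report only the last one.  The specific build.sh
arguments (-U, -m amd64, -j 8, the release target) and the paths are
illustrative placeholders, not part of the original suggestion; they
should be replaced with whatever configuration is actually under test.

#!/bin/sh
# Warm-up benchmark sketch: run N consecutive full builds of an
# unchanging source tree and compare only the later runs, once the
# file and buffer caches have settled; the earlier runs are warm-up.
#
# The build.sh arguments and paths below are placeholders chosen for
# illustration; adjust them to the configuration being measured.

N=2                     # measure the Nth of N consecutive builds
SRCDIR=/usr/src         # static, unchanging source tree

cd "$SRCDIR" || exit 1

i=1
while [ "$i" -le "$N" ]; do
    log=/tmp/buildbench.$i.log
    echo "=== build $i of $N, log in $log ==="
    # time(1) prints to stderr, so keep it in the same log as the
    # build output.  Without -u, build.sh runs "make cleandir" first,
    # so every run is a full rebuild and the runs differ mainly in
    # how warm the caches are.
    /usr/bin/time ./build.sh -U -m amd64 -j 8 release \
        > "$log" 2>&1 || exit 1
    i=$((i + 1))
done

echo "Report the timing from run $N (/tmp/buildbench.$N.log);"
echo "the earlier runs only serve to warm the caches."

This only addresses the easy "warm cache" case; the cold-boot pairs and
self-reorganizing media issues above still need the procedure TLS
describes.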