At work, my focus is on facilitating and improving developer productivity. Much of the developers' time is spent waiting for builds to complete -- thus, something that reduces that time is likely to be beneficial (all other things being approximately equal), while something that increases it is similarly likely to be detrimental.
The software builds in question are done in a FreeBSD/i386 environment; recent hardware (and my test platform) is HP DL180 G6s. Almost 2 years ago, we migrated from a lightly-patched 6.2-R to 7.1-R with 5 commits that were made to 7.1-S backported to it. On the same hardware (not the HP mentioned above), I measured a 35% reduction in elapsed time for one particular form of the build in question. This was encouraging.

A couple of days ago, I updated the active slice on my 8.x reference machine to 8.1-STABLE #5 r214029 and proceeded to start some timed builds; here are some fairly raw timing data:

Start       Stop        real      user      sys       OS
1287436357  1287461948  25590.99  81502.22  18115.07  8.1-S
1287462797  1287488766  25969.26  81452.14  17920.14  8.1-S
1287489641  1287515287  25645.84  81548.40  18256.52  8.1-S
1287516151  1287541481  25329.64  81546.23  18294.10  8.1-S
1287542355  1287568599  26244.59  81431.47  17902.39  8.1-S

1287525363  1287546846  21483.13  82628.20  21703.09  7.1-R+
1287548005  1287569100  21094.63  82853.19  22185.02  7.1-R+
1287570300  1287591371  21071.33  82756.81  21943.22  7.1-R+

After the 3rd build under 8.1-S had completed and I looked at the results so far, I became a bit concerned (I wasn't expecting each build to take over 7 hours), so I started a similar set of test builds on my 7.x reference machine; 3 of those have completed so far, and those are what I'm reporting (above) at this time.

Each iteration involves:

* Clearing a "sandbox" (subdirectory) on a local file system on the system under test.

* Un-tarring a reference sandbox from NFS storage to the local sandbox.

* Entering the sandbox, then performing the build command under /usr/bin/time (to get the above timing information).

Note that the reported times are strictly for the "build" part of each iteration. Most of the "build tools" are "captive" -- maintained by a group at work and accessed via NFS. They are normally built under FreeBSD 6.2; we use the compatNx ports to be able to run such programs.
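For concreteness, one iteration can be sketched roughly as follows (the paths, demo tree, and build command here are hypothetical stand-ins; the real reference tarball lives on NFS, and the real build command and tools are site-specific):

```shell
#!/bin/sh
# Rough sketch of one timed build iteration -- not the actual site script.
run_iteration() {
    sandbox=$1
    ref_tarball=$2
    shift 2                                    # remaining args: build command
    rm -rf "$sandbox" && mkdir -p "$sandbox"   # clear the local sandbox
    tar -xf "$ref_tarball" -C "$sandbox"       # un-tar the reference sandbox
    # Enter the sandbox and run the build under /usr/bin/time, so that only
    # the "build" part is measured (fall back if time(1) is not installed):
    if [ -x /usr/bin/time ]; then
        ( cd "$sandbox" && /usr/bin/time "$@" )
    else
        ( cd "$sandbox" && "$@" )
    fi
}

# Tiny self-contained demo standing in for the real reference tree:
work=$(mktemp -d)
mkdir "$work/ref"
echo hello > "$work/ref/file.txt"
tar -cf "$work/ref.tar" -C "$work/ref" .
run_iteration "$work/sandbox" "$work/ref.tar" cat file.txt
```

The timing output from /usr/bin/time goes to stderr, so it does not interfere with the build's own stdout.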
The 8.x reference machine was created by cloning the 7.x reference machine (the OS "drive" is a RAID 1; I broke the mirror and physically booted the (soon-to-be) 8.x machine from a single drive from the 7.x mirror, changed the hostname & IP address, then allowed the RAID firmware on the controller to "re-silver" that mirror). Once that finished, I performed a fairly standard source upgrade to 8.0-R on one slice, cloned that slice, booted from the cloned slice, and did a source upgrade to more recent points along the stable/8 branch, culminating with the above-cited 8.1-STABLE #5 r214029.

At this point, I've left the installed ports alone, except that the 8.x slices have the compat7x port installed. Thus, the only differences between the 7.1-R test slice and the 8.1-S test slice should be the OS itself, the physical hardware (which is as nearly identical as I can make it), and the actual switch ports that each uses.

So.... taking the "real" columns from the above, placing them in a couple of files, and running "awk '{print $1/60}'" against each (to convert from seconds to minutes, just for human scaling), then running ministat, I see:

dwolf-bsd(8.1-S)[19] ministat -s !$
ministat -s elapsed_{7,8}
x elapsed_7
+ elapsed_8
+------------------------------------------------------------------------------+
|xx x                                                      +   ++     +       +|
||_MA___|                                                                      |
|                                                          |__M_A____|         |
+------------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x   3       351.189       358.052       351.577       353.606     3.8552332
+   5       422.161        437.41       427.431       429.268     5.9238499
Difference at 95.0% confidence
	75.662 +/- 9.51485
	21.3973% +/- 2.6908%
	(Student's t, pooled s = 5.32437)
dwolf-bsd(8.1-S)[20]

Now, I expect to get another couple of results from the 7.1-R test, but I doubt that they will be significantly different from what we see above: the 7.1 results are more in line with what is seen "in real life."
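The seconds-to-minutes step is simple enough to show inline; a minimal sketch using the 8.1-S "real" values from the table above (ministat itself is FreeBSD-specific, so its invocation is left as a comment):

```shell
# Put the "real" column (seconds) for the 8.1-S runs into a file,
# one value per line:
printf '%s\n' 25590.99 25969.26 25645.84 25329.64 26244.59 > elapsed_8_sec

# Convert from seconds to minutes, just for human scaling:
awk '{print $1/60}' elapsed_8_sec > elapsed_8
cat elapsed_8

# With the 7.1-R+ "real" values similarly converted into elapsed_7,
# the comparison would then be:
# ministat -s elapsed_7 elapsed_8
```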
While it's quite possible that some things could be done to improve matters, my concern at this point is that -- doing the same work in the same way -- 8.1-S (@r214029) appears to be performing significantly worse than 7.1-R+patches. Since I'd like to be able to justify migrating beyond 7.x soon, I'd appreciate suggestions for identifying (and fixing) the apparent regression.

FWIW, the workload is fairly CPU-intensive during most of the run; the I/O done during (most of) the test appears to be very light, and the memory used is fairly modest. On each of the test machines, I have turned off HTT (HyperThreading Technology); hw.ncpu reports 8 for each.

Thanks!

Peace,
david
-- 
David H. Wolfskill				da...@catwhisker.org
Depriving a girl or boy of an opportunity for education is evil.

See http://www.catwhisker.org/~david/publickey.gpg for my public key.
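As an aside on the HTT setting: besides the BIOS, FreeBSD 7.x/8.x expose a loader tunable for this. A hedged sketch (whether it applies, and what hw.ncpu then reports, depends on the release and hardware):

```
# /boot/loader.conf -- assumes FreeBSD 7.x/8.x, where this tunable
# controls whether hyperthreading logical CPUs are used by the scheduler:
machdep.hyperthreading_allowed="0"
```

After booting, "sysctl hw.ncpu machdep.hyperthreading_allowed" can confirm the resulting configuration.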