On Thu, 15 Dec 2005, Dan Nicholson wrote:


  That's my prime objection to Greg's method - we always tell people
fbbg, but the comparison takes a shortcut.

Right, but for the purposes of testing, the environment should be as
consistent as possible.  That's standard procedure for running a test
in any field.  And why would you recreate the devices, directories and
symlinks if you were still in the chroot?  Setting up a test
environment is different from putting your system into a running
production state.


Whether or not the devices, directories and symlinks are recreated is not the issue - in a "regular" build the final system is built from a temporary system.  For a normal upgrade, the old LFS has to be good enough to build a new temporary system, so that is how I began.  Greg made a decision about what he wanted to test; I made a different decision.
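
(For anyone skimming the details: the "consistent environment" in question is roughly what the book's own chroot step gives you, since env -i scrubs the host's variables and every build then starts from the same small set.  From memory, and purely as an illustration rather than a quote of the current book, it looks something like this:

  chroot "$LFS" /tools/bin/env -i \
      HOME=/root TERM="$TERM" PS1='\u:\w\$ ' \
      PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin \
      /tools/bin/bash --login +h

Whatever the host's login shell had set, the shell inside the chroot sees only what is listed there.)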


As for booting, you're probably going to change your environment
drastically, and that would invalidate the test.  If you did a binary
diff and found two files to be different, would you be confident
enough to tell me that the difference was caused by the building
method and not by the altered environment?  Or vice versa?

I'm not following this; perhaps the word 'environment' is throwing me.  All I'm interested in are the results of the build: logs, devices and config files are variable data, while programs, libraries and even scripts are not expected to change once installed.  Please remember that I've not been trained as a tester, only as a programmer and analyst.
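
To give a rough idea of the kind of comparison I have in mind (this is only a sketch, the mount points are invented, and farce has to do rather more than this, not least because timestamps and build paths embedded in binaries make a naive byte-for-byte comparison very noisy):

  # compare two installed trees, skipping the variable data
  cd /mnt/build1 &&
  find . \( -path ./dev -o -path ./var -o -path ./etc \) -prune \
         -o -type f -print | sort |
  while read f; do
      # cmp -s is silent; report only files that differ (or are missing)
      cmp -s "/mnt/build1/$f" "/mnt/build2/$f" || echo "differs: $f"
  done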

  I'm unclear what changing environment variables is likely to do to an
LFS build?  In practice, either the environment is created during the
build (e.g. new .bashrc), or a builder probably has a standard
pre-existing environment.

How about LC_ALL?  The creation of /etc/profile in LFS dictates that you set
it to your locale, but for the build we've used LC_ALL=C (or POSIX).


I don't have an /etc/profile.  My LC_ALL is set in my build scripts; I don't see how the fact that a regular user will have a different LC_ALL alters what is in the files he is comparing.
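
For illustration only (this is not a quote from my actual scripts), the relevant part amounts to pinning the locale at the top of each script, so whatever a regular user has in the login shell never reaches the tools being built and installed:

  # near the top of each build script (illustrative)
  export LC_ALL=POSIX

  # the per-package commands then all run under that locale, e.g.
  ./configure --prefix=/usr && make && make install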


I have no objections to farce.  It sounds like you've put a lot of
effort into seeing what happens to the files in a second build.  I'm
arguing for keeping a sane test environment.  We all know that LFS
builds and boots.  This tests the quality of the build, and that test
is separate from the two previous questions.

Actually, I don't mind if people do object to farce; I'm not overly pleased with it, but it's better than what I had before.

Purity was not my major concern when I started trying to improve my analysis of the results - in an average week, two or three unrelated packages are upgraded. As a tester and now as an editor, I need to know that LFS does indeed still build and boot (and ideally, that it builds the parts of BLFS I care about - remember bison?). As somebody interested in kernel development, I try to test -rc kernels to make sure the features I use haven't been damaged. To optimise my time, I aim to test everything together, because mostly there *aren't* issues (and to be honest, testing the kernels is actually more important).

People trained in formal testing are welcome to use their professional techniques. Me, I'm just a journeyman: I pick what I think are appropriate tools for what seems important to me. The main feature of my builds is that something will differ from the instructions in the book (e.g. my last pair of multilib builds used different bzip2 and vim instructions between the builds, and were built with a test kernel).

People here who understand the details of Greg's ICA should be encouraged to apply it to LFS builds. Maybe someone has been doing that, and we're just lucky that no problems have shown up for them to comment on, but my assumption is that ICA is poorly understood in LFS circles.

Ken
--
 the first time as tragedy, the second time as farce
