On Sun, Sep 23, 2007 at 04:47:47PM -0400, Alonso Graterol wrote:
> Hi,
> 
> I'm at point 10.10.1. for GCC installation (SVN-20070923-x86_64-Pure64) and
> want to know if the following testsuite results are good enough as to
> continue with final installation.
> 
[...]
>         === libmudflap tests ===
> 
> 
> Running target unix
> FAIL: libmudflap.c/fail8-frag.c (-static) output pattern test
> --
>         === libmudflap Summary ===
> 
> # of expected passes        1812
> # of unexpected failures    2
[...]
> 
> The reference I can find in LFS logs is for GCC 4.0.2 and I'm not sure about
> its applicability.  The libmudflap fail test is what worries me.
> Thanks in advance for your help,
> 
 I haven't built anything with the current package versions, and I'm
not even close to doing so, but nevertheless I'll comment.

 From memory, x86_64 used to be one of the architectures where
libmudflap's testsuite ran perfectly.  I would not be surprised to
learn that changes to the code designed to make it work on a
different architecture have had an adverse effect on x86_64.
Equally, I wouldn't be very surprised to learn that another symlink
is now needed somewhere to pass the testsuite.

 It reports 2 unexpected failures, but you only show one.
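 If you want to dig the second one out, the dejagnu summary file in
your build tree lists every result, so a quick grep shows them all.
The snippet below fakes up a tiny .sum file just to illustrate the
grep (the PASS line and the paths in the comment are invented, not
taken from your log):

```shell
# Simulated libmudflap.sum fragment; the FAIL line is the one you
# quoted, the PASS line is made up for illustration.
cat > libmudflap.sum <<'EOF'
PASS: libmudflap.c/fail7-frag.c output pattern test
FAIL: libmudflap.c/fail8-frag.c (-static) output pattern test
EOF

# In a real tree the file lives somewhere like
#   <builddir>/<target-triplet>/libmudflap/testsuite/libmudflap.sum
grep '^FAIL' libmudflap.sum
# -> FAIL: libmudflap.c/fail8-frag.c (-static) output pattern test
```

The matching entry in the neighbouring libmudflap.log then shows the
actual output the harness compared against its pattern, which is
where I would start looking.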

 FWIW, I've become very ambivalent about running testsuites -
sometimes they show things are wrong, but more often the errors are
in the tests themselves.  In general, I wouldn't worry about two
failures in a package's tests.  What really matters is whether the
resulting system works, and at the moment it's probably too early to
say - once a few people report that they have built, and are
running, a complete system (including whatever cblfs or other
packages they normally use) using what is now in the book, we can
assume it is good.  In the meantime, we think it is ok.  Similarly,
if somebody else says "these tests work for me on the same
architecture", we can assume you have a local problem.  Otherwise,
this is just the first report of this test failure, and it doesn't
look serious.

 I wonder if an 'output pattern test' might be exposing an issue
with a new version of grep, but I think such a test is more likely
to use sed, which hasn't changed recently (unless it is now being
miscompiled).  Basically, the possibilities are many, and the people
building trunk (on _any_ architecture) are few.  If you have
sufficient interest, and skill, feel free to spend hours
investigating it.  Otherwise, continue with the build.

 A datapoint for comparison - the current version of e2fsprogs seems
to fail one test on big-endian machines (well, on ppc anyway).  It
looks like the test was added some years ago to test a corner case
on big-endian hosts.  So, perhaps the package itself has regressed.
Does this mean I won't trust it with my data (I don't have a cat in
hell's chance of understanding the test at the moment)?  No, but
then I assume that all software may have uncaught bugs.  All a
particular test means is "Somebody either had a problem here, or
thought they might get a problem, so they wrote a test which they
hope will highlight it."

 Summary: don't expect perfection in test results.  Sometimes,
particularly on popular architectures, you might test everything in
CLFS and not see any errors.  More likely, a few errors will happen.
Unless you see large-scale failures, it probably doesn't mean very
much.

ĸen
-- 
das eine Mal als Tragödie, das andere Mal als Farce
_______________________________________________
Clfs-support mailing list
[email protected]
http://lists.cross-lfs.org/cgi-bin/mailman/listinfo/clfs-support
