On 8/2/20 12:33 PM, Ken Moffat via lfs-dev wrote:
On Thu, Jul 30, 2020 at 08:15:10PM -0500, Bruce Dubbs via lfs-dev wrote:
On 7/30/20 6:22 PM, Ken Moffat via lfs-dev wrote:
On my experimental build which is currently in progress, I managed
to log the results of tcl's tests.  At first I thought the tests had
died, but in the end they completed (2.9 SBU with make -j8, most of
the time obviously spent on tests which failed).  The results do not
look wonderful:

Tests ended at Thu Jul 30 21:10:12 +0000 2020
all.tcl:        Total   24996   Passed  21606   Skipped 3336    Failed  54
Sourced 150 Test Files.
Files with failing tests: http.test httpold.test
Number of tests skipped for each constraint:
          9       !ieeeFloatingPoint
[snip]
          2       xdev

Test files exiting with errors:

    clock.test


I see there were quite a lot of failures in the clock tests, several
in http, and 1 in httpold.  I've no idea if this is normal, but the
timing seems to be more than a bit different from what the book
says.  For now I'll just mention it and ask if anyone else sees
similar or better results.  I've got the full log, but it's 99K so I
won't bother uploading or attaching it for the moment.
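
For anyone who wants to reproduce this, I captured the log with
something along these lines (run from the unix/ directory of the
tcl source tree; the tee path is just an example):

   make -j8 test 2>&1 | tee ../../tcl-test.log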

The expect tests were fine, as were the tests for dejagnu.

As usual, I'm using my normal CFLAGS and CXXFLAGS - they start
out as
   (CFLAGS)   -O3 -march=native -fstack-clash-protection -fstack-protector-strong
   (CXXFLAGS) -O3 -march=native -fstack-clash-protection -fstack-protector-strong -D_GLIBCXX_ASSERTIONS
although I don't necessarily use all of those on all packages.  I'm
also using headers from linux-5.8.0-rc7 for this build (I said it
was experimental!) and running a 5.8.0-rc5 kernel which has held up
well for the past 4 days (sleeping some of the time) and for a few
days before that.
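
In shell terms, that starting point amounts to roughly:

   export CFLAGS="-O3 -march=native -fstack-clash-protection -fstack-protector-strong"
   export CXXFLAGS="$CFLAGS -D_GLIBCXX_ASSERTIONS"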

What I really think you need to do is do a full build without CFLAGS or
CXXFLAGS set.  Then compare with a build with those settings.
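
i.e. something like this in the environment before the control build:

   unset CFLAGS CXXFLAGS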

My test logs are not complete, but if you really want me to, I can do a full
build as of 20200721 with all tests.  I can upload the test files to anduin
for you to compare.

   -- Bruce


I've now completed a build without any CFLAGS or CXXFLAGS as far as
the end of chapter 8, again running all the tests.  This is from my
'experimental' system, using the exact same versions and scripts.

For tcl, where I originally queried the results (for lack of data
from anyone else), I now get exactly the same failing tests and the
same test file exiting with errors.

The other packages where I got different results from what the book
specifically mentions (or, for glibc, from what I had previously
seen) were:

glibc-2.31 : both failed misc/tst-ttyname - the book mentions that,
but in previous builds in the last couple of months it has passed.
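
If anyone wants to check just that one, I believe a single glibc
test can be re-run from the build directory with something like
this (the t= syntax is in glibc's testing notes, but do check them):

   make test t=misc/tst-ttyname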

gcc-10.2 : Without flags I get one failure beyond what the book
mentions; I'm fairly sure I've mentioned it before with 10.1 :

FAIL: 22_locale/numpunct/members/char/3.cc execution test

Adding my own flags, I *do* get an extra failure for gcc (again,
I've seen it before with 10.1) :

FAIL: 20_util/unsynchronized_pool_resource/allocate.cc execution test

Note that I have stopped forcing -O3 in my gcc builds because of the
failures in the torture tests; here I'm only using
   -O2 -march=native -fstack-clash-protection -fstack-protector-strong
and for CXX I add
    -D_GLIBCXX_ASSERTIONS
so in this case either that allocate.cc test fails with one of
these safe flags, or else it randomly fails.
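
To see whether it is flaky, the single libstdc++ test can be
repeated from the top-level gcc build directory - something along
these lines (the RUNTESTFLAGS filter is from the gcc testing docs;
adjust as needed):

   make -k check-target-libstdc++-v3 \
     RUNTESTFLAGS="conformance.exp=20_util/unsynchronized_pool_resource/allocate.cc"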

autoconf-2.69b (beta) : all tests passed in both runs.

automake-1.16.2 : the same 2 new failures from the updated autoconf
in both runs.

util-linux-2.35.2 : column/invalid-multibyte failed in both runs.  I've
seen this before on this machine using util-linux-2.35.1, including
a build where I forgot to set my CFLAGS.
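
For reference, I think that one can be re-run on its own from the
util-linux source tree with something like this (see tests/README
for the exact options):

   ./tests/run.sh column/invalid-multibyte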

Summary: For gcc, adding sensible CFLAGS and more particularly
CXXFLAGS *can* provoke extra test failures.

Since I'm documenting this, I'll also mention that this week
Phoronix ran some tests where it looked as if -O2 in recent gcc is
slow (that was on Intel Comet Lake).  Much of the (runtime) slowness
in some tests was from using LTO, but in some cases the -O2
performance was much slower than on gcc9.

https://www.phoronix.com/scan.php?page=article&item=gcc-10900k-compiler&num=1

I do not have elapsed times for either of my builds (in the first,
I had breaks where tests failed unexpectedly for me; in the second,
the Python tests stalled overnight - and of course, like many other
packages, there is no output from the tests until the end, so no
idea where they stalled).  However, I do have times (in seconds) for
the various 'stamps' I create to mark a step as completed.  For
these, timing starts after untarring and creating an initial log of
what is present, and stops before creating the log of what is now
present and then processing the logs to find what got installed and
modified.

For the first run with my flags: 178.522s

Without my flags: 297.926s

A little of that difference might be because my flags ensure that
debug information is not created.  But this suggests that using the
old and typical -O2 -g will slow the build down.
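
For what it's worth, the stamps are nothing clever - conceptually
something like this around each package (the paths are
illustrative):

   start=$(date +%s.%N)   # after untarring and the initial file list
   # ... configure, make, run the tests, install ...
   end=$(date +%s.%N)     # before the post-install file list
   echo "$(echo "$end - $start" | bc)s" > stamps/tcl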

I'm doing a test build right now.  Currently running glibc (still
2.31) tests.  I did not use the beta autoconf.  I added all the
packages currently outstanding except systemd.

  -- Bruce

