On 09/20/2017 11:18 PM, Markus Trippelsdorf wrote:
On 2017.09.20 at 18:01 -0500, Segher Boessenkool wrote:
Hi!

On Wed, Sep 20, 2017 at 05:01:55PM +0200, Paulo Matos wrote:
This mail's intention is to gauge interest in having a buildbot for
GCC.

+1.  Or no, +100.

- which machines we can use as workers: we certainly need more worker
(previously known as slave) machines to test GCC in different
archs/configurations;

I think this would use too much resources (essentially the full machines)
for the GCC Compile Farm.  If you can dial it down so it only uses a
small portion of the machines, we can set up slaves there, at least on
the more unusual architectures.  But then it may become too slow to be
useful.

There is already a buildbot that uses GCC compile farm resources:
http://toolchain.lug-owl.de/buildbot/

And it has the basic problem of all automatic testing: that in the long
run everyone simply ignores it.

I don't think that's a fair characterization.  The problem is not
that people ignore all automated build results (this, in fact,
couldn't be farther from the truth).  Rather, there is not a single
build and test system for GCC but a multitude of disjoint efforts,
each offering different views with varying levels of functionality
and detail, and each maintained to a different degree.  If we could
agree to adopt one that meets most of our needs, and if it were
maintained with the same diligence and attention as GCC itself, I'm
sure it would benefit the project greatly.

The same thing would happen with the proposed new buildbot. It would use
still more resources on the already overused machines without producing
useful results.

The same is true for the regression mailing list
https://gcc.gnu.org/ml/gcc-regression/current/.
It is obvious that nobody pays any attention to it: for example, the
PGO bootstrap has been broken for several months on x86_64, and the
i686 bootstrap has been broken for a long time, too.

The regression and the testresults lists are useful, but not nearly
as much as they could be.  For one, the presentation isn't user
friendly (a matrix view would be much more informative).  But even
beyond that, rather than using the pull model (people have to make
an effort to search the list for results of their own changes, or
those of others, to find regressions), the regression testing setup
could be improved by adopting a push model: automatically emailing
the authors of changes that caused regressions (or opening bugs, or
whatever else might get our attention).
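The push model described above could be sketched roughly as follows.
This is a hypothetical illustration, not the actual gcc-regression
tooling: the summary format, the function names, and the example
addresses are all assumptions.

```python
# Hypothetical push-model notifier sketch: diff two test summaries
# (in a simplified "STATUS: testname" format, loosely modeled on
# DejaGnu output) and build a notification for the change's author.

def parse_summary(text):
    """Map 'STATUS: testname' lines to {testname: status}."""
    results = {}
    for line in text.splitlines():
        if ": " in line:
            status, name = line.split(": ", 1)
            results[name.strip()] = status.strip()
    return results

def find_regressions(old, new):
    """Tests that went from PASS to FAIL between two runs."""
    return sorted(
        name for name, status in new.items()
        if status == "FAIL" and old.get(name) == "PASS"
    )

def notification(author, regressions):
    """Build the mail body that would be pushed to the author."""
    body = ["To: " + author, "Your change introduced regressions:"]
    body += ["  " + name for name in regressions]
    return "\n".join(body)

old = parse_summary("PASS: gcc.dg/pr1.c\nPASS: gcc.dg/pr2.c")
new = parse_summary("FAIL: gcc.dg/pr1.c\nPASS: gcc.dg/pr2.c")
print(find_regressions(old, new))  # ['gcc.dg/pr1.c']
```

The actual delivery (mailing, or opening a bug) would hang off
`notification`; the key point is that the diff is computed and pushed
to the author instead of waiting to be discovered.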

Only a mandatory pre-commit hook that rejected commits that break
anything would work. But running the testsuite takes much too long to
make this approach feasible.
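A time-bounded variant of such a hook might look like the sketch
below. This is purely illustrative and not an existing GCC hook: the
smoke check (just invoking the system compiler) and the timeout are
stand-ins for "build stage1 and run a small, fast test subset", since
a full testsuite run would take hours.

```python
# Hypothetical pre-commit gate sketch: run a fast smoke check and
# reject the commit when it fails or exceeds a time budget.

import subprocess
import sys

def smoke_test(timeout_seconds=60):
    """Return True if a quick sanity check passes within the budget.

    Stand-in check: ask the system compiler for its version.  A real
    hook would need something far more substantial, which is exactly
    the feasibility problem raised above.
    """
    try:
        result = subprocess.run(
            ["cc", "--version"],
            capture_output=True,
            timeout=timeout_seconds,
        )
    except (OSError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0

if __name__ == "__main__":
    if smoke_test():
        print("smoke test passed; allowing commit")
        sys.exit(0)
    print("smoke test failed; rejecting commit")
    sys.exit(1)
```

Even this trivial gate makes the trade-off visible: anything thorough
enough to catch real regressions blows the time budget, which is the
objection made above.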

A commit hook would be very handy, but it can't very well do
the same amount of testing as an automated build bot.

Martin
