On Mon, Jul 31, 2017 at 7:08 PM, Andrew Haley <a...@redhat.com> wrote:
> On 31/07/17 17:12, Oleg Endo wrote:
>> On Mon, 2017-07-31 at 15:25 +0200, Georg-Johann Lay wrote:
>>> Around 2010, someone who used a code snippet that I published in
>>> a wiki reported that the code didn't work and hung in an
>>> endless loop.  Soon I found out that it was due to some GCC
>>> problem, and I got interested in fixing the compiler so that
>>> it worked with my code.
>>>
>>> 1 1/2 years later, in 2011, [...]
>>
>> I could probably write a similar rant.  This is the life of a
>> "minority target programmer".  Most development efforts are being
>> done with primary targets in mind.  And as a result, most changes
>> are being tested only on such targets.
>>
>> To improve the situation, we'd need a lot more target specific tests
>> which test for those regressions that you have mentioned.  Then of
>> course somebody has to run all those tests on all those various
>> targets.  I think that's the biggest problem.  But still, with a
>> test case at hand, it's much easier to talk to people who have
>> silently introduced a regression on some "other" targets.  Most of
>> the time they just don't know.
>
> It's a fundamental problem for compilers, in general: every
> optimization pass wants to be the last one, and (almost?) no-one who
> writes a pass knows all the details of all the subsequent passes.  The
> more sophisticated and subtle an optimization, the more possibility
> there is of messing something up or confusing someone's back end or a
> later pass.  We've seen this multiple times, with apparently
> straightforward control flow at the source level turning into a mess
> of spaghetti in the resulting assembly.  But we know that the
> optimization makes sense for some kinds of program, or at least that
> it did at the time the optimization was written.  However, it is
> inevitable that some programs will be made worse by some
> optimizations.  We hope that they will be few in number, but it
> really can't be helped.
>
> So what is to be done?  We could abandon the eternal drive for more
> and more optimizations, back off, and concentrate on simplicity and
> robustness at the expense of ultimate code quality.  Should we?  It
> would take courage, and there will be an eternal pressure to improve
> code.  And, of course, we'd risk someone forking GCC and creating the
> "superoptimized GCC" project, starving FSF GCC of developers.  That's
> happened before, so it's not an imaginary risk.

Heh.  I suspect -Os would benefit from a separate compilation pipeline
such as the one -Og uses.  Nowadays the early optimization pipeline is
what you want for it (mostly simple CSE & jump optimizations, focused
on code-size improvements).  That doesn't get you any loop
optimizations, but loop optimizations always have the chance to
increase code size or register pressure.
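
To illustrate the trade-off (a made-up example, not a reduced
testcase): a simple reduction loop like the one below is a candidate
for unrolling or vectorization at -O2/-O3, which usually grows the
code, while at -Os the rolled loop is what you want to keep.

  #include <stddef.h>

  /* Hypothetical example: at -O2/-O3 this loop may get unrolled or
     vectorized, trading code size for speed; -Os should leave the
     rolled loop alone.  */
  int
  sum (const int *a, size_t n)
  {
    int s = 0;
    for (size_t i = 0; i < n; ++i)
      s += a[i];
    return s;
  }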

But yes, targeting an architecture like AVR, which is neither primary
nor secondary (so very low priority) _plus_ quite special in its
target abilities (it seems to be very easy to mess things up), is hard.
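
Target-specific tests of the kind Oleg mentions are what catch this.
A rough sketch (hypothetical, not an actual test from the testsuite)
of what such a DejaGnu test could look like for avr:

  /* { dg-do compile { target avr-*-* } } */
  /* { dg-options "-Os" } */

  /* Hypothetical regression check: verify a trivial copy loop stays
     a loop and is not expanded into a library call, which would grow
     code on this target.  */
  void
  copy_bytes (char *d, const char *s, unsigned char n)
  {
    while (n--)
      *d++ = *s++;
  }

  /* { dg-final { scan-assembler-not "memcpy" } } */

With something like that in the testsuite, a change that silently
regresses the target at least shows up as a FAIL somebody can point to.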

SUSE does have some testers doing (also) code-size monitoring,
but however much data we have, somebody needs to monitor it,
bisect further, and report the regressions deemed worthwhile.
It's hard to avoid slow creep -- compile-time and memory use are
a similar issue here.

Richard.

> --
> Andrew Haley
> Java Platform Lead Engineer
> Red Hat UK Ltd. <https://www.redhat.com>
> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671
