Re: Overwhelmed by GCC frustration

2017-08-18 Thread Segher Boessenkool
On Thu, Aug 17, 2017 at 05:22:34PM +, paul.kon...@dell.com wrote:
> I think G-J said "...  LRA focusses just comfortable, orthogonal targets" 
> which is not quite the same thing.
> 
> I'm a bit curious about that, since x86 is hardly "comfortable orthogonal".  
> But if LRA is targeted only at some of the ISA styles that are out in the 
> world, which ones are they, and why the limitation?

LRA does better with more regular register sets, but then again, everything
has a much easier time with regular register sets.

> One of GCC's great strengths is its support for many ISAs.

100% agreed.

> Not all to the same level of excellence, but there are many, and adding more 
> is easy at least for an initial basic level of support.  When this is needed, 
> GCC is the place to go.
> 
> I'd like to work on moving one of the remaining CC0 targets to the new way, 
> but if the reality is that GCC is trying to be "mainstream only" then that 
> may not be a good way for me to spend my time.

That is not at all GCC's goal.  See the mission statement:

https://gcc.gnu.org/gccmission.html

More often used platforms naturally get better support, have more people
working on them.  And for releases those platforms are more important
than others, which is just a practical thing (bugs on those affect more
people, it can be hard to find people to solve a problem on a minority
platform quickly enough).  https://gcc.gnu.org/gcc-8/criteria.html

But all targets matter; variety is good, and it is also in GCC's self-interest.


Segher


Re: Overwhelmed by GCC frustration

2017-08-17 Thread R0b0t1
On Thu, Aug 17, 2017 at 12:36 PM, Bin.Cheng  wrote:
> Your work will contribute to this and is highly appreciated :)
>

I apologize for interjecting as I do not understand GCC internals very
well, but I appreciate all of the work that is contributed to GCC,
especially by individuals on their own time. The emergence of Clang is
particularly worrisome to me, as it competes with GCC, and if GCC dies,
the world may be left with only a stripped and gutted "open source"
compiler where all useful innovations are proprietary.

I am not very smart, sirs. I have tried to understand GCC but I will
need more time if it is even possible. If work stopped on it I would
be left without a trustworthy compiler. A person of my modest position
and stature does not deserve to use such a capable piece of software.

R0b0t1.


Re: Overwhelmed by GCC frustration

2017-08-17 Thread Paul.Koning

> On Aug 17, 2017, at 1:36 PM, Bin.Cheng  wrote:
> 
> On Thu, Aug 17, 2017 at 6:22 PM,   wrote:
>> 
>> ...
>> One of GCC's great strengths is its support for many ISAs.  Not all to the 
>> same level of excellence, but there are many, and adding more is easy at 
>> least for an initial basic level of support.  When this is needed, GCC is 
>> the place to go.
>> 
>> I'd like to work on moving one of the remaining CC0 targets to the new way, 
>> but if the reality is that GCC is trying to be "mainstream only" then that 
>> may not be a good way for me to spend my time.
> Hi,
> I don't believe GCC has ever tried to be "mainstream only".  It can look
> that way because the major part of the requirements comes from popular
> architectures, and a large part of the patches is developed and tested on
> popular architectures.  It's an unfortunate but natural result of the lack
> of developers for non-mainstream targets.  We can make it less "mainstream
> only" only if we have enough developers for the less popular architectures.
> Your work will contribute to this and is highly appreciated :)
> 
> Thanks,
> bin


Thanks for the encouragement.  I will keep tinkering with the pdp11 target to 
make it better.

paul


Re: Overwhelmed by GCC frustration

2017-08-17 Thread Bin.Cheng
On Thu, Aug 17, 2017 at 6:22 PM,   wrote:
>
>> On Aug 17, 2017, at 11:22 AM, Oleg Endo  wrote:
>>
>> On Wed, 2017-08-16 at 19:04 -0500, Segher Boessenkool wrote:
>>>
>>> LRA is easier to work with than old reload, and that makes it more
>>> maintainable.
>>>
>>> Making LRA handle everything reload did is work, and someone needs to
>>> do it.
>>>
>>> LRA probably needs a few more target hooks (a _few_) to guide its
>>> decisions.
>>
>> Like Georg-Johann mentioned before, LRA has been targeted mainly for
>> mainstream ISAs.  And actually it's a pretty reasonable choice.  Again,
>> I don't think that "one RA to rule them all" is a scalable approach.
>>  But that's just my opinion.
>
> I think G-J said "...  LRA focusses just comfortable, orthogonal targets" 
> which is not quite the same thing.
>
> I'm a bit curious about that, since x86 is hardly "comfortable orthogonal".  
> But if LRA is targeted only at some of the ISA styles that are out in the 
> world, which ones are they, and why the limitation?
>
> One of GCC's great strengths is its support for many ISAs.  Not all to the 
> same level of excellence, but there are many, and adding more is easy at 
> least for an initial basic level of support.  When this is needed, GCC is the 
> place to go.
>
> I'd like to work on moving one of the remaining CC0 targets to the new way, 
> but if the reality is that GCC is trying to be "mainstream only" then that 
> may not be a good way for me to spend my time.
Hi,
I don't believe GCC has ever tried to be "mainstream only".  It can look
that way because the major part of the requirements comes from popular
architectures, and a large part of the patches is developed and tested on
popular architectures.  It's an unfortunate but natural result of the lack
of developers for non-mainstream targets.  We can make it less "mainstream
only" only if we have enough developers for the less popular architectures.
Your work will contribute to this and is highly appreciated :)

Thanks,
bin
>
> paul
>


Re: Overwhelmed by GCC frustration

2017-08-17 Thread Paul.Koning

> On Aug 17, 2017, at 11:22 AM, Oleg Endo  wrote:
> 
> On Wed, 2017-08-16 at 19:04 -0500, Segher Boessenkool wrote:
>>  
>> LRA is easier to work with than old reload, and that makes it more
>> maintainable.
>> 
>> Making LRA handle everything reload did is work, and someone needs to
>> do it.
>> 
>> LRA probably needs a few more target hooks (a _few_) to guide its
>> decisions.
> 
> Like Georg-Johann mentioned before, LRA has been targeted mainly for
> mainstream ISAs.  And actually it's a pretty reasonable choice.  Again,
> I don't think that "one RA to rule them all" is a scalable approach.
>  But that's just my opinion.

I think G-J said "...  LRA focusses just comfortable, orthogonal targets" which 
is not quite the same thing.

I'm a bit curious about that, since x86 is hardly "comfortable orthogonal".  
But if LRA is targeted only at some of the ISA styles that are out in the 
world, which ones are they, and why the limitation?

One of GCC's great strengths is its support for many ISAs.  Not all to the same 
level of excellence, but there are many, and adding more is easy at least for 
an initial basic level of support.  When this is needed, GCC is the place to go.

I'd like to work on moving one of the remaining CC0 targets to the new way, but 
if the reality is that GCC is trying to be "mainstream only" then that may not 
be a good way for me to spend my time.

paul



Re: Overwhelmed by GCC frustration

2017-08-17 Thread Oleg Endo
On Wed, 2017-08-16 at 19:04 -0500, Segher Boessenkool wrote:
> 
> LRA is easier to work with than old reload, and that makes it more
> maintainable.
> 
> Making LRA handle everything reload did is work, and someone needs to
> do it.
> 
> LRA probably needs a few more target hooks (a _few_) to guide its
> decisions.

Like Georg-Johann mentioned before, LRA has been targeted mainly for
mainstream ISAs.  And actually it's a pretty reasonable choice.  Again,
I don't think that "one RA to rule them all" is a scalable approach.
 But that's just my opinion.

Cheers,
Oleg


Re: Overwhelmed by GCC frustration

2017-08-16 Thread Segher Boessenkool
On Wed, Aug 16, 2017 at 11:23:27PM +0900, Oleg Endo wrote:
> > First of all, LRA cannot cope with cc0 (Yes, I know deprecating
> > cc0 is just to deprecate all non-LRA BEs).  LRA asserts that
> > accessing the frame doesn't change the condition code.  LRA doesn't
> > provide a replacement for LEGITIMIZE_RELOAD_ADDRESS.  Hence LRA
> > focusses just comfortable, orthogonal targets.
> 
> It seems LRA is being praised so much, but all those niche BEs and
> corner cases get zero support.  There are several known instances of SH
> code regressions with LRA, and that's why I haven't switched it to
> LRA. 

LRA is easier to work with than old reload, and that makes it more
maintainable.

Making LRA handle everything reload did is work, and someone needs to
do it.

LRA probably needs a few more target hooks (a _few_) to guide its
decisions.


Segher


Re: Overwhelmed by GCC frustration

2017-08-16 Thread Segher Boessenkool
On Wed, Aug 16, 2017 at 03:53:24PM +0200, Georg-Johann Lay wrote:
> This means it's actually a waste of time to work on these backends.  The
> code will finally end up in the dustbin as cc0 backends are considered
> undesired ballast that has to be "jettisoned".
> 
> "Deprecate all cc0" is just a nice formulation of "deprecate
> most of the cc0 backends".

_All_ cc0 backends.  We cannot remove cc0 support without removing all
targets that depend on it.

The push for moving away from cc0 isn't anything new.

> First of all, LRA cannot cope with cc0 (Yes, I know deprecating
> cc0 is just to deprecate all non-LRA BEs).

No, it isn't that at all.  CC0 is problematic in very many places.  It
is a blocker for removing old reload, that is true.

> As far as cc0 is concerned, transforming avr BE is not trivial.

That unfortunately is true for all cc0 backends.  It requires looking
over all of the backend code (not just the MD files even), and it
requires knowing the actual target behaviour in detail.

And it cannot be done piecemeal, it's an all-or-nothing switch.

> But my feeling is that opposing deprecation of cc0 is futile; the voices
> that support cc0 deprecation are more numerous, and the usefulness of cc0
> is not recognized.
> 
> Sooner or later these backends will end up in /dev/null.

If they aren't converted, yes.

A more constructive question is: what can be done to make conversion
easier and less painful?


Segher


Re: Overwhelmed by GCC frustration

2017-08-16 Thread Jeff Law
On 08/16/2017 08:14 AM, Eric Botcazou wrote:
>> Just the fact that the backends that get most attention and attract
>> most developers don't use cc0 doesn't mean cc0 is a useless device.
> 
> Everything that can be done with cc0 can be done with the new representation, 
> at least theoretically, although this can require more work.
Yup.

> 
>> As far as cc0 is concerned, transforming avr BE is not trivial.
>> It would need rewriting almost all of its md files entirely.
>> It would need rewriting a great deal of the avr.c code that handles
>> insn output and provides input to NOTICE_UPDATE_CC.
> 
> I recently converted the Visium port, which is an architecture where every 
> integer instruction, including a simple move, clobbers the flags, so it's 
> doable even for such an annoying target (but Visium is otherwise regular).
> See for example https://gcc.gnu.org/wiki/CC0Transition for some guidelines.
Yup.  I'd strongly recommend anyone contemplating a conversion to read
your guidelines.


> 
>> But my feeling is that opposing deprecation of cc0 is futile; the voices
>> that support cc0 deprecation are more numerous, and the usefulness of cc0
>> is not recognized.
> 
> cc0 is just obsolete and inferior compared to the new representation.
And cc0 is inherently buggy if you know how to poke at it.  There are
places where we can't enforce that the cc0 user must immediately follow the
cc0 setter.

We've been faulting in work-arounds when ports trip over those problems,
but I'm certain more problems in this space remain.

jeff


Re: Overwhelmed by GCC frustration

2017-08-16 Thread Oleg Endo
On Wed, 2017-08-16 at 15:53 +0200, Georg-Johann Lay wrote:
> 
> This means it's actually a waste of time to work on these
> backends.  The code will finally end up in the dustbin as cc0
> backends are considered undesired ballast that has to be
> "jettisoned".
> 
> "Deprecate all cc0" is just a nice formulation of "deprecate
> most of the cc0 backends".
> 
> Just the fact that the backends that get most attention and attract
> most developers don't use cc0 doesn't mean cc0 is a useless device.

The desire to get rid of old, crusty and unmaintained stuff is somehow
understandable...


> First of all, LRA cannot cope with cc0 (Yes, I know deprecating
> cc0 is just to deprecate all non-LRA BEs).  LRA asserts that
> accessing the frame doesn't change the condition code.  LRA doesn't
> provide a replacement for LEGITIMIZE_RELOAD_ADDRESS.  Hence LRA
> focusses just comfortable, orthogonal targets.

It seems LRA is being praised so much, but all those niche BEs and
corner cases get zero support.  There are several known instances of SH
code regressions with LRA, and that's why I haven't switched it to
LRA. 

I think the problem is that it's very difficult to make a register
allocator that works well for everything.  The last attempt ended in
reload.  And eventually LRA will go down the same route.  So instead of
trying to fit a round peg in a square hole, maybe we should just have
the options for round and square pegs and holes.


Cheers,
Oleg


Re: Overwhelmed by GCC frustration

2017-08-16 Thread Eric Botcazou
> Just the fact that the backends that get most attention and attract
> most developers don't use cc0 doesn't mean cc0 is a useless device.

Everything that can be done with cc0 can be done with the new representation, 
at least theoretically, although this can require more work.

> As far as cc0 is concerned, transforming avr BE is not trivial.
> It would need rewriting almost all of its md files entirely.
> It would need rewriting a great deal of the avr.c code that handles
> insn output and provides input to NOTICE_UPDATE_CC.

I recently converted the Visium port, which is an architecture where every 
integer instruction, including a simple move, clobbers the flags, so it's 
doable even for such an annoying target (but Visium is otherwise regular).
See for example https://gcc.gnu.org/wiki/CC0Transition for some guidelines.

> But my feeling is that opposing deprecation of cc0 is futile; the voices
> that support cc0 deprecation are more numerous, and the usefulness of cc0
> is not recognized.

cc0 is just obsolete and inferior compared to the new representation.

-- 
Eric Botcazou


Re: Overwhelmed by GCC frustration

2017-08-16 Thread Georg-Johann Lay

On 31.07.2017 19:54, Jeff Law wrote:
> On 07/31/2017 11:23 AM, Segher Boessenkool wrote:
>> On Tue, Aug 01, 2017 at 01:12:41AM +0900, Oleg Endo wrote:
>>> I could probably write a similar rant.  This is the life of a "minority
>>> target programmer".  Most development efforts are being done with
>>> primary targets in mind.  And as a result, most changes are being
>>> tested only on such targets.
>>
>> Also, many changes require retuning of all target backends.  This never
>> happens for those backends that aren't very actively maintained.

Got the message.

This means it's actually a waste of time to work on these backends.  The
code will finally end up in the dustbin as cc0 backends are considered
undesired ballast that has to be "jettisoned".

"Deprecate all cc0" is just a nice formulation of "deprecate
most of the cc0 backends".

Just the fact that the backends that get most attention and attract
most developers don't use cc0 doesn't mean cc0 is a useless device.

First of all, LRA cannot cope with cc0 (Yes, I know deprecating
cc0 is just to deprecate all non-LRA BEs).  LRA asserts that
accessing the frame doesn't change the condition code.  LRA doesn't
provide a replacement for LEGITIMIZE_RELOAD_ADDRESS.  Hence LRA
focusses just comfortable, orthogonal targets.

As far as cc0 is concerned, transforming avr BE is not trivial.
It would need rewriting almost all of its md files entirely.
It would need rewriting a great deal of the avr.c code that handles
insn output and provides input to NOTICE_UPDATE_CC.
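
To see what that machinery buys on a target like avr, here is a minimal,
hypothetical C example (it is not taken from the real port).  The point is
that the instruction which decrements the loop counter typically already
sets the condition codes, so a backend that records this for every emitted
insn can omit a separate compare before the loop branch:

  /* Hypothetical illustration only: on many 8-bit targets the decrement
     of "n" also sets the flags, so the backend can skip an explicit
     compare before the conditional branch -- provided it tracks, insn by
     insn, what the condition code currently reflects, which is what the
     NOTICE_UPDATE_CC machinery is for.  */
  #include <stdint.h>

  void countdown (volatile uint8_t *port, uint8_t n)
  {
    while (n--)        /* the decrement sets the flags ...        */
      *port = n;       /* ... and the loop branch can reuse them  */
  }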

But my feeling is that opposing deprecation of cc0 is futile; the voices
that support cc0 deprecation are more numerous, and the usefulness of cc0
is not recognized.

Sooner or later these backends will end up in /dev/null.

Johann


Re: Overwhelmed by GCC frustration

2017-08-04 Thread Richard Earnshaw
On 04/08/17 10:38, Claudiu Zissulescu wrote:
> Maybe it is better to use the updated CSiBE repo from GitHub:
> https://github.com/szeged/csibe. I use it for ARC to track the code
> size.
> 

Thanks for the link Claudiu.  Personally I'll probably stick with the
existing code as I now have size data for it stretching back about 10
years, but I'll have a look to see if it might be worth running both.

R.

> Cheers,
> Claudiu
> 
> On Fri, Aug 4, 2017 at 11:12 AM, Richard Earnshaw
>  wrote:
>> On 03/08/17 13:11, Steven Bosscher wrote:
>>> On Mon, Jul 31, 2017 at 6:49 PM, Joel Sherrill wrote:
>>>
>>>>
>>>> Long ago, there was a code size regression tester for at least
>>>> ARM. Is that still around?
>>>
>>> There used to be autotesters from CSiBE. Something still appears to
>>> exist (http://www.csibe.org/old/) but the last time I tried to run the
>>> benchmark, I couldn't get the suite to compile. That was some years
>>> ago, perhaps the situation is better nowadays (http://www.csibe.org/
>>> without the old, suggests something's changed at some point...).
>>>
>>> As for "long ago", see also your own reply to
>>> https://gcc.gnu.org/ml/gcc/2003-07/msg00111.html :-)
>>>
>>> Ciao!
>>> Steven
>>>
>>
>> CSiBE does still build (but you need -std=gnu98).  I build it nightly.
>>
>> R.



Re: Overwhelmed by GCC frustration

2017-08-04 Thread Claudiu Zissulescu
Maybe it is better to use the updated CSiBE repo from GitHub:
https://github.com/szeged/csibe. I use it for ARC to track the code
size.

Cheers,
Claudiu

On Fri, Aug 4, 2017 at 11:12 AM, Richard Earnshaw
 wrote:
> On 03/08/17 13:11, Steven Bosscher wrote:
>> On Mon, Jul 31, 2017 at 6:49 PM, Joel Sherrill wrote:
>>
>>>
>>> Long ago, there was a code size regression tester for at least
>>> ARM. Is that still around?
>>
>> There used to be autotesters from CSiBE. Something still appears to
>> exist (http://www.csibe.org/old/) but the last time I tried to run the
>> benchmark, I couldn't get the suite to compile. That was some years
>> ago, perhaps the situation is better nowadays (http://www.csibe.org/
>> without the old, suggests something's changed at some point...).
>>
>> As for "long ago", see also your own reply to
>> https://gcc.gnu.org/ml/gcc/2003-07/msg00111.html :-)
>>
>> Ciao!
>> Steven
>>
>
> CSiBE does still build (but you need -std=gnu98).  I build it nightly.
>
> R.


Re: Overwhelmed by GCC frustration

2017-08-04 Thread Richard Earnshaw
On 03/08/17 13:11, Steven Bosscher wrote:
> On Mon, Jul 31, 2017 at 6:49 PM, Joel Sherrill wrote:
> 
>>
>> Long ago, there was a code size regression tester for at least
>> ARM. Is that still around?
> 
> There used to be autotesters from CSiBE. Something still appears to
> exist (http://www.csibe.org/old/) but the last time I tried to run the
> benchmark, I couldn't get the suite to compile. That was some years
> ago, perhaps the situation is better nowadays (http://www.csibe.org/
> without the old, suggests something's changed at some point...).
> 
> As for "long ago", see also your own reply to
> https://gcc.gnu.org/ml/gcc/2003-07/msg00111.html :-)
> 
> Ciao!
> Steven
> 

CSiBE does still build (but you need -std=gnu98).  I build it nightly.

R.


Re: Overwhelmed by GCC frustration

2017-08-03 Thread Steven Bosscher
On Mon, Jul 31, 2017 at 6:49 PM, Joel Sherrill wrote:

>
> Long ago, there was a code size regression tester for at least
> ARM. Is that still around?

There used to be autotesters from CSiBE. Something still appears to
exist (http://www.csibe.org/old/) but the last time I tried to run the
benchmark, I couldn't get the suite to compile. That was some years
ago, perhaps the situation is better nowadays (http://www.csibe.org/
without the old, suggests something's changed at some point...).

As for "long ago", see also your own reply to
https://gcc.gnu.org/ml/gcc/2003-07/msg00111.html :-)

Ciao!
Steven


Re: Overwhelmed by GCC frustration

2017-08-02 Thread Richard Biener
On Wed, Aug 2, 2017 at 3:54 PM, Segher Boessenkool
 wrote:
> On Tue, Aug 01, 2017 at 01:50:14PM +0200, David Brown wrote:
>> I would not expect that to be good at all.  With no optimisation (-O0),
>> gcc produces quite poor code - local variables are not put in registers
>> or "optimised away", there is no strength reduction, etc.  For an
>> architecture like the AVR with a fair number of registers (32, albeit
>> 8-bit registers) and relatively inefficient stack access, -O0 produces
>> /terrible/ code.
>
> -Og is better though (better than any other -O for this test at least).
>
> The regression happened before 4.7, it seems the big jump was with 4.6?
> So what happened there?  This seems to happen on x86 as well, maybe
> on everything.

And one function (of the two) shrinks compared to 3.4 and the other increases
so the jumps are probably mis-bisected anyway.

Richard.

>
> Segher


Re: Overwhelmed by GCC frustration

2017-08-02 Thread Segher Boessenkool
On Tue, Aug 01, 2017 at 01:50:14PM +0200, David Brown wrote:
> I would not expect that to be good at all.  With no optimisation (-O0),
> gcc produces quite poor code - local variables are not put in registers
> or "optimised away", there is no strength reduction, etc.  For an
> architecture like the AVR with a fair number of registers (32, albeit
> 8-bit registers) and relatively inefficient stack access, -O0 produces
> /terrible/ code.

-Og is better though (better than any other -O for this test at least).

The regression happened before 4.7, it seems the big jump was with 4.6?
So what happened there?  This seems to happen on x86 as well, maybe
on everything.


Segher


Re: Overwhelmed by GCC frustration

2017-08-02 Thread Richard Biener
On Tue, Aug 1, 2017 at 6:00 PM, James Greenhalgh
 wrote:
> On Tue, Aug 01, 2017 at 11:12:12AM -0400, Eric Gallager wrote:
>> On 8/1/17, Jakub Jelinek  wrote:
>> > On Tue, Aug 01, 2017 at 07:08:41AM -0400, Eric Gallager wrote:
>> >> > Heh.  I suspect -Os would benefit from a separate compilation pipeline
>> >> > such as -Og.  Nowadays the early optimization pipeline is what you
>> >> > want (mostly simple CSE & jump optimizations, focused on code
>> >> > size improvements).  That doesn't get you any loop optimizations but
>> >> > loop optimizations always have the chance to increase code size
>> >> > or register pressure.
>> >> >
>> >>
>> >> Maybe in addition to the -Os optimization level, GCC mainline could
>> >> also add the -Oz optimization level like Apple's GCC had, and clang
>> >> still has? Basically -Os is -O2 with additional code size focus,
>> >> whereas -Oz is -O0 with the same code size focus. Adding it to the
>> >> FSF's GCC, too, could help reduce code size even further than -Os
>> >> currently does.
>> >
>> > No, lack of optimizations certainly doesn't reduce the code size.
>> > For small code, you need lots of optimizations, but preferably code-size
>> > aware ones.  For RTL that is usually easier, because you can often compare
>> > the sizes of the old and new sequences and choose smaller, for GIMPLE
>> > optimizations it is often just a wild guess on what optimizations generally
>> > result in smaller and what optimizations generally result in larger code.
>> > There are too many following passes to know for sure, and finding the right
>> > heuristics is hard.
>> >
>> > Jakub
>> >
>>
>> Upon rereading of the relevant docs, I guess it was a mistake to
>> compare -Oz to -O0. Let me quote from the apple-gcc "Optimize Options"
>> page:
>>
>> -Oz
>> (APPLE ONLY) Optimize for size, regardless of performance. -Oz
>> enables the same optimization flags that -Os uses, but -Oz also
>> enables other optimizations intended solely to reduce code size.
>> In particular, instructions that encode into fewer bytes are
>> preferred over longer instructions that execute in fewer cycles.
>> -Oz on Darwin is very similar to -Os in FSF distributions of GCC.
>> -Oz employs the same inlining limits and avoids string instructions
>> just like -Os.
>>
>> Meanwhile, their description of -Os as contrasted to -Oz reads:
>>
>> -Os
>> Optimize for size, but not at the expense of speed. -Os enables all
>> -O2 optimizations that do not typically increase code size.
>> However, instructions are chosen for best performance, regardless
>> of size. To optimize solely for size on Darwin, use -Oz (APPLE
>> ONLY).
>>
>> And the clang docs for -Oz say:
>>
>> -Oz Like -Os (and thus -O2), but reduces code size further.
>>
>> So -Oz does actually still optimize, so it's more like -O2 than -O0
>> after all, just even more size-focused than -Os.
>
> The relationship between -Os and -Oz is like the relationship between -O2
> and -O3.
>
> If -O3 says, try everything you can to increase performance even at the
> expense of code-size and compile time, then -Oz says, try everything you
> can to reduce the code size, even at the expense of performance and
> compile time.

Note that for GCC, -Os has been this historically.  I'd say that what other
compilers do at -Os is comparable to GCC's -O2 -- balancing speed and size --
with GCC being much more conservative on the size side than other compilers.
Recently we've "weakened" -Os by, for example, allowing integer division to
expand to mul/add sequences, but IIRC that was based on the costs the target
provides.

Richard.

> Thanks,
> James
>


Re: Overwhelmed by GCC frustration

2017-08-01 Thread James Greenhalgh
On Tue, Aug 01, 2017 at 11:12:12AM -0400, Eric Gallager wrote:
> On 8/1/17, Jakub Jelinek  wrote:
> > On Tue, Aug 01, 2017 at 07:08:41AM -0400, Eric Gallager wrote:
> >> > Heh.  I suspect -Os would benefit from a separate compilation pipeline
> >> > such as -Og.  Nowadays the early optimization pipeline is what you
> >> > want (mostly simple CSE & jump optimizations, focused on code
> >> > size improvements).  That doesn't get you any loop optimizations but
> >> > loop optimizations always have the chance to increase code size
> >> > or register pressure.
> >> >
> >>
> >> Maybe in addition to the -Os optimization level, GCC mainline could
> >> also add the -Oz optimization level like Apple's GCC had, and clang
> >> still has? Basically -Os is -O2 with additional code size focus,
> >> whereas -Oz is -O0 with the same code size focus. Adding it to the
> >> FSF's GCC, too, could help reduce code size even further than -Os
> >> currently does.
> >
> > No, lack of optimizations certainly doesn't reduce the code size.
> > For small code, you need lots of optimizations, but preferably code-size
> > aware ones.  For RTL that is usually easier, because you can often compare
> > the sizes of the old and new sequences and choose smaller, for GIMPLE
> > optimizations it is often just a wild guess on what optimizations generally
> > result in smaller and what optimizations generally result in larger code.
> > There are too many following passes to know for sure, and finding the right
> > heuristics is hard.
> >
> > Jakub
> >
> 
> Upon rereading of the relevant docs, I guess it was a mistake to
> compare -Oz to -O0. Let me quote from the apple-gcc "Optimize Options"
> page:
> 
> -Oz
> (APPLE ONLY) Optimize for size, regardless of performance. -Oz
> enables the same optimization flags that -Os uses, but -Oz also
> enables other optimizations intended solely to reduce code size.
> In particular, instructions that encode into fewer bytes are
> preferred over longer instructions that execute in fewer cycles.
> -Oz on Darwin is very similar to -Os in FSF distributions of GCC.
> -Oz employs the same inlining limits and avoids string instructions
> just like -Os.
> 
> Meanwhile, their description of -Os as contrasted to -Oz reads:
> 
> -Os
> Optimize for size, but not at the expense of speed. -Os enables all
> -O2 optimizations that do not typically increase code size.
> However, instructions are chosen for best performance, regardless
> of size. To optimize solely for size on Darwin, use -Oz (APPLE
> ONLY).
> 
> And the clang docs for -Oz say:
> 
> -Oz Like -Os (and thus -O2), but reduces code size further.
> 
> So -Oz does actually still optimize, so it's more like -O2 than -O0
> after all, just even more size-focused than -Os.

The relationship between -Os and -Oz is like the relationship between -O2
and -O3.

If -O3 says, try everything you can to increase performance even at the
expense of code-size and compile time, then -Oz says, try everything you
can to reduce the code size, even at the expense of performance and
compile time.

Thanks,
James



Re: Overwhelmed by GCC frustration

2017-08-01 Thread Eric Gallager
On 8/1/17, Jakub Jelinek  wrote:
> On Tue, Aug 01, 2017 at 07:08:41AM -0400, Eric Gallager wrote:
>> > Heh.  I suspect -Os would benefit from a separate compilation pipeline
>> > such as -Og.  Nowadays the early optimization pipeline is what you
>> > want (mostly simple CSE & jump optimizations, focused on code
>> > size improvements).  That doesn't get you any loop optimizations but
>> > loop optimizations always have the chance to increase code size
>> > or register pressure.
>> >
>>
>> Maybe in addition to the -Os optimization level, GCC mainline could
>> also add the -Oz optimization level like Apple's GCC had, and clang
>> still has? Basically -Os is -O2 with additional code size focus,
>> whereas -Oz is -O0 with the same code size focus. Adding it to the
>> FSF's GCC, too, could help reduce code size even further than -Os
>> currently does.
>
> No, lack of optimizations certainly doesn't reduce the code size.
> For small code, you need lots of optimizations, but preferably code-size
> aware ones.  For RTL that is usually easier, because you can often compare
> the sizes of the old and new sequences and choose smaller, for GIMPLE
> optimizations it is often just a wild guess on what optimizations generally
> result in smaller and what optimizations generally result in larger code.
> There are too many following passes to know for sure, and finding the right
> heuristics is hard.
>
>   Jakub
>

Upon rereading of the relevant docs, I guess it was a mistake to
compare -Oz to -O0. Let me quote from the apple-gcc "Optimize Options"
page:

-Oz
(APPLE ONLY) Optimize for size, regardless of performance. -Oz
enables the same optimization flags that -Os uses, but -Oz also
enables other optimizations intended solely to reduce code size.
In particular, instructions that encode into fewer bytes are
preferred over longer instructions that execute in fewer cycles.
-Oz on Darwin is very similar to -Os in FSF distributions of GCC.
-Oz employs the same inlining limits and avoids string instructions
just like -Os.

Meanwhile, their description of -Os as contrasted to -Oz reads:

-Os
Optimize for size, but not at the expense of speed. -Os enables all
-O2 optimizations that do not typically increase code size.
However, instructions are chosen for best performance, regardless
of size. To optimize solely for size on Darwin, use -Oz (APPLE
ONLY).

And the clang docs for -Oz say:

-Oz Like -Os (and thus -O2), but reduces code size further.

So -Oz does actually still optimize, so it's more like -O2 than -O0
after all, just even more size-focused than -Os.
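
As a small, hypothetical illustration of that difference (this is not from
the Apple or clang documentation, just something one can try), take a
function with a classic size-versus-speed choice and compare the object
sizes, e.g. with clang -Os versus clang -Oz and the size(1) tool:

  /* Hypothetical example: a speed-oriented compiler may expand the
     division by a constant into a multiply-by-reciprocal sequence (more
     instructions, no divide), while a strictly size-oriented one may
     prefer the single, slower divide or a library call.  The outcome
     depends on the target and compiler version:
       clang -Os -c div3.c && size div3.o
       clang -Oz -c div3.c && size div3.o  */
  unsigned int div3 (unsigned int x)
  {
    return x / 3;
  }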


Re: Overwhelmed by GCC frustration

2017-08-01 Thread David Brown
On 31/07/17 15:25, Georg-Johann Lay wrote:

> This weekend I un-mothballed an old project, just an innocent game on a
> cathode-ray-tube, driven by some AVR µC.  After preparing the software
> so that it compiled with good old v3.4, the results overwhelmed me
> with complete frustration:  Any version from v4.7 up to v8 which I
> fed into the compiler (six major versions to be precise), produced
> code > 25% larger than compiled with v3.4.6, and the code is not only
> bigger, it's all needless bloat that will also slow down the result;
> optimizing for speed might bloat and slow even more.
> 

At the risk of stirring up a hornets' nest, have you tried the AVR
backend of LLVM/clang?  It is still a work in progress, and I haven't
tried it at all myself (my experience with clang is very minimal).

Ultimately, the aim of the AVR gcc users community is to have free,
cross-platform tools that will work with their existing and future code
and generate good AVR object code.  Few users really care if it is gcc
or clang (especially if the clang binaries are named "avr-gcc").  And
ultimately the aim of Microchip, who make and sell the chips themselves,
should be to keep the users happy.

If the structure of llvm/clang is such that it makes more sense for
Microchip to put resources into that project, and when it is good enough
they could move to it as their default toolchain for users, then the
quality of AVR code generation in gcc becomes a non-issue.

(Personally, I like to see friendly competition between llvm/clang and
gcc, as I think it has helped both projects, but there are severe limits
to the development resources that can be put into a minor target like this.)



Re: Overwhelmed by GCC frustration

2017-08-01 Thread David Brown
On 01/08/17 13:08, Eric Gallager wrote:
> On 8/1/17, Richard Biener  wrote:

>>
>> Heh.  I suspect -Os would benefit from a separate compilation pipeline
>> such as -Og.  Nowadays the early optimization pipeline is what you
>> want (mostly simple CSE & jump optimizations, focused on code
>> size improvements).  That doesn't get you any loop optimizations but
>> loop optimizations always have the chance to increase code size
>> or register pressure.
>>
> 
> Maybe in addition to the -Os optimization level, GCC mainline could
> also add the -Oz optimization level like Apple's GCC had, and clang
> still has? Basically -Os is -O2 with additional code size focus,
> whereas -Oz is -O0 with the same code size focus. Adding it to the
> FSF's GCC, too, could help reduce code size even further than -Os
> currently does.
> 

I would not expect that to be good at all.  With no optimisation (-O0),
gcc produces quite poor code - local variables are not put in registers
or "optimised away", there is no strength reduction, etc.  For an
architecture like the AVR with a fair number of registers (32, albeit
8-bit registers) and relatively inefficient stack access, -O0 produces
/terrible/ code.
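
As a rough, hypothetical test case (not from any real project), something
like the following shows the effect; the MCU name is only an example:

  /* Hypothetical size comparison.  Try for instance:
       avr-gcc -O0 -mmcu=atmega328p -c accum.c && avr-size accum.o
       avr-gcc -Os -mmcu=atmega328p -c accum.c && avr-size accum.o
     At -O0 both "sum" and "i" live in the stack frame and are reloaded
     on every access; at -Os they stay in registers.  */
  #include <stdint.h>

  uint16_t accumulate (const uint8_t *buf, uint8_t len)
  {
    uint16_t sum = 0;
    for (uint8_t i = 0; i < len; i++)
      sum += buf[i];
    return sum;
  }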

There is also the body of existing code, projects, practice and
knowledge - all of that says "-Os" is for optimised code with an
emphasis on size, and it is the flag of choice on a large proportion of
embedded projects (not just for the AVR).

The ideal solution is to fix gcc so that -Os gives (close to) minimal
code size, or at least as good as it used to give - while retaining the
benefits of the newer optimisations and features of later gcc versions.

The question is, can this be done in a way that is practical,
maintainable, achievable with the limited resources of the AVR port, and
without detriment to any of the other ports?  As has been noted, the AVR
port is considered a minor port for gcc (though it is vital for the AVR
development community), and other ports have seen a trend of improvement
in code size - gcc can't take changes to improve the AVR if it makes
things worse for MIPS.

Is it possible to get some improvement in AVR generation by enabling or
disabling specific combinations of optimisation in addition to -Os?  Are
there tunables that could be fiddled with to improve matters (either
"--param" options, or in the AVR backend code)?  Can the "-fdump-rtl" or
"-fopt-info" flags be used to get an idea of which passes lead to code
increase?  If these led to improvements, then it should be possible to
better the situation by simply changing the defaults used by the AVR port.

Of course, this sort of analysis would require significant effort - but
not many changes to the gcc code.  Could Microchip (who now own Atmel,
the AVR manufacturer) provide the resources for the work?  It is clearly
in their interest that the AVR port of gcc is as good as it can be -
they ship it as part of their standard development tools.



Re: Overwhelmed by GCC frustration

2017-08-01 Thread Jakub Jelinek
On Tue, Aug 01, 2017 at 07:08:41AM -0400, Eric Gallager wrote:
> > Heh.  I suspect -Os would benefit from a separate compilation pipeline
> > such as -Og.  Nowadays the early optimization pipeline is what you
> > want (mostly simple CSE & jump optimizations, focused on code
> > size improvements).  That doesn't get you any loop optimizations but
> > loop optimizations always have the chance to increase code size
> > or register pressure.
> >
> 
> Maybe in addition to the -Os optimization level, GCC mainline could
> also add the -Oz optimization level like Apple's GCC had, and clang
> still has? Basically -Os is -O2 with additional code size focus,
> whereas -Oz is -O0 with the same code size focus. Adding it to the
> FSF's GCC, too, could help reduce code size even further than -Os
> currently does.

No, lack of optimizations certainly doesn't reduce the code size.
For small code, you need lots of optimizations, but preferably code-size
aware ones.  For RTL that is usually easier, because you can often compare
the sizes of the old and new sequences and choose smaller, for GIMPLE
optimizations it is often just a wild guess on what optimizations generally
result in smaller and what optimizations generally result in larger code.
There are too many following passes to know for sure, and finding the right
heuristics is hard.

Jakub


Re: Overwhelmed by GCC frustration

2017-08-01 Thread Eric Gallager
On 8/1/17, Richard Biener  wrote:
> On Mon, Jul 31, 2017 at 7:08 PM, Andrew Haley  wrote:
>> On 31/07/17 17:12, Oleg Endo wrote:
>>> On Mon, 2017-07-31 at 15:25 +0200, Georg-Johann Lay wrote:
>>>> Around 2010, someone who used a code snippet that I published in
>>>> a wiki, reported that the code didn't work and hung in an
>>>> endless loop.  Soon I found out that it was due to some GCC
>>>> problem, and I got interested in fixing the compiler so that
>>>> it worked with my code.
>>>>
>>>> 1 1/2 years later, in 2011, [...]
>>>
>>> I could probably write a similar rant.  This is the life of a
>>> "minority target programmer".  Most development efforts are being
>>> done with primary targets in mind.  And as a result, most changes
>>> are being tested only on such targets.
>>>
>>> To improve the situation, we'd need a lot more target specific tests
>>> which test for those regressions that you have mentioned.  Then of
>>> course somebody has to run all those tests on all those various
>>> targets.  I think that's the biggest problem.  But still, with a
>>> test case at hand, it's much easier to talk to people who have
>>> silently introduced a regression on some "other" targets.  Most of
>>> the time they just don't know.
>>
>> It's a fundamental problem for compilers, in general: every
>> optimization pass wants to be the last one, and (almost?) no-one who
>> writes a pass knows all the details of all the subsequent passes.  The
>> more sophisticated and subtle an optimization, the more possibility
>> there is of messing something up or confusing someone's back end or a
>> later pass.  We've seen this multiple times, with apparently
>> straightforward control flow at the source level turning into a mess
>> of spaghetti in the resulting assembly.  But we know that the
>> optimization makes sense for some kinds of program, or at least that
>> it did at the time the optimization was written.  However, it is
>> inevitable that some programs will be made worse by some
>> optimizations.  We hope that they will be few in number, but it
>> really can't be helped.
>>
>> So what is to be done?  We could abandon the eternal drive for more
>> and more optimizations, back off, and concentrate on simplicity and
>> robustness at the expense of ultimate code quality.  Should we?  It
>> would take courage, and there will be an eternal pressure to improve
>> code.  And, of course, we'd risk someone forking GCC and creating the
>> "superoptimized GCC" project, starving FSF GCC of developers.  That's
>> happened before, so it's not an imaginary risk.
>
> Heh.  I suspect -Os would benefit from a separate compilation pipeline
> such as -Og.  Nowadays the early optimization pipeline is what you
> want (mostly simple CSE & jump optimizations, focused on code
> size improvements).  That doesn't get you any loop optimizations but
> loop optimizations always have the chance to increase code size
> or register pressure.
>

Maybe in addition to the -Os optimization level, GCC mainline could
also add the -Oz optimization level like Apple's GCC had, and clang
still has? Basically -Os is -O2 with additional code size focus,
whereas -Oz is -O0 with the same code size focus. Adding it to the
FSF's GCC, too, could help reduce code size even further than -Os
currently does.

> But yes, targeting an architecture like AVR which is neither primary
> nor secondary (so very low priority) _plus_ being quite special in
> target abilities (it seems to be very easy to mess up things) is hard.
>
> SUSE does have some testers doing (also) code size monitoring, but
> however much data we have, somebody needs to monitor it, bisect further,
> and report regressions deemed worthwhile.  It's hard to
> avoid slow creep -- compile-time and memory use are a similar
> issue here.
>
> Richard.
>
>> --
>> Andrew Haley
>> Java Platform Lead Engineer
>> Red Hat UK Ltd. 
>> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671
>


RE: Overwhelmed by GCC frustration

2017-08-01 Thread Matthew Fortune
Richard Biener  writes:
> On Mon, Jul 31, 2017 at 7:08 PM, Andrew Haley  wrote:
> > On 31/07/17 17:12, Oleg Endo wrote:
> >> On Mon, 2017-07-31 at 15:25 +0200, Georg-Johann Lay wrote:
> >>> Around 2010, someone who used a code snippet that I published in a
> >>> wiki, reported that the code didn't work and hung in an endless
> >>> loop.  Soon I found out that it was due to some GCC problem, and I
> >>> got interested in fixing the compiler so that it worked with my
> >>> code.
> >>>
> >>> 1 1/2 years later, in 2011, [...]
> >>
> >> I could probably write a similar rant.  This is the life of a
> >> "minority target programmer".  Most development efforts are being
> >> done with primary targets in mind.  And as a result, most changes are
> >> being tested only on such targets.
> >>
> >> To improve the situation, we'd need a lot more target specific tests
> >> which test for those regressions that you have mentioned.  Then of
> >> course somebody has to run all those tests on all those various
> >> targets.  I think that's the biggest problem.  But still, with a test
> >> case at hand, it's much easier to talk to people who have silently
> >> introduced a regression on some "other" targets.  Most of the time
> >> they just don't know.
> >
> > It's a fundamental problem for compilers, in general: every
> > optimization pass wants to be the last one, and (almost?) no-one who
> > writes a pass knows all the details of all the subsequent passes.  The
> > more sophisticated and subtle an optimization, the more possibility
> > there is of messing something up or confusing someone's back end or a
> > later pass.  We've seen this multiple times, with apparently
> > straightforward control flow at the source level turning into a mess
> > of spaghetti in the resulting assembly.  But we know that the
> > optimization makes sense for some kinds of program, or at least that
> > it did at the time the optimization was written.  However, it is
> > inevitable that some programs will be made worse by some
> > optimizations.  We hope that they will be few in number, but it really
> > can't be helped.
> >
> > So what is to be done?  We could abandon the eternal drive for more
> > and more optimizations, back off, and concentrate on simplicity and
> > robustness at the expense of ultimate code quality.  Should we?  It
> > would take courage, and there will be an eternal pressure to improve
> > code.  And, of course, we'd risk someone forking GCC and creating the
> > "superoptimized GCC" project, starving FSF GCC of developers.  That's
> > happened before, so it's not an imaginary risk.
> 
> Heh.  I suspect -Os would benefit from a separate compilation pipeline
> such as -Og.  Nowadays the early optimization pipeline is what you want
> (mostly simple CSE & jump optimizations, focused on code size
> improvements).  That doesn't get you any loop optimizations but loop
> optimizations always have the chance to increase code size or register
> pressure.
> 
> But yes, targeting an architecture like AVR which is neither primary nor
> secondary (so very low priority) _plus_ being quite special in target
> abilities (it seems to be very easy to mess up things) is hard.
> 
> SUSE does have some testers doing (also) code size monitoring, but however
> much data we have, somebody needs to monitor it, bisect further, and
> report regressions deemed worthwhile.  It's hard to avoid slow creep --
> compile-time and memory use are a similar issue here.

Towards the end of last year we ran a code size analysis over time for
MIPS GCC (I believe microMIPSR3 to be specific) between Oct 2013 and
Aug 2016 taking every 50th commit if memory serves. I have a whole bunch
of graphs for open source benchmarks that I may be able to share. The
net effect was a significant code size reduction with just a few short
(<2months) regressions. Not all benchmarks ended up at the best ever code
size and some regressions were countered by different optimisations than
the ones that introduced the regression (so the issue wasn't strictly
fixed in all cases). Over this period I would therefore be surprised if
GCC has caused significant code size regressions in general. I don't have
the detailed analysis to hand but a significant code size reduction
happened ~Mar/Apr 2014 but I can't remember why that was. I do remember a
spike when changing to LRA but that settled down (mostly).

Matthew


Re: Overwhelmed by GCC frustration

2017-08-01 Thread Richard Biener
On Mon, Jul 31, 2017 at 7:08 PM, Andrew Haley  wrote:
> On 31/07/17 17:12, Oleg Endo wrote:
>> On Mon, 2017-07-31 at 15:25 +0200, Georg-Johann Lay wrote:
>>> Around 2010, someone who used a code snippet that I published in
>>> a wiki, reported that the code didn't work and hung in an
>>> endless loop.  Soon I found out that it was due to some GCC
>>> problem, and I got interested in fixing the compiler so that
>>> it worked with my code.
>>>
>>> 1 1/2 years later, in 2011, [...]
>>
>> I could probably write a similar rant.  This is the life of a
>> "minority target programmer".  Most development efforts are being
>> done with primary targets in mind.  And as a result, most changes
>> are being tested only on such targets.
>>
>> To improve the situation, we'd need a lot more target specific tests
>> which test for those regressions that you have mentioned.  Then of
>> course somebody has to run all those tests on all those various
>> targets.  I think that's the biggest problem.  But still, with a
>> test case at hand, it's much easier to talk to people who have
>> silently introduced a regression on some "other" targets.  Most of
>> the time they just don't know.
>
> It's a fundamental problem for compilers, in general: every
> optimization pass wants to be the last one, and (almost?) no-one who
> writes a pass knows all the details of all the subsequent passes.  The
> more sophisticated and subtle an optimization, the more possibility
> there is of messing something up or confusing someone's back end or a
> later pass.  We've seen this multiple times, with apparently
> straightforward control flow at the source level turning into a mess
> of spaghetti in the resulting assembly.  But we know that the
> optimization makes sense for some kinds of program, or at least that
> it did at the time the optimization was written.  However, it is
> inevitable that some programs will be made worse by some
> optimizations.  We hope that they will be few in number, but it
> really can't be helped.
>
> So what is to be done?  We could abandon the eternal drive for more
> and more optimizations, back off, and concentrate on simplicity and
> robustness at the expense of ultimate code quality.  Should we?  It
> would take courage, and there will be an eternal pressure to improve
> code.  And, of course, we'd risk someone forking GCC and creating the
> "superoptimized GCC" project, starving FSF GCC of developers.  That's
> happened before, so it's not an imaginary risk.

Heh.  I suspect -Os would benefit from a separate compilation pipeline
such as -Og.  Nowadays the early optimization pipeline is what you
want (mostly simple CSE & jump optimizations, focused on code
size improvements).  That doesn't get you any loop optimizations but
loop optimizations always have the chance to increase code size
or register pressure.

But yes, targeting an architecture like AVR which is neither primary
nor secondary (so very low priority) _plus_ being quite special in
target abilities (it seems to be very easy to mess up things) is hard.

SUSE does have some testers doing (also) code size monitoring, but
however much data we have, somebody needs to monitor it, bisect further,
and report regressions deemed worthwhile.  It's hard to
avoid slow creep -- compile-time and memory use are a similar
issue here.

Richard.

> --
> Andrew Haley
> Java Platform Lead Engineer
> Red Hat UK Ltd. 
> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671


Re: Overwhelmed by GCC frustration

2017-07-31 Thread Segher Boessenkool
On Mon, Jul 31, 2017 at 11:54:12AM -0600, Jeff Law wrote:
> On 07/31/2017 11:23 AM, Segher Boessenkool wrote:
> > On Tue, Aug 01, 2017 at 01:12:41AM +0900, Oleg Endo wrote:
> >> I could probably write a similar rant.  This is the life of a "minority
> >> target programmer".  Most development efforts are being done with
> >> primary targets in mind.  And as a result, most changes are being
> >> tested only on such targets.
> > 
> > Also, many changes require retuning of all target backends.  This never
> > happens for those backends that aren't very actively maintained.
> Well, I'd claim it's time to jettison some of those backends :-)

If targets no longer build (*), or are no longer useful for anything,
or are a drag on GCC development itself, then sure.

> I'd
> sleep easier at night if we deprecated all the cc0 targets for gcc-8,
> then removed them (if they weren't converted) by gcc-9.

I second that motion.

> Once cc0 is out of the way, then I'd push for doing the same for non-LRA
> targets.

That is a bit aggressive perhaps; we'll lose lots of targets that way,
and it is important for GCC development itself to have a wide variety
of targets.

> Yes, it's a bit draconian :-)  But if someone wants an m68k compiler (to
> pick on one I maintain that wouldn't survive), they can always use an
> older version of GCC or do the conversion to bring it up to modern
> standards.  Realistically I'll never do it for the m68k, it's just not
> important enough relatively to the other stuff on my plate.

Nod.


Segher


(*) I tried to build arc-elf-gcc today.  Turns out it won't build except
with really new binutils (but that at least worked).


Re: Overwhelmed by GCC frustration

2017-07-31 Thread Jeff Law
On 07/31/2017 11:23 AM, Segher Boessenkool wrote:
> On Tue, Aug 01, 2017 at 01:12:41AM +0900, Oleg Endo wrote:
>> I could probably write a similar rant.  This is the life of a "minority
>> target programmer".  Most development efforts are being done with
>> primary targets in mind.  And as a result, most changes are being
>> tested only on such targets.
> 
> Also, many changes require retuning of all target backends.  This never
> happens for those backends that aren't very actively maintained.
Well, I'd claim it's time to jettison some of those backends :-)  I'd
sleep easier at night if we deprecated all the cc0 targets for gcc-8,
then removed them (if they weren't converted) by gcc-9.

Once cc0 is out of the way, then I'd push for doing the same for non-LRA
targets.

Yes, it's a bit draconian :-)  But if someone wants an m68k compiler (to
pick on one I maintain that wouldn't survive), they can always use an
older version of GCC or do the conversion to bring it up to modern
standards.  Realistically I'll never do it for the m68k, it's just not
important enough relatively to the other stuff on my plate.


Jeff


Re: Overwhelmed by GCC frustration

2017-07-31 Thread Jeff Law
On 07/31/2017 10:49 AM, Joel Sherrill wrote:
> 
> 
> On 7/31/2017 11:12 AM, Oleg Endo wrote:
>> On Mon, 2017-07-31 at 15:25 +0200, Georg-Johann Lay wrote:
>>> Around 2010, someone who used a code snippet that I published in
>>> a wiki, reported that the code didn't work and hung in an
>>> endless loop.  Soon I found out that it was due to some GCC
>>> problem, and I got interested in fixing the compiler so that
>>> it worked with my code.
>>>
>>> 1 1/2 years later, in 2011, [...]
>>
>> I could probably write a similar rant.  This is the life of a "minority
>> target programmer".  Most development efforts are being done with
>> primary targets in mind.  And as a result, most changes are being
>> tested only on such targets.
>>
>> To improve the situation, we'd need a lot more target specific tests
>> which test for those regressions that you have mentioned.  Then of
>> course somebody has to run all those tests on all those various
>> targets.  I think that's the biggest problem.  But still, with a test
>> case at hand, it's much easier to talk to people who have silently
>> introduced a regression on some "other" targets.  Most of the time they
>> just don't know.
> 
> Long ago, there was a code size regression tester for at least
> ARM. Is that still around?
> 
> RTEMS also has a number of "minority targets" and we have seen
> breakages take a long time to get fixed. Most of our targets
> use gcc 7.1.0 but two have to use 4.9.x, one uses 4.8.3, and
> one is at 6.3.0.
One of the things that I think would improve this situation would be a
public build-bot for RTEMS targets.

I've actually got some bits along this path (but not for RTEMS).
Essentially config-list.mk is useful for verifying that our various
targets compile/link.

But it doesn't verify that the resulting compiler has any functionality
at all.   But we can use newlib, glibc, rtems, etc to do that next level
of testing.

So my buildbot will first build binutils, then gcc, then newlib/glibc.
I briefly looked at building rtems as part of that process, but I didn't
find the instructions particularly clear/useful.  Ultimately I
determined I was getting good enough coverage at this step with
newlib/glibc.

Anyway, that gives us a basic level of testing the code generator.  It
doesn't run the testsuite or anything like that, but it has been useful
in catching problems earlier.

The second problem is that once you build the buildbot, someone has to
actually monitor the results, and either fix the problems or farm them
out to the appropriate engineer :-)

jeff


Re: Overwhelmed by GCC frustration

2017-07-31 Thread Segher Boessenkool
On Tue, Aug 01, 2017 at 01:12:41AM +0900, Oleg Endo wrote:
> I could probably write a similar rant.  This is the life of a "minority
> target programmer".  Most development efforts are being done with
> primary targets in mind.  And as a result, most changes are being
> tested only on such targets.

Also, many changes require retuning of all target backends.  This never
happens for those backends that aren't very actively maintained.


Segher


Re: Overwhelmed by GCC frustration

2017-07-31 Thread Andrew Haley
On 31/07/17 17:12, Oleg Endo wrote:
> On Mon, 2017-07-31 at 15:25 +0200, Georg-Johann Lay wrote:
>> Around 2010, someone who used a code snippet that I published in
>> a wiki, reported that the code didn't work and hung in an
>> endless loop.  Soon I found out that it was due to some GCC
>> problem, and I got interested in fixing the compiler so that
>> it worked with my code.
>>
>> 1 1/2 years later, in 2011, [...]
> 
> I could probably write a similar rant.  This is the life of a
> "minority target programmer".  Most development efforts are being
> done with primary targets in mind.  And as a result, most changes
> are being tested only on such targets.
> 
> To improve the situation, we'd need a lot more target specific tests
> which test for those regressions that you have mentioned.  Then of
> course somebody has to run all those tests on all those various
> targets.  I think that's the biggest problem.  But still, with a
> test case at hand, it's much easier to talk to people who have
> silently introduced a regression on some "other" targets.  Most of
> the time they just don't know.

It's a fundamental problem for compilers, in general: every
optimization pass wants to be the last one, and (almost?) no-one who
writes a pass knows all the details of all the subsequent passes.  The
more sophisticated and subtle an optimization, the more possibility
there is of messing something up or confusing someone's back end or a
later pass.  We've seen this multiple times, with apparently
straightforward control flow at the source level turning into a mess
of spaghetti in the resulting assembly.  But we know that the
optimization makes sense for some kinds of program, or at least that
it did at the time the optimization was written.  However, it is
inevitable that some programs will be made worse by some
optimizations.  We hope that they will be few in number, but it
really can't be helped.

So what is to be done?  We could abandon the eternal drive for more
and more optimizations, back off, and concentrate on simplicity and
robustness at the expense of ultimate code quality.  Should we?  It
would take courage, and there will be an eternal pressure to improve
code.  And, of course, we'd risk someone forking GCC and creating the
"superoptimized GCC" project, starving FSF GCC of developers.  That's
happened before, so it's not an imaginary risk.

-- 
Andrew Haley
Java Platform Lead Engineer
Red Hat UK Ltd. 
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671


Re: Overwhelmed by GCC frustration

2017-07-31 Thread Joseph Myers
On Tue, 1 Aug 2017, Oleg Endo wrote:

> To improve the situation, we'd need a lot more target specific tests
> which test for those regressions that you have mentioned.  Then of
> course somebody has to run all those tests on all those various
> targets.  I think that's the biggest problem.  But still, with a test

Code size is something where you could in principle have a regression 
tester that runs for all target architectures without needing target 
hardware (though you still need some way to decide which regressions, 
whether sudden or gradual, are significant and which are noise, and 
someone needs to keep monitoring the results and reporting regressions).  
Much like the compilation parts of the GCC testsuites (where identifying 
regressions would be rather easier).

The compilation-only regression testers I set up for glibc are very 
helpful for ensuring it stays building for minority architectures, albeit 
with existing compiler regressions for ColdFire and SH that predate 
setting up the testers, and with execution test results still being rather 
a mess for less-tested configurations (and anyone can do the compilation 
tests themselves with the build-many-glibcs.py script, though it takes a 
while without a many-cores system to run it on).

-- 
Joseph S. Myers
jos...@codesourcery.com

Re: Overwhelmed by GCC frustration

2017-07-31 Thread Joel Sherrill



On 7/31/2017 11:12 AM, Oleg Endo wrote:
> On Mon, 2017-07-31 at 15:25 +0200, Georg-Johann Lay wrote:
>> Around 2010, someone who used a code snippet that I published in
>> a wiki, reported that the code didn't work and hung in an
>> endless loop.  Soon I found out that it was due to some GCC
>> problem, and I got interested in fixing the compiler so that
>> it worked with my code.
>>
>> 1 1/2 years later, in 2011, [...]
>
> I could probably write a similar rant.  This is the life of a "minority
> target programmer".  Most development efforts are being done with
> primary targets in mind.  And as a result, most changes are being
> tested only on such targets.
>
> To improve the situation, we'd need a lot more target specific tests
> which test for those regressions that you have mentioned.  Then of
> course somebody has to run all those tests on all those various
> targets.  I think that's the biggest problem.  But still, with a test
> case at hand, it's much easier to talk to people who have silently
> introduced a regression on some "other" targets.  Most of the time they
> just don't know.

Long ago, there was a code size regression tester for at least
ARM. Is that still around?

RTEMS also has a number of "minority targets" and we have seen
breakages take a long time to get fixed. Most of our targets
use gcc 7.1.0 but two have to use 4.9.x, one uses 4.8.3, and
one is at 6.3.0.

> Cheers,
> Oleg




--joel


Re: Overwhelmed by GCC frustration

2017-07-31 Thread Oleg Endo
On Mon, 2017-07-31 at 15:25 +0200, Georg-Johann Lay wrote:
> Around 2010, someone who used a code snippet that I published in
> a wiki, reported that the code didn't work and hung in an
> endless loop.  Soon I found out that it was due to some GCC
> problem, and I got interested in fixing the compiler so that
> it worked with my code.
> 
> 1 1/2 years later, in 2011, [...]

I could probably write a similar rant.  This is the life of a "minority
target programmer".  Most development efforts are being done with
primary targets in mind.  And as a result, most changes are being
tested only on such targets.

To improve the situation, we'd need a lot more target specific tests
which test for those regressions that you have mentioned.  Then of
course somebody has to run all those tests on all those various
targets.  I think that's the biggest problem.  But still, with a test
case at hand, it's much easier to talk to people who have silently
introduced a regression on some "other" targets.  Most of the time they
just don't know.

Cheers,
Oleg