https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92281

--- Comment #6 from Segher Boessenkool <segher at gcc dot gnu.org> ---
(In reply to Richard Earnshaw from comment #5)
> What I've shown is equivalent to (minus (minus (A) (B)) (C)), which is what
> combine produces today.  Are you saying that the documentation disagrees on
> the overall shape of this and the compilers output right now?

I am saying that a lot of what combine forms is not canonical form.  There
simply is no canonical form for many expressions.  Every combine attempt
results in exactly one form; that is a very important feature, as well as one
of the main weaknesses of combine.  But that one form is *not* canonical.

> Minus isn't commutative, but in a 3-way version (A - B - C), the order of B
> and C does not matter.  ... - B - C is the same as ... - C - B.  So you can
> re-order the nesting to produce a canonical form.

Sure.  And where there isn't a canonical form, you can reorder it whatever
way you want.  That is why there *are* canonical forms: to reduce the number
of forms everything has to deal with.  But this does not always help, and
sometimes it works *against* this goal.
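
To make that concrete (the RTL below is schematic, not taken from this PR):
for A - B - C there are two equally valid nestings,

    (minus (minus A B) C)    ;; ... - B - C
    (minus (minus A C) B)    ;; ... - C - B

and nothing in the documented canonicalisation rules prefers one over the
other, so a target that has a pattern for only one of the two shapes will
match or not depending on which one combine happens to build.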

> > > > What targets would it break, and how?
> > > 
> > > Hard to tell, until we try it.  Mostly the 'breakage' would be that some
> > > combine patterns might no longer match if the target only had one ordering
> > > and it were not canonical (leading to some missed optimizations).  On
> > > targets that have both orderings, some patterns might become redundant and
> > > never match unless directly generated by the back-end.
> > 
> > The breakage will be that many targets optimise worse than they did before.
> > And this is non-obvious to detect, usually.
> 
> At present it's entirely random, since there's no attempt to create order. 

It usually preserves ordering, simply by not changing things that do not need
any change.  But sometimes things are changed for no apparent reason.

> Any matching that does occur is more by good luck (or overkill in providing
> all the redundant variant forms).

Yes, but any change that degrades code quality is still a regression, whether
those targets just got lucky or it was by design.

> > A lot of what combine does is *not* canonicalisation.  But combine comes up
> > with only one result for every attempted combination, making that a kind-of
> > de-facto canonicalisation.
> > 
> > And yes, that is what I asked: in both cases it combined the same insn with
> > a simple pseudo move, in both cases on the RHS in that insn.  And it came
> > up with different results.
> > 
> > This may be unavoidable, or combine does something weird, or the RTL that
> > combine started with was non-canonical or unexpected in some other way, etc.
> > 
> > So I'd like to know where the difference was introduced.  Was it in combine
> > at all, to start with?  It can be in simplify-rtx as well for example.
> 
> Combine is the prime user of simplify-rtx - perhaps I'm conflating the two,
> but this is, in part, combine's problem because it's during the combine pass
> that having matchers for all these variants becomes most important.

I am not asking to shift the blame.  I am asking to start solving the problem.
To do that we need to know where the problem *is*, whether there actually is a
problem, etc.  We just need more information.

simplify-rtx is very different from combine.  Everything simplify-rtx does is
a simplification(*).  Many things combine does are *not*.  That is one of the
reasons combine still has its own "simplifier": much of what that code does is
not simplification at all.  Some of what it does is good and useful.  Some of
it is questionable.  Some of it is actively bad.

(*) There are a few cases where simplify-rtx does a non-simplification.  I try
to weed those out.
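
To illustrate the distinction (these examples are mine, schematic rather than
taken from this PR):

    (plus x (const_int 0))  ->  x
        ;; a genuine simplification: the result is strictly simpler

    (and (lshiftrt x (const_int 8)) (const_int 255))
        ->  (zero_extract x (const_int 8) (const_int 8))
        ;; the kind of reshaping combine does to try to match a pattern;
        ;; the result is not simpler, and on some targets it matches worse

simplify-rtx should only ever do the first kind; combine does both.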
