https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #38 from Segher Boessenkool <segher at gcc dot gnu.org> ---
(In reply to Richard Biener from comment #36)
> > No, it definitely should be done.  As I showed back then, it costs less
> > than 1% extra compile time on *any platform* on average, and it reduced
> > code size by 1%-2% everywhere.
> > 
> > It also cannot get stuck, any combination is attempted only once, any
> > combination that succeeds eats up a loglink.  It is finite, (almost)
> > linear in fact.
> 
> So the slowness for the testcase comes from failed attempts.

Of course.  Most attempts do not succeed: there aren't instructions for most
"random" combinations of instructions feeding each other.  But combine blindly
tries everything, that is its strength!  It ends up finding many more things
than any recognition automaton does.

> > Something that is the real complaint here: it seems we do not GC often
> > enough, only after processing a BB (or EBB)?  That adds up for artificial
> > code like this, sure.
> 
> For memory use if you know combine doesn't have "dangling" links to GC memory
> you can call ggc_collect at any point you like.  Or, when you create
> throw-away RTL, ggc_free it explicitly (yeah, that only frees the
> "toplevel").

A lot of it *is* toplevel (well, completely disconnected RTX), just
temporaries, things we can just throw away.  At every try_combine call even,
kinda.  There might be some more RTX that needs some protection.  We'll see.
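
To make that concrete, here is a minimal sketch of what the explicit route
could look like (the helper and the call site are hypothetical, not actual
combine code); ggc_free hands back only the toplevel object, and ggc_collect
is safe at any point where combine holds no dangling pointers into GC memory:

  /* Hypothetical sketch only: hand an obviously disconnected scratch rtx
     back to the GC right away instead of waiting for the per-(E)BB
     collection point.  */
  static void
  discard_scratch_rtx (rtx x)
  {
    if (x != NULL_RTX)
      ggc_free (x);   /* frees only the toplevel object */
  }

  /* And wherever no dangling links into GC memory exist, collecting more
     often than once per (E)BB is also an option:  */
  ggc_collect ();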

> > And the "param to give an upper limit to how many combination attempts
> > are done (per BB)" offer is on the table still, too.  I don't think it
> > would ever be useful (if you want your code to compile faster just write
> > better code!), but :-)
> 
> Well, while you say the number of successful combinations is linear, the
> number of combine attempts apparently isn't

It is, and that is pretty easy to show even.  With retries it stays linear, but
with a hefty constant.  And on some targets (with more than three inputs for
some instructions, say) it can be a big constant anyway.

But linear is linear, and stays linear; for way too big code it is just as
acceptable as for "normal" code.  Just slow.  If you don't want the compiler to
take a long time compiling your way too big code, use -O0, or preferably do not
write insane code in the first place :-)


> (well, of course, if we ever
> combine from multi-use defs).  So yeah, a param might be useful here but
> instead of some constant limit on the number of combine attempts per
> function or per BB it might make sense to instead limit it on the number
> of DEFs?

We still use loglinks in combine.  These are nice for proving that things stay
linear, even (every time combine succeeds, a loglink is used up).

The number of loglinks and the number of insns (insns combine can do anything
with) differ by only a small constant factor.
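
As a toy model of that counting argument (a standalone sketch with made-up
numbers, nothing to do with combine's real data structures): every def hands
out one loglink, to its first use only, so links, and with them attempts,
grow linearly in the number of insns, and every success eats a link:

  #include <cstdio>
  #include <vector>

  int main ()
  {
    const int n_insns = 100000;       /* size of the toy "function" */
    const int uses_per_insn = 2;      /* bounded by the ISA, a constant */
    long links = 0, attempts = 0, successes = 0;

    /* first_use_taken[d] is true once the def in insn d has handed out
       its single loglink (to its first use).  */
    std::vector<bool> first_use_taken (n_insns, false);

    for (int i = 1; i < n_insns; i++)
      for (int u = 0; u < uses_per_insn; u++)
        {
          int def = i - 1 - u;                 /* an earlier def this insn uses */
          if (def < 0 || first_use_taken[def])
            continue;                          /* only the very first use links */
          first_use_taken[def] = true;
          links++;
          attempts++;      /* one try per link (times a bounded retry factor) */
          if (i % 3 == 0)
            successes++;   /* pretend some tries succeed; each one uses up
                              the link */
        }

    printf ("insns %d  links %ld  attempts %ld  successes %ld\n",
            n_insns, links, attempts, successes);
    return 0;
  }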

> I understand we work on the uses

We work on the loglinks, each a def-use pair if you want.

> so it'll be a bit hard to
> apply this in a way to, say, combine a DEF only with the N nearest uses
> (but not any ones farther out),

There is only a loglink from a def to the very next use.  If that combines, the
insn that does the def is retained as well, if there is any other use.  But
there never is a combination of a def with a later use tried, if the earliest
use does not combine.
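
A tiny toy illustration of that, with made-up insn and register numbers and a
plain map standing in for the real loglink lists:

  #include <cassert>
  #include <map>

  int main ()
  {
    /* toy: insn uid -> uid of the def insn it carries a loglink to  */
    std::map<int, int> log_link;

    /* insn 10:  r100 = r1 + r2     (def of r100)
       insn 11:  r101 = r100 * 4    (first use of r100, gets the link)
       insn 12:  r102 = r100 - r3   (later use, gets no link)  */
    log_link[11] = 10;

    assert (log_link.count (11) == 1);  /* combine tries 11 with 10 */
    assert (log_link.count (12) == 0);  /* 12 with 10 is never tried,
                                           even if 11 with 10 failed */
    return 0;
  }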

> and maintaining such a count per DEF would
> cost.  So more practical might be to limit the number of attempts to combine
> into an (unchanged?) insn?
> 
> Basically I would hope with a hard limit in place we'd not stop after the
> first half of a BB, leaving trivial combinations in the second half
> unhandled, but instead somehow throttle the "expensive" cases?

Ideally we'll not do *any* artificial limitations.  But GCC just throws in the
towel in other cases as well, say, for too big RA problems.  You do not get as
high quality code as wanted, but at least you get something compiled in an
acceptable timeframe :-)
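
For what it's worth, the param offered above would amount to nothing fancier
than a counter in the per-BB walk.  A purely hypothetical sketch (the param
name and the variable are invented, this is not a proposed patch):

  /* Hypothetical: reset at the start of every basic block.  */
  static int attempts_this_bb;

  /* ...in the loop that walks the insns of the block:  */
  if (++attempts_this_bb > param_max_combine_attempts_per_bb)  /* made up */
    break;  /* give up on the rest of the block: worse code, but bounded
               compile time, much like RA already accepts for too big
               problems */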
