[Bug rtl-optimization/101523] Huge number of combine attempts

2024-05-06 Thread rguenth at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #67 from Richard Biener  ---
(In reply to Segher Boessenkool from comment #66)
> (In reply to rguent...@suse.de from comment #64)
> > As promised I'm going to revert the revert after 14.1 is released 
> > (hopefully tomorrow).
> 
> Thank you!  beer++
> 
> > As for distros I have decided to include my
> > hack posted in 
> > https://gcc.gnu.org/pipermail/gcc-patches/2024-April/648725.html
> > for SUSE based distros in GCC 13 and 14 as that seems to improve
> > the problematical memory uses in our build farm.
> 
> I think this patch may well show some actual regressions :-(  We'll see.

I'm probably not going to notice - at least I think it should be fine by
design, but as we see combine doesn't adhere to its design, so my mileage
may vary ;)  But yeah, I didn't do any extensive before/after code
comparisons (there should be no difference - fingers crossed ;))

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-05-06 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #66 from Segher Boessenkool  ---
(In reply to rguent...@suse.de from comment #64)
> As promised I'm going to revert the revert after 14.1 is released 
> (hopefully tomorrow).

Thank you!  beer++

> As for distros I have decided to include my
> hack posted in 
> https://gcc.gnu.org/pipermail/gcc-patches/2024-April/648725.html
> for SUSE based distros in GCC 13 and 14 as that seems to improve
> the problematical memory uses in our build farm.

I think this patch may well show some actual regressions :-(  We'll see.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-05-06 Thread segher at kernel dot crashing.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #65 from Segher Boessenkool  ---
On Sat, May 04, 2024 at 01:14:18PM +, sarah.kriesch at opensuse dot org
wrote:
Do not reply to a PR comment in private mail.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-05-06 Thread rguenther at suse dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #64 from rguenther at suse dot de  ---
On Sat, 4 May 2024, segher at gcc dot gnu.org wrote:

> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523
> 
> --- Comment #61 from Segher Boessenkool  ---
> We used to do the wrong thing in combine.  Now that my fix was reverted, we
> still do.  This should be undone soonish,

As promised I'm going to revert the revert after 14.1 is released 
(hopefully tomorrow).  As for distros I have decided to include my
hack posted in 
https://gcc.gnu.org/pipermail/gcc-patches/2024-April/648725.html
for SUSE based distros in GCC 13 and 14 as that seems to improve
the problematical memory uses in our build farm.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-05-04 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #63 from Segher Boessenkool  ---
(In reply to Sarah Julia Kriesch from comment #62)
> (In reply to Segher Boessenkool from comment #61)
> > (In reply to Sarah Julia Kriesch from comment #60)
> > > I have to agree with Richard. This problem has been serious for a long 
> > > time
> > > but has been ignored by IBM based on distribution choices.
> > 
> > What?  What does IBM have to do with this?  Yes, they are my employer, but
> > what I decide is best for combine to do is not influenced by them *at all*
> > (except some times they want me to spend time doing paid work, distracting
> > me from things that really matter, like combine!)
> > 
> Then, tell other reasons why my requests in the openSUSE bug report have
> been rejected in the past, and this bug report has been open for 3 years.
> Perhaps it is helpful to know that IBM has fixed memory issues in PostgreSQL
> (for openSUSE/upstream) with higher quality via my request with the support
> for Red Hat (and faster).

Once again, I have no idea what you are talking about.  It sounds like some
conspiracy theory?  Exciting!

I really have no idea what you are talking about.  I recognise some of the
words, but not enough to give me a handle on what you are on about.

> > > Anyway, we want to live within the open source community without any Linux
> > > distribution priorities (especially in upstream projects like here).
> > 
> > No idea what that means either.
> > 
> There is a reason for founding the Linux Distributions Working Group at the
> Open Mainframe Project (equality for all Linux Distributions on s390x).
> SUSE, Red Hat and Canonical have been supporting this idea also (especially
> based on my own work experience at IBM and the priorities inside).

And here I don't have any context either.

> > > Segher, can you specify the failed test cases? Then, it should be possible
> > > to reproduce and improve that all. In such a collaborative way, we can 
> > > also
> > > achieve a solution.
> > 
> > What failed test cases?  You completely lost me.
> > 
> This one:
> (In reply to Segher Boessenkool from comment #57)
> > (In reply to Richard Biener from comment #56)
> > PR101523 is a very serious problem, way way way more "P1" than any of the
> > "my target was inconvenienced by some bad testcases failing now" "P1"s there
> > are now.  Please undo this!

They are in this PR.  "See Also", top right corner in the headings.

> (In reply to Segher Boessenkool from comment #61)
> > We used to do the wrong thing in combine.  Now that my fix was reverted, we
> > still do.  This should be undone soonish, so that I can commit an actual
> > UNCSE
> > implementation, which fixes all "regressions" (quotes, because they are 
> > not!)
> > caused by my previous patch, and does a lot more too.  It also will allow us
> > to remove a bunch of other code from combine, speeding up things a lot more
> > (things that keep a copy of a set if the dest is used more than once).  
> > There
> > has been talk of doing an UNCSE for over twenty years now, so annoying me
> > enough to get this done is a good result of this whole thing :-)
> Your fixes should also work with upstream code and the used gcc versions in
> our/all Linux distributions. I recommend applying tests and merging your
> fixes to at least one gcc version.

Lol.  No.  Distributions have to sort out their own problems.  I don't have
a copy of an old version of most distros even; I haven't *heard* about the
*existence* of most distros!

I don't use a Linux distro on any of my own machines.  And I care about some
other OSes at least as much, btw.  And not just because my employer cares about
some of those.

> If you want to watch something about our reasons for creating a
> collaboration between Linux distributions (and upstream projects), you
> should watch my first presentation "Collaboration instead of Competition":
> https://av.tib.eu/media/57010
> 
> Hint: The IBM statement came from my former IBM Manager (now your CPO).

CPO?  What is a CPO?  I don't think I have any?  I do have an R2 somewhere,
does that help?

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-05-04 Thread sarah.kriesch at opensuse dot org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #62 from Sarah Julia Kriesch  ---
(In reply to Segher Boessenkool from comment #61)
> (In reply to Sarah Julia Kriesch from comment #60)
> > I have to agree with Richard. This problem has been serious for a long time
> > but has been ignored by IBM based on distribution choices.
> 
> What?  What does IBM have to do with this?  Yes, they are my employer, but
> what I decide is best for combine to do is not influenced by them *at all*
> (except some times they want me to spend time doing paid work, distracting
> me from things that really matter, like combine!)
> 
Then, tell other reasons why my requests in the openSUSE bug report have been
rejected in the past, and this bug report has been open for 3 years.
Perhaps it is helpful to know that IBM has fixed memory issues in PostgreSQL
(for openSUSE/upstream) with higher quality via my request with the support for
Red Hat (and faster).

> > Anyway, we want to live within the open source community without any Linux
> > distribution priorities (especially in upstream projects like here).
> 
> No idea what that means either.
> 
There is a reason for founding the Linux Distributions Working Group at the
Open Mainframe Project (equality for all Linux Distributions on s390x).
SUSE, Red Hat and Canonical have been supporting this idea also (especially
based on my own work experience at IBM and the priorities inside).

> > Segher, can you specify the failed test cases? Then, it should be possible
> > to reproduce and improve that all. In such a collaborative way, we can also
> > achieve a solution.
> 
> What failed test cases?  You completely lost me.
> 
This one:
(In reply to Segher Boessenkool from comment #57)
> (In reply to Richard Biener from comment #56)
> PR101523 is a very serious problem, way way way more "P1" than any of the
> "my target was inconvenienced by some bad testcases failing now" "P1"s there
> are now.  Please undo this!

(In reply to Segher Boessenkool from comment #61)
> We used to do the wrong thing in combine.  Now that my fix was reverted, we
> still do.  This should be undone soonish, so that I can commit an actual
> UNCSE
> implementation, which fixes all "regressions" (quotes, because they are not!)
> caused by my previous patch, and does a lot more too.  It also will allow us
> to remove a bunch of other code from combine, speeding up things a lot more
> (things that keep a copy of a set if the dest is used more than once).  There
> has been talk of doing an UNCSE for over twenty years now, so annoying me
> enough to get this done is a good result of this whole thing :-)
Your fixes should also work with upstream code and the used gcc versions in
our/all Linux distributions. I recommend applying tests and merging your fixes
to at least one gcc version.


If you want to watch something about our reasons for creating a collaboration
between Linux distributions (and upstream projects), you should watch my first
presentation "Collaboration instead of Competition":
https://av.tib.eu/media/57010

Hint: The IBM statement came from my former IBM Manager (now your CPO).

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-05-04 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #61 from Segher Boessenkool  ---
(In reply to Sarah Julia Kriesch from comment #60)
> I have to agree with Richard. This problem has been serious for a long time
> but has been ignored by IBM based on distribution choices.

What?  What does IBM have to do with this?  Yes, they are my employer, but
what I decide is best for combine to do is not influenced by them *at all*
(except some times they want me to spend time doing paid work, distracting
me from things that really matter, like combine!)

> Anyway, we want to live within the open source community without any Linux
> distribution priorities (especially in upstream projects like here).

No idea what that means either.

> Segher, can you specify the failed test cases? Then, it should be possible
> to reproduce and improve that all. In such a collaborative way, we can also
> achieve a solution.

What failed test cases?  You completely lost me.

We used to do the wrong thing in combine.  Now that my fix was reverted, we
still do.  This should be undone soonish, so that I can commit an actual UNCSE
implementation, which fixes all "regressions" (quotes, because they are not!)
caused by my previous patch, and does a lot more too.  It also will allow us
to remove a bunch of other code from combine, speeding up things a lot more
(things that keep a copy of a set if the dest is used more than once).  There
has been talk of doing an UNCSE for over twenty years now, so annoying me
enough to get this done is a good result of this whole thing :-)

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-04-10 Thread sarah.kriesch at opensuse dot org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #60 from Sarah Julia Kriesch  ---
I have to agree with Richard. This problem has been serious for a long time but
has been ignored by IBM based on distribution choices.

Anyway, we want to live within the open source community without any Linux
distribution priorities (especially in upstream projects like here).

Segher, can you specify the failed test cases? Then, it should be possible to
reproduce and improve that all. In such a collaborative way, we can also
achieve a solution.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-04-10 Thread rguenth at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #59 from Richard Biener  ---
(In reply to Segher Boessenkool from comment #57)
> (In reply to Richard Biener from comment #56)
> > The fix was reverted but will be re-instantiated for GCC 15 by me.
> 
> And I still protest.
> 
> PR101523 is a very serious problem, way way way more "P1" than any of the
> "my target was inconvenienced by some bad testcases failing now" "P1"s there
> are now.  Please undo this!

It was a very serious problem in 2021, too.  But since we shipped with that
very serious problem, it is by definition not a ship-stopper.  I'll also
point out that your previous assessment of this bug was that it was more of
an obscure corner case.

If there's a solution that avoids the code generation regressions, it can
be considered for backporting to GCC 14.2 or even earlier branches.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-04-10 Thread jakub at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #58 from Jakub Jelinek  ---
(In reply to Segher Boessenkool from comment #57)
> (In reply to Richard Biener from comment #56)
> > The fix was reverted but will be re-instantiated for GCC 15 by me.
> 
> And I still protest.
> 
> PR101523 is a very serious problem, way way way more "P1" than any of the
> "my target was inconvenienced by some bad testcases failing now" "P1"s there
> are now.  Please undo this!

It isn't just the case where some unimportant testcase would need to be
xfailed; there are 5% slowdowns on SPEC and similar bugs as well.
Yes, PR101523 is important to get fixed, but we've lived with it for years and
don't have time in the GCC 14 cycle to deal with the fallout from the change,
of which there is clearly a lot.  If the fix had been done in stage1, there
could have been time to deal with that, but we want to release in 2 weeks or so.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-04-10 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #57 from Segher Boessenkool  ---
(In reply to Richard Biener from comment #56)
> The fix was reverted but will be re-instantiated for GCC 15 by me.

And I still protest.

PR101523 is a very serious problem, way way way more "P1" than any of the
"my target was inconvenienced by some bad testcases failing now" "P1"s there
are now.  Please undo this!

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-04-10 Thread rguenth at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

Richard Biener  changed:

   What|Removed |Added

 Resolution|FIXED   |---
 Status|RESOLVED|ASSIGNED
   Assignee|segher at gcc dot gnu.org  |rguenth at gcc dot gnu.org

--- Comment #56 from Richard Biener  ---
The fix was reverted but will be re-instantiated for GCC 15 by me.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-04-05 Thread rguenth at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #55 from Richard Biener  ---
(In reply to Segher Boessenkool from comment #54)
> Propose a patch, then?  With justification.  It should also work for 10x
> bigger testcases.

https://gcc.gnu.org/pipermail/gcc-patches/2024-April/648725.html

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-04-05 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #54 from Segher Boessenkool  ---
Propose a patch, then?  With justification.  It should also work for 10x
bigger testcases.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-04-03 Thread rguenth at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #53 from Richard Biener  ---
So just to recap, with reverting the change and instead doing

diff --git a/gcc/combine.cc b/gcc/combine.cc
index a4479f8d836..ff25752cac4 100644
--- a/gcc/combine.cc
+++ b/gcc/combine.cc
@@ -4186,6 +4186,10 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn *i1, rtx_insn *i0,
       adjust_for_new_dest (i3);
     }

+  bool i2_unchanged = false;
+  if (rtx_equal_p (newi2pat, PATTERN (i2)))
+    i2_unchanged = true;
+
   /* We now know that we can do this combination.  Merge the insns and
      update the status of registers and LOG_LINKS.  */

@@ -4752,6 +4756,9 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn *i1, rtx_insn *i0,
   combine_successes++;
   undo_commit ();

+  if (i2_unchanged)
+    return i3;
+
   rtx_insn *ret = newi2pat ? i2 : i3;
   if (added_links_insn && DF_INSN_LUID (added_links_insn) < DF_INSN_LUID (ret))
     ret = added_links_insn;

combine time is down from 79s (93%) to 3.5s (37%), quite a bit more than
with the currently installed patch, which gets combine down to 0.02s (0%).
But notably, peak memory use is down from 9GB to 400MB (installed patch: 340MB).

That was with a cross from x86_64-linux and a release-checking build.

This change should avoid any code generation changes.  I do think that if the
pattern doesn't change, what distribute_notes/links does should be a no-op
even for I2, so we can ignore added_{links,notes}_insn (not ignoring them
only provides a 50% speedup).

I like the 0% combine result of the installed patch, but the regressions
observed probably mean this needs to be deferred to stage1.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-27 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

Segher Boessenkool  changed:

   What|Removed |Added

 Resolution|--- |FIXED
 Status|NEW |RESOLVED

--- Comment #52 from Segher Boessenkool  ---
Fixed.  (On trunk only, no backports planned, this goes back aaages).

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-27 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #51 from Segher Boessenkool  ---
(In reply to Richard Biener from comment #46)
> Maybe combine already knows that it just "keeps i2" rather than replacing it?

It never does that.  Instead, it thinks it is making a new I2, but that ends up
being exactly the same instruction.  This is not a good thing to do: combine
can, for example, change the whole thing back to the previous shape whenever it
feels like it (combine never makes canonical forms!)

> When !newi2pat we seem to delete i2.  Anyway, somebody more familiar with
> combine should produce a good(TM) patch.

Yes, the most common combinations delete I2: they combine 2->1, 3->1, or 4->1.
When this isn't possible, combine tries to combine to two instructions.  It has
various strategies for this: the backend can do it explicitly (via a
define_split), or combine can break apart the expression that was the src of
the one set that would have been the ->1 result, hoping that the two
instructions it gets that way are valid insns.  It tries only one way to do
this, and it isn't very smart about it, just very heuristic.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-27 Thread cvs-commit at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #50 from GCC Commits  ---
The master branch has been updated by Segher Boessenkool :

https://gcc.gnu.org/g:839bc42772ba7af66af3bd16efed4a69511312ae

commit r14-9692-g839bc42772ba7af66af3bd16efed4a69511312ae
Author: Segher Boessenkool 
Date:   Wed Mar 27 14:09:52 2024 +

combine: Don't combine if I2 does not change

In some cases combine will "combine" an I2 and I3, but end up putting
exactly the same thing back as I2 as was there before.  This is never
progress, so we shouldn't do it, it will lead to oscillating behaviour
and the like.

If we want to canonicalise things, that's fine, but this is not the
way to do it.

2024-03-27  Segher Boessenkool  

PR rtl-optimization/101523
* combine.cc (try_combine): Don't do a 2-insn combination if
it does not in fact change I2.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-07 Thread krebbel at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #33 from Andreas Krebbel  ---
(In reply to Andrew Pinski from comment #26)
...
> I suspect if we change the s390 backend just slightly to set the cost when
> there is an index to the address to 1 for the MEM, combine won't be acting
> up here.
> Basically putting in sync the 2 cost methods.

I've tried that, but it didn't change anything. As you expected, the problem
goes away when letting s390_address_cost always return 0.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-07 Thread krebbel at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #32 from Andreas Krebbel  ---
(In reply to Segher Boessenkool from comment #25)
> So this testcase compiles on powerpc64-linux (-O2) in about 34s.  Is s390x
> way worse, or is this in line with what you are seeing?

Way worse. See #c22: 20s before your commit and 5min with it.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-07 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #31 from Segher Boessenkool  ---
I need a configure flag, hrm.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-07 Thread jakub at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

Jakub Jelinek  changed:

   What|Removed |Added

 CC||jakub at gcc dot gnu.org

--- Comment #30 from Jakub Jelinek  ---
(In reply to Segher Boessenkool from comment #29)
> I did manage to build one, but it does not know _Float64x and stuff.

Use -mlong-double-128 ?

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-07 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #29 from Segher Boessenkool  ---
I did manage to build one, but it does not know _Float64x and stuff.

Do you have a basic C-only testcase, maybe?

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-07 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #28 from Segher Boessenkool  ---
For Q111: on rs6000:
;; Combiner totals: 53059 attempts, 52289 substitutions (7135 requiring new space),
;; 229 successes.

I don't have C++ cross-compilers built (those are not trivial to do, hrm).  I'll
try to build an s390x one.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-07 Thread pinskia at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #27 from Andrew Pinski  ---
(In reply to Segher Boessenkool from comment #25)
> So this testcase compiles on powerpc64-linux (-O2) in about 34s.  Is s390x
> way worse, or is this in line with what you are seeing?

I should note that in the powerpc backend, address_cost is always 0, so I
suspect it won't run into this issue where fwprop rejects the transformation
but combine accepts it.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-07 Thread pinskia at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #26 from Andrew Pinski  ---
So looking into the s390 backend, I notice that s390_address_cost says the
addressing mode `base+index` is slightly more expensive than just `base`:
from s390_address_cost :
  return ad.indx? COSTS_N_INSNS (1) + 1 : COSTS_N_INSNS (1);

BUT then s390_rtx_costs when looking at MEM does not take into account the
addressing for the cost there (it does take into account on the LHS though):
```
case MEM:
  *total = 0;
  return true;
```

This mismatch does cause some issues.  Basically, fwprop uses address_cost to
decide whether to propagate into a MEM, while combine just uses
insn_cost/rtx_cost.  So fwprop rejects the replacement as being worse, and then
combine comes along and does it anyway.

I suspect that if we change the s390 backend just slightly to set the cost of a
MEM to 1 when its address has an index, combine won't be acting up here.
Basically, put the 2 cost methods in sync.
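
A rough sketch of that (untested; whether s390_decompose_address can simply be
reused like this inside the MEM case of s390_rtx_costs is an assumption on my
part) could look like:

```
    case MEM:
      {
        /* Mirror s390_address_cost: charge a little extra for a base+index
           address, so combine sees the same preference fwprop does.  */
        struct s390_address ad;
        *total = 0;
        if (s390_decompose_address (XEXP (x, 0), &ad) && ad.indx)
          *total = 1;
        return true;
      }
```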

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-07 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #25 from Segher Boessenkool  ---
So this testcase compiles on powerpc64-linux (-O2) in about 34s.  Is s390x
way worse, or is this in line with what you are seeing?

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-07 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #24 from Segher Boessenkool  ---
(In reply to Andreas Krebbel from comment #21)
> Wouldn't it in this particular case be possible to recognize already in
> try_combine that separating the move out of the parallel cannot lead to
> additional optimization opportunities? To me it looks like we are just
> recreating the situation we had before merging the INSNs into a parallel. Is
> there a situation where this could lead to any improvement in the end?

It might be possible.  It's not trivial at all though, esp. if you consider
other patterns, other targets, everything.

Anything that grossly reduces what we try will not fly.

This testcase is very degenerate; if we can recognise something about that
and make combine handle that better, that could be done.  Or I'll do my
proposed "do not try more than 40 billion things" patch.

As it is now, combine only ever reconsiders anything if it *did* make changes.
So, if you see it reconsidering things a lot, you also see it making a lot of
changes.  And all those changes make for materially better generated code
(combine always tests that before making a change).

Changing things so combine makes fewer changes directly means you want it to
optimise less well.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-07 Thread krebbel at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #23 from Andreas Krebbel  ---
Created attachment 57646
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=57646&action=edit
Testcase for comment #22

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-07 Thread krebbel at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #22 from Andreas Krebbel  ---
I did a git bisect which ended up pointing at this commit, somewhere between
GCC 8 and 9:

commit c4c5ad1d6d1e1e1fe7a1c2b3bb097cc269dc7306 (bad)
Author: Segher Boessenkool 
Date:   Mon Jul 30 15:18:17 2018 +0200

combine: Allow combining two insns to two insns

This patch allows combine to combine two insns into two.  This helps
in many cases, by reducing instruction path length, and also allowing
further combinations to happen.  PR85160 is a typical example of code
that it can improve.

This patch does not allow such combinations if either of the original
instructions was a simple move instruction.  In those cases combining
the two instructions increases register pressure without improving the
code.  With this move test register pressure does no longer increase
noticably as far as I can tell.

(At first I also didn't allow either of the resulting insns to be a
move instruction.  But that is actually a very good thing to have, as
should have been obvious).

With this command line:
cc1plus -O2 -march=z196 -fpreprocessed Q111-8.ii -quiet

before:   20s compile-time and    21846 total combine attempts
after: > 5min compile-time and 43175686 total combine attempts

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-07 Thread krebbel at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #21 from Andreas Krebbel  ---
(In reply to Segher Boessenkool from comment #16)
...
> When some insns have changed (or might have changed, combine does not always
> know
> the details), combinations of the insn with later insns are tried again. 
> Sometimes
> this finds new combination opportunities.
> 
> Not retrying combinations after one of the insns has changed would be a
> regression.

Wouldn't it in this particular case be possible to recognize already in
try_combine that separating the move out of the parallel cannot lead to
additional optimization opportunities? To me it looks like we are just
recreating the situation we had before merging the INSNs into a parallel. Is
there a situation where this could lead to any improvement in the end?

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-07 Thread krebbel at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #20 from Andreas Krebbel  ---
(In reply to Segher Boessenkool from comment #17)
...
> So what is really happening?  And, when did this start, anyway, because
> apparently at some point in time all was fine?

Due to the C++ constructs used, the testcase doesn't compile with much older
GCCs. However, I can confirm that the problem can already be reproduced with
GCC 11.1.0.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-07 Thread krebbel at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #19 from Andreas Krebbel  ---
(In reply to Sarah Julia Kriesch from comment #15)
> (In reply to Segher Boessenkool from comment #13)
> > (In reply to Sarah Julia Kriesch from comment #12)
> > A bigger case of what?  What do you mean?
> Not only one software package is affected by this bug. "Most" software
> builds are affected. As Andreas mentioned correctly, the fix is also
> beneficial for other projects/target software.

I don't think we have any evidence yet that this is the problem which also hits
us with other package builds. If you have other cases, please open separate BZs
for them and we will try to figure out whether they are actually DUPs of this
one.

With "targets" I meant other GCC build targets. This pattern doesn't look
s390x-specific to me, although I haven't tried to reproduce it somewhere else.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-06 Thread pinskia at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #18 from Andrew Pinski  ---
Hmm, looking at what is done in combine, I wonder why forwprop didn't do the
address add into the memory. That would definitely decrease the # of combines
being done.

Maybe it is because it is used more than once. But still seems like another
pass should have done the a=b+c; d=e*[a] into a=b+c; d=e*[b+c] before hand.

Maybe there is some address cost going wrong here that forwprop is causing
issues. And it just happens combine does not take that into account due to rtl
cost not taking address cost into account.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-06 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #17 from Segher Boessenkool  ---
Why does this happen so extremely often for s390x customers?  It should from
first principles happen way more often for e.g. powerpc, but we never see such
big problems, let alone "all of the time"!

So what is really happening?  And, when did this start, anyway, because
apparently at some point in time all was fine?

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-06 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #16 from Segher Boessenkool  ---
(In reply to Andreas Krebbel from comment #14)
> If my analysis from comment #1 is correct, combine does superfluous steps
> here. Getting rid of this should not cause any harm, but should be
> beneficial for other targets as well. I agree that the patch I've proposed
> is kind of a hack. Do you think this could be turned into a proper fix?

When some insns have changed (or might have changed, combine does not always
know the details), combinations of the insn with later insns are tried again.
Sometimes this finds new combination opportunities.

Not retrying combinations after one of the insns has changed would be a
regression.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-05 Thread sarah.kriesch at opensuse dot org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #15 from Sarah Julia Kriesch  ---
(In reply to Segher Boessenkool from comment #13)
> (In reply to Sarah Julia Kriesch from comment #12)
> A bigger case of what?  What do you mean?
Not only one software package is affected by this bug; "most" software builds
are affected. As Andreas correctly mentioned, the fix is also beneficial for
other projects/target software.

There are so many packages with customized memory settings for s390x at
openSUSE, and the maintainers can only shake their heads about this behaviour.
But let's progress step by step.

Luckily, IBM customers are now raising questions about this as well.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-04 Thread krebbel at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #14 from Andreas Krebbel  ---
If my analysis from comment #1 is correct, combine does superfluous steps here.
Getting rid of this should not cause any harm, but should be beneficial for
other targets as well. I agree that the patch I've proposed is kind of a hack.
Do you think this could be turned into a proper fix?

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-04 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #13 from Segher Boessenkool  ---
(In reply to Sarah Julia Kriesch from comment #12)
> I expect also, that this bug is a bigger case.

A bigger case of what?  What do you mean?

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-04 Thread sarah.kriesch at opensuse dot org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #12 from Sarah Julia Kriesch  ---
Raise your hand if you need anything new from my side.
We have got enough use cases in our build system, and upstream open source
projects have warned that they may drop s390x support because of the long build
times and the resources required.

I expect also, that this bug is a bigger case.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-04 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #11 from Segher Boessenkool  ---
Okay, so it is a function with a huge BB, so this is not a regression at all,
there will have been incredibly many combination attempts since the day combine
has existed.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-04 Thread krebbel at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #10 from Andreas Krebbel  ---
Created attachment 57599
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=57599&action=edit
Testcase - somewhat reduced from libecpint

Verified with rev 146f16c97f6

cc1plus -O2 t.cc

try_combine invocations:
x86:
3
27262
27603

s390x:
8
40439657
40440339

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-03 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #9 from Segher Boessenkool  ---
Yeah.

Without a testcase we do not know what is going on.  Likely it is a testcase
with some very big basic block, which naturally gives very many combination
opportunities: the problem by nature is at least quadratic.  There are various
ways to limit the work done for this, all amounting to "just give up if the
problem is too big", just like we do in many other places.

It also is interesting to see when this started happening.  One of the external
PRs indicated this has happened for some years already -- so notably this is
not a regression -- but what change caused this then?  It can even be the 2-2
thing, if it started far enough back.  Or, the real reason why we need to know
when it started: possibly a bug was introduced.

In all cases, we need the testcase.

(The reason this does not happen on x86 is that so many things on x86 are
stored in memory, while on less register-poor archs like s390 they are not.
Combine never does dependencies via memory.)

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-02 Thread sjames at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

Sam James  changed:

   What|Removed |Added

 CC||sjames at gcc dot gnu.org

--- Comment #8 from Sam James  ---
I think Andreas meant to attach a testcase but hadn't?

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-03-02 Thread sarah.kriesch at opensuse dot org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

Sarah Julia Kriesch  changed:

   What|Removed |Added

 CC||sarah.kriesch at opensuse dot org

--- Comment #7 from Sarah Julia Kriesch  ---
That started more than a few years ago.
I have initiated the debugging with this openSUSE bug report:
https://bugzilla.opensuse.org/show_bug.cgi?id=1188441

IBM Bugzilla:
https://bugzilla.linux.ibm.com/show_bug.cgi?id=193674

The memory problem already existed before my time at IBM.
It is reproducible on most Linux distributions for IBM Z & LinuxONE.

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-02-25 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #6 from Segher Boessenkool  ---
There is no attached testcase, btw.  This makes investigating this kind of
tricky ;-)

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-02-25 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #5 from Segher Boessenkool  ---
Hrm.  When did this start, can you tell?

[Bug rtl-optimization/101523] Huge number of combine attempts

2024-02-23 Thread krebbel at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

Andreas Krebbel  changed:

   What|Removed |Added

 CC||stefansf at linux dot ibm.com

--- Comment #4 from Andreas Krebbel  ---
Hi Segher, any guidance on how to proceed with this? It was recently brought up
by distro people again because it is causing actual problems in their build
setups.

[Bug rtl-optimization/101523] Huge number of combine attempts

2021-07-20 Thread segher at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

Segher Boessenkool  changed:

   What|Removed |Added

   Assignee|unassigned at gcc dot gnu.org  |segher at gcc dot gnu.org

--- Comment #3 from Segher Boessenkool  ---
The "newi2pat = NULL_RTX;" you do there cannot be correct.  But the general
idea is fine, sure.  I'll work on it.


[Bug rtl-optimization/101523] Huge number of combine attempts

2021-07-20 Thread krebbel at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #2 from Andreas Krebbel  ---
Created attachment 51174
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=51174&action=edit
Experimental Fix

With that patch the number of combine attempts goes back to normal.

[Bug rtl-optimization/101523] Huge number of combine attempts

2021-07-20 Thread krebbel at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #1 from Andreas Krebbel  ---
This appears to be triggered by try_combine unnecessarily setting back the
position by returning the i2 insn.

When 866 is inserted into 973, 866 still needs to be kept around for other
users. So try_combine first merges the two sets into a parallel and immediately
notices that this can't be recognized. Because neither of the sets is a trivial
move, the parallel is split again into two separate insns. Although the new i2
pattern exactly matches the input i2, combine considers it a new insn, triggers
all the rescanning and log-link creation, and eventually returns it, which
makes combine start all over at 866.

Due to that, combine tries many of the substitutions more than 400 times.

Trying 866 -> 973:
  866: r22393:DI=r22391:DI+r22392:DI
  973: r22499:DF=r22498:DF*[r22393:DI]
  REG_DEAD r22498:DF
Failed to match this instruction:
(parallel [
(set (reg:DF 22499)
(mult:DF (reg:DF 22498)
(mem:DF (plus:DI (reg/f:DI 22391 [ _85085 ])
(reg:DI 22392 [ _85086 ])) [17 *_85087+0 S8 A64])))
(set (reg/f:DI 22393 [ _85087 ])
(plus:DI (reg/f:DI 22391 [ _85085 ])
(reg:DI 22392 [ _85086 ])))
])
Failed to match this instruction:
(parallel [
(set (reg:DF 22499)
(mult:DF (reg:DF 22498)
(mem:DF (plus:DI (reg/f:DI 22391 [ _85085 ])
(reg:DI 22392 [ _85086 ])) [17 *_85087+0 S8 A64])))
(set (reg/f:DI 22393 [ _85087 ])
(plus:DI (reg/f:DI 22391 [ _85085 ])
(reg:DI 22392 [ _85086 ])))
])
Successfully matched this instruction:
(set (reg/f:DI 22393 [ _85087 ])
(plus:DI (reg/f:DI 22391 [ _85085 ])
(reg:DI 22392 [ _85086 ])))
Successfully matched this instruction:
(set (reg:DF 22499)
(mult:DF (reg:DF 22498)
(mem:DF (plus:DI (reg/f:DI 22391 [ _85085 ])
(reg:DI 22392 [ _85086 ])) [17 *_85087+0 S8 A64])))
allowing combination of insns 866 and 973
original costs 4 + 4 = 8
replacement costs 4 + 4 = 8
modifying insn i2   866: r22393:DI=r22391:DI+r22392:DI
deferring rescan insn with uid = 866.
modifying insn i3   973: r22499:DF=r22498:DF*[r22391:DI+r22392:DI]
  REG_DEAD r22498:DF
deferring rescan insn with uid = 973.
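
For reference, a minimal sketch of the guard this analysis suggests; it is
essentially the rtx_equal_p check from the patch quoted in comment #53, and
the exact placement inside try_combine is approximate:

```
  /* Sketch only: remember whether the "new" i2 pattern is in fact identical
     to the pattern i2 already had.  */
  bool i2_unchanged = newi2pat && rtx_equal_p (newi2pat, PATTERN (i2));

  /* ... later, once the combination has been committed, resume scanning at
     i3 instead of returning i2, so combine does not walk back to insn 866
     and retry all of these substitutions again.  */
  if (i2_unchanged)
    return i3;
```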