[Bug tree-optimization/61515] [4.9/4.10 Regression] Extremely long compile time for generated code
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61515

Jakub Jelinek <jakub at gcc dot gnu.org> changed:
  Target Milestone: 4.9.1 -> 4.9.2

--- Comment #14 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
GCC 4.9.1 has been released.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61515

Richard Biener <rguenth at gcc dot gnu.org> changed:
  Priority: P3 -> P2
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61515

--- Comment #13 from rguenther at suse dot de ---
On Tue, 17 Jun 2014, law at redhat dot com wrote:

> --- Comment #12 from Jeffrey A. Law <law at redhat dot com> ---
> Fundamentally we don't have a way to know what equivalences we need to
> invalidate.  Invalidation is, umm, painful.  The question in my mind is
> why we are getting so many invalidations to start with.  That's the
> first thing to look at.

Well, it's easy to avoid the quadraticness - you can always create
testcases that need a lot of invalidates.  But the current algorithm
really does not scale.

> Honestly though, I really wonder if handling backedges is worth the
> effort, even though it's important for one benchmark.

Not sure about that, but trivial improvements to the scalability are
possible here.  Walking all SSA names is O(number of stmts) - if you do
that O(number of stmts) times (as you do), that's clearly the bug.
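The scaling problem described in comment #13 can be illustrated with a toy model. This is a deliberately simplified sketch, not GCC's actual DOM/jump-threading code; all type and member names here are hypothetical. The "naive" table walks every SSA-name slot on each invalidation (O(names) work done O(stmts) times, hence quadratic), while the "indexed" table keeps a reverse map from a name to the equivalences that depend on it, so invalidation only touches affected entries:

```cpp
#include <cassert>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Naive invalidation: on each invalidation event, scan *every* name to see
// whether its recorded equivalence must be dropped.  With O(stmts) names and
// O(stmts) events this is O(stmts^2) overall -- the non-scalable pattern
// the comment describes.
struct NaiveTable {
  std::unordered_map<int, int> equiv;  // name id -> equivalent name id
  long work = 0;                       // names examined (cost counter)

  void invalidate(const std::unordered_set<int>& clobbered, int num_names) {
    for (int name = 0; name < num_names; ++name) {  // walks ALL names
      ++work;
      auto it = equiv.find(name);
      if (it != equiv.end() &&
          (clobbered.count(name) || clobbered.count(it->second)))
        equiv.erase(it);
    }
  }
};

// Indexed variant: also record, per name, which equivalences depend on it,
// so invalidation does work proportional to the entries actually affected.
struct IndexedTable {
  std::unordered_map<int, int> equiv;
  std::unordered_map<int, std::vector<int>> users;  // rhs -> dependent names
  long work = 0;

  void record(int name, int rhs) {
    equiv[name] = rhs;
    users[rhs].push_back(name);
  }

  void invalidate(const std::unordered_set<int>& clobbered) {
    for (int c : clobbered) {
      ++work;
      equiv.erase(c);                 // drop the clobbered name itself
      auto it = users.find(c);
      if (it == users.end()) continue;
      for (int user : it->second) {   // drop only dependent equivalences
        ++work;
        equiv.erase(user);
      }
      users.erase(it);
    }
  }
};
```

Driving both tables with 1000 recorded equivalences and 100 invalidation events, the naive table examines 100,000 entries while the indexed one examines only the handful actually affected, which is the kind of "trivial improvement to the scalability" the reply points at.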
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61515

Richard Biener <rguenth at gcc dot gnu.org> changed:
  Known to work:    4.9.0
  Target Milestone: 4.10.0 -> 4.9.1
  Summary: [4.10 Regression] Extremely long compile time for generated code
        -> [4.9/4.10 Regression] Extremely long compile time for generated code
  Known to fail:    4.9.1

--- Comment #11 from Richard Biener <rguenth at gcc dot gnu.org> ---
Same code on the 4.9 branch now.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61515

Jeffrey A. Law <law at redhat dot com> changed:
  CC:       law at redhat dot com (added)
  Assignee: unassigned at gcc dot gnu.org -> law at redhat dot com

--- Comment #12 from Jeffrey A. Law <law at redhat dot com> ---
Fundamentally we don't have a way to know what equivalences we need to
invalidate.  Invalidation is, umm, painful.  The question in my mind is
why we are getting so many invalidations to start with.  That's the first
thing to look at.

Honestly though, I really wonder if handling backedges is worth the
effort, even though it's important for one benchmark.