https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108803
Jakub Jelinek changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |segher at gcc dot gnu.org
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108803
--- Comment #6 from Jakub Jelinek ---
Created attachment 54476
--> https://gcc.gnu.org/bugzilla/attachment.cgi?id=54476&action=edit
gcc13-pr108803.patch
Actually, the above patch isn't correct, because for op1 equal to 0 we really need the
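The op1 == 0 corner case Jakub refers to can be illustrated with a plain-C sketch of the low-word computation of a doubleword logical right shift (hypothetical helper name, not actual optabs.cc code):

```c
#include <stdint.h>

/* Low word of a 128-bit logical right shift, with the value held in
   two 64-bit words.  The bits shifted out of HI must be ORed into LO:
     lo' = (lo >> op1) | (hi << (64 - op1))
   For op1 == 0 the carry term would be hi << 64, which is undefined
   behavior in C and count-dependent in hardware.  Splitting the
   reverse shift into a shift by 1 followed by a shift by
   (63 - op1) & 63 keeps every shift count inside [0, 63], and for
   op1 == 0 the carry correctly collapses to 0.  */
static uint64_t
subword_lshr_lo (uint64_t hi, uint64_t lo, unsigned op1)
{
  uint64_t carry = (hi << 1) << (63 - op1); /* safe even for op1 == 0 */
  return (lo >> op1) | carry;
}
```

For op1 == 0 the inner `hi << 1` discards hi's low contribution after the subsequent shift by 63, so the carry term vanishes instead of invoking undefined behavior.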
--- Comment #5 from Jakub Jelinek ---
The change then would be
--- gcc/optabs.cc.jj	2023-01-02 09:32:53.309838465 +0100
+++ gcc/optabs.cc	2023-02-16 19:33:14.583883584 +0100
@@ -507,7 +507,7 @@ expand_subword_shift (scalar_int_mode op
--- Comment #4 from Jakub Jelinek ---
On the other hand, if we knew that the backend would use something like shifts with masking, we could then avoid the extra reverse unsigned shift by 1 + reverse unsigned shift by (63 - op1) & 63 plus
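The point about masked shifts can be modeled in plain C (an illustrative sketch with hypothetical helper names, not the code GCC emits). On a target whose variable shift instruction truncates the count modulo the word size (as x86-64 SHL does for 64-bit operands), a single reverse shift by `(64 - op1) & 63` nearly computes the carry term, and only the op1 == 0 case needs a branch-free fix-up instead of the two-shift sequence:

```c
#include <stdint.h>

/* Model of a target shift instruction that masks the variable count
   to its low 6 bits, as e.g. x86-64 SHL does for 64-bit operands.  */
static uint64_t
masked_shl (uint64_t x, unsigned n)
{
  return x << (n & 63);
}

/* Carry term hi << (64 - op1) for a doubleword right shift.  With
   count masking, (64 - op1) & 63 is 0 when op1 == 0, so the raw
   masked shift would wrongly yield HI itself; zeroing the result with
   a branch-free mask handles that case with one shift plus an AND
   rather than the "shift by 1 + shift by (63 - op1) & 63" pair.  */
static uint64_t
carry_masked (uint64_t hi, unsigned op1)
{
  uint64_t c = masked_shl (hi, 64 - op1);
  return c & -(uint64_t) (op1 != 0);
}
```

Whether a single masked shift plus fix-up beats the two-shift sequence depends on the target's latencies; the sketch only shows why the masking semantics open up the choice.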
--- Comment #3 from Jakub Jelinek ---
I take back the "I wonder why we haven't optimized it earlier"; the reason is -Og: we do optimize that in evrp/vrp*, but those passes aren't run at -Og.
Jakub Jelinek changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
   Last reconfirmed|                            |2023-02-16
     Ever confirmed|0                           |
--- Comment #2 from Jakub Jelinek ---
--- gcc/optabs.cc.jj	2023-01-02 09:32:53.309838465 +0100
+++ gcc/optabs.cc	2023-02-16 18:04:54.794871019 +0100
@@ -596,6 +596,16 @@ expand_doubleword_shift_condmove (scalar
{
rtx
Jakub Jelinek changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jakub at gcc dot gnu.org,
Richard Biener changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Priority|P3                          |P2
   Target Milestone|---                         |