On 5/24/23 17:14, Jivan Hakobyan via Gcc-patches wrote:
This patch tries to prevent generating unnecessary sign extension
after *w instructions like "addiw" or "divw".
The main idea of it is to add SUBREG_PROMOTED fields during expanding.
I have tested on SPEC2017; there is no regression.
Only the gcc.dg/pr30957-1.c test failed.
To solve that I made some changes in loop-iv.cc, but I am not sure
they are suitable.
So this generally looks good and I did some playing around with it over
the weekend. It's generally a win, though it can result in performance
regressions under the "right" circumstances.
The case I was looking at was omnetpp in spec. I suspect you didn't see
it because we have a parameter which increases the threshold for when a
string comparison against a constant string should be inlined. I
suspect you aren't using that param. I inherited that param usage and
haven't tried to eliminate it yet.
If you compile a test like this:
#include <string.h>
int
foo (char *x)
{
return strcmp (x, "lowerLayout");
}
With -O2 --param builtin-string-cmp-inline-length=100
You'll see the regression. This is a reasonable representation of the
code in omnetpp.
It's actually a fairly interesting little problem. We end up with
overlapping lifetimes for the 32 and 64bit results. The register
allocator doesn't really try hard to detect the case where it can ignore
a conflict because the two objects hold the same value. I remember
kicking this around with Vlad at least 10 years ago and we concluded
(based on experience and data at the time) that this case wasn't that
important to handle. Anyway...
Another approach to this problem would be to twiddle the strcmp expander
to work in word_mode, then convert the word mode result to the final
mode at the end of the sequence. I need to ponder the semantics of this
a bit more, but if the semantics are right, it seems like it might be a
viable solution.
I briefly looked at the big improvement in leela (33 billion
instructions, roughly 1.7% of the dynamic count removed). The hope was that if I
looked at the cases where we improved that they would all be shifts,
rotates and the like and we could consider a more limited version of
your patch. But it was quite clear that the improvement in leela was
due to better handling of 32-bit additions. So that idea's a non-starter. I
didn't look at the big gains in xz (they're smaller in absolute count
terms, but larger in percentage of instructions removed).
So I'm going to play a bit more with the expander for comparisons
against constant strings. If done correctly it might actually be an
improvement on other targets as well.
Jeff
gcc/ChangeLog:
* config/riscv/bitmanip.md (rotrdi3): New pattern.
(rotrsi3): Likewise.
(rotlsi3): Likewise.
* config/riscv/riscv-protos.h (riscv_emit_binary): New function
declaration.
* config/riscv/riscv.cc (riscv_emit_binary): Remove static qualifier.
* config/riscv/riscv.md (addsi3): New pattern.
(subsi3): Likewise.
(negsi2): Likewise.
(mulsi3): Likewise.
(<optab>si3): New pattern for any_div.
(<optab>si3): New pattern for any_shift.
* loop-iv.cc (get_biv_step_1): Process src of extension when it is
a PLUS.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/shift-and-2.c: New test.
* gcc.target/riscv/shift-shift-2.c: New test.
* gcc.target/riscv/sign-extend.c: New test.
* gcc.target/riscv/zbb-rol-ror-03.c: New test.