https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108987
Bug ID: 108987
Summary: [13 Regression] RISC-V: shiftadd cost model bug needlessly preferring synth_multiply
Product: gcc
Version: 13.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: target
Assignee: unassigned at gcc dot gnu.org
Reporter: vineetg at rivosinc dot com
Target Milestone: ---

gcc trunk prefers a synthetic multiply built from shift+add sequences even when that sequence is costlier than a single multiply.

unsigned long long f5(unsigned long long i)
{
    return i * 0x0202020202020202ULL;
}

Compiled with riscv64-unknown-linux-gnu-gcc -c -O2 -march=rv64gc_zba:

f5:
	slli	a5,a0,8
	add	a0,a5,a0
	slli	a5,a0,16
	add	a0,a0,a5
	slli	a5,a0,32
	add	a0,a0,a5
	slli	a0,a0,1
	ret

With gcc 12.2 this used to be:

f5:
	lui	a5,%hi(.LC0)
	ld	a5,%lo(.LC0)(a5)
	mul	a0,a0,a5
	ret

This is a regression introduced by commit f90cb39235c4 ("RISC-V: costs: support shift-and-add in strength-reduction"). That commit added a cost for shift[1-3]+add (to favor the Zba SH*ADD instructions), but due to a coding bug it ended up applying that cost to all shift amounts, affecting synthetic multiplication among other things. This showed up as a dynamic icount regression in SPEC 531.deepsjeng.