https://gcc.gnu.org/bugzilla/show_bug.cgi?id=117726
Bug ID: 117726
Summary: [avr] better optimize multi-byte shifts
Product: gcc
Version: 15.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: target
Assignee: unassigned at gcc dot gnu.org
Reporter: gjl at gcc dot gnu.org
Target Milestone: ---
Multi-byte shift insns like
long ashl32_25 (long x)
{
    return x << 25;
}
are not optimized well:
$ avr-gcc -S -Os ...
ashl32_25:
        ldi r18,25    ;  21  [c=44 l=7]  *ashlsi3_const/3
1:
        lsl r22
        rol r23
        rol r24
        rol r25
        dec r18
        brne 1b
        ret
This is slow: the 6-instruction loop body executes 25 times at roughly 7 cycles
per iteration, i.e. around 175 cycles in total. Even with -Os /
OPTIMIZE_SIZE_BALANCED, faster code is desirable, for example:
ashl32_25:
        mov r25,r22   ;  22  [c=4 l=1]  movqi_insn/0
        lsl r25       ;  23  [c=4 l=1]  *ashlqi3/2
        ldi r24,0     ;  24  [c=4 l=1]  movqi_insn/0
        ldi r22,0     ;  26  [c=4 l=1]  movqi_insn/0
        ldi r23,0     ;  27  [c=4 l=1]  movqi_insn/0
        ret
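
The improved sequence works because a 32-bit shift by 25 keeps only bits 0..6
of the source's low byte, moving them into bits 25..31 of the result; all
other result bytes are zero. A minimal byte-level sketch in C of that
decomposition (hypothetical function name, unsigned types chosen so the shift
is well-defined):

#include <stdint.h>

/* Byte-level view of x << 25 on a 32-bit value: only the low byte of x
   matters, and it lands, shifted left by one, in the high result byte. */
uint32_t ashl32_25_bytes (uint32_t x)
{
    uint8_t hi = (uint8_t) x;     /* mov r25,r22: copy the low byte      */
    hi = (uint8_t) (hi << 1);     /* lsl r25: bit 7 of x is discarded    */
    return (uint32_t) hi << 24;   /* ldi r24/r23/r22,0: rest stays zero  */
}

On AVR, with the SImode return value in r22..r25, this maps directly onto the
five-instruction sequence above: the mov/lsl pair produces the high byte and
the three ldi instructions zero the remaining bytes.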