https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70048

--- Comment #5 from Wilco <wdijkstr at arm dot com> ---
(In reply to amker from comment #4)
> (In reply to ktkachov from comment #3)
> > Started with r233136.
> 
> That's why I forced base+offset out of memory reference and kept register
> scaling in in the first place.  I think my statement still holds, that
> GIMPLE optimizers should be improved to CSE register scaling expression
> among different memory references, then force base(sp)+offset parts out of
> memory reference in legitimize_address so that it can be merged.  I suspect
> this case is going to be very inefficient in loop context.  Unfortunately
> GIMPLE optimizers are weak now and r233136 is a workaround to it.  I want to
> revisit slsr sometime after stage4 to fix this issue.

The issue is that the new version splits immediates from virtual stack
variables. This is what GCC used to generate:

(insn 8 7 9 2 (set (reg:DI 82)
        (plus:DI (reg/f:DI 68 virtual-stack-vars)
            (const_int -4000 [0xfffffffffffff060]))) addroff.c:38 -1
     (nil))

However, the latest GCC does:

(insn 9 8 10 2 (set (reg:DI 84)
        (plus:DI (reg/f:DI 68 virtual-stack-vars)
            (reg:DI 83))) addroff.c:38 -1
     (nil))
(insn 10 9 11 2 (set (reg:DI 85)
        (plus:DI (reg:DI 84)
            (const_int -4096 [0xfffffffffffff000]))) addroff.c:38 -1
     (nil))

So we need to recognize virtual-stack-vars+offset as a special case rather
than treating it like any other add with immediate.
