https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91420
--- Comment #9 from Andrew Pinski ---
(In reply to Andrew Pinski from comment #8)
> That is definitely a bug in mcmodel=kernel in the x86 backend, which is
> different from the problem here even though both have the same testcase.
Filed the x86_64 issue …
--- Comment #8 from Andrew Pinski ---
(In reply to Andrew Waterman from comment #2)
> The RISC-V code models currently in existence place a 2 GiB limit on
> the extent of the statically linked portion of a binary. Rather than
> a bug, I would de
Andrew Pinski changed:
 What|Removed |Added
 CC  |        |sch...@linux-m68k.org
--- Comment #7 from …
Jim Wilson changed:
 What            |Removed     |Added
 Status          |UNCONFIRMED |NEW
 Last reconfirmed|            |
--- Comment #6 from Richard Biener ---
(In reply to Andrew Waterman from comment #4)
> In -O2, the compiler materializes ("x" + INT_MIN) by loading that
> symbol+offset into a register in one shot, whereas in -O0 it loads the
> address of "x" into a register first and then adds the offset.
--- Comment #5 from Bin Meng ---
Thanks Andrew. That makes sense!
I wonder whether there is a way to teach GCC not to generate code for such
an aggressive optimization that the linker can't relocate when using "-O2",
on all architectures :)