https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66358

--- Comment #3 from Kazumoto Kojima <kkojima at gcc dot gnu.org> ---
(In reply to Oleg Endo from comment #2)

Defaulting to -mlra might be reasonable for gcc 6.
For gcc 5, I was thinking of a patch to prepare_move_operands like this:

diff --git a/config/sh/sh.c b/config/sh/sh.c
index 1cf6ed0..b855d70 100644
--- a/config/sh/sh.c
+++ b/config/sh/sh.c
@@ -1789,9 +1789,8 @@ prepare_move_operands (rtx operands[], machine_mode mode)
         target/55212.
         We split possible load/store to two move insns via r0 so as to
         shorten R0 live range.  It will make some codes worse but will
-        win on avarage for LRA.  */
-      else if (sh_lra_p ()
-              && TARGET_SH1 && ! TARGET_SH2A
+        win on avarage.  */
+      else if (TARGET_SH1 && ! TARGET_SH2A
               && (mode == QImode || mode == HImode)
               && ((REG_P (operands[0]) && MEM_P (operands[1]))
                   || (REG_P (operands[1]) && MEM_P (operands[0]))))

which would be the simplest form of preallocating r0 for this limited case,
though I'm afraid it's still too invasive for the release branch.
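
For readers not familiar with that part of sh.c: the body of that else-if (not
shown in the hunk) is what performs the split described in the comment.  A
minimal sketch of the idea for the load direction, using the generic RTL
helpers and the SH backend's R0_REG constant (this is an illustration, not the
exact sh.c body; the store direction is handled symmetrically and the real
code has additional guards):

  /* Sketch only: route a QImode/HImode load through the hard register r0
     so that r0's live range is just these two moves.  */
  if (REG_P (operands[0]) && MEM_P (operands[1]))
    {
      rtx r0 = gen_rtx_REG (mode, R0_REG);   /* hard reg r0 in QI/HImode */
      emit_move_insn (r0, operands[1]);      /* mov.{b|w}  @mem, r0 */
      operands[1] = r0;                      /* remaining move is reg <- r0 */
    }

The payoff is that the register allocator only ever sees r0 live across a
single move pair instead of across a whole reloadable memory access, which is
what "shorten R0 live range" in the comment refers to.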
