On 30/09/14 19:36, Jiong Wang wrote:
2014-09-30 17:30 GMT+01:00 Jeff Law <l...@redhat.com>:
On 09/30/14 08:37, Jiong Wang wrote:

On 30/09/14 05:21, Jeff Law wrote:

I do agree with Richard that it would be useful to see the insns that
are incorrectly sunk and the surrounding context.
So I must be missing something.  I thought the shrink-wrapping code wouldn't
sink arithmetic/logical insns like we see with insn 14 and insn 182.  I
thought it was limited to reg-reg copies and constant initializations.
Yes, it was limited to reg-reg copies. My previous sink improvement aimed to
sink any rtx that:

   A: is a single_set
   B: has a src operand that is any combination of no more than one register
and no non-constant objects.

Some operators, like shifts, may have side effects, but IMHO all such side
effects are reflected in the RTL (e.g. as a CLOBBER in the insn pattern).
So, together with this fail_on_clobber_use modification, isn't the rtx
returned by single_set_no_clobber_use safe to sink, provided it meets limit B
above and passes the later register use/def checks in
move_insn_for_shrink_wrap?
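
For instance (a made-up pattern for illustration, not the actual insn 14
from the testcase), on a target where shifts clobber the flags register
(x86, say) the shift insn is a PARALLEL of the SET plus a CLOBBER, roughly:

  (insn 14 13 15 2 (parallel [
          (set (reg:SI 0 ax)
               (ashift:SI (reg:SI 0 ax)
                          (const_int 2 [0x2])))
          (clobber (reg:CC 17 flags))]) ...)

Plain single_set would still return the inner SET here and ignore the
CLOBBER, but with fail_on_clobber_use set, single_set_2 returns NULL_RTX and
move_insn_for_shrink_wrap will not try to sink the insn.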

Ping ~

And as there is a NONDEBUG_INSN_P check before move_insn_for_shrink_wrap is
invoked, we can avoid creating a new wrapper function by calling single_set_2
directly.
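
(For reference, the wrapper avoided here would simply have mirrored the
inline single_set from rtl.h with the flag set to true -- a sketch only,
using the single_set_no_clobber_use name from the earlier revision:

  /* Sketch, not part of this patch: same shape as the inline single_set
     in rtl.h, but rejecting any pattern that contains a USE or CLOBBER.  */
  static inline rtx
  single_set_no_clobber_use (const rtx_insn *insn)
  {
    if (!INSN_P (insn))
      return NULL_RTX;

    if (GET_CODE (PATTERN (insn)) == SET)
      return PATTERN (insn);

    /* Defer to the more expensive case, failing on USE/CLOBBER.  */
    return single_set_2 (insn, PATTERN (insn), true);
  }

Since the caller already guarantees a nondebug insn, the shrink-wrap.c hunk
below just open-codes this.)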

Comments?

Bootstrap is OK on x86-64, with no regressions on check-gcc/g++.
AArch64 bootstrap and regression testing is in progress.

2014-10-08  Jiong Wang  <jiong.w...@arm.com>

        * rtl.h (single_set_2): New parameter "fail_on_clobber_use".
        (single_set): Adjust caller.
        * config/ia64/ia64.c (ia64_single_set): Likewise.
        * rtlanal.c (single_set_2): Return NULL_RTX if fail_on_clobber_use
        is true.
        * shrink-wrap.c (move_insn_for_shrink_wrap): Use single_set_2.

Regards,
Jiong


diff --git a/gcc/config/ia64/ia64.c b/gcc/config/ia64/ia64.c
index 9337be1..09d3c4a 100644
--- a/gcc/config/ia64/ia64.c
+++ b/gcc/config/ia64/ia64.c
@@ -7172,7 +7172,7 @@ ia64_single_set (rtx_insn *insn)
       break;
 
     default:
-      ret = single_set_2 (insn, x);
+      ret = single_set_2 (insn, x, false);
       break;
     }
 
diff --git a/gcc/rtl.h b/gcc/rtl.h
index e73f731..c0b5bf5 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -2797,7 +2797,7 @@ extern void set_insn_deleted (rtx);
 
 /* Functions in rtlanal.c */
 
-extern rtx single_set_2 (const rtx_insn *, const_rtx);
+extern rtx single_set_2 (const rtx_insn *, const_rtx, bool fail_on_clobber_use);
 
 /* Handle the cheap and common cases inline for performance.  */
 
@@ -2810,7 +2810,7 @@ inline rtx single_set (const rtx_insn *insn)
     return PATTERN (insn);
 
   /* Defer to the more expensive case.  */
-  return single_set_2 (insn, PATTERN (insn));
+  return single_set_2 (insn, PATTERN (insn), false);
 }
 
 extern enum machine_mode get_address_mode (rtx mem);
diff --git a/gcc/rtlanal.c b/gcc/rtlanal.c
index 3063458..7d6ed27 100644
--- a/gcc/rtlanal.c
+++ b/gcc/rtlanal.c
@@ -1182,7 +1182,7 @@ record_hard_reg_uses (rtx *px, void *data)
    will not be used, which we ignore.  */
 
 rtx
-single_set_2 (const rtx_insn *insn, const_rtx pat)
+single_set_2 (const rtx_insn *insn, const_rtx pat, bool fail_on_clobber_use)
 {
   rtx set = NULL;
   int set_verified = 1;
@@ -1197,6 +1197,8 @@ single_set_2 (const rtx_insn *insn, const_rtx pat)
 	    {
 	    case USE:
 	    case CLOBBER:
+	      if (fail_on_clobber_use)
+		return NULL_RTX;
 	      break;
 
 	    case SET:
diff --git a/gcc/shrink-wrap.c b/gcc/shrink-wrap.c
index b1ff8a2..a3b57b6 100644
--- a/gcc/shrink-wrap.c
+++ b/gcc/shrink-wrap.c
@@ -177,7 +177,8 @@ move_insn_for_shrink_wrap (basic_block bb, rtx_insn *insn,
   edge live_edge;
 
   /* Look for a simple register copy.  */
-  set = single_set (insn);
+  set = (GET_CODE (PATTERN (insn)) == SET ? PATTERN (insn)
+	 : single_set_2 (insn, PATTERN (insn), true));
   if (!set)
     return false;
   src = SET_SRC (set);
