On 29/09/14 19:32, Richard Henderson wrote:
> On 09/29/2014 11:12 AM, Jiong Wang wrote:
>> +inline rtx single_set_no_clobber_use (const rtx_insn *insn)
>> +{
>> +  if (!INSN_P (insn))
>> +    return NULL_RTX;
>> +
>> +  if (GET_CODE (PATTERN (insn)) == SET)
>> +    return PATTERN (insn);
>> +
>> +  /* Defer to the more expensive case, and return NULL_RTX if there is
>> +     USE or CLOBBER.  */
>> +  return single_set_2 (insn, PATTERN (insn), true);
>> +}

Richard,

  thanks for the review.

> What more expensive case?

single_set_no_clobber_use is just a clone of single_set; I copied the comments
with only minor modifications.

I think the "more expensive case" here means the case where the pattern is a
PARALLEL whose inner rtxes we need to check.


> If you're disallowing USE and CLOBBER, then single_set is just GET_CODE == SET.
>
> I think this function is somewhat useless, and should not be added.
>
> An adjustment to move_insn_for_shrink_wrap may be reasonable though.  I haven't
> tried to understand the miscompilation yet.  I can imagine that this would
> disable quite a bit of shrink wrapping for x86 though.

I don't think so.  From the x86-64 bootstrap, there is no regression in the
number of functions shrink-wrapped.  Actually, previously only a single
"mov dest, src" was handled, so disallowing USE/CLOBBER will not reject any
shrink-wrap opportunity that was allowed previously.

And I am afraid that if we don't reuse single_set_2, then there will be another
loop to check all those inner rtxes, which single_set_2 already does.

So, IMHO, just modifying single_set_2 will be more efficient.

> Can we do better in
> understanding when the clobbered register is live at the location to which we'd
> like to move the insns?

Currently, the generic code in move_insn_for_shrink_wrap only handles the case
where dest/src is a single register, so if there is a CLOBBER or USE, then we
might need to check multiple regs, which would require a few more modifications.
I think that is better done after all the single dest/src issues are fixed.

--
Jiong
