Thanks for the comments. One question is inlined below. I am preparing
another patch addressing the comments.

Regards,
Wei Mi.

On Tue, Oct 15, 2013 at 1:35 PM, Jeff Law <l...@redhat.com> wrote:
> On 10/03/13 12:24, Wei Mi wrote:
>>
>> Thanks,
>> Wei Mi.
>>
>> 2013-10-03  Wei Mi  <w...@google.com>
>>
>>          * gcc/config/i386/i386.c (memory_address_length): Extract part
>>          of the code into rip_relative_addr_p.
>>          (rip_relative_addr_p): New function.
>>          (ix86_macro_fusion_p): Ditto.
>>          (ix86_macro_fusion_pair_p): Ditto.
>>          * gcc/config/i386/i386.h: Add new tune features for
>>          macro-fusion.
>>          * gcc/config/i386/x86-tune.def (DEF_TUNE): Ditto.
>>          * gcc/doc/tm.texi: Generated.
>>          * gcc/doc/tm.texi.in: Ditto.
>>          * gcc/haifa-sched.c (try_group_insn): New function.
>>          (group_insns_for_macro_fusion): Ditto.
>>          (sched_init): Call group_insns_for_macro_fusion.
>>          * gcc/sched-rgn.c (add_branch_dependences): Keep insns in
>>          a SCHED_GROUP at the end of a BB so they retain their location.
>>          * gcc/target.def: Add two hooks: macro_fusion_p and
>>          macro_fusion_pair_p.
>
> I'm not going to comment on the x86 specific stuff -- I'll defer to the port
> maintainers for that.
>
>
>
>> index 61eaaef..d6726a9 100644
>> --- a/gcc/haifa-sched.c
>> +++ b/gcc/haifa-sched.c
>> @@ -6519,6 +6519,44 @@ setup_sched_dump (void)
>>                  ? stderr : dump_file);
>>   }
>>
>> +static void
>> +try_group_insn (rtx insn)
>
> You need a comment for this function.
>

OK, I will add a comment for it.

>
>
>> +{
>> +  unsigned int condreg1, condreg2;
>> +  rtx cc_reg_1;
>> +  rtx prev;
>> +
>> +  targetm.fixed_condition_code_regs (&condreg1, &condreg2);
>> +  cc_reg_1 = gen_rtx_REG (CCmode, condreg1);
>> +  prev = prev_nonnote_nondebug_insn (insn);
>> +  if (!any_condjump_p (insn)
>> +      || !reg_referenced_p (cc_reg_1, PATTERN (insn))
>> +      || !prev
>> +      || !modified_in_p (cc_reg_1, prev))
>> +    return;
>
> I'd test !any_condjump_p at the start of this function before calling the
> target hook.  If insn isn't a conditional jump, then all the other work is
> totally useless.

OK, will fix it.

>
> Aren't you just trying to see if we have a comparison feeding the
> conditional jump and if they're already adjacent?  Do you actually need to
> get the condition code regs to do that test?
>

Yes, I am trying to see whether we have a comparison feeding the
conditional jump and whether they are already adjacent. Is there an
easier way to do that test?
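For concreteness, putting both of Jeff's suggestions together (adding the missing function comment, and hoisting the cheap any_condjump_p test ahead of the target-hook call), the revised helper could be sketched roughly as below. This is only a rearrangement of the code already quoted in this thread, not a tested patch:

```c
/* If INSN is a conditional jump that consumes the condition code set
   by the immediately preceding instruction, mark INSN with
   SCHED_GROUP_P so the scheduler keeps the compare-and-branch pair
   adjacent for macro-fusion.  */

static void
try_group_insn (rtx insn)
{
  unsigned int condreg1, condreg2;
  rtx cc_reg_1;
  rtx prev;

  /* Cheap test first: only conditional jumps are candidates, so
     avoid the target-hook and RTL work for everything else.  */
  if (!any_condjump_p (insn))
    return;

  targetm.fixed_condition_code_regs (&condreg1, &condreg2);
  cc_reg_1 = gen_rtx_REG (CCmode, condreg1);
  prev = prev_nonnote_nondebug_insn (insn);
  if (!reg_referenced_p (cc_reg_1, PATTERN (insn))
      || !prev
      || !modified_in_p (cc_reg_1, prev))
    return;

  /* Different microarchitectures support macro fusion for different
     combinations of insn pairs.  */
  if (!targetm.sched.macro_fusion_pair_p
      || !targetm.sched.macro_fusion_pair_p (prev, insn))
    return;

  SCHED_GROUP_P (insn) = 1;
}
```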

>
>> +
>> +  /* Different microarchitectures support macro fusions for different
>> +     combinations of insn pairs.  */
>> +  if (!targetm.sched.macro_fusion_pair_p
>> +      || !targetm.sched.macro_fusion_pair_p (prev, insn))
>> +    return;
>> +
>> +  SCHED_GROUP_P (insn) = 1;
>
> I'm surprised that SCHED_GROUP_P worked -- I've tried to do similar stuff in
> the past and ran into numerous problems trying to hijack SCHED_GROUP_P for
> this kind of purpose.
>
>
>
>>
>>   static void haifa_init_only_bb (basic_block, basic_block);
>> diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c
>> index e1a2dce..156359e 100644
>> --- a/gcc/sched-rgn.c
>> +++ b/gcc/sched-rgn.c
>> @@ -2443,6 +2443,8 @@ add_branch_dependences (rtx head, rtx tail)
>>        cc0 setters remain at the end because they can't be moved away from
>>        their cc0 user.
>>
>> +     Predecessors of SCHED_GROUP_P instructions at the end remain at the
>> +     end.
>> +
>>        COND_EXEC insns cannot be moved past a branch (see e.g. PR17808).
>>
>>        Insns setting TARGET_CLASS_LIKELY_SPILLED_P registers (usually
>> return
>> @@ -2465,7 +2467,8 @@ add_branch_dependences (rtx head, rtx tail)
>>   #endif
>>                   || (!reload_completed
>>                       && sets_likely_spilled (PATTERN (insn)))))
>> -        || NOTE_P (insn))
>> +        || NOTE_P (insn)
>> +        || (last != 0 && SCHED_GROUP_P (last)))
>>       {
>>         if (!NOTE_P (insn))
>>          {
>
> This looks like a straighforward bugfix and probably should go forward
> independent of this enhancement.

OK, I will separate it into another patch.

>
> Jeff
