Hi Segher, 

Thank you for testing the patch and approving it.

Before I commit it, I wanted to check with you on the changelog entry.

Please find the updated patch with the changelog entry. 
I have just removed the comment that said the checks for the PLUS/MINUS rtx codes were a kludge.

Let me know if it is OK.

Regards,
Venkat.

ChangeLog
---------

2015-05-05  Venkataramanan Kumar  <venkataramanan.ku...@amd.com>

        * combine.c (make_compound_operation): Remove checks for PLUS/MINUS
        rtx type.
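
To illustrate the effect of the change, here is a hand-written RTL sketch
(not actual combine output).  Combine used to rewrite shifts under a
scalar-integer PLUS/MINUS into multiplications, as if they formed a memory
address; with the patch, the shift form is kept outside of MEM:

;; Before the patch, combine produced this even outside an address:
(set (reg:DI 0)
     (plus:DI (mult:DI (reg:DI 1) (const_int 4))
              (reg:DI 2)))

;; After the patch, the ashift form is kept; the mult form is still
;; used inside a MEM, where it is the canonical address form:
(set (reg:DI 0)
     (plus:DI (ashift:DI (reg:DI 1) (const_int 2))
              (reg:DI 2)))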

---patch---

diff --git a/gcc/combine.c b/gcc/combine.c
index c04146a..9e3eb03 100644
--- a/gcc/combine.c
+++ b/gcc/combine.c
@@ -7723,9 +7723,8 @@ extract_left_shift (rtx x, int count)
    We try, as much as possible, to re-use rtl expressions to save memory.

    IN_CODE says what kind of expression we are processing.  Normally, it is
-   SET.  In a memory address (inside a MEM, PLUS or minus, the latter two
-   being kludges), it is MEM.  When processing the arguments of a comparison
-   or a COMPARE against zero, it is COMPARE.  */
+   SET.  In a memory address it is MEM.  When processing the arguments of
+   a comparison or a COMPARE against zero, it is COMPARE.  */

 rtx
 make_compound_operation (rtx x, enum rtx_code in_code)
@@ -7745,8 +7744,6 @@ make_compound_operation (rtx x, enum rtx_code in_code)
      but once inside, go back to our default of SET.  */

   next_code = (code == MEM ? MEM
-              : ((code == PLUS || code == MINUS)
-                 && SCALAR_INT_MODE_P (mode)) ? MEM
               : ((code == COMPARE || COMPARISON_P (x))
                  && XEXP (x, 1) == const0_rtx) ? COMPARE
               : in_code == COMPARE ? SET : in_code);

---patch---
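
As discussed in the thread below, backends that only match the mult form in
integer (non-address) patterns may need a shift-form twin of each such
pattern.  A rough sketch of what the pair could look like in an
AArch64-style machine description (simplified and hypothetical: DImode
only, predicate names illustrative, not the actual aarch64.md patterns):

;; Pattern matching the mult form combine used to produce outside MEM.
(define_insn "*add_mul_imm_di_sketch"
  [(set (match_operand:DI 0 "register_operand" "=r")
        (plus:DI (mult:DI (match_operand:DI 1 "register_operand" "r")
                          (match_operand:DI 2 "power_of_two_operand" "n"))
                 (match_operand:DI 3 "register_operand" "r")))]
  ""
  ;; Assumes an output modifier that prints the log2 of operand 2.
  "add\t%x0, %x3, %x1, lsl %p2")

;; Shift-form twin needed now that combine keeps the ashift.
(define_insn "*add_lsl_imm_di_sketch"
  [(set (match_operand:DI 0 "register_operand" "=r")
        (plus:DI (ashift:DI (match_operand:DI 1 "register_operand" "r")
                            (match_operand:DI 2 "shift_imm_operand" "n"))
                 (match_operand:DI 3 "register_operand" "r")))]
  ""
  "add\t%x0, %x3, %x1, lsl %2")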

 
-----Original Message-----
From: Segher Boessenkool [mailto:seg...@kernel.crashing.org] 
Sent: Friday, May 01, 2015 8:48 PM
To: Kumar, Venkataramanan
Cc: Jeff Law (l...@redhat.com); gcc-patches@gcc.gnu.org; 
maxim.kuvyr...@linaro.org
Subject: Re: [RFC]: Remove Mem/address type assumption in combiner

On Wed, Apr 29, 2015 at 12:03:35PM -0500, Segher Boessenkool wrote:
> On Wed, Apr 29, 2015 at 09:25:21AM +0000, Kumar, Venkataramanan wrote:
> > diff --git a/gcc/combine.c b/gcc/combine.c index 5c763b4..945abdb 
> > 100644
> > --- a/gcc/combine.c
> > +++ b/gcc/combine.c
> > @@ -7703,8 +7703,6 @@ make_compound_operation (rtx x, enum rtx_code in_code)
> >       but once inside, go back to our default of SET.  */
> > 
> >    next_code = (code == MEM ? MEM
> > -              : ((code == PLUS || code == MINUS)
> > -                 && SCALAR_INT_MODE_P (mode)) ? MEM
> >                : ((code == COMPARE || COMPARISON_P (x))
> >                   && XEXP (x, 1) == const0_rtx) ? COMPARE
> >                : in_code == COMPARE ? SET : in_code);
> > 
> > 
> > On x86_64, the patch passes bootstrap and regression tests.
> > On AArch64 the test in the PR passed, but I got a few test case failures.
> 
> > There are a few patterns in the AArch64 backend that use multiplication
> > operations to match what the combiner generated.  Those patterns now have
> > to be fixed to use shifts.  We also need to see the impact on other
> > targets.
> 
> Right.  It would be good if you could find out for what targets it matters.
> The thing is, if a target expects only the patterns as combine used to 
> make them, it will regress (as you've seen on aarch64); but it will 
> regress _silently_.  Which isn't so nice.
> 
> > But before that I wanted to check whether the assumption in the combiner
> > can simply be removed?
> 
> Seeing for what targets / patterns it makes a difference would tell us 
> the answer to that, too :-)
> 
> I'll run some tests with and without your patch.

So I ran the tests.  These are text sizes for vmlinux built for most
architectures (before and after the patch, respectively).  Results are good,
but they show that many targets need some work...


BIGGER
 5410496   5432744   alpha
 3848987   3849143   arm
 2190276   2190292   blackfin
 2188431   2191503   c6x (but data shrank by more than text grew)
 2212322   2212706   cris
10896454  10896965   i386
 7471891   7488771   parisc
 6168799   6195391   parisc64
 1545840   1545872   shnommu
 1946649   1954149   xtensa

I looked at some of those.  Alpha regresses with s4add things, arm with
add,lsl things, parisc with shladd things.  I tried to fix the parisc one (it
seemed simplest), and the 32-bit kernel got most (but not all) of the size 
difference back; and the 64-bit wouldn't build (an assert in the kernel failed, 
and it uses a floating point reg where an integer one is needed -- I gave up).


SMALLER
 8688171   8688003   powerpc
20252605  20251797   powerpc64
11425334  11422558   s390
12300135  12299767   x86_64

The PowerPC differences are mostly what first was rlwinm;addi;rlwinm;lwz and 
now is rlwinm;lwz; i.e. the add is moved after the two rotates, the rotates are 
merged, and the add is made part of the offset in the load.


NO DIFF
 3321492   3321492   frv
 3252747   3252747   m32r
 4708232   4708232   microblaze
 3949145   3949145   mips
 5693019   5693019   mips64
 2373441   2373441   mn10300
 6489846   6489846   sh64
 3733749   3733749   sparc
 6075122   6075122   sparc64

Also quite a few without any diff.  Good :-)  Some of those might have 
unnecessary add/mult patterns now, but they also have the add/shift.


I didn't see any big differences, and all are of the expected kind.
Some targets need some love (but what else is new).

The patch is approved for trunk.

Thanks,


Segher
