On Wed, Feb 11, 2015 at 4:55 AM, Jeff Law <l...@redhat.com> wrote:
>
> This PR was originally a minor issue where we regressed on this kind of
> sequence:
>
> typedef struct toto_s *toto_t;
> toto_t add (toto_t a, toto_t b) {
>   int64_t tmp = (int64_t)(intptr_t)a + ((int64_t)(intptr_t)b&~1L);
>   return (toto_t)(intptr_t) tmp;
> }
>
>
> There was talk of trying to peephole this in the x86 backend.  But later
> Jakub speculated that if we had good type narrowing this could be done in
> the tree optimizers...
>
> Soooo, here we go.  I didn't do anything with logicals as those are already
> handled elsewhere in match.pd.  I didn't try to handle MULT; in the early
> experiments I did, it was a loss because of the existing mechanisms for
> widening multiplications.
>
> Interestingly enough, this patch seems to help out libjava more than
> anything else in a GCC build and it really only helps a few routines. There
> weren't any routines I could see where the code regressed after this patch.
> This is probably an indicator that these things aren't *that* common, or the
> existing shortening code is better than we thought, or some important
> shortening case is missing.

Cool that we are trying to simplify type conversions using the generic
match facility.  I have thought about type promotion in match.pd too.
For example, in (unsigned long long)(unsigned long)(int_expr), if we can
prove int_expr is always non-negative (in my case, from VRP
information), then the first conversion can be saved.  This is another
way of doing (and perhaps related to? I didn't look at the code) the
sign/zero extension elimination work using VRP, I suppose?

Thanks,
bin
>
>
> I think we should pull the other tests from 47477 which are not regressions
> out into their own bug for future work.  Or alternately, when this fix is
> checked in remove the regression marker in 47477.
>
>
> Bootstrapped and regression tested on x86_64-unknown-linux-gnu.  OK for the
> trunk?
>
> diff --git a/gcc/ChangeLog b/gcc/ChangeLog
> index 7f3816c..7a95029 100644
> --- a/gcc/ChangeLog
> +++ b/gcc/ChangeLog
> @@ -1,3 +1,8 @@
> +2015-02-10  Jeff Law  <l...@redhat.com>
> +
> +       * match.pd (convert (plus/minus (convert @0) (convert @1))): New
> +       simplifier to narrow arithmetic.
> +
>  2015-02-10  Richard Biener  <rguent...@suse.de>
>
>         PR tree-optimization/64909
> diff --git a/gcc/match.pd b/gcc/match.pd
> index 81c4ee6..abc703e 100644
> --- a/gcc/match.pd
> +++ b/gcc/match.pd
> @@ -1018,3 +1018,21 @@ along with GCC; see the file COPYING3.  If not see
>     (logs (pows @0 @1))
>     (mult @1 (logs @0)))))
>
> +/* If we have a narrowing conversion of an arithmetic operation where
> +   both operands are widening conversions from the same type as the outer
> +   narrowing conversion, then convert the innermost operands to a suitable
> +   unsigned type (to avoid introducing undefined behaviour), perform the
> +   operation and convert the result to the desired type.
> +
> +   This narrows the arithmetic operation.  */
> +(for op (plus minus)
> +  (simplify
> +    (convert (op (convert@2 @0) (convert @1)))
> +    (if (TREE_TYPE (@0) == TREE_TYPE (@1)
> +         && TREE_TYPE (@0) == type
> +         && INTEGRAL_TYPE_P (type)
> +         && TYPE_PRECISION (TREE_TYPE (@2)) > TYPE_PRECISION (TREE_TYPE (@0))
> +        /* This prevents infinite recursion.  */
> +        && unsigned_type_for (TREE_TYPE (@0)) != TREE_TYPE (@2))
> +      (with { tree utype = unsigned_type_for (TREE_TYPE (@0)); }
> +        (convert (op (convert:utype @0) (convert:utype @1)))))))
> diff --git a/gcc/testsuite/ChangeLog b/gcc/testsuite/ChangeLog
> index 15d5e2d..76e5254 100644
> --- a/gcc/testsuite/ChangeLog
> +++ b/gcc/testsuite/ChangeLog
> @@ -1,3 +1,8 @@
> +2015-02-10  Jeff Law  <l...@redhat.com>
> +
> +       PR rtl-optimization/47477
> +       * gcc.dg/tree-ssa/narrow-arith-1.c: New test.
> +
>  2015-02-10  Richard Biener  <rguent...@suse.de>
>
>         PR tree-optimization/64909
> diff --git a/gcc/testsuite/gcc.dg/tree-ssa/narrow-arith-1.c b/gcc/testsuite/gcc.dg/tree-ssa/narrow-arith-1.c
> new file mode 100644
> index 0000000..104cb6f5
> --- /dev/null
> +++ b/gcc/testsuite/gcc.dg/tree-ssa/narrow-arith-1.c
> @@ -0,0 +1,22 @@
> +/* PR tree-optimization/47477 */
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -fdump-tree-optimized -w" } */
> +/* { dg-require-effective-target ilp32 } */
> +
> +typedef int int64_t __attribute__ ((__mode__ (__DI__)));
> +typedef int * intptr_t;
> +
> +typedef struct toto_s *toto_t;
> +toto_t add (toto_t a, toto_t b) {
> +  int64_t tmp = (int64_t)(intptr_t)a + ((int64_t)(intptr_t)b&~1L);
> +  return (toto_t)(intptr_t) tmp;
> +}
> +
> +/* For an ILP32 target there'll be 6 casts when we start, but just 4
> +   if the match.pd pattern is successfully matched.  */
> +/* { dg-final { scan-tree-dump-times "= \\(int\\)" 1 "optimized" } } */
> +/* { dg-final { scan-tree-dump-times "= \\(unsigned int\\)" 2 "optimized" } } */
> +/* { dg-final { scan-tree-dump-times "= \\(struct toto_s \\*\\)" 1 "optimized" } } */
> +/* { dg-final { cleanup-tree-dump "optimized" } } */
> +
> +
>
