Oops, I missed updating the patch.

Kai

----- Original Message -----
From: "Kai Tietz" <kti...@redhat.com>
To: "Richard Guenther" <richard.guent...@gmail.com>
Cc: gcc-patches@gcc.gnu.org
Sent: Monday, June 27, 2011 7:04:04 PM
Subject: Re: [patch tree-optimization]: Try to sink type-casts for binary 
and/or/xor operations

Hi,

so I modified the patch to use int_fits_type_p() for the integer CST check.
As discussed on IRC, this approach is suboptimal: my initial approach also
handled AND operations where the precision of type is greater than the
precision of type-x, type-x is unsigned, and the constant value is bigger
than (type-x) ~0.  Those cases are now missed by the int_fits_type_p()
approach, too, and we also miss the cases where type is signed and type-x is
unsigned with the same precision.
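
For illustration only (not part of the patch), a case of the kind described
above, assuming type = int and type-x = unsigned char: the constant 0x100
does not fit in unsigned char, so int_fits_type_p() rejects the sinking,
although the AND of a zero-extended unsigned char with 0x100 is still exact
(it is always 0).

/* Example only: an AND case the int_fits_type_p() check now misses.  */
int
and_missed_case (unsigned char x)
{
  /* (int) x can only have bits 0-7 set, so the result is always 0, but
     0x100 is bigger than (unsigned char) ~0, so the constant does not
     fit in type-x and the cast is no longer sunk.  */
  return (int) x & 0x100;
}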

Anyway ... here is the updated patch

Regards,
Kai

----- Original Message -----
From: "Richard Guenther" <richard.guent...@gmail.com>
To: "Kai Tietz" <kti...@redhat.com>
Cc: gcc-patches@gcc.gnu.org
Sent: Monday, June 27, 2011 4:08:41 PM
Subject: Re: [patch tree-optimization]: Try to sink type-casts for binary 
and/or/xor operations

On Mon, Jun 27, 2011 at 3:46 PM, Kai Tietz <kti...@redhat.com> wrote:
> Hello,
>
> this patch sinks type conversions in forward propagation for the following
> patterns:
> - ((type) X) op ((type) Y): if X and Y have compatible types.
> - ((type) X) op CST: if the conversion (type) ((type-x) CST) == CST holds
>   and X has integral type.
> - CST op ((type) X): if the conversion (type) ((type-x) CST) == CST holds
>   and X has integral type.
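
For illustration only (not part of the patch): source-level shapes of the
three patterns above, assuming short for type-x and int for type.

/* Example only: the three patterns the patch handles.  */
int f1 (short x, short y) { return (int) x & (int) y; } /* ((type) X) op ((type) Y) */
int f2 (short x)          { return (int) x & 0x7f; }    /* ((type) X) op CST */
int f3 (short x)          { return 0x7f & (int) x; }    /* CST op ((type) X) */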

See IRC comments.

> Additionally, it fixes another issue exposed by this type sinking in bswap
> detection.  The bswap pattern-matching algorithm takes the first hit and
> does not try to find the best one.  So we now search twice: first for the
> DImode case (if present) and then for the SImode case.
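
For illustration only: an open-coded byte swap of the kind the bswap pass
recognizes; with the change, the DImode (64-bit) pattern is searched first
and the SImode (32-bit) pattern second, so the widest match wins when both
are present.

/* Example only: hand-written 32-bit byte swap that the pass can detect
   and replace with a bswap built-in when the target supports it.  */
unsigned int
my_bswap32 (unsigned int x)
{
  return ((x & 0x000000ffU) << 24)
       | ((x & 0x0000ff00U) << 8)
       | ((x & 0x00ff0000U) >> 8)
       | ((x & 0xff000000U) >> 24);
}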

Please split this piece out.  I suppose either walking over stmts backwards
or simply handling __builtin_bswap in find_bswap_1 would be a better
fix than yours.

Richard.

> ChangeLog
>
> 2011-06-27  Kai Tietz  <kti...@redhat.com>
>
>        * tree-ssa-forwprop.c (simplify_bitwise_binary): Improve
>        type sinking.
>        * tree-ssa-math-opts.c (execute_optimize_bswap): Separate
>        search for di/si mode patterns for finding widest match.
>
> Bootstrapped and regression tested for x86_64-pc-linux-gnu.  Ok for apply?
>
> Regards,
> Kai
>
Index: gcc-head/gcc/tree-ssa-forwprop.c
===================================================================
--- gcc-head.orig/gcc/tree-ssa-forwprop.c
+++ gcc-head/gcc/tree-ssa-forwprop.c
@@ -1624,30 +1624,54 @@ simplify_bitwise_binary (gimple_stmt_ite
   /* If the first argument is an SSA name that is itself a result of a
      typecast of an ADDR_EXPR to an integer, feed the ADDR_EXPR to the
      folder rather than the ssa name.  */
-  if (code == BIT_AND_EXPR
-      && TREE_CODE (arg2) == INTEGER_CST
+  if (TREE_CODE (arg2) == INTEGER_CST
       && TREE_CODE (arg1) == SSA_NAME)
     {
       gimple def = SSA_NAME_DEF_STMT (arg1);
       tree op = arg1;
+      tree opp = NULL_TREE;
+      tree folded_int = NULL_TREE;
 
-      /* ???  This looks bogus - the conversion could be truncating.  */
       if (is_gimple_assign (def)
          && CONVERT_EXPR_CODE_P (gimple_assign_rhs_code (def))
          && INTEGRAL_TYPE_P (TREE_TYPE (arg1)))
        {
-         tree opp = gimple_assign_rhs1 (def);
+         opp = gimple_assign_rhs1 (def);
+         if (INTEGRAL_TYPE_P (TREE_TYPE (opp))
+             && int_fits_type_p (arg2, TREE_TYPE (opp)))
+           folded_int = fold_convert_loc (gimple_location (stmt),
+                                          TREE_TYPE (opp), arg2);
+         /* ???  This looks bogus - the conversion could be truncating.  */
          if (TREE_CODE (opp) == ADDR_EXPR)
            op = opp;
        }
+      if (code == BIT_AND_EXPR)
+        {
+         res = fold_binary_loc (gimple_location (stmt),
+                                BIT_AND_EXPR,
+                                TREE_TYPE (gimple_assign_lhs (stmt)),
+                                op, arg2);
+         if (res && is_gimple_min_invariant (res))
+           {
+             gimple_assign_set_rhs_from_tree (gsi, res);
+             update_stmt (stmt);
+             return true;
+           }
+       }
 
-      res = fold_binary_loc (gimple_location (stmt),
-                            BIT_AND_EXPR, TREE_TYPE (gimple_assign_lhs (stmt)),
-                            op, arg2);
-      if (res && is_gimple_min_invariant (res))
-       {
-         gimple_assign_set_rhs_from_tree (gsi, res);
-         update_stmt (stmt);
+      /* Convert (type) X op CST -> (type) (X op (typeof-X) CST),
+         if the conversion of CST is reversible.  */
+      if (opp != NULL_TREE && folded_int != NULL_TREE)
+        {
+         gimple newop;
+         tree tem = create_tmp_reg (TREE_TYPE (opp), NULL);
+         newop = gimple_build_assign_with_ops (code, tem, opp, folded_int);
+         tem = make_ssa_name (tem, newop);
+         gimple_assign_set_lhs (newop, tem);
+         gsi_insert_before (gsi, newop, GSI_SAME_STMT);
+         gimple_assign_set_rhs_with_ops_1 (gsi, NOP_EXPR,
+                                           tem, NULL_TREE, NULL_TREE);
+         update_stmt (gsi_stmt (*gsi));
          return true;
        }
     }
@@ -1682,10 +1706,11 @@ simplify_bitwise_binary (gimple_stmt_ite
   if (CONVERT_EXPR_CODE_P (def1_code)
       && CONVERT_EXPR_CODE_P (def2_code)
       && types_compatible_p (TREE_TYPE (def1_arg1), TREE_TYPE (def2_arg1))
-      /* Make sure that the conversion widens the operands or that it
-        changes the operation to a bitfield precision.  */
+      /* Make sure that the conversion widens the operands or keeps the
+        same precision, or that it changes the operation to a bitfield
+        precision.  */
       && ((TYPE_PRECISION (TREE_TYPE (def1_arg1))
-          < TYPE_PRECISION (TREE_TYPE (arg1)))
+          <= TYPE_PRECISION (TREE_TYPE (arg1)))
          || (GET_MODE_CLASS (TYPE_MODE (TREE_TYPE (arg1)))
              != MODE_INT)
          || (TYPE_PRECISION (TREE_TYPE (arg1))
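
For illustration only (not part of the patch): with the <= change above, a
sign-changing conversion of the same precision is now also sunk, for example:

/* Example only: both operands are converted from unsigned int to int
   (same precision), so the operation can be done on the unsigned
   operands and converted once:
     (int) u & (int) v  ->  (int) (u & v)  */
int
and_same_precision (unsigned int u, unsigned int v)
{
  return (int) u & (int) v;
}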
