Hello,

This revised patch adds support to tree-ssa-reassoc for an initial
normalization of bitwise operations, so that the reassociation pass
can achieve better folding results.  It also performs a final
denormalization of bitwise operations after the reassociation pass
has completed, to restore the optimized form.

This patch is a bit large, but I see no good point in splitting it
into smaller pieces, since the normalization and denormalization of
bitwise operations are bound pretty tightly to each other.

I added additional testcases that also check the denormalization pass.

Normalization transforms the following patterns:

 - ~ (X | Y) -> ~X & ~Y
 - ~ (X & Y) -> ~X | ~Y
 - ~ (X ^ Y) -> ~X ^ Y
 - ~ (X ^ CST) -> X ^ CST'; with CST'=~CST
 - ~ (X cmp Y) -> X cmp' Y; with cmp' = inverted comparison of cmp
 - ((X cmp Y) != 0) -> X cmp Y
 - ((X cmp Y) == 0) -> X cmp' Y; with cmp' = inverted comparison of cmp
 - ~ (~X) -> X
 - (X | Y) != 0 -> (X != 0) | (Y != 0)
 - (X | Y) == 0 -> (X == 0) & (Y == 0)

By this, even more complex statements, like those produced for this code:

  int foo (int a, int b, int c, int d)
  {
    int r1 = a != 0 & c != 0 & b != 0;
    int r2 = a == 0 | b != 0 | d == 0;
    return (r1 != 0 & r2 == 0);
  }

get transformed in GIMPLE to (as pseudo code)

    return (a != 0 & c != 0 & b != 0 & a != 0 & b == 0 & d != 0);

which can be optimized to the constant result zero, since the terms
b != 0 and b == 0 contradict each other.
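
As a minimal sketch of a single pattern from the normalization list
(a hypothetical fragment, not one of the attached testcases):

  /* ~(x | y) gets normalized to ~x & ~y, so each operand can be
     reassociated and folded independently.  */
  int not_ior (int x, int y)
  {
    return ~(x | y);
  }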


The denormalization pass transforms the following patterns:

 - ~X & ~Y -> ~ (X | Y)
 - ~X & CST -> ~ (X | CST'); with CST'=~CST
 - ~X | ~Y -> ~ (X & Y)
 - ~X | CST -> ~ (X & CST'); with CST'=~CST
 - ~X ^ Y -> ~ (X ^ Y)
 - ~X ^ CST -> X ^ CST'; with CST'=~CST
 - (X != 0) | (Y != 0) -> (X | Y) != 0
 - (X == 0) & (Y == 0) -> (X | Y) == 0
 - (X != ~0) | (Y != ~0) -> (X & Y) != ~0
 - (X == ~0) & (Y == ~0) -> (X & Y) == ~0
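
As a sketch of the reverse direction (again a hypothetical fragment,
not from the testsuite):

  /* After reassociation, ~x & ~y gets rewritten back to ~(x | y),
     saving one bitwise-not operation.  */
  int not_and (int x, int y)
  {
    return ~x & ~y;
  }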

ChangeLog

2011-09-21  Kai Tietz  <kti...@redhat.com>

        * tree-ssa-reassoc.c (build_and_add_sum): Add forward
        declaration and add support for unary expressions.
        (remove_visited_stmt_chain): Add forward declaration.
        (push_bitwise_op): New helper function.
        (remove_bitwise_op): Likewise.
        (operate_bitwise_stmt): Likewise.
        (repropagate_bitwise): Likewise.
        (operate_bitwise_xor_stmt): Likewise.
        (make_new_tmp_statement): Likewise.
        (expand_not_bitwise_binary): Likewise.
        (get_bitwise_single_use_root): Likewise.
        (is_bitwise_not): Likewise.
        (walk_bitwise_stmt_elems): Likewise.
        (expand_cmp_ior): Likewise.
        (rebuild_vector_tree): Likewise.
        (break_up_bitwise_combined_stmt): Likewise.
        (merge_bitwise_compares): Likewise.
        (bitwise_ops): New static variable.
        (break_up_subtract_bb): Rename to break_up_expr_bb and
        add call to break_up_bitwise_combined_stmt.
        (do_reassoc): Adjust call from break_up_subtract_bb to
        break_up_expr_bb.
        (init_reassoc): Initialize bitwise_ops vector.
        (fini_reassoc): Destroy bitwise_ops vector.
        (execute_reassoc): Add call to repropagate_bitwise.


ChangeLog gcc/testsuite

2011-09-21  Kai Tietz  <kti...@redhat.com>

        * gcc.dg/tree-ssa/reassoc-26.c: New test.
        * gcc.dg/tree-ssa/reassoc-27.c: New test.
        * gcc.dg/tree-ssa/reassoc-28.c: New test.
        * gcc.dg/tree-ssa/reassoc-29.c: New test.
        * gcc.dg/tree-ssa/reassoc-30.c: New test.
        * gcc.dg/tree-ssa/reassoc-31.c: New test.
        * gcc.dg/tree-ssa/reassoc-32.c: New test.
        * gcc.dg/tree-ssa/reassoc-33.c: New test.
        * gcc.dg/tree-ssa/reassoc-34.c: New test.
        * gcc.dg/tree-ssa/reassoc-35.c: New test.

Bootstrapped and regression-tested for all languages (including Ada
and Obj-C++) on host x86_64-pc-linux-gnu.
Ok to apply?

Regards,
Kai

Index: gcc/gcc/tree-ssa-reassoc.c
===================================================================
--- gcc.orig/gcc/tree-ssa-reassoc.c
+++ gcc/gcc/tree-ssa-reassoc.c
@@ -43,6 +43,10 @@ along with GCC; see the file COPYING3.
 #include "target.h"
 #include "params.h"

+/* Forward declarations.  */
+static gimple build_and_add_sum (tree, tree, tree, enum tree_code);
+static void remove_visited_stmt_chain (tree);
+
 /*  This is a simple global reassociation pass.  It is, in part, based
     on the LLVM pass of the same name (They do some things more/less
     than we do, in different orders, etc).
@@ -50,7 +54,9 @@ along with GCC; see the file COPYING3.
     It consists of five steps:

     1. Breaking up subtract operations into addition + negate, where
-    it would promote the reassociation of adds.
+    it would promote the reassociation of adds.  Additionally, breaking
+    up combined expressions made out of boolean-typed bitwise expressions
+    to improve simplification.

     2. Left linearization of the expression trees, so that (A+B)+(C+D)
     becomes (((A+B)+C)+D), which is easier for us to rewrite later.
@@ -194,7 +200,9 @@ static struct pointer_map_t *operand_ran

 /* Forward decls.  */
 static long get_rank (tree);
-
+static void push_bitwise_op (tree);
+static void remove_bitwise_op (tree);
+static void operate_bitwise_stmt (gimple, enum tree_code);

 /* Bias amount for loop-carried phis.  We want this to be larger than
    the depth of any reassociation tree we can see, but not larger than
@@ -556,6 +564,385 @@ get_unary_op (tree name, enum tree_code
   return NULL_TREE;
 }

+/* Create a temporary register expression with type TYPE, tree code CODE, and
+   operands OP1 and OP2.  If REF_DEF is a valid gimple statement, we use its
+   location for the newly generated temporary.
+   The function returns the left-hand side of the new temporary register.  */
+static tree
+make_new_tmp_statement (tree type, enum tree_code code, tree op1, tree op2,
+                       gimple ref_def)
+{
+  gimple sum;
+  tree tmpvar = create_tmp_reg (type, NULL);
+  add_referenced_var (tmpvar);
+  sum = build_and_add_sum (tmpvar, op1, op2, code);
+  if (ref_def)
+    gimple_set_location (sum, gimple_location (ref_def));
+  return gimple_get_lhs (sum);
+}
+
+/* Perform a logic-not operation on tree LHS, which has the optional
+   definition statement EXPR.  TYPE is a boolean type with
+   1-bit precision.  */
+static tree
+expand_not_bitwise_binary (tree type, tree lhs, gimple expr)
+{
+  enum tree_code code = ERROR_MARK;
+  tree op1 = NULL, op2 = NULL;
+  gimple s1 = NULL, s2 = NULL;
+
+  if (TREE_CODE (lhs) == INTEGER_CST)
+    return fold_build1 (BIT_NOT_EXPR, type, lhs);
+
+  if (expr && is_gimple_assign (expr))
+    code = gimple_assign_rhs_code (expr);
+
+  /* If statement lhs isn't a single-use statement,
+     we don't want to modify it. So we can do only default-case
+     operation for it.  */
+  if (code != ERROR_MARK && !has_single_use (lhs))
+    code = ERROR_MARK;
+
+  if (TREE_CODE_CLASS (code) == tcc_comparison
+      || code == BIT_XOR_EXPR
+      || code == BIT_AND_EXPR
+      || code == BIT_IOR_EXPR)
+    {
+      op1 = gimple_assign_rhs1 (expr);
+      op2 = gimple_assign_rhs2 (expr);
+    }
+  else if (code == BIT_NOT_EXPR)
+    op1 = gimple_assign_rhs1 (expr);
+  else
+    return make_new_tmp_statement (TREE_TYPE (lhs), BIT_NOT_EXPR,
+                                  lhs, NULL_TREE, expr);
+
+  /* ~(~X) -> X.  */
+  if (code == BIT_NOT_EXPR)
+    return op1;
+
+  /* Invert comparison if possible, otherwise fall through to
+     default case.  */
+  else if (TREE_CODE_CLASS (code) == tcc_comparison)
+    {
+      enum tree_code ncode;
+      tree op1type = TREE_TYPE (op1);
+
+      ncode = invert_tree_comparison (code,
+                                     HONOR_NANS (TYPE_MODE (op1type)));
+      if (ncode != ERROR_MARK)
+       return make_new_tmp_statement (type, ncode, op1, op2, expr);
+    }
+  /* ~(A & B) -> ~A | ~B.  */
+  else if (code == BIT_AND_EXPR)
+    {
+      if (TREE_CODE (op1) != SSA_NAME
+         || !(s1 = SSA_NAME_DEF_STMT (op1))
+         || !is_gimple_assign (s1)
+         || !has_single_use (op1))
+       s1 = NULL;
+      if (TREE_CODE (op2) != SSA_NAME
+         || !(s2 = SSA_NAME_DEF_STMT (op2))
+         || !is_gimple_assign (s2)
+         || !has_single_use (op2))
+       s2 = NULL;
+
+      /* We have to do inversion in one step to avoid looping.  */
+      op1 = expand_not_bitwise_binary (type, op1, s1);
+      op2 = expand_not_bitwise_binary (type, op2, s2);
+      if (TREE_CODE (op2) == INTEGER_CST && integer_zerop (op2))
+        {
+         remove_visited_stmt_chain (op2);
+         return op1;
+       }
+      else if (TREE_CODE (op2) == INTEGER_CST && integer_all_onesp (op2))
+        {
+         remove_visited_stmt_chain (op1);
+         return op2;
+       }
+      return make_new_tmp_statement (type, BIT_IOR_EXPR, op1, op2, expr);
+    }
+  /* ~(A | B) -> ~A & ~B.  */
+  else if (code == BIT_IOR_EXPR)
+    {
+      if (TREE_CODE (op1) != SSA_NAME
+         || !(s1 = SSA_NAME_DEF_STMT (op1))
+         || !is_gimple_assign (s1)
+         || !has_single_use (op1))
+       s1 = NULL;
+      if (TREE_CODE (op2) != SSA_NAME
+         || !(s2 = SSA_NAME_DEF_STMT (op2))
+         || !is_gimple_assign (s2)
+         || !has_single_use (op2))
+       s2 = NULL;
+      op1 = expand_not_bitwise_binary (type, op1, s1);
+      op2 = expand_not_bitwise_binary (type, op2, s2);
+      if (TREE_CODE (op2) == INTEGER_CST && integer_zerop (op2))
+        {
+         remove_visited_stmt_chain (op1);
+         return op2;
+       }
+      else if (TREE_CODE (op2) == INTEGER_CST && integer_all_onesp (op2))
+        {
+         remove_visited_stmt_chain (op2);
+         return op1;
+       }
+      return make_new_tmp_statement (type, BIT_AND_EXPR, op1, op2, expr);
+    }
+  /* ~(A ^ B) -> ~A ^ B.  Handle here special cases for B being
+     an integer constant, or being a logical-not.  */
+  else if (code == BIT_XOR_EXPR)
+    {
+      if (TREE_CODE (op1) != SSA_NAME
+         || !(s1 = SSA_NAME_DEF_STMT (op1))
+         || !is_gimple_assign (s1)
+         || !has_single_use (op1))
+       s1 = NULL;
+      if (TREE_CODE (op2) != SSA_NAME
+         || !(s2 = SSA_NAME_DEF_STMT (op2))
+         || !is_gimple_assign (s2)
+         || !has_single_use (op2))
+       s2 = NULL;
+      if (TREE_CODE (op2) != INTEGER_CST
+         && (!s2 || gimple_assign_rhs_code (s2) != BIT_NOT_EXPR))
+        op1 = expand_not_bitwise_binary (type, op1, s1);
+      else
+        op2 = expand_not_bitwise_binary (type, op2, s2);
+
+      if (TREE_CODE (op2) == INTEGER_CST && integer_zerop (op2))
+        {
+         remove_visited_stmt_chain (op2);
+         return op1;
+       }
+
+      if (TREE_CODE (op2) == INTEGER_CST && integer_all_onesp (op2))
+        {
+         remove_visited_stmt_chain (op2);
+         return make_new_tmp_statement (type, BIT_NOT_EXPR, op1,
+                                        NULL_TREE, expr);
+       }
+      return make_new_tmp_statement (type, BIT_XOR_EXPR, op1, op2, expr);
+    }
+
+  if (!types_compatible_p (type, TREE_TYPE (lhs)))
+    abort ();
+
+  /* Default case: LHS -> ~LHS.  */
+  return make_new_tmp_statement (TREE_TYPE (lhs), BIT_NOT_EXPR, lhs,
+                                NULL_TREE, expr);
+}
+
+/* Helper routine to expand X CODE 0.
+   Cases are
+    (X | Y) == 0 to (X == 0) & (Y == 0)
+    (X | Y) != 0 to (X != 0) | (Y != 0).
+    (X cmp Y) == 0 to (X cmp' Y) with cmp'=invert of cmp.
+    (X cmp Y) != 0 to (X cmp Y).
+
+   The CODE variable can be either NE_EXPR, or EQ_EXPR.  It indicates
+   what expansion is performed.  */
+static tree
+expand_cmp_ior (tree op, tree type, enum tree_code code)
+{
+  tree op1, op2;
+  gimple s = NULL;
+  enum tree_code hcode;
+
+  if (TREE_CODE (op) == INTEGER_CST)
+    {
+      if (code == EQ_EXPR)
+        return fold_convert (type, (integer_zerop (op) ? integer_one_node
+                                                       : integer_zero_node));
+      return fold_convert (type, (!integer_zerop (op) ? integer_one_node
+                                                     : integer_zero_node));
+    }
+  if (TREE_CODE (op) != SSA_NAME
+      || !(s = SSA_NAME_DEF_STMT (op))
+      || !is_gimple_assign (s)
+      || !has_single_use (op))
+    return make_new_tmp_statement (type, code, op,
+                                  build_zero_cst (TREE_TYPE (op)), s);
+  hcode = gimple_assign_rhs_code (s);
+
+  if (TREE_CODE_CLASS (hcode) != tcc_comparison
+      && hcode != BIT_IOR_EXPR)
+    return make_new_tmp_statement (type, code, op,
+                                  build_zero_cst (TREE_TYPE (op)), s);
+
+  op1 = gimple_assign_rhs1 (s);
+  op2 = gimple_assign_rhs2 (s);
+
+  if (TREE_CODE_CLASS (hcode) == tcc_comparison)
+    {
+      tree op1type = TREE_TYPE (op1);
+
+      if (code == EQ_EXPR)
+        {
+         enum tree_code ncode;
+         ncode = invert_tree_comparison (hcode,
+                                         HONOR_NANS (TYPE_MODE (op1type)));
+         if (ncode != ERROR_MARK)
+           return make_new_tmp_statement (type, ncode, op1, op2, s);
+        }
+      else
+        return make_new_tmp_statement (type, hcode, op1, op2, s);
+    }
+  if (hcode == BIT_IOR_EXPR)
+    {
+      op1 = expand_cmp_ior (op1, type, code);
+      op2 = expand_cmp_ior (op2, type, code);
+      return make_new_tmp_statement (type, (code == EQ_EXPR ? BIT_AND_EXPR
+                                                           : BIT_IOR_EXPR),
+                                    op1, op2, s);
+    }
+  return make_new_tmp_statement (type, code, op,
+                                build_zero_cst (TREE_TYPE (op)), s);
+}
+
+/* Break up statement STMT if it is a combined expression made out of
+   bitwise operations.  Handle expansion of (A | B) !=/== 0, and ~(A op B).  */
+static bool
+break_up_bitwise_combined_stmt (gimple stmt)
+{
+  tree op1, op2, old_op;
+  gimple op1_def, op2_def;
+  enum tree_code code = gimple_assign_rhs_code (stmt);
+  tree type = TREE_TYPE (gimple_assign_lhs (stmt));
+  gimple_stmt_iterator gsi;
+
+  op1 = gimple_assign_rhs1 (stmt);
+  old_op = op1;
+  op2 = NULL_TREE;
+
+  if (code == EQ_EXPR || code == NE_EXPR)
+    op2 = gimple_assign_rhs2 (stmt);
+
+  /* Check that the left-hand operand is a gimple assignment
+     and has a single use.  */
+  if ((code != BIT_NOT_EXPR
+       && code != EQ_EXPR && code != NE_EXPR)
+      || TREE_CODE (op1) != SSA_NAME)
+    return false;
+  op1_def = SSA_NAME_DEF_STMT (op1);
+  if (!op1_def
+      || !is_gimple_assign (op1_def)
+      || !has_single_use (op1))
+    return false;
+
+  if (code == NE_EXPR || code == EQ_EXPR)
+    {
+      tree inner_op1, inner_op2;
+
+      /* Check that the operands have integral type and that the
+         second argument is a constant zero-valued operand.  */
+      if (!INTEGRAL_TYPE_P (TREE_TYPE (op1))
+          || TREE_CODE (op2) != INTEGER_CST
+         || !integer_zerop (op2))
+       return false;
+
+      /* Check for the patterns (X | Y) == 0 and (X | Y) != 0.  */
+      if (gimple_assign_rhs_code (op1_def) != BIT_IOR_EXPR)
+       return false;
+
+      inner_op1 = gimple_assign_rhs1 (op1_def);
+      inner_op2 = gimple_assign_rhs2 (op1_def);
+
+      /* Expand (X | Y) == 0 -> (X == 0) & (Y == 0)
+         and (X | Y) != 0 -> (X != 0) | (Y != 0)  */
+
+      op1 = expand_cmp_ior (inner_op1, type, code);
+      op2 = expand_cmp_ior (inner_op2, type, code);
+
+      remove_bitwise_op (old_op);
+
+      gsi = gsi_for_stmt (stmt);
+      gimple_assign_set_rhs_with_ops (&gsi, (code == NE_EXPR ? BIT_IOR_EXPR
+                                                            : BIT_AND_EXPR),
+                                     op1, op2);
+      update_stmt (gsi_stmt (gsi));
+      remove_visited_stmt_chain (old_op);
+      push_bitwise_op (gimple_assign_lhs (stmt));
+      return true;
+    }
+  /* Handle expansion of ~X.  */
+  else if (code == BIT_NOT_EXPR)
+    {
+      enum tree_code inner_code = gimple_assign_rhs_code (op1_def);
+
+      if (inner_code == BIT_NOT_EXPR)
+        {
+         op1 = gimple_assign_rhs1 (op1_def);
+         gsi = gsi_for_stmt (stmt);
+         gimple_assign_set_rhs_from_tree (&gsi, op1);
+         update_stmt (gsi_stmt (gsi));
+         remove_visited_stmt_chain (old_op);
+         push_bitwise_op (gimple_assign_lhs (stmt));
+         return true;
+       }
+      else if (TREE_CODE_CLASS (inner_code) == tcc_comparison)
+        {
+         enum tree_code ncode;
+         tree op1type;
+
+         op1 = gimple_assign_rhs1 (op1_def);
+         op2 = gimple_assign_rhs2 (op1_def);
+         op1type = TREE_TYPE (op1);
+         ncode = invert_tree_comparison (inner_code,
+                                         HONOR_NANS (TYPE_MODE (op1type)));
+         if (ncode == ERROR_MARK)
+           return false;
+         gsi = gsi_for_stmt (stmt);
+         gimple_assign_set_rhs_with_ops (&gsi, ncode, op1, op2);
+         update_stmt (gsi_stmt (gsi));
+         remove_visited_stmt_chain (old_op);
+         return true;
+       }
+      if (inner_code != BIT_AND_EXPR && inner_code != BIT_IOR_EXPR
+          && inner_code != BIT_XOR_EXPR)
+       return false;
+
+      op1 = gimple_assign_rhs1 (op1_def);
+      op2 = gimple_assign_rhs2 (op1_def);
+
+      op1_def = op2_def = NULL;
+      if (TREE_CODE (op1) != SSA_NAME
+       || (op1_def = SSA_NAME_DEF_STMT (op1)) == NULL
+       || !is_gimple_assign (op1_def))
+       op1_def = NULL;
+      if (TREE_CODE (op2) != SSA_NAME
+       || (op2_def = SSA_NAME_DEF_STMT (op2)) == NULL
+       || !is_gimple_assign (op2_def))
+       op2_def = NULL;
+      /* Transform to the better representation: either ~(X ^ Y) -> ~X ^ Y,
+         or ~(X ^ CST) -> X ^ CST' with CST' = ~CST.  */
+      if (inner_code == BIT_XOR_EXPR)
+       {
+          if (TREE_CODE (op2) != INTEGER_CST
+             && (!op2_def || !is_gimple_assign (op2_def)
+                 || !has_single_use (op2)
+                 || gimple_assign_rhs_code (op2_def) != BIT_NOT_EXPR))
+            op1 = expand_not_bitwise_binary (type, op1, op1_def);
+          else
+            op2 = expand_not_bitwise_binary (type, op2, op2_def);
+       }
+      /* Transform ~(X | Y) -> ~X & ~Y, or ~(X & Y) -> ~X | ~Y.  */
+      else
+       {
+          op1 = expand_not_bitwise_binary (type, op1, op1_def);
+          op2 = expand_not_bitwise_binary (type, op2, op2_def);
+         inner_code = (inner_code == BIT_AND_EXPR ? BIT_IOR_EXPR
+                                                  : BIT_AND_EXPR);
+       }
+      gsi = gsi_for_stmt (stmt);
+      gimple_assign_set_rhs_with_ops (&gsi, inner_code, op1, op2);
+      update_stmt (gsi_stmt (gsi));
+      remove_visited_stmt_chain (old_op);
+      push_bitwise_op (gimple_assign_lhs (stmt));
+      return true;
+    }
+
+  return false;
+}
+
 /* If CURR and LAST are a pair of ops that OPCODE allows us to
    eliminate through equivalences, do so, remove them from OPS, and
    return true.  Otherwise, return false.  */
@@ -632,6 +1019,7 @@ eliminate_duplicate_pair (enum tree_code
 }

 static VEC(tree, heap) *plus_negates;
+static VEC(tree, heap) *bitwise_ops;

 /* If OPCODE is PLUS_EXPR, CURR->OP is a negate expression or a bitwise not
    expression, look in OPS for a corresponding positive operation to cancel
@@ -1017,8 +1405,8 @@ zero_one_operation (tree *def, enum tree
   while (1);
 }

-/* Builds one statement performing OP1 OPCODE OP2 using TMPVAR for
-   the result.  Places the statement after the definition of either
+/* Builds one statement performing OP1 OPCODE OP2, or OPCODE OP1, using
+   TMPVAR for the result.  Places the statement after the definition of either
    OP1 or OP2.  Returns the new statement.  */

 static gimple
@@ -1037,7 +1425,7 @@ build_and_add_sum (tree tmpvar, tree op1
   /* Find an insertion place and insert.  */
   if (TREE_CODE (op1) == SSA_NAME)
     op1def = SSA_NAME_DEF_STMT (op1);
-  if (TREE_CODE (op2) == SSA_NAME)
+  if (op2 && TREE_CODE (op2) == SSA_NAME)
     op2def = SSA_NAME_DEF_STMT (op2);
   if ((!op1def || gimple_nop_p (op1def))
       && (!op2def || gimple_nop_p (op2def)))
@@ -2204,6 +2592,758 @@ linearize_expr_tree (VEC(operand_entry_t
   add_to_ops_vec (ops, binrhs);
 }

+/* This function seeks the root assignment statement of LHS, which
+   is a bitwise expression.  Additionally, it does not walk over
+   non-single-use elements.  */
+static tree
+get_bitwise_single_use_root (tree lhs)
+{
+  gimple user;
+  if (TREE_CODE (lhs) != SSA_NAME
+      || !SSA_NAME_DEF_STMT (lhs)
+      || !is_gimple_assign (SSA_NAME_DEF_STMT (lhs))
+      || !has_single_use (lhs))
+    return lhs;
+  user = get_single_immediate_use (lhs);
+  if (!user)
+    return lhs;
+  do
+    {
+      if (!is_gimple_assign (user))
+        return lhs;
+      switch (gimple_assign_rhs_code (user))
+        {
+       case BIT_NOT_EXPR:
+       case BIT_XOR_EXPR:
+       case BIT_AND_EXPR:
+       case BIT_IOR_EXPR:
+         break;
+       default:
+         return lhs;
+       }
+      lhs = gimple_assign_lhs (user);
+    }
+  while ((user = get_single_immediate_use (lhs)) != NULL);
+
+  return lhs;
+}
+
+/* Save LHS in vector BITWISE_OPS, if it isn't
+   already stored there.  */
+static void
+push_bitwise_op (tree lhs)
+{
+  gimple user;
+
+  /* Check if we are on a root-element. */
+  if (has_single_use (lhs)
+      && (user = get_single_immediate_use (lhs)) != NULL
+      && is_gimple_assign (user))
+    {
+      enum tree_code ucode = gimple_assign_rhs_code (user);
+      if (ucode == BIT_AND_EXPR
+         || ucode == BIT_IOR_EXPR
+         || ucode == BIT_XOR_EXPR
+         || ucode == BIT_NOT_EXPR)
+       return;
+    }
+  /* Find root element.  */
+  lhs = get_bitwise_single_use_root (lhs);
+
+  /* Check that LHS is unique in vector BITWISE_OPS.  */
+  if (bitwise_ops != NULL)
+    {
+      unsigned int i = 0;
+      tree x;
+
+      FOR_EACH_VEC_ELT (tree, bitwise_ops, i, x)
+        {
+         if (x == lhs)
+           return;
+       }
+    }
+
+  VEC_safe_push (tree, heap, bitwise_ops, lhs);
+}
+
+/* Remove element LHS from vector
+   BITWISE_OPS.  */
+static void
+remove_bitwise_op (tree lhs)
+{
+  /* Search for LHS in vector BITWISE_OPS and remove it.  */
+  if (bitwise_ops != NULL)
+    {
+      unsigned int i = 0;
+      tree x;
+
+      FOR_EACH_VEC_ELT (tree, bitwise_ops, i, x)
+        {
+         if (x == lhs)
+           {
+             VEC_ordered_remove (tree, bitwise_ops, i);
+             return;
+           }
+       }
+    }
+}
+
+/* This routine returns TRUE if LHS is an assignment with
+   tree-code BIT_NOT_EXPR.  Otherwise it returns FALSE.  */
+
+static bool
+is_bitwise_not (tree lhs)
+{
+  gimple stmt;
+
+  if (TREE_CODE (lhs) != SSA_NAME)
+    return false;
+  stmt = SSA_NAME_DEF_STMT (lhs);
+  if (!stmt
+      || !is_gimple_assign (stmt))
+    return false;
+
+  return gimple_assign_rhs_code (stmt) == BIT_NOT_EXPR;
+}
+
+/* Helper function to sort the operands of the statement chain starting
+   at STMT with tree-code CODE: integral constants go into VCST,
+   not-expressions into VNOT, comparisons against zero into VCMP_ZERO,
+   comparisons against ~0 into VCMP_M1, and all other expressions
+   into VEXPR.  */
+static void
+walk_bitwise_stmt_elems (gimple stmt, enum tree_code code,
+                        VEC(tree, heap) **vcst,
+                        VEC(tree, heap) **vnot,
+                        VEC(tree, heap) **vcmp_zero,
+                        VEC(tree, heap) **vcmp_m1,
+                        VEC(tree, heap) **vexpr)
+{
+  gimple s;
+  tree l;
+
+  l = gimple_assign_rhs1 (stmt);
+  if (TREE_CODE (l) == INTEGER_CST)
+    VEC_safe_push (tree, heap, *vcst, l);
+  else if (TREE_CODE (l) != SSA_NAME
+          || (s = SSA_NAME_DEF_STMT (l)) == NULL
+          || !is_gimple_assign (s)
+          || !has_single_use (l))
+    VEC_safe_push (tree, heap, *vexpr, l);
+  else if (gimple_assign_rhs_code (s) == code)
+    walk_bitwise_stmt_elems (s, code, vcst, vnot, vcmp_zero, vcmp_m1, vexpr);
+  else if (is_bitwise_not (l))
+    VEC_safe_push (tree, heap, *vnot, l);
+  /* (A == 0) & (B == 0) -> (A | B) == 0
+     (A != 0) | (B != 0) -> (A | B) != 0.  */
+  else if (((code == BIT_AND_EXPR
+            && gimple_assign_rhs_code (s) == EQ_EXPR)
+           || (code == BIT_IOR_EXPR
+               && gimple_assign_rhs_code (s) == NE_EXPR))
+          && INTEGRAL_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (s)))
+          && TREE_CODE (gimple_assign_rhs2 (s)) == INTEGER_CST
+          && integer_zerop (gimple_assign_rhs2 (s)))
+    VEC_safe_push (tree, heap, *vcmp_zero, l);
+  /* (A == -1) & (B == -1) -> (A & B) == -1
+     (A != -1) | (B != -1) -> (A & B) != -1  */
+  else if (((code == BIT_AND_EXPR
+            && gimple_assign_rhs_code (s) == EQ_EXPR)
+           || (code == BIT_IOR_EXPR
+               && gimple_assign_rhs_code (s) == NE_EXPR))
+          && INTEGRAL_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (s)))
+          && TREE_CODE (gimple_assign_rhs2 (s)) == INTEGER_CST
+          && integer_all_onesp (gimple_assign_rhs2 (s)))
+    VEC_safe_push (tree, heap, *vcmp_m1, l);
+  else
+    {
+      switch (gimple_assign_rhs_code (s))
+       {
+       case BIT_IOR_EXPR:
+       case BIT_AND_EXPR:
+       case BIT_XOR_EXPR:
+         operate_bitwise_stmt (s, gimple_assign_rhs_code (s));
+         break;
+       default:
+         break;
+       }
+
+      VEC_safe_push (tree, heap, *vexpr, l);
+    }
+
+  l = gimple_assign_rhs2 (stmt);
+
+  if (TREE_CODE (l) == INTEGER_CST)
+    {
+      VEC_safe_push (tree, heap, *vcst, l);
+      return;
+    }
+  if (TREE_CODE (l) != SSA_NAME
+      || (s = SSA_NAME_DEF_STMT (l)) == NULL
+      || !is_gimple_assign (s)
+      || !has_single_use (l))
+    VEC_safe_push (tree, heap, *vexpr, l);
+  else if (gimple_assign_rhs_code (s) == code)
+    walk_bitwise_stmt_elems (s, code, vcst, vnot, vcmp_zero, vcmp_m1, vexpr);
+  else if (is_bitwise_not (l))
+    VEC_safe_push (tree, heap, *vnot, l);
+  /* (A == 0) & (B == 0) -> (A | B) == 0
+     (A != 0) | (B != 0) -> (A | B) != 0.  */
+  else if (((code == BIT_AND_EXPR
+            && gimple_assign_rhs_code (s) == EQ_EXPR)
+           || (code == BIT_IOR_EXPR
+               && gimple_assign_rhs_code (s) == NE_EXPR))
+          && INTEGRAL_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (s)))
+          && TREE_CODE (gimple_assign_rhs2 (s)) == INTEGER_CST
+          && integer_zerop (gimple_assign_rhs2 (s)))
+    VEC_safe_push (tree, heap, *vcmp_zero, l);
+  /* (A == -1) & (B == -1) -> (A & B) == -1
+     (A != -1) | (B != -1) -> (A & B) != -1  */
+  else if (((code == BIT_AND_EXPR
+            && gimple_assign_rhs_code (s) == EQ_EXPR)
+           || (code == BIT_IOR_EXPR
+               && gimple_assign_rhs_code (s) == NE_EXPR))
+          && INTEGRAL_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (s)))
+          && TREE_CODE (gimple_assign_rhs2 (s)) == INTEGER_CST
+          && integer_all_onesp (gimple_assign_rhs2 (s)))
+    VEC_safe_push (tree, heap, *vcmp_m1, l);
+  else
+    {
+      switch (gimple_assign_rhs_code (s))
+       {
+       case BIT_IOR_EXPR:
+       case BIT_AND_EXPR:
+       case BIT_XOR_EXPR:
+         operate_bitwise_stmt (s, gimple_assign_rhs_code (s));
+         break;
+       default:
+         break;
+       }
+
+      VEC_safe_push (tree, heap, *vexpr, l);
+    }
+}
+
+/* Helper function to rebuild a binary tree of CODE elements
+   from vector VEC.
+   If LASTP is NULL, then all elements
+   are combined and result is returned, otherwise the last
+   element of vector VEC is stored in LASTP and all - but the last -
+   elements are merged and returned.
+   Note: for vector with just one element, this element is returned
+   and LASTP is set to NULL, if provided.
+   If INNER_LEFT is true, the RHS1 operand of VEC element
+   is used, otherwise the VEC element itself is used.  */
+static tree
+rebuild_vector_tree (VEC(tree, heap) *vec,
+                    enum tree_code code, tree *lastp, bool inner_left)
+{
+  gimple s;
+  unsigned int i = 0;
+  tree r = NULL_TREE, x, r2 = NULL_TREE;
+
+  FOR_EACH_VEC_ELT (tree, vec, i, x)
+    {
+      s = SSA_NAME_DEF_STMT (x);
+
+      if (inner_left)
+       x = gimple_assign_rhs1 (s);
+      if (!r)
+        r = x;
+      else if (!r2)
+        r2 = x;
+      else
+        {
+         r = make_new_tmp_statement (TREE_TYPE (r), code, r, r2, s);
+         r2 = x;
+       }
+    }
+  if (lastp)
+    *lastp = r2;
+  else if (r && r2)
+    {
+      s = SSA_NAME_DEF_STMT (r);
+      r = make_new_tmp_statement (TREE_TYPE (r), code, r, r2, s);
+    }
+  return r;
+}
+
+/* Sink not-expressions out of the xor-expression sequences TCST, VNOT,
+   and VEXPR generated from STMT.
+   Returns TRUE if statement STMT was modified, otherwise FALSE.  */
+static bool
+operate_bitwise_xor_stmt (gimple stmt, tree tcst, VEC(tree, heap) *vnot,
+                         VEC(tree, heap) *vexpr)
+{
+  gimple_stmt_iterator gsi;
+  unsigned int i = VEC_length (tree, vnot);
+  tree l = NULL_TREE, r = NULL_TREE;
+  bool inv = false;
+
+  /* If the number of not-expressions is odd, then we have two cases:
+     a) we have a constant, so we can sink one not into the integral constant
+        as ~X ^ CST -> X ^ CST' with CST' = ~CST.
+     b) we need to add a bitwise-not expression to the whole statement.  */
+  if ((i & 1) != 0)
+    {
+      if (tcst)
+        {
+         tcst = fold_build1 (BIT_NOT_EXPR, TREE_TYPE (tcst), tcst);
+         /* See if we can eliminate the constant.  */
+         if (integer_zerop (tcst))
+           tcst = NULL;
+       }
+      else
+        inv = true;
+    }
+
+  if (vnot && vexpr)
+    {
+      l = rebuild_vector_tree (vnot, BIT_XOR_EXPR, NULL, true);
+      r = rebuild_vector_tree (vexpr, BIT_XOR_EXPR, NULL, false);
+      if (tcst)
+        {
+         l = make_new_tmp_statement (TREE_TYPE (l), BIT_XOR_EXPR, l, r, stmt);
+         r = tcst;
+       }
+    }
+  else if (vnot)
+    {
+      l = rebuild_vector_tree (vnot, BIT_XOR_EXPR, &r, true);
+      if (tcst)
+        {
+         if (!r)
+           r = tcst;
+         else
+           {
+             l = make_new_tmp_statement (TREE_TYPE (l), BIT_XOR_EXPR, l, r,
+                                         stmt);
+             r = tcst;
+           }
+       }
+    }
+  else if (tcst)
+    {
+      l = rebuild_vector_tree (vexpr, BIT_XOR_EXPR, NULL, false);
+      if (l)
+        r = tcst;
+      else
+        {
+         l = tcst;
+         r = NULL_TREE;
+       }
+    }
+  else
+    l = rebuild_vector_tree (vexpr, BIT_XOR_EXPR, &r, false);
+
+  if (inv)
+    {
+      if (r)
+        l = make_new_tmp_statement (TREE_TYPE (l), BIT_XOR_EXPR, l, r, stmt);
+      gsi = gsi_for_stmt (stmt);
+      gimple_assign_set_rhs_with_ops (&gsi, BIT_NOT_EXPR, l, NULL_TREE);
+    }
+  else
+    {
+      gsi = gsi_for_stmt (stmt);
+      if (r)
+        gimple_assign_set_rhs_with_ops (&gsi, BIT_XOR_EXPR, l, r);
+      else
+        gimple_assign_set_rhs_from_tree (&gsi, l);
+    }
+  update_stmt (gsi_stmt (gsi));
+  return true;
+}
+
+/* Helper function to merge compares.
+
+   If IS_IOR is TRUE, then merge VCMP elements
+   - (X != 0) | (Y != 0) -> (X | Y) != 0
+   - (X == 0) & (Y == 0) -> (X | Y) == 0.
+
+   If IS_IOR is FALSE, then merge VCMP elements
+   - (X != ~0) | (Y != ~0) -> (X & Y) != ~0
+   - (X == ~0) & (Y == ~0) -> (X & Y) == ~0.
+
+   CODE specifies the final compare expression code.  It can be either
+   EQ_EXPR or NE_EXPR.
+
+   If DESTSTMT is not NULL, then the final compare instruction
+   gets assigned to it; otherwise a new temporary is used
+   and stored in the VEXPR vector.
+
+   Special note:  We first try to merge elements of compatible
+   types, before doing the final merge by casting up to the widest type.  */
+static void
+merge_bitwise_compares (VEC(tree, heap) **vcmp, VEC(tree, heap) **vexpr,
+                       enum tree_code code, gimple deststmt, bool is_ior)
+{
+  unsigned int i;
+  unsigned int len = VEC_length (tree, *vcmp);
+  tree l, r, x, cmp_type = NULL_TREE;
+  VEC(tree, heap) *vtmp = NULL;
+  enum tree_code cmp_code;
+
+  /* If we have just one compare element, then simply
+     move it to the VEXPR vector and return.  */
+  if (len <= 1)
+    {
+      FOR_EACH_VEC_ELT (tree, *vcmp, i, x)
+        {
+         VEC_safe_push (tree, heap, *vexpr, x);
+         VEC_ordered_remove (tree, *vcmp, i);
+       }
+
+      VEC_free (tree, heap, *vcmp);
+      *vcmp = NULL;
+      return;
+    }
+
+  /* First merge all elements with compatible type, and move
+     intermediate result into VTMP vector.  */
+  while (VEC_length (tree, *vcmp) > 0)
+    {
+      cmp_type = NULL_TREE;
+      l = NULL_TREE;
+      for (i = 0; VEC_iterate (tree, (*vcmp), (i), (x));)
+       {
+         gimple s = SSA_NAME_DEF_STMT (x);
+         r = gimple_assign_rhs1 (s);
+         if (!l)
+           {
+             l = r;
+             cmp_type = TREE_TYPE (x);
+             VEC_ordered_remove (tree, *vcmp, i);
+           }
+         else if (types_compatible_p (TREE_TYPE (r), TREE_TYPE (l)))
+           {
+             l = make_new_tmp_statement (TREE_TYPE (l),
+                                         (is_ior ? BIT_IOR_EXPR
+                                                 : BIT_AND_EXPR), l, r, s);
+             VEC_ordered_remove (tree, *vcmp, i);
+           }
+         else
+           ++i;
+       }
+      VEC_safe_push (tree, heap, vtmp, l);
+    }
+  VEC_free (tree, heap, *vcmp);
+  *vcmp = NULL;
+
+  /* Now do final merge of all elements in VTMP vector by casting to
+     wider-type.  */
+  l = NULL_TREE;
+  r = NULL_TREE;
+  FOR_EACH_VEC_ELT (tree, vtmp, i, x)
+    {
+      tree type;
+      type = TREE_TYPE (x);
+
+      /* If we handle (X & Y) ==/!= ~0 case, we need
+         to work on signed types.  */
+      if (!is_ior && TYPE_UNSIGNED (type) && VEC_length (tree, vtmp) > 1)
+        {
+         type = signed_type_for (type);
+         x = make_new_tmp_statement (type, NOP_EXPR, x, NULL_TREE,
+                                     SSA_NAME_DEF_STMT (x));
+       }
+      if (!l)
+        {
+         l = x;
+         r = type;
+       }
+      else
+        {
+         if (TYPE_PRECISION (r) < TYPE_PRECISION (type))
+           {
+             l = make_new_tmp_statement (type, NOP_EXPR, l, NULL_TREE,
+                                         SSA_NAME_DEF_STMT (x));
+             r = type;
+           }
+         else if (TYPE_PRECISION (r) > TYPE_PRECISION (type)
+                  || !types_compatible_p (r, type))
+           x = make_new_tmp_statement (r, NOP_EXPR, x, NULL_TREE,
+                                       SSA_NAME_DEF_STMT (x));
+         l = make_new_tmp_statement (r, (is_ior ? BIT_IOR_EXPR
+                                                : BIT_AND_EXPR),
+                                     l, x, SSA_NAME_DEF_STMT (x));
+       }
+    }
+
+  /* Generate the final L != 0 or L == 0 statement.
+     a) (X == 0) & (Y == 0) -> (X | Y) == 0
+     b) (X != 0) | (Y != 0) -> (X | Y) != 0
+     c) (X == -1) & (Y == -1) -> (X & Y) == -1
+     d) (X != -1) | (Y != -1) -> (X & Y) != -1
+   */
+  cmp_code = (code == BIT_AND_EXPR ? EQ_EXPR : NE_EXPR);
+  if (!deststmt)
+    {
+      tree cst_val = build_zero_cst (TREE_TYPE (l));
+
+      if (!is_ior)
+        cst_val = fold_build1 (BIT_NOT_EXPR, TREE_TYPE (l), cst_val);
+      l = make_new_tmp_statement (cmp_type,
+                                 cmp_code, l,
+                                 cst_val,
+                                 SSA_NAME_DEF_STMT (l));
+      VEC_safe_push (tree, heap, *vexpr, l);
+    }
+  else
+    {
+      gimple_stmt_iterator gsi;
+      tree cst_val = build_zero_cst (TREE_TYPE (l));
+
+      if (!is_ior)
+        cst_val = fold_build1 (BIT_NOT_EXPR, TREE_TYPE (l), cst_val);
+
+      gsi = gsi_for_stmt (deststmt);
+      gimple_assign_set_rhs_with_ops (&gsi, cmp_code, l,
+                                     cst_val);
+      update_stmt (deststmt);
+    }
+  VEC_free (tree, heap, vtmp);
+}
+
+/* Sink bitwise-not expressions out of the bitwise-binary sequence for
+   STMT.  CODE is the tree-code of the statement sequence.
+   The following patterns are used for the transformations:
+     ~X and ~Y -> ~(X or Y)
+     ~X or ~Y -> ~(X and Y)
+     ~X xor ~Y -> X xor Y
+     ~X xor Y -> ~(X xor Y)
+     ~X xor CST -> X xor CST' (with CST' = ~CST).
+     (X == 0) and (Y == 0) -> (X | Y) == 0, if X and Y have
+     integral types.
+     (X == -1) and (Y == -1) -> (X & Y) == -1, if X and Y have
+     integral types.
+     (X != 0) or (Y != 0) -> (X | Y) != 0, if X and Y have
+     integral types.
+     (X != -1) or (Y != -1) -> (X & Y) != -1, if X and Y have
+     integral types.  */
+static void
+operate_bitwise_stmt (gimple stmt, enum tree_code code)
+{
+  unsigned int i = 0;
+  tree x, tcst = NULL_TREE;
+  tree tnot = NULL_TREE, torg = NULL_TREE;
+  tree l, r;
+  gimple_stmt_iterator gsi;
+  VEC(tree, heap) *vcst = NULL;
+  VEC(tree, heap) *vnot = NULL;
+  VEC(tree, heap) *vcmp_zero = NULL;
+  VEC(tree, heap) *vcmp_m1 = NULL;
+  VEC(tree, heap) *vexpr = NULL;
+  enum tree_code icode;
+  bool do_repropagate = false;
+
+  if (gimple_assign_rhs_code (stmt) != code)
+    abort ();
+
+  walk_bitwise_stmt_elems (stmt, code, &vcst, &vnot,
+                          &vcmp_zero, &vcmp_m1, &vexpr);
+
+  /* There are multiple constants.  */
+  if (VEC_length (tree, vcst) > 1)
+    do_repropagate = true;
+  /* There are multiple mergeable comparisons.  */
+  else if (VEC_length (tree, vcmp_zero) > 1
+          || VEC_length (tree, vcmp_m1) > 1)
+    do_repropagate = true;
+  /* There are multiple not-expressions.  */
+  else if (VEC_length (tree, vnot) > 1)
+    do_repropagate = true;
+  /* If we have (~X & CST), we convert to ~(X | CST') with CST'=~CST.
+     If we have (~X | CST), we convert to ~(X & CST') with CST'=~CST.  */
+  else if (VEC_length (tree, vnot) == 1
+           && VEC_length (tree, vexpr) == 0
+           && VEC_length (tree, vcmp_zero) == 0
+           && VEC_length (tree, vcmp_m1) == 0
+           && VEC_length (tree, vcst) >= 1)
+     do_repropagate = true;
+  /* For xor-expressions we do the following conversions:
+     a) (~X ^ CST) -> X ^ CST'; with CST'=~CST.
+     b) (~X ^ Y) -> ~(X ^ Y).  */
+  else if (code == BIT_XOR_EXPR
+          && VEC_length (tree, vnot) == 1)
+    do_repropagate = true;
+
+  if (!do_repropagate)
+    {
+      VEC_free (tree, heap, vcst);
+      VEC_free (tree, heap, vnot);
+      VEC_free (tree, heap, vcmp_zero);
+      VEC_free (tree, heap, vcmp_m1);
+      VEC_free (tree, heap, vexpr);
+      return;
+    }
+
+  l = gimple_assign_rhs1 (stmt);
+  r = gimple_assign_rhs2 (stmt);
+
+  /* First combine integral constant bitwise operations.  */
+  FOR_EACH_VEC_ELT (tree, vcst, i, x)
+    {
+      if (!tcst)
+        tcst = x;
+      else
+        tcst = fold_build2 (code, TREE_TYPE (x), tcst, x);
+    }
+  VEC_free (tree, heap, vcst);
+  vcst = NULL;
+
+  /* If we have only vcmp_zero elements, directly assign the combined
+     result to the final stmt.  */
+  if (!tcst && !vexpr && !vnot && !vcmp_m1)
+    {
+      merge_bitwise_compares (&vcmp_zero, &vexpr, code, stmt, true);
+      remove_visited_stmt_chain (l);
+      remove_visited_stmt_chain (r);
+      return;
+    }
+  /* If we have only vcmp_m1 elements, directly assign the combined
+     result to the final stmt.  */
+  if (!tcst && !vexpr && !vnot && !vcmp_zero)
+    {
+      merge_bitwise_compares (&vcmp_m1, &vexpr, code, stmt, false);
+      remove_visited_stmt_chain (l);
+      remove_visited_stmt_chain (r);
+      return;
+    }
+
+  merge_bitwise_compares (&vcmp_zero, &vexpr, code, NULL, true);
+  merge_bitwise_compares (&vcmp_m1, &vexpr, code, NULL, false);
+
+  /* If we deal only with constants, assign the
+     calculated constant.  */
+  if (tcst && !vnot && !vexpr)
+    {
+      gsi = gsi_for_stmt (stmt);
+      gimple_assign_set_rhs_from_tree (&gsi, tcst);
+      update_stmt (gsi_stmt (gsi));
+      remove_visited_stmt_chain (l);
+      remove_visited_stmt_chain (r);
+      return;
+    }
+
+  if (code == BIT_XOR_EXPR)
+    {
+      operate_bitwise_xor_stmt (stmt, tcst, vnot, vexpr);
+      remove_visited_stmt_chain (l);
+      remove_visited_stmt_chain (r);
+      VEC_free (tree, heap, vnot);
+      VEC_free (tree, heap, vexpr);
+      return;
+    }
+
+  /* See if we can eliminate the constant.  */
+  if (tcst
+      && ((code == BIT_AND_EXPR && integer_all_onesp (tcst))
+          || (code == BIT_IOR_EXPR && integer_zerop (tcst))))
+    tcst = NULL;
+
+  /* Propagate bitwise and/or statements.  */
+
+  /* Inverse for binary and/or.  */
+  icode = (code == BIT_IOR_EXPR ? BIT_AND_EXPR : BIT_IOR_EXPR);
+
+  /* Build inverse tree for bitwise-not part.  */
+  if (VEC_length (tree, vnot) > 0)
+    {
+      tnot = rebuild_vector_tree (vnot, icode, NULL, true);
+      torg = rebuild_vector_tree (vexpr, code, NULL, false);
+      if (tnot && !tcst && !torg)
+        {
+         gsi = gsi_for_stmt (stmt);
+         gimple_assign_set_rhs_with_ops (&gsi, BIT_NOT_EXPR, tnot, NULL_TREE);
+         update_stmt (gsi_stmt (gsi));
+         remove_visited_stmt_chain (l);
+         remove_visited_stmt_chain (r);
+         VEC_free (tree, heap, vnot);
+         VEC_free (tree, heap, vexpr);
+         return;
+       }
+      if (!tnot)
+       abort ();
+      tnot = make_new_tmp_statement (TREE_TYPE (tnot), BIT_NOT_EXPR, tnot,
+                                    NULL_TREE, SSA_NAME_DEF_STMT (tnot));
+      if (torg)
+       {
+         if (!tcst)
+           {
+             gsi = gsi_for_stmt (stmt);
+             gimple_assign_set_rhs_with_ops (&gsi, code, tnot, torg);
+             update_stmt (gsi_stmt (gsi));
+             remove_visited_stmt_chain (l);
+             remove_visited_stmt_chain (r);
+             VEC_free (tree, heap, vnot);
+             VEC_free (tree, heap, vexpr);
+             return;
+           }
+         tnot = make_new_tmp_statement (TREE_TYPE (tnot), code, tnot, torg,
+                                        SSA_NAME_DEF_STMT (tnot));
+       }
+      gsi = gsi_for_stmt (stmt);
+      gimple_assign_set_rhs_with_ops (&gsi, code, tnot, tcst);
+      update_stmt (gsi_stmt (gsi));
+      remove_visited_stmt_chain (l);
+      remove_visited_stmt_chain (r);
+      VEC_free (tree, heap, vnot);
+      VEC_free (tree, heap, vexpr);
+      return;
+    }
+  if (!tcst)
+    torg = rebuild_vector_tree (vexpr, code, &tcst, false);
+  else
+    torg = rebuild_vector_tree (vexpr, code, NULL, false);
+  gsi = gsi_for_stmt (stmt);
+  if (tcst)
+    gimple_assign_set_rhs_with_ops (&gsi, code, torg, tcst);
+  else
+    gimple_assign_set_rhs_from_tree (&gsi, torg);
+  update_stmt (gsi_stmt (gsi));
+  remove_visited_stmt_chain (l);
+  remove_visited_stmt_chain (r);
+  VEC_free (tree, heap, vnot);
+  VEC_free (tree, heap, vexpr);
+}
+
+/* Repropagate bitwise operations back to denormalized form:
+   ~X op ~Y -> ~(X op' Y).  */
+static void
+repropagate_bitwise (void)
+{
+  unsigned int i = 0;
+  tree x;
+  gimple user;
+  enum tree_code ucode;
+
+  FOR_EACH_VEC_ELT (tree, bitwise_ops, i, x)
+    {
+      gimple stmt;
+      enum tree_code code;
+
+      if (TREE_CODE (x) != SSA_NAME)
+        continue;
+      stmt = SSA_NAME_DEF_STMT (x);
+      if (!stmt || !is_gimple_assign (stmt))
+        continue;
+      code = gimple_assign_rhs_code (stmt);
+      if (code == BIT_NOT_EXPR)
+        continue;
+      if (code != BIT_XOR_EXPR && code != BIT_IOR_EXPR
+          && code != BIT_AND_EXPR)
+        continue;
+      /* Check if we are on a root-element. */
+      if (has_single_use (x)
+          && (user = get_single_immediate_use (x)) != NULL
+          && is_gimple_assign (user))
+        {
+         ucode = gimple_assign_rhs_code (user);
+         if (ucode == BIT_AND_EXPR
+             || ucode == BIT_IOR_EXPR
+             || ucode == BIT_XOR_EXPR)
+           continue;
+       }
+      operate_bitwise_stmt (stmt, code);
+    }
+}
+
 /* Repropagate the negates back into subtracts, since no other pass
    currently does it.  */

@@ -2320,29 +3460,65 @@ can_reassociate_p (tree op)
    we want to break up k = t - q, but we won't until we've transformed q
    = b - r, which won't be broken up until we transform b = c - d.

+   Break up comparison !=/== 0 operations of bitwise-or operations for
+   being able to optimize within combined conditions.
+   (A | B) != 0 -> (A != 0) || (B != 0)
+   (A | B) == 0 -> (A == 0) && (B == 0)
+
+   Break up logical-not expressions of bitwise boolean-typed and/or/xor
+   operations for being able to optimize within combined conditions.
+   ~(A | B) -> ~A & ~B
+   ~(A & B) -> ~A | ~B
+   ~(A ^ B) -> ~A ^ B (A ^ ~B if B is a constant)
+
    En passant, clear the GIMPLE visited flag on every statement.  */

 static void
-break_up_subtract_bb (basic_block bb)
+break_up_expr_bb (basic_block bb)
 {
   gimple_stmt_iterator gsi;
   basic_block son;

-  for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
+  for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); )
     {
       gimple stmt = gsi_stmt (gsi);
       gimple_set_visited (stmt, false);
-
-      if (!is_gimple_assign (stmt)
-         || !can_reassociate_p (gimple_assign_lhs (stmt)))
+      if (!is_gimple_assign (stmt))
+        {
+         gsi_next (&gsi);
+         continue;
+        }
+      if (break_up_bitwise_combined_stmt (stmt))
        continue;

+      switch (gimple_assign_rhs_code (stmt))
+        {
+       case BIT_AND_EXPR:
+       case BIT_IOR_EXPR:
+       case BIT_XOR_EXPR:
+         /* We might want to remember this statement
+            for repropagation.  */
+         push_bitwise_op (gimple_assign_lhs (stmt));
+         break;
+       default:
+         break;
+       }
+
+      if (!can_reassociate_p (gimple_assign_lhs (stmt)))
+       {
+         gsi_next (&gsi);
+         continue;
+       }
+
       /* Look for simple gimple subtract operations.  */
       if (gimple_assign_rhs_code (stmt) == MINUS_EXPR)
        {
          if (!can_reassociate_p (gimple_assign_rhs1 (stmt))
              || !can_reassociate_p (gimple_assign_rhs2 (stmt)))
-           continue;
+           {
+             gsi_next (&gsi);
+             continue;
+           }

          /* Check for a subtract used only in an addition.  If this
             is the case, transform it into add of a negate for better
@@ -2354,11 +3530,12 @@ break_up_subtract_bb (basic_block bb)
       else if (gimple_assign_rhs_code (stmt) == NEGATE_EXPR
               && can_reassociate_p (gimple_assign_rhs1 (stmt)))
        VEC_safe_push (tree, heap, plus_negates, gimple_assign_lhs (stmt));
+      gsi_next (&gsi);
     }
   for (son = first_dom_son (CDI_DOMINATORS, bb);
        son;
        son = next_dom_son (CDI_DOMINATORS, son))
-    break_up_subtract_bb (son);
+    break_up_expr_bb (son);
 }

 /* Reassociate expressions in basic block BB and its post-dominator as
@@ -2524,7 +3701,8 @@ debug_ops_vector (VEC (operand_entry_t,
 static void
 do_reassoc (void)
 {
-  break_up_subtract_bb (ENTRY_BLOCK_PTR);
+  break_up_expr_bb (ENTRY_BLOCK_PTR);
+
   reassociate_bb (EXIT_BLOCK_PTR);
 }

@@ -2581,6 +3759,7 @@ init_reassoc (void)
   free (bbs);
   calculate_dominance_info (CDI_POST_DOMINATORS);
   plus_negates = NULL;
+  bitwise_ops = NULL;
 }

 /* Cleanup after the reassociation pass, and print stats if
@@ -2602,6 +3781,7 @@ fini_reassoc (void)
   free_alloc_pool (operand_entry_pool);
   free (bb_rank);
   VEC_free (tree, heap, plus_negates);
+  VEC_free (tree, heap, bitwise_ops);
   free_dominance_info (CDI_POST_DOMINATORS);
   loop_optimizer_finalize ();
 }
@@ -2615,6 +3795,7 @@ execute_reassoc (void)

   do_reassoc ();
   repropagate_negates ();
+  repropagate_bitwise ();

   fini_reassoc ();
   return 0;
Index: gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-30.c
===================================================================
--- /dev/null
+++ gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-30.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized -ffast-math" } */
+
+int foo (int a, int b, int c, int d)
+{
+  int r1 = a != 0 & c != 0 & b != 0;
+  int r2 = a == 0 | b != 0 | d == 0;
+  return (r1 != 0 & r2 == 0);
+}
+
+/* { dg-final { scan-tree-dump-times "return 0" 1 "optimized"} } */
+/* { dg-final { cleanup-tree-dump "optimized" } } */
Index: gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-31.c
===================================================================
--- /dev/null
+++ gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-31.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized -ffast-math" } */
+
+int foo (int a, int b, int c, int d)
+{
+  int r1 = a != 0 & c != 0 & b != 0;
+  int r2 = a == 0 | b != 0 | d == 0;
+  return (r1 == 0 | r2 != 0);
+}
+
+/* { dg-final { scan-tree-dump-times "return 1" 1 "optimized"} } */
+/* { dg-final { cleanup-tree-dump "optimized" } } */
Index: gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-26.c
===================================================================
--- /dev/null
+++ gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-26.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized -ffast-math" } */
+
+int foo (int a, int b, int c, int d)
+{
+  int r1 = (a | b | c) == 0;
+  int r2 = (a | d | c) != 0 | b == 0;
+  return (r1 == 0 | r2 != 0);
+}
+
+/* { dg-final { scan-tree-dump-times "return 1" 1 "optimized"} } */
+/* { dg-final { cleanup-tree-dump "optimized" } } */
Index: gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-27.c
===================================================================
--- /dev/null
+++ gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-27.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized -ffast-math" } */
+
+int foo (int a, int b, int c, int d)
+{
+  int r1 = a & ~c & b;
+  int r2 = ~a | b | ~d;
+  return (r1 & ~r2);
+}
+
+/* { dg-final { scan-tree-dump-times "return 0" 1 "optimized"} } */
+/* { dg-final { cleanup-tree-dump "optimized" } } */
Index: gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-28.c
===================================================================
--- /dev/null
+++ gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-28.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized -ffast-math" } */
+
+int foo (int a, int b, int c, int d)
+{
+  int r1 = a & ~c & b;
+  int r2 = ~a | b | ~d;
+  return (~r1 | r2);
+}
+
+/* { dg-final { scan-tree-dump-times "return -1" 1 "optimized"} } */
+/* { dg-final { cleanup-tree-dump "optimized" } } */
Index: gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-29.c
===================================================================
--- /dev/null
+++ gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-29.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized -ffast-math" } */
+
+int foo (int a, int b, int c, int d)
+{
+  int r1 = ~(a | b | c);
+  int r2 = (a | d | c) | ~b;
+  return (~r1 | r2);
+}
+
+/* { dg-final { scan-tree-dump-times "return -1" 1 "optimized"} } */
+/* { dg-final { cleanup-tree-dump "optimized" } } */
Index: gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-32.c
===================================================================
--- /dev/null
+++ gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-32.c
@@ -0,0 +1,25 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized -ffast-math" } */
+
+int foo (int a, int b, int c, int d)
+{
+  return (~a & ~b) ^ (~c | ~d);
+}
+
+int foo2 (int a, int b, int c, int d)
+{
+  return (~a & ~b & ~c & ~d) ^ 0xff;
+}
+
+int foo3 (int a, int b, int c, int d)
+{
+  return ~a ^ b ^ ~c ^ ~d ^ 0xff;
+}
+
+int foo4 (int a, int b, int c, int d)
+{
+  return ~a ^ ~b ^ ~c ^ ~d;
+}
+
+/* { dg-final { scan-tree-dump-times " ~" 0 "optimized"} } */
+/* { dg-final { cleanup-tree-dump "optimized" } } */
Index: gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-33.c
===================================================================
--- /dev/null
+++ gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-33.c
@@ -0,0 +1,19 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized -ffast-math" } */
+
+int foo (int a, int b, int c, int d)
+{
+  int r = (a == -1) & (b == -1);
+  int q = (c == -1) & (d == -1);
+  return r & q;
+}
+
+int bar (int a, int b, int c, int d)
+{
+  int r = (a != -1) | (b != -1);
+  int q = (c != -1) | (d != -1);
+  return r | q;
+}
+
+/* { dg-final { scan-tree-dump-times " == -1" 2 "optimized"} } */
+/* { dg-final { cleanup-tree-dump "optimized" } } */
Index: gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-34.c
===================================================================
--- /dev/null
+++ gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-34.c
@@ -0,0 +1,19 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized -ffast-math" } */
+
+int foo (int a, char b, short c, long d)
+{
+  int r = (a == -1) & (b == -1);
+  int q = (c == -1) & (d == -1);
+  return r & q;
+}
+
+int bar (int a, char b, short c, long d)
+{
+  int r = (a != -1) | (b != -1);
+  int q = (c != -1) | (d != -1);
+  return r | q;
+}
+
+/* { dg-final { scan-tree-dump-times " == -1" 2 "optimized"} } */
+/* { dg-final { cleanup-tree-dump "optimized" } } */
Index: gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-35.c
===================================================================
--- /dev/null
+++ gcc/gcc/testsuite/gcc.dg/tree-ssa/reassoc-35.c
@@ -0,0 +1,25 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized -ffast-math" } */
+
+int foo (int a, unsigned char b, unsigned short c, unsigned long d)
+{
+  unsigned char b1 = ~0;
+  unsigned short c1 = ~0;
+  unsigned long d1 = ~0;
+  int r = (a == -1) & (b == b1);
+  int q = (c == c1) & (d == d1);
+  return r & q;
+}
+
+int bar (int a, unsigned char b, unsigned short c, unsigned long d)
+{
+  unsigned char b1 = ~0;
+  unsigned short c1 = ~0;
+  unsigned long d1 = ~0;
+  int r = (a != -1) | (b != b1);
+  int q = (c != c1) | (d != d1);
+  return r | q;
+}
+
+/* { dg-final { scan-tree-dump-times " == -1" 2 "optimized"} } */
+/* { dg-final { cleanup-tree-dump "optimized" } } */
