On Tue, 15 May 2018, Richard Biener wrote:

> On Tue, 15 May 2018, Richard Biener wrote:
> 
> > On Tue, 15 May 2018, Richard Biener wrote:
> > 
> > > On Tue, 15 May 2018, Richard Biener wrote:
> > > 
> > > > On Tue, 15 May 2018, Richard Biener wrote:
> > > > 
> > > > > On Mon, 14 May 2018, Kugan Vivekanandarajah wrote:
> > > > > 
> > > > > > Hi,
> > > > > > 
> > > > > > Attached patch handles PR63185 when we reach a PHI with temp != NULL.
> > > > > > We can see the PHI, and if there aren't any interesting uses of the
> > > > > > PHI, could we ignore it?
> > > > > > 
> > > > > > Bootstrapped and regression tested on x86_64-linux-gnu.
> > > > > > Is this OK?
> > > > > 
> > > > > No, as Jeff said we can't do it this way.
> > > > > 
> > > > > If we end up with multiple VDEFs in the walk of defvar immediate
> > > > > uses we know we are dealing with a CFG fork.  We can't really
> > > > > ignore any of the paths but we have to
> > > > > 
> > > > >  a) find the merge point (and the associated VDEF)
> > > > >  b) verify, for each chain of VDEFs with associated VUSEs
> > > > >     up to that merge VDEF, that we have no uses of the to-be-classified
> > > > >     store and collect (partial) kills
> > > > >  c) intersect kill info and continue walking from the merge point
> > > > > 
> > > > > in b) there's the possibility of finding sinking opportunities
> > > > > in case we have kills on some paths but uses on others.  This is why
> > > > > DSE should really be merged with (store) sinking.
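> > > > > 
> > > > > For illustration, a minimal, made-up example of such a fork, where
> > > > > the candidate store is killed along every path before any use, so
> > > > > the intersected kill info at the merge point shows it is dead:
> > > > > 
> > > > >  void f (int *p, int x)
> > > > >  {
> > > > >    *p = 0;      /* candidate store */
> > > > >    if (x)
> > > > >      *p = 1;    /* (full) kill on the then path */
> > > > >    else
> > > > >      *p = 2;    /* (full) kill on the else path */
> > > > >    /* merge point: kills intersect, no uses, so *p = 0 is dead */
> > > > >  }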
> > > > > 
> > > > > So if we want to enhance DSE's handling of branches then we need
> > > > > to refactor the simple dse_classify_store function.  Let me make
> > > > > an attempt at this today.
> > > > 
> > > > First (baby) step is the following - it arranges to collect the
> > > > defs we need to continue walking from and implements trivial
> > > > reduction by stopping at (full) kills.  This allows us to handle
> > > > the new testcase (which was already handled in the very late DSE
> > > > pass with the help of sinking the store).
> > > > 
> > > > I took the opportunity to kill the use_stmt parameter of
> > > > dse_classify_store, as its only user just checks whether the
> > > > kills were all clobbers, for which I added a new parameter.
> > > > 
> > > > I didn't adjust the byte-tracking case fully (I don't fully understand
> > > > the code in the case of a use, and I'm not sure whether it's worth
> > > > doing the def reduction with byte-tracking).
> > > > 
> > > > Your testcase can be handled by reducing the PHI and the call def
> > > > by seeing that the only use of a candidate def is another def
> > > > we have already processed.  Not sure if worth special-casing though,
> > > > I'd rather have a go at "recursing".  That will be the next
> > > > patch.
> > > > 
> > > > Bootstrap & regtest running on x86_64-unknown-linux-gnu.
> > > 
> > > Applied.
> > > 
> > > Another intermediate one below, fixing the byte-tracking for
> > > stmts with uses.  This also re-does the PHI handling by simply
> > > avoiding recursion by means of a visited bitmap and stopping
> > > at the DSE classify stmt when re-visiting it instead of failing.
> > > This covers Prathamesh's loop case, for which I added ssa-dse-33.c.
> > > For the do-while loop this still runs into the inability to
> > > handle two defs to walk from.
> > > 
> > > Bootstrap & regtest running on x86_64-unknown-linux-gnu.
> > 
> > Ok, loop handling doesn't work in general since we run into the
> > issue that the SSA form across the backedge does not represent the
> > same values.  Consider
> > 
> >  <bb 3>
> >  # .MEM_22 = PHI <.MEM_12(D)(2), .MEM_13(4)>
> >  # n_20 = PHI <0(2), n_7(4)>
> >  # .MEM_13 = VDEF <.MEM_22>
> >  bytes[n_20] = _4;
> >  if (n_20 > 7)
> >    goto <bb 5>;
> > 
> >  <bb 4>
> >  n_7 = n_20 + 1;
> >  # .MEM_15 = VDEF <.MEM_13>
> >  bytes[n_20] = _5;
> >  goto <bb 3>;
> > 
> > then, when classifying the store in bb4, visiting the PHI node
> > gets us to the store in bb3, which appears to be a killing store.
> > 
> >        if (gimple_code (temp) == GIMPLE_PHI)
> > -       defvar = PHI_RESULT (temp);
> > +       {
> > +         /* If we visit this PHI by following a backedge then reset
> > +            any info in ref that may refer to SSA names which we'd need
> > +            to PHI translate.  */
> > +         if (gimple_bb (temp) == gimple_bb (stmt)
> > +             || dominated_by_p (CDI_DOMINATORS, gimple_bb (stmt),
> > +                                gimple_bb (temp)))
> > +           /* ???  ref->ref may not refer to SSA names or it may only
> > +              refer to SSA names that are invariant with respect to the
> > +              loop represented by this PHI node.  */
> > +           ref->ref = NULL_TREE;
> > +         defvar = PHI_RESULT (temp);
> > +         bitmap_set_bit (visited, SSA_NAME_VERSION (defvar));
> > +       }
> > 
> > should be a workable solution for that.  I'm checking that, but
> > you may be able to think of other things that might prevent us from
> > handling backedges.  Note the current code tries to allow
> > looking across loops but does not handle backedges of loops the
> > original stmt belongs to.
> 
> Just to mention before I leave for the day: I think I've identified
> a latent issue, though I have so far failed to produce a testcase for
> it, which is that non-return exits from a function are not reliably
> visited, given they do not all have virtual operands, for example
> resx.  Thus consider
> 
> void foo (int *p, int x)
> {
>   *p = 0;
>   if (x)
>     resx;
>   *p = 1;
>   return;
> }
> 
> where we will never visit any stmts on the resx path and thus think
> the *p = 0 store is dead (all with current trunk of course).
> 
> Will continue to dig tomorrow.

PR85803.

I am currently re-testing the following on x86_64-unknown-linux-gnu
with --param dse-max-alias-queries-per-store=100000 in BOOT_CFLAGS.

I'll refrain from doing the separate-walks-for-each-def thing
but will try to fix PR85757 by pattern matching the
def-has-single-virtual-use-that-is-another-def-in-the-vector case,
in which case we can remove that def from further processing.
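
For illustration, a made-up example of that pattern (not the PR85757
testcase): classifying the first store collects both the conditional
store and the virtual PHI at the merge as defs; the conditional store's
VDEF has that PHI as its single virtual use, so it can be dropped and
the walk continued from the PHI alone, which then reaches the killing
store.

void h (int *p, int x)
{
  *p = 0;     /* candidate store */
  if (x)
    *p = 1;   /* def whose VDEF's only use is the virtual PHI at the merge */
  *p = 2;     /* full kill at the merge point, so *p = 0 is dead */
}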

Richard.

2018-05-16  Richard Biener  <rguent...@suse.de>

        * params.def (PARAM_DSE_MAX_ALIAS_QUERIES_PER_STORE): New param.
        * doc/invoke.texi (dse-max-alias-queries-per-store): Document.
        * tree-ssa-dse.c: Include tree-ssa-loop.h.
        (check_name): New callback.
        (dse_classify_store): Track cycles via a visited bitmap of PHI
        defs and simplify handling of in-loop and across loop dead stores
        and properly fail for loop-variant refs.  Handle byte-tracking with
        multiple defs.  Use PARAM_DSE_MAX_ALIAS_QUERIES_PER_STORE for
        limiting the walk.

        * gcc.dg/tree-ssa/ssa-dse-32.c: New testcase.
        * gcc.dg/tree-ssa/ssa-dse-33.c: Likewise.


commit 5c287e6190448176bd1ff56083a1c922e890189a
Author: Richard Guenther <rguent...@suse.de>
Date:   Tue May 15 14:03:32 2018 +0200

    dse#2

diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index ca3772bbebf..b78a2d67b0d 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -10439,6 +10439,11 @@ Average number of iterations of a loop.
 Maximum size (in bytes) of objects tracked bytewise by dead store elimination.
 Larger values may result in larger compilation times.
 
+@item dse-max-alias-queries-per-store
+Maximum number of queries into the alias oracle per store.
+Larger values result in larger compilation times and may result in more
+removed dead stores.
+
 @item scev-max-expr-size
 Bound on size of expressions used in the scalar evolutions analyzer.
 Large expressions slow the analyzer.
diff --git a/gcc/params.def b/gcc/params.def
index dad47ec2b00..5c4e2c95072 100644
--- a/gcc/params.def
+++ b/gcc/params.def
@@ -547,6 +547,11 @@ DEFPARAM(PARAM_DSE_MAX_OBJECT_SIZE,
         "Maximum size (in bytes) of objects tracked bytewise by dead store 
elimination.",
         256, 0, 0)
 
+DEFPARAM(PARAM_DSE_MAX_ALIAS_QUERIES_PER_STORE,
+        "dse-max-alias-queries-per-store",
+        "Maximum number of queries into the alias oracle per store.",
+        256, 0, 0)
+
 DEFPARAM(PARAM_SCEV_MAX_EXPR_SIZE,
         "scev-max-expr-size",
         "Bound on size of expressions used in the scalar evolutions analyzer.",
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/ssa-dse-32.c b/gcc/testsuite/gcc.dg/tree-ssa/ssa-dse-32.c
new file mode 100644
index 00000000000..eea0d8d5cf0
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/ssa-dse-32.c
@@ -0,0 +1,13 @@
+/* { dg-do compile } */
+/* { dg-options "-O -fdump-tree-dse1-details" } */
+
+void f(int n)
+{
+  char *p = __builtin_malloc (1);
+  int i;
+  do
+    *p = 0;
+  while (++i < n);
+}
+
+/* { dg-final { scan-tree-dump-times "Deleted dead store" 1 "dse1" } } */
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/ssa-dse-33.c b/gcc/testsuite/gcc.dg/tree-ssa/ssa-dse-33.c
new file mode 100644
index 00000000000..853cbb88190
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/ssa-dse-33.c
@@ -0,0 +1,21 @@
+/* { dg-do compile } */
+/* { dg-options "-O -fdump-tree-dse1-details" } */
+
+void f(char *p, int n)
+{
+  for (int i = 0; i < n; ++i)
+    *p = 0;  /* Removed by DSE.  */
+  *p = 1;
+}
+
+void g(char *p, int n)
+{
+  int i = 0;
+  do
+    *p = 0;  /* Removed by DSE.  */
+  while (++i < n);
+  *p = 1;
+}
+
+
+/* { dg-final { scan-tree-dump-times "Deleted dead store" 2 "dse1" } } */
diff --git a/gcc/tree-ssa-dse.c b/gcc/tree-ssa-dse.c
index 126592a1d13..32a25b9eb1e 100644
--- a/gcc/tree-ssa-dse.c
+++ b/gcc/tree-ssa-dse.c
@@ -35,6 +35,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "tree-cfgcleanup.h"
 #include "params.h"
 #include "alias.h"
+#include "tree-ssa-loop.h"
 
 /* This file implements dead store elimination.
 
@@ -515,6 +516,21 @@ live_bytes_read (ao_ref use_ref, ao_ref *ref, sbitmap live)
   return true;
 }
 
+/* Callback for dse_classify_store calling for_each_index.  Verify that
+   indices are invariant in the loop with backedge PHI in basic-block DATA.  */
+
+static bool
+check_name (tree, tree *idx, void *data)
+{
+  basic_block phi_bb = (basic_block) data;
+  if (TREE_CODE (*idx) == SSA_NAME
+      && !SSA_NAME_IS_DEFAULT_DEF (*idx)
+      && dominated_by_p (CDI_DOMINATORS, gimple_bb (SSA_NAME_DEF_STMT (*idx)),
+                        phi_bb))
+    return false;
+  return true;
+}
+
 /* A helper of dse_optimize_stmt.
    Given a GIMPLE_ASSIGN in STMT that writes to REF, classify it
    according to downstream uses and defs.  Sets *BY_CLOBBER_P to true
@@ -527,7 +543,8 @@ dse_classify_store (ao_ref *ref, gimple *stmt,
                    bool *by_clobber_p = NULL)
 {
   gimple *temp;
-  unsigned cnt = 0;
+  int cnt = 0;
+  auto_bitmap visited;
 
   if (by_clobber_p)
     *by_clobber_p = true;
@@ -539,58 +556,50 @@ dse_classify_store (ao_ref *ref, gimple *stmt,
   temp = stmt;
   do
     {
-      gimple *use_stmt, *defvar_def;
+      gimple *use_stmt;
       imm_use_iterator ui;
       bool fail = false;
       tree defvar;
 
-      /* Limit stmt walking to be linear in the number of possibly
-         dead stores.  */
-      if (++cnt > 256)
-       return DSE_STORE_LIVE;
-
       if (gimple_code (temp) == GIMPLE_PHI)
-       defvar = PHI_RESULT (temp);
+       {
+         /* If we visit this PHI by following a backedge then we have to
+            make sure ref->ref only refers to SSA names that are invariant
+            with respect to the loop represented by this PHI node.  */
+         if (dominated_by_p (CDI_DOMINATORS, gimple_bb (stmt),
+                             gimple_bb (temp))
+             && !for_each_index (ref->ref ? &ref->ref : &ref->base,
+                                 check_name, gimple_bb (temp)))
+           return DSE_STORE_LIVE;
+         defvar = PHI_RESULT (temp);
+         bitmap_set_bit (visited, SSA_NAME_VERSION (defvar));
+       }
       else
        defvar = gimple_vdef (temp);
-      defvar_def = temp;
       auto_vec<gimple *, 10> defs;
       FOR_EACH_IMM_USE_STMT (use_stmt, ui, defvar)
        {
-         cnt++;
-
-         /* If we ever reach our DSE candidate stmt again fail.  We
-            cannot handle dead stores in loops.  */
-         if (use_stmt == stmt)
+         /* Limit stmt walking.  */
+         if (++cnt > PARAM_VALUE (PARAM_DSE_MAX_ALIAS_QUERIES_PER_STORE))
            {
              fail = true;
              BREAK_FROM_IMM_USE_STMT (ui);
            }
+
+         /* We have visited ourselves already so ignore STMT for the
+            purpose of chaining.  */
+         if (use_stmt == stmt)
+           ;
          /* In simple cases we can look through PHI nodes, but we
             have to be careful with loops and with memory references
             containing operands that are also operands of PHI nodes.
             See gcc.c-torture/execute/20051110-*.c.  */
          else if (gimple_code (use_stmt) == GIMPLE_PHI)
            {
-             /* Make sure we are not in a loop latch block.  */
-             if (gimple_bb (stmt) == gimple_bb (use_stmt)
-                 || dominated_by_p (CDI_DOMINATORS,
-                                    gimple_bb (stmt), gimple_bb (use_stmt))
-                 /* We can look through PHIs to regions post-dominating
-                    the DSE candidate stmt.  */
-                 || !dominated_by_p (CDI_POST_DOMINATORS,
-                                     gimple_bb (stmt), gimple_bb (use_stmt)))
-               {
-                 fail = true;
-                 BREAK_FROM_IMM_USE_STMT (ui);
-               }
-             /* Do not consider the PHI as use if it dominates the
-                stmt defining the virtual operand we are processing,
-                we have processed it already in this case.  */
-             if (gimple_bb (defvar_def) != gimple_bb (use_stmt)
-                 && !dominated_by_p (CDI_DOMINATORS,
-                                     gimple_bb (defvar_def),
-                                     gimple_bb (use_stmt)))
+             /* If we already visited this PHI ignore it for further
+                processing.  */
+             if (!bitmap_bit_p (visited,
+                                SSA_NAME_VERSION (PHI_RESULT (use_stmt))))
                defs.safe_push (use_stmt);
            }
          /* If the statement is a use the store is not dead.  */
@@ -600,25 +609,20 @@ dse_classify_store (ao_ref *ref, gimple *stmt,
                 structure for USE_STMT and in doing so we find that the
                 references hit non-live bytes and thus can be ignored.  */
              if (byte_tracking_enabled
-                 && (!gimple_vdef (use_stmt) || defs.is_empty ()))
+                 && is_gimple_assign (use_stmt))
                {
-                 if (is_gimple_assign (use_stmt))
+                 ao_ref use_ref;
+                 ao_ref_init (&use_ref, gimple_assign_rhs1 (use_stmt));
+                 if (valid_ao_ref_for_dse (&use_ref)
+                     && use_ref.base == ref->base
+                     && known_eq (use_ref.size, use_ref.max_size)
+                     && !live_bytes_read (use_ref, ref, live_bytes))
                    {
-                     /* Other cases were noted as non-aliasing by
-                        the call to ref_maybe_used_by_stmt_p.  */
-                     ao_ref use_ref;
-                     ao_ref_init (&use_ref, gimple_assign_rhs1 (use_stmt));
-                     if (valid_ao_ref_for_dse (&use_ref)
-                         && use_ref.base == ref->base
-                         && known_eq (use_ref.size, use_ref.max_size)
-                         && !live_bytes_read (use_ref, ref, live_bytes))
-                       {
-                         /* If this statement has a VDEF, then it is the
-                            first store we have seen, so walk through it.  */
-                         if (gimple_vdef (use_stmt))
-                           defs.safe_push (use_stmt);
-                         continue;
-                       }
+                     /* If this is a store, remember it as we possibly
+                        need to walk the defs uses.  */
+                     if (gimple_vdef (use_stmt))
+                       defs.safe_push (use_stmt);
+                     continue;
                    }
                }
 
