https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94131

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |law at gcc dot gnu.org

--- Comment #3 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
I see multiple issues:
1)
          if (base
              && DECL_P (base)
              && TREE_CODE (TREE_TYPE (base)) == ARRAY_TYPE
              && TYPE_SIZE_UNIT (TREE_TYPE (base))
              && poff.is_constant (&off))
            {
              tree basetype = TREE_TYPE (base);
              tree size = TYPE_SIZE_UNIT (basetype);
              ++off;   /* Increment for the terminating nul.  */
              pdata->maxlen = fold_build2 (MINUS_EXPR, size_type_node, size,
                                           build_int_cst (size_type_node,
                                                          off));
              pdata->maxbound = pdata->maxlen;
            }
One can't use TYPE_SIZE_UNIT this way during GIMPLE passes.  Gimplification
(gimplify_type_sizes) will, when needed, create temporary VAR_DECLs for the
sizes, and during gimplification those are valid:
  char[0:D.1936] * a.1;
  char a[0:D.1936] [value-expr: *a.1];
...
      x.0 = x;
      _1 = (long int) x.0;
      _2 = _1 + -1;
      D.1936 = (sizetype) _2;
...
      a.1 = __builtin_alloca_with_align (D.1940, 8);
      _12 = &(*a.1)[0];
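For concreteness, a source fragment along these lines (a hypothetical
reconstruction on my part, not the PR's testcase) gimplifies to IL of that
shape:
  /* Hypothetical example: char a[x] is the VLA whose upper bound
     gimplify_type_sizes materializes as the D.1936 temporary.  */
  unsigned long
  f (int x)
  {
    char a[x];
    __builtin_strcpy (a, "abc");
    return __builtin_strlen (a);
  }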
Those temporaries could be used the way the pass wants (assuming the use is
added somewhere dominated by the setting of the var), but in the actual IL
nothing ensures the variables aren't optimized away as unused later on;
typically they'll be around only at -O0.  With -g, if you're lucky, there
might be debug statements for them:
  # DEBUG D#2 => (long int) x_1(D)
  # DEBUG D#1 => D#2 + -1
  # DEBUG D.1936 => (sizetype) D#1
but that isn't something that can actually be used in the IL for anything.
What you could do is track down the __builtin_alloca_with_align call that
allocates the memory for the array and derive something from the argument it
is called with, which is the actual byte size of the object.
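A minimal sketch of that lookup, assuming the pointer is already an SSA name
(the function name and shape here are mine, not existing pass code, and
intermediate copies or casts would still need to be walked):
  /* Sketch: if PTR is defined directly by the __builtin_alloca_with_align
     call, return the byte size it was called with, else NULL_TREE.  */
  static tree
  vla_object_size (tree ptr)
  {
    if (TREE_CODE (ptr) != SSA_NAME)
      return NULL_TREE;
    gimple *def = SSA_NAME_DEF_STMT (ptr);
    if (gimple_call_builtin_p (def, BUILT_IN_ALLOCA_WITH_ALIGN))
      return gimple_call_arg (def, 0);
    return NULL_TREE;
  }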

2) Another thing is that, as maxlen and maxbound can clearly be
non-INTEGER_CSTs (minlen is probably ok), the code can't pass them to
tree_int_cst_lt and similar; there is one TREE_CODE (pdata->maxbound) !=
INTEGER_CST check, but no similar checks elsewhere.
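Something like the following guard would be needed before every such
comparison (just a sketch of the missing check, not patch text):
  /* Punt before tree_int_cst_lt ever sees a non-constant bound.  */
  if (TREE_CODE (pdata->maxlen) != INTEGER_CST
      || TREE_CODE (pdata->maxbound) != INTEGER_CST)
    return false;
  bool maxbound_lt = tree_int_cst_lt (pdata->maxbound, pdata->maxlen);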

3) And the last one is that, exactly for that TREE_CODE (pdata->maxbound) !=
INTEGER_CST case, IMNSHO the function needs to punt if the trees aren't, say,
operand_equal_p.  If a PHI has "abcdef" in one argument and a VLA in another
(or vice versa), then unless we had an SSA_NAME + constant and VRP told us
something, I'm afraid we can't determine which of those two will be longer at
runtime.
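Roughly, at the merge point (len0 and len1 here are my names for the two PHI
argument lengths, not variables from the pass):
  /* Sketch of the punt: once a non-constant bound is in the mix, only
     proceed if both trees are provably the same.  */
  if (TREE_CODE (len0) != INTEGER_CST || TREE_CODE (len1) != INTEGER_CST)
    {
      if (!operand_equal_p (len0, len1, 0))
        return false;  /* no way to tell which one is longer at runtime */
    }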
