https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77898

--- Comment #6 from Martin Sebor <msebor at gcc dot gnu.org> ---
I meant a subrange of the i variable (i.e., a subrange of int).  The range of
every variable is necessarily bounded by its type, so returning a range of
[INT_MIN, INT_MAX] for an int isn't terribly helpful.  It's no different from
the range returned for i without the if statement:

void f (int i)
{
  const char *p = "ab";

  p += i;

  unsigned long n = __builtin_object_size (p, 2);

  if (n < 2 || 3 < n)
    __builtin_abort ();
}

I understand that the range is the most accurate it can be at this stage (after
EVRP and before VRP1) and the get_range_info() function doesn't have enough
smarts to indicate whether there's a chance the range might improve (e.g.,
after inlining or even with LTO).

I suspect your suggestion is what I'm going to have to go with.  What bothers
me about it is that it means embedding assumptions about the number of times
the tree-object-size pass runs into the pass itself, and throwing away
possibly optimal results computed in its initial runs in favor of the last
one, even when the last is no better than the first.

In general, this approach also denies downstream clients of a pass the benefit
of refining their own results as the pass's results are gradually refined.  In
this case, it prevents the tree-object-size pass from returning a potentially
useful, if not optimal, size of an object based on the type (but not the
value) of the offset.  Instead, the pass must return a "don't know" value even
though it "knows" that the value lies in some range.
