On Fri, 2006-03-24 at 18:50 +0100, Duncan Sands wrote:

> Hi Jeff, while your patch catches many cases, the logic seems a bit wonky
> for types with TYPE_MIN/TYPE_MAX different to those that can be deduced
> from TYPE_PRECISION.  For example, there is nothing stopping inner_type
> having a bigger TYPE_PRECISION than outer_type, but a smaller
> [TYPE_MIN,TYPE_MAX] range.  For instance, outer_type could be a byte with
> range 0 .. 255, and inner_type could be an integer with range 10 .. 20.
> I think this logic:
I really wouldn't expect that to happen terribly often.  If you think
it's worth handling, then feel free to cobble some code together and
submit it.  It shouldn't be terribly complex.
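
For reference, the check being described amounts to comparing the declared
[TYPE_MIN, TYPE_MAX] bounds rather than TYPE_PRECISION.  A standalone sketch
in plain C (not GCC's tree accessors -- the struct and helper names here are
made up purely for illustration):

#include <stdbool.h>
#include <stdio.h>

/* Model of an integral type: its bit precision and its declared range,
   e.g. the bounds of an Ada subtype.  */
struct int_type
{
  int precision;
  long long min, max;
};

/* Insufficient test: assume a value fits whenever the source type has no
   more bits than the destination.  */
static bool
fits_by_precision (struct int_type inner, struct int_type outer)
{
  return inner.precision <= outer.precision;
}

/* Better test: compare the declared bounds themselves.  */
static bool
range_fits_p (struct int_type inner, struct int_type outer)
{
  return inner.min >= outer.min && inner.max <= outer.max;
}

int
main (void)
{
  /* The example above: a 32-bit integer constrained to 10 .. 20 being
     converted to a byte with range 0 .. 255.  */
  struct int_type inner = { 32, 10, 20 };
  struct int_type outer = {  8,  0, 255 };

  printf ("by precision: %d\n", fits_by_precision (inner, outer));  /* 0 */
  printf ("by bounds:    %d\n", range_fits_p (inner, outer));       /* 1 */
  return 0;
}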


> By the way, I hacked tree-vrp to start all value ranges for INTEGRAL_TYPE_P
> variables at [TYPE_MIN, TYPE_MAX].  It certainly helps with eliminating many
> Ada range checks.  Maybe the compiler will even bootstrap :)
The thing to check will be compile-time performance -- in general,
with propagators such as VRP, CCP and CPROP it's advantageous for
compile time to drop to a VARYING state in the lattice as quickly
as possible.  The flip side is that you can sometimes miss
transformations in cases where an expression involving a VARYING
object would still yield a useful value/range.

Basically the two extremes we need to look at are:

  1. Give everything a range, even if it's just TYPE_MIN/TYPE_MAX.  In
     this case VR_VARYING disappears completely as it's meaningless.

  2. Anytime we're about to record the range [TYPE_MIN, TYPE_MAX], drop to
     VARYING instead.  The trick then becomes finding all those cases where
     an expression involving a VARYING still produces a useful range
     (such as widening typecasts, IOR with a nonzero constant, etc.); a
     small sketch of that idea follows below.
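
To make the second extreme concrete, here is a small standalone model
(again plain C, not tree-vrp's actual data structures -- value_range and
the two helper functions below are hypothetical) showing how an expression
with a VARYING operand can still produce a useful range:

#include <stdio.h>

enum lattice_state { VR_VARYING, VR_RANGE };

/* Toy lattice value; min/max are meaningful only for VR_RANGE.  */
struct value_range
{
  enum lattice_state state;
  unsigned long long min, max;
};

/* Widening an unsigned operand of OP_PRECISION bits: even if the operand
   is VARYING, the result lies in [0, 2^OP_PRECISION - 1].  */
static struct value_range
range_of_widening_cast (struct value_range op, int op_precision)
{
  if (op.state == VR_VARYING)
    {
      struct value_range vr;
      vr.state = VR_RANGE;
      vr.min = 0;
      vr.max = (1ULL << op_precision) - 1;
      return vr;
    }
  return op;
}

/* IOR with a nonzero constant: every bit of CST is set in the result, so
   the (unsigned) result is at least CST and in particular nonzero.  */
static struct value_range
range_of_ior_with_constant (struct value_range op, unsigned long long cst,
                            int precision)
{
  struct value_range vr = op;
  if (cst != 0)
    {
      vr.state = VR_RANGE;
      vr.min = cst;
      vr.max = (1ULL << precision) - 1;
    }
  return vr;
}

int
main (void)
{
  struct value_range varying = { VR_VARYING, 0, 0 };

  struct value_range cast = range_of_widening_cast (varying, 8);
  printf ("(short) VARYING uchar : [%llu, %llu]\n", cast.min, cast.max);

  struct value_range ior = range_of_ior_with_constant (varying, 0x80, 8);
  printf ("VARYING uchar | 0x80  : [%llu, %llu]\n", ior.min, ior.max);
  return 0;
}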


Jeff
