Kenneth Zadeck <zad...@naturalbridge.com> writes:
> Index: tree-vrp.c
> ===================================================================
> --- tree-vrp.c        (revision 205597)
> +++ tree-vrp.c        (working copy)
> @@ -2611,22 +2611,28 @@ extract_range_from_binary_expr_1 (value_
>  
>        signop sign = TYPE_SIGN (expr_type);
>        unsigned int prec = TYPE_PRECISION (expr_type);
> -      unsigned int prec2 = (prec * 2) + (sign == UNSIGNED ? 2 : 0);
>  
>        if (range_int_cst_p (&vr0)
>         && range_int_cst_p (&vr1)
>         && TYPE_OVERFLOW_WRAPS (expr_type))
>       {
> -       wide_int sizem1 = wi::mask (prec, false, prec2);
> -       wide_int size = sizem1 + 1;
> +       /* vrp_int is twice as wide as anything that the target
> +          supports so it can support a full width multiply.  No
> +          need to add any more padding for an extra sign bit
> +          because that comes with the way that WIDE_INT_MAX_ELTS is
> +          defined.  */ 
> +       typedef FIXED_WIDE_INT (WIDE_INT_MAX_PRECISION * 2) 
> +         vrp_int;
> +       vrp_int sizem1 = wi::mask <vrp_int> (prec, false);
> +       vrp_int size = sizem1 + 1;
>  
>         /* Extend the values using the sign of the result to PREC2.
>            From here on out, everything is just signed math no matter
>            what the input types were.  */
> -       wide_int min0 = wide_int::from (vr0.min, prec2, sign);
> -       wide_int max0 = wide_int::from (vr0.max, prec2, sign);
> -       wide_int min1 = wide_int::from (vr1.min, prec2, sign);
> -       wide_int max1 = wide_int::from (vr1.max, prec2, sign);
> +       vrp_int min0 = wi::to_vrp (vr0.min);
> +       vrp_int max0 = wi::to_vrp (vr0.max);
> +       vrp_int min1 = wi::to_vrp (vr1.min);
> +       vrp_int max1 = wi::to_vrp (vr1.max);

I think we should avoid putting to_vrp in tree.h if vrp_int is only
local to this block.  Instead you could have:

          typedef generic_wide_int
             <wi::extended_tree <WIDE_INT_MAX_PRECISION * 2> > vrp_int_cst;
          ...
          vrp_int_cst min0 = vr0.min;
          vrp_int_cst max0 = vr0.max;
          vrp_int_cst min1 = vr1.min;
          vrp_int_cst max1 = vr1.max;

> @@ -228,15 +228,16 @@ along with GCC; see the file COPYING3.
>  #endif
>  
>  /* The MAX_BITSIZE_MODE_ANY_INT is automatically generated by a very
> -   early examination of the target's mode file.  Thus it is safe that
> -   some small multiple of this number is easily larger than any number
> -   that that target could compute.  The place in the compiler that
> -   currently needs the widest ints is the code that determines the
> -   range of a multiply.  This code needs 2n + 2 bits.  */
> -
> +   early examination of the target's mode file.  The WIDE_INT_MAX_ELTS
> +   can accommodate at least 1 more bit so that unsigned numbers of that
> +   mode can be represented.  This will accommodate every place in the
> +   compiler except for a multiply routine in tree-vrp.  That function
> +   makes its own arrangements for larger wide-ints.  */

I think we should drop the "This will accommodate..." bit, since it'll soon
get out of date.  Maybe something like:

    Note that it is still possible to create fixed_wide_ints that have
    precisions greater than MAX_BITSIZE_MODE_ANY_INT.  This can be useful
    when representing a double-width multiplication result, for example.  */

>  #define WIDE_INT_MAX_ELTS \
> -  ((4 * MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT - 1) \
> -   / HOST_BITS_PER_WIDE_INT)
> +  (((MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT - 1)  \
> +    / HOST_BITS_PER_WIDE_INT) + 1)

I think this should be:

  (MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_WIDE_INT + 1)

We only need an extra HWI if MAX_BITSIZE_MODE_ANY_INT is an exact multiple
of HOST_BITS_PER_WIDE_INT.
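
Purely for illustration (the values below are made up, and
HOST_BITS_PER_WIDE_INT is assumed to be 64), the two forms only differ
when MAX_BITSIZE_MODE_ANY_INT is not an exact multiple, which is exactly
when the extra HWI isn't needed:

    MAX_BITSIZE_MODE_ANY_INT == 128:
      128 / 64 + 1         == 3 HWIs
      (128 + 63) / 64 + 1  == 3 HWIs  (round-up form gives the same)

    MAX_BITSIZE_MODE_ANY_INT == 160:
      160 / 64 + 1         == 3 HWIs  (161 bits fit in 192)
      (160 + 63) / 64 + 1  == 4 HWIs  (one HWI more than needed)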

Looks good to me otherwise FWIW.

You probably already realise this, but for avoidance of doubt, Richard
was also asking that we reduce MAX_BITSIZE_MODE_ANY_INT to 128 on x86_64,
since that's the largest scalar_mode_supported_p mode.

Thanks,
Richard
