On 10/16/2013 02:45 PM, Richard Sandiford wrote:
Kenneth Zadeck <zad...@naturalbridge.com> writes:
I note that we immediately return in the above case, so

        if (precision < xprecision + HOST_BITS_PER_WIDE_INT)
          {
            len = wi::force_to_size (scratch, val, len, xprecision,
                                     precision, UNSIGNED);
            return wi::storage_ref (scratch, len, precision);
          }

applies only when the desired precision is a multiple of a HWI.
yes
I assume it adds that extra zero word in case the MSB is set,
right?
yes
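
To make that concrete, here is a tiny standalone sketch (not GCC's
wi::force_to_size; the hwi type and function name are made up) of why the
extra zero block matters: the blocks are read sign-extended, so an unsigned
value whose top bit is set would otherwise read back as negative.

  #include <stdint.h>
  #include <vector>

  typedef uint64_t hwi;   /* stand-in for a 64-bit HOST_WIDE_INT block */

  /* Append the extra zero block when the MSB of the top block is set,
     so that a sign-extending reader still sees a non-negative value.  */
  static void
  zero_extend_top_block (std::vector<hwi> &blocks)
  {
    if (!blocks.empty () && (blocks.back () >> 63) != 0)
      blocks.push_back (0);
  }

  /* E.g. 0xffffffffffffffff becomes the two blocks
     { 0xffffffffffffffff, 0 }, while 0x7f stays a single block.  */
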
I am confused about the condition written here and
how we look at precision and not xprecision when deciding how
to extend - given an 8-bit-precision unsigned 0xff and precision == 64
we do not even consider sign-extending the value because we look
at precision and not xprecision.  Don't we need to look at
xprecision here?
I do not think so.   There are three cases here:

1) xprecision < precision:
say xprecision = 8 and precision = 32.

The number is between 0 and 255 and after canonicalization the number
will be between 0 and 255.

2) xprecision == precision
not much to talk about here.

3) xprecision > precision

We need to lose the upper bits of the input, and it does not matter what
they were.  The only thing we are concerned about is that the bits above
what is now the sign bit (pointed to by precision) are matched by the bits
above precision.
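
To make case 3 concrete, here is a one-function sketch (not the real
wi::force_to_size; it assumes a 64-bit block and that right-shifting a
signed value is an arithmetic shift) of what it means for the bits above
the new sign bit to match it:

  #include <stdint.h>

  typedef int64_t hwi;

  /* Case 3 only (xprecision > precision): the upper input bits do not
     matter; the result just has to be sign-extended from the new sign
     bit at position precision - 1.  Assumes 0 < precision <= 64.  */
  static hwi
  truncate_to_precision (hwi val, unsigned int precision)
  {
    unsigned int shift = 64 - precision;
    /* Move the new sign bit to the top using the unsigned bit pattern
       (avoids signed overflow), then shift back arithmetically so every
       bit above the new sign bit ends up as a copy of it.  */
    return (hwi) ((uint64_t) val << shift) >> shift;
  }

  /* E.g. truncate_to_precision (0x1ff, 8) yields -1: the low 8 bits are
     all ones, so the bits above the new sign bit become all ones too.  */
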
(3) is gone with the assert though.
you seem to be correct here.
As mentioned in my message yesterday, I thought your new way of canonising
unsigned tree constants meant that there was always an upper zero bit.
Is that right?
I believe this is correct.
If so, xprecision < precision is a no-op, because the number always
has the right form for wider precisions.  The only difficult case is
xprecision == precision, since then we need to peel off any upper -1 HWIs.
Say my HWI size is 8 bits (just to keep from typing a million 'f's).  If I
have a 16-bit unsigned number that is all 1s, in the tree world it is 3 HWIs:
0x00 0xff 0xff.

But inside a regular wide_int, it would take 1 HWI whose value is 0xff.
Inside of max_wide_int it would be the same as the tree, but then the test
precision < xprecision + HOST_BITS_PER_WIDE_INT never kicks in because
precision is guaranteed to be huge.

Inside of addr_wide_int, I think we tank with the assertion.

The case actually comes up on the ppc because they do a lot of 128-bit math.
I think I got through the x86-64 without noticing this.
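
To see the two forms side by side, here is a toy standalone sketch of that
example (8-bit blocks, least-significant block first, made-up names, not
the real tree-to-wide_int conversion).  Going from the zero-extended tree
form to the canonical sign-extended wide_int form drops the block above
the precision and then peels off the redundant upper -1 block, i.e. the
"peel off any upper -1 HWIs" step mentioned above.

  #include <stdint.h>
  #include <vector>

  typedef int8_t hwi;                 /* toy 8-bit HWI */
  static const unsigned int HWI_BITS = 8;

  /* The tree form of the 16-bit unsigned all-ones constant is the three
     blocks { 0xff, 0xff, 0x00 } (extra zero block on top); the canonical
     wide_int form for precision 16 is the single block { 0xff }.  */
  static unsigned int
  canonize (std::vector<hwi> &blocks, unsigned int precision)
  {
    unsigned int max_len = (precision + HWI_BITS - 1) / HWI_BITS;

    /* Blocks entirely above the precision carry no information.  */
    if (blocks.size () > max_len)
      blocks.resize (max_len);

    /* Peel off top blocks that merely repeat the sign of the block below
       (0 above a non-negative block, -1 above a negative one).  */
    while (blocks.size () > 1)
      {
        hwi top = blocks.back ();
        hwi below = blocks[blocks.size () - 2];
        if ((top == 0 && below >= 0) || (top == -1 && below < 0))
          blocks.pop_back ();
        else
          break;
      }
    return (unsigned int) blocks.size ();   /* the new len */
  }

  /* For { 0xff, 0xff, 0x00 } and precision 16 this returns len 1 with the
     single block 0xff.  The sketch assumes precision is a multiple of the
     block size, to keep it short.  */
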



If that's right and xprecision == precision can use val with an adjusted len,
then...

After all, an assert that precision == xprecision
does not work in this routine.

Quickly execute.exp tested only.

To avoid code quality wreckage we have to implement a different way
of allocating the 'scratch' storage of wide_int_ref_storage
(well, ideally we wouldn't need it ...).  I thought of simply
allocating it via alloca (), but we need to do that in all
callers that build a wide_int_ref (eventually via a hidden
default arg).  But as that's a template specialization of
generic_wide_int ... the option is to provide a function
wrapper inside wi:: for this, like
I want Richard and Mike to be the people who respond to the next
point.  I am not a C++ person, and all of this storage-manager-layer
stuff gives me a headache.
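
For what it is worth, here is a purely illustrative sketch of the storage
idea under discussion: the caller allocates the scratch buffer (for example
with alloca in its own frame) and hands it down, so the reference object
carries no array of its own.  Every name below is invented; this is not
the actual wide_int_ref_storage or wi:: interface.

  #include <stdint.h>

  typedef int64_t hwi;

  /* A reference that points either at the original blocks or at
     caller-provided scratch storage.  */
  struct int_ref
  {
    const hwi *val;          /* blocks, least-significant first */
    unsigned int len;
    unsigned int precision;
  };

  /* The caller owns `scratch'; the callee writes into it only when the
     constant has to be re-canonicalized for the target precision.  The
     re-canonicalization itself is elided (placeholder copy); the point
     is only who owns the storage.  */
  static int_ref
  make_ref (const hwi *blocks, unsigned int len, unsigned int xprecision,
            unsigned int precision, hwi *scratch)
  {
    if (xprecision == precision)
      return int_ref { blocks, len, precision };  /* no copy needed */

    for (unsigned int i = 0; i < len; ++i)
      scratch[i] = blocks[i];
    return int_ref { scratch, len, precision };
  }

The "hidden default arg" remark above is about making that extra scratch
parameter invisible at most construction sites.
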
...the use of the scratch array goes away completely for addr_wide_int
and max_wide_int.  If this code is on a hot path for wide_int then maybe
we should simply require that wide_int operations involving trees have
explicit extensions, like in rtl.  (I.e. only do the implicit extension
for addr_wide_int and max_wide_int.)
I do not understand how this is true.
I'm not opposed to going back to a separate scratch array if all else fails,
but it's really ugly, so I'd like to look at the alternatives first...

Thanks,
Richard
