On 12/15/2013 03:54 AM, Richard Sandiford wrote:
Kenneth Zadeck <zad...@naturalbridge.com> writes:
The current world is actually structured so that we never ask about
overflow for the two larger classes, because the whole reason for using
those classes is that you never want to have this discussion. So if you
never ask about overflow, then it really does not matter, because we are
not going to return enough bits for you to care what happened on the
inside. Of course that could change: someone could say that they want
overflow on widest-int. Then the comment makes sense, with revisions,
unless your review of the code that wants overflow on widest-int
suggests that they are just being stupid.
But widest_int is now supposed to be at least 1 bit wider than the
widest input type (unlike previously, where it was double the widest
input type). So I can definitely see cases where we'd want to know
whether a widest_int * widest_int result overflows.
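[A standalone analogy of the kind of question being asked -- not the
wide-int API itself; int64_t stands in for widest_int, and GCC's
__builtin_mul_overflow plays the role of the overflow out-parameter:

#include <cstdint>
#include <cstdio>

int main ()
{
  /* Even for the widest type available, two representable values can
     have a product that is not representable, which is exactly what an
     overflow flag on the multiply would report.  */
  int64_t a = int64_t (1) << 62;
  int64_t b = 4;
  int64_t r;
  bool overflow = __builtin_mul_overflow (a, b, &r);
  printf ("overflow = %d\n", overflow);   /* prints 1 */
  return 0;
}
]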
My point is that the widest_int * widest_int would normally be a signed
multiplication rather than an unsigned multiplication, since the extra
1 bit of precision allows every operation to be signed. So it isn't
a case of whether the top bit of a widest_int will be set, but whether
we ever reach here for widest_int in the first place. We should be
going down the sgn == SIGNED path rather than the sgn == UNSIGNED path.
widest_int can represent an all-1s value, usually interpreted as -1.
If we do go down this sgn == UNSIGNED path for widest_int then we will
instead treat the all-1s value as the maximum unsigned number, just like
for any other kind of wide int.
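[A standalone illustration of the all-1s point, with uint64_t in place
of a wide int:

#include <cstdint>
#include <cstdio>

int main ()
{
  /* The same bit pattern reads as -1 under a signed interpretation and
     as the maximum value under an unsigned one.  */
  uint64_t all_ones = ~uint64_t (0);
  printf ("signed:   %lld\n", (long long) (int64_t) all_ones);  /* -1 */
  printf ("unsigned: %llu\n", (unsigned long long) all_ones);   /* 18446744073709551615 */
  return 0;
}
]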
As far as this function goes there really is no difference between
wide_int, offset_int and widest_int. Which is good, because offset_int
and widest_int should just be wide_ints that are optimised for a specific
and fixed precision.
Thanks,
Richard
I am now seriously regretting letting richi talk me into changing the
size of the wide-int buffer from being 2x the size of the largest mode
on the machine. It was a terrible mistake, and I would guess that making
it smaller does not provide any real benefit.

The problem is that when you use widest-int (and by analogy offset-int)
it should never, ever overflow. Furthermore, we need to change the
interfaces for these two so that you cannot even ask! (I do not believe
that anyone does ask, so the change would be small.)
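[To make "cannot even ask" concrete, here is a hypothetical sketch --
the toy structs and mul functions below are illustrations, not the real
wide-int API: keep the overflow out-parameter on the fixed-precision
type and delete that overload for widest_int, so asking becomes a
compile-time error.

#include <cstdint>

struct wide_int   { int64_t val; };   /* toy stand-ins for the classes */
struct widest_int { int64_t val; };

static wide_int
mul (wide_int a, wide_int b, bool *overflow)
{
  wide_int r;
  *overflow = __builtin_mul_overflow (a.val, b.val, &r.val);
  return r;
}

static widest_int
mul (widest_int a, widest_int b)
{
  return widest_int { a.val * b.val };   /* assumed never to overflow */
}

/* The overload that asks about overflow simply does not exist.  */
widest_int mul (widest_int, widest_int, bool *) = delete;

int main ()
{
  bool ovf;
  wide_int w = mul (wide_int { 1 << 30 }, wide_int { 4 }, &ovf);
  widest_int z = mul (widest_int { 3 }, widest_int { 5 });
  /* mul (z, z, &ovf);  -- error: use of deleted function */
  (void) w; (void) z;
  return 0;
}
]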
offset_int * offset_int could overflow too, at least in the sense that
there are combinations of valid offset_ints whose product can't be
represented in an offset_int, e.g. 2^67 * 2^67. I think that was always
the case.
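[A worked version of the example, assuming offset_int carries 128 bits
of precision (the exact width is configuration-dependent):

#include <cstdio>

int main ()
{
  /* 2^67 needs 68 bits; its square, 2^134, needs 135 bits, which does
     not fit in an (assumed) 128-bit offset_int even though each factor
     is itself a valid offset_int.  */
  int factor_bits = 68;
  int square_bits = 2 * (factor_bits - 1) + 1;   /* bits in (2^67)^2 */
  int precision = 128;                           /* assumed offset_int width */
  printf ("needs %d bits, fits: %d\n", square_bits,
          square_bits <= precision);             /* needs 135 bits, fits: 0 */
  return 0;
}
]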
See answer below.
There is a huge set of bugs on the trunk that are "fixed" with wide-int,
because people wrote code for double-int thinking that it was infinite
precision, and so never tested what happens when the value needs two
HWIs. Most of those cases were resolved by making passes like tree-vrp
use wide-int and be explicit about the overflow on every operation; with
wide-int the issue is in your face, since things overflow even for
32-bit numbers. However, with the current widest-int, the extra bit only
makes us safe for add and subtract. In multiply we are exposed. The
perception is that widest-int is as good as infinite precision, and no
one will ever write the code to check whether it overflowed, because it
only rarely happens.
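[A standalone check of "safe for add and subtract, exposed in multiply",
with 16-bit values standing in for the inputs: the sum of two N-bit
values needs at most N+1 bits, while the product can need a full 2N.

#include <cstdint>
#include <cstdio>

int main ()
{
  const int N = 16;
  uint32_t max_n = (uint32_t (1) << N) - 1;      /* largest N-bit value */
  uint32_t sum   = max_n + max_n;
  uint64_t prod  = (uint64_t) max_n * max_n;
  printf ("sum needs %d bits\n",  32 - __builtin_clz (sum));     /* 17 = N + 1 */
  printf ("prod needs %d bits\n", 64 - __builtin_clzll (prod));  /* 32 = 2N */
  return 0;
}
]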
All operations can overflow. We would need 2 extra bits rather than 1
extra bit to stop addition overflowing, because the 1 extra bit we already
have is to allow unsigned values to be treated as signed. But 2 extra bits
is only good for one addition, not a chain of two additions.
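[A numeric check of both claims, scaled down to 8-bit inputs so that
"widest input plus 2 extra bits" becomes a 10-bit signed accumulator:

#include <cassert>
#include <cstdint>

/* Does v fit in a signed field of the given width?  */
static bool
fits_signed (int64_t v, int bits)
{
  int64_t hi = (int64_t (1) << (bits - 1)) - 1;
  return v >= -hi - 1 && v <= hi;
}

int main ()
{
  /* The largest unsigned 8-bit value needs 9 bits once it must also be
     readable as signed; the second extra bit then covers one addition,
     but not a chain of two.  */
  int64_t max_u8 = 255;
  assert (fits_signed (max_u8, 9));                      /* extra bit for sign */
  assert (fits_signed (max_u8 + max_u8, 10));            /* one addition fits */
  assert (!fits_signed (max_u8 + max_u8 + max_u8, 10));  /* a chain does not */
  return 0;
}
]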
That's why ignoring overflow seems dangerous to me. The old wide-int
way might have allowed any x * y to be represented, but if nothing
checked whether x * y was bigger than expected then x * y + z could
overflow.
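[A scaled-down demonstration of that last point, with 32-bit inputs in a
64-bit (2x) buffer standing in for the old scheme: any single product is
representable, but an unchecked follow-on addition can wrap.

#include <cstdint>
#include <cstdio>

int main ()
{
  uint64_t x = UINT32_MAX, y = UINT32_MAX;
  uint64_t prod = x * y;                  /* 0xFFFFFFFE00000001: fits in 64 bits */
  uint64_t z = uint64_t (1) << 33;
  uint64_t sum = prod + z;                /* wraps past 2^64 */
  printf ("wrapped: %d\n", sum < prod);   /* prints 1 */
  return 0;
}
]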
Thanks,
Richard
It is certainly true that in order to do an unbounded set of operations,
you would have to check on every operation, so my suggestion that we
remove the checking from the infinite-precision classes would not
support that. But the reality is that there are currently no places in
the compiler that do this.

Currently all of the uses of widest-int are one or two operations, and
the style of the code is that you do these and then deal with any
overflow at the time you convert the widest-int to a tree. I think it is
important to maintain this style of programming, where a small, finite
number of computations does not need to be checked until the result is
converted back.
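[A hypothetical sketch of that style, with int64_t playing the role of
the roomy widest-int buffer and int32_t the narrow destination type
(to_int32 is an invented helper, not an existing routine): do the
handful of operations unchecked, then range-check once at conversion
time.

#include <cstdint>
#include <cstdio>

/* Invented helper: convert back to the narrow type, reporting overflow
   only at this single point.  */
static bool
to_int32 (int64_t wide, int32_t *out)
{
  if (wide < INT32_MIN || wide > INT32_MAX)
    return false;
  *out = (int32_t) wide;
  return true;
}

int main ()
{
  int64_t a = 2000000000, b = 3;   /* each fits comfortably in int32_t */
  int64_t t = a * b + 7;           /* one or two operations, no checks */
  int32_t result;
  if (!to_int32 (t, &result))
    printf ("overflow detected at conversion\n");   /* this branch is taken */
  return 0;
}
]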
The problem with making the buffer size so tight is that we do not have
adequate reserves to allow this style for every supportable type. I
personally think that 2x + some small n is what we need to have.
I am not as familiar with how offset_int is used (or will be used once
all of the offset math is converted to wide-int), but there appear to be
two uses of multiply. One is the "harmless" multiply by 3, and the other
is where people are trying to compute the sizes of arrays. These last
operations do need to be checked for overflow. The question here is
whether you want to force those operations to be checked individually or
to check when you convert out. Again, I think 2x + some small number is
what we might want to consider.
kenny