Richard Biener <richard.guent...@gmail.com> writes:
> Richard Sandiford <rdsandif...@googlemail.com> wrote:
>>Richard Biener <richard.guent...@gmail.com> writes:
>>>> At the rtl level your idea does not work.   rtl constants do not
>>have a mode
>>>> or type.
>>>
>>> Which is not true and does not matter.  I tell you why.  Quote:
>>
>>It _is_ true, as long as you read "rtl constants" as "rtl integer
>>constants" :-)
>>
>>> +#if TARGET_SUPPORTS_WIDE_INT
>>> +
>>> +/* Match CONST_*s that can represent compile-time constant integers.
>> */
>>> +#define CASE_CONST_SCALAR_INT \
>>> +   case CONST_INT: \
>>> +   case CONST_WIDE_INT
>>>
>>> which means you are only replacing CONST_DOUBLE with wide-int.
>>> And _all_ CONST_DOUBLE have a mode.  Otherwise you'd have no
>>> way of creating the wide-int in the first place.
>>
>>No, integer CONST_DOUBLEs have VOIDmode, just like CONST_INT.
>>Only floating-point CONST_DOUBLEs have a "real" mode.
>
> I stand corrected. Now that's one more argument for infinite precision
> constants, as the mode is then certainly provided by the operations
> similar to the sign. That is, the mode (or size, or precision) of 1
> certainly does not matter.

I disagree.  Although CONST_INT and CONST_DOUBLE don't _store_ a mode,
they are always interpreted according to a particular mode.  It's just
that that mode has to be specified separately.  That's why so many
rtl functions take (enum machine_mode, rtx) argument pairs.
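
To illustrate with a self-contained sketch (the helper name here is
made up, but it plays the same role that trunc_int_for_mode plays for
a real (mode, integer) pair): the payload only acquires a value once a
separately supplied precision says which bits count and where the msb is.

  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical stand-in for the mode half of an (enum machine_mode,
     rtx) pair: reduce HWI to PRECISION bits and sign-extend from the
     msb.  PRECISION < 64 assumed for brevity.  */
  static int64_t
  interpret_in_precision (int64_t hwi, unsigned int precision)
  {
    uint64_t mask = ((uint64_t) 1 << precision) - 1;
    uint64_t sign = (uint64_t) 1 << (precision - 1);
    uint64_t bits = (uint64_t) hwi & mask;
    return (int64_t) ((bits ^ sign) - sign);
  }

  int
  main (void)
  {
    /* The same payload means different things in different "modes".  */
    printf ("%lld\n", (long long) interpret_in_precision (0x8, 4)); /* -8 */
    printf ("%lld\n", (long long) interpret_in_precision (0x8, 8)); /*  8 */
    return 0;
  }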

Infinite precision seems very alien to rtl, where everything is
interpreted according to a particular mode (whether that mode is
stored in the rtx or not).

For one thing, I don't see how infinite precision could work in an
environment where signedness often isn't defined.  E.g. if you optimise
an addition of two rtl constants, you don't know (and aren't supposed
to know) whether the values involved are "signed" or "unsigned".  With
fixed-precision arithmetic it doesn't matter, because both operands must
have the same precision, and because bits outside the precision are not
significant.  With infinite precision arithmetic, the choice carries
over to the next operation.  E.g., to take a 4-bit example, you don't
know when constructing a wide_int from an rtx whether 0b1000 represents
8 or -8.  With no precision to say how many bits are significant, you
have to pick one interpretation up front.  Which do you choose?  And why
should we have to make a choice at all?  (Note that this is a different
question to
whether the internal wide_int representation is sign-extending or not,
which is purely an implementation detail.  The same implementation
principle applies to CONST_INTs: the HWI in a CONST_INT is always
sign-extended from the msb of the represented value, although of course
the CONST_INT itself doesn't tell you which bit the msb is; that has to
be determined separately.)
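
As a concrete sketch of the 4-bit case (plain C, nothing from GCC):
with a precision, both readings of the stored bits are recoverable on
demand; an infinite-precision value forces the 8-versus--8 choice at
the point where it is built.

  #include <stdint.h>
  #include <stdio.h>

  int
  main (void)
  {
    unsigned int precision = 4;
    uint64_t stored = 0x8;  /* 0b1000; bits above the precision don't count */
    uint64_t mask = ((uint64_t) 1 << precision) - 1;
    uint64_t sign = (uint64_t) 1 << (precision - 1);

    uint64_t uview = stored & mask;                               /*  8 */
    int64_t sview = (int64_t) (((stored & mask) ^ sign) - sign);  /* -8 */
    printf ("unsigned view %llu, signed view %lld\n",
            (unsigned long long) uview, (long long) sview);

    /* An infinite-precision integer has no insignificant upper bits,
       so whoever constructs one from 0b1000 must commit to 8 or -8
       here.  */
    return 0;
  }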

A particular wide_int isn't, and IMO shouldn't be, inherently signed
or unsigned.  The rtl model is that signedness is a question of
interpretation rather than representation.  I realise trees are
different, because signedness there is a property of the type rather
than of the operations on it, but I still think fixed precision
works with both tree and rtl whereas infinite precision doesn't
work with rtl.

I also fear there are going to be lots of bugs where we forget to
truncate the result of an N-bit operation from "infinite" precision
to N bits before using it in the next operation (as per Kenny's ring
explanation).  With finite precision, and with all-important asserts
that the operands have consistent precisions, we shouldn't have any
hidden bugs like that.
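
A sketch of that failure mode (hypothetical 4-bit "mode", plain C):
the fixed-precision operation reduces mod 2^4 as it goes, whereas the
infinite-precision sum grows a fifth bit that silently corrupts the
next operation if anyone forgets the explicit truncation.

  #include <stdint.h>
  #include <stdio.h>

  int
  main (void)
  {
    uint64_t mask = 0xf;                /* 4-bit precision */
    uint64_t a = 0x8, b = 0x8;

    /* Fixed precision: truncation is part of the operation itself.  */
    uint64_t fixed = (a + b) & mask;    /* 0 */

    /* "Infinite" precision: the raw sum is 0b10000; without an explicit
       truncation, bit 4 leaks into whatever consumes the result.  */
    uint64_t raw = a + b;               /* 16 */
    uint64_t downstream = raw | 0x1;    /* 17, where 4-bit arithmetic gives 1 */

    printf ("fixed %llu, raw %llu, downstream %llu\n",
            (unsigned long long) fixed, (unsigned long long) raw,
            (unsigned long long) downstream);
    return 0;
  }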

If there are parts of gcc that really want to do infinite-precision
arithmetic, mpz_t ought to be as good as anything.
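
(For reference, a minimal mpz_t sketch of that option; link with -lgmp.)

  #include <gmp.h>

  int
  main (void)
  {
    mpz_t a, b, sum;
    mpz_init_set_si (a, -8);
    mpz_init_set_si (b, -8);
    mpz_init (sum);
    mpz_add (sum, a, b);        /* exact, arbitrary-precision -16 */
    gmp_printf ("%Zd\n", sum);
    mpz_clear (a);
    mpz_clear (b);
    mpz_clear (sum);
    return 0;
  }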

Thanks,
Richard
