On Mon, Apr 8, 2013 at 2:43 PM, Kenneth Zadeck <zad...@naturalbridge.com> wrote:
>
> On 04/08/2013 06:46 AM, Richard Biener wrote:
>>
>> On Sun, Apr 7, 2013 at 7:16 PM, Kenneth Zadeck <zad...@naturalbridge.com>
>> wrote:
>>>
>>> Richard,
>>>
>>> You advocate that I should be using an infinite precision
>>> representation and I advocate a finite precision representation where
>>> the precision is taken from the context.  I would like to make the
>>> case for my position here, in a separate thread, because the other
>>> thread is just getting too messy.
>>>
>>> At both the tree level and the rtl level you have a type (mode is just
>>> a bad rep for types) and both of those explicitly have precisions.  The
>>> semantics of the programming languages that we implement define, or at
>>> least recommend, that most operations be done in a precision that is
>>> implementation dependent (or, as in Java, a particular
>>> machine-independent precision).  Each hardware platform specifies exactly how
>>> every operation is done.  I will admit that infinite precision is more
>>> aesthetically pleasing than what I have done, but exact precision
>>> matches the needs of these clients.  The problem is that the results
>>> from infinite precision arithmetic differ in many significant ways
>>> from finite precision math.  And the number of places where you have
>>> to inject a precision to get the expected answer, ultimately makes the
>>> infinite precision representation unattractive.
>>>
>>> As I said on Thursday, whenever you do operations that do not satisfy
>>> the requirements of a mathematical ring (add, sub and mul are in a
>>> ring; divide, shift and comparisons are not) you run the risk of
>>> getting a result that is not what would have been obtained with either
>>> a strict interpretation of the semantics or the machine. Intuitively
>>> any operation that looks at the bits above the precision does not
>>> qualify as an operation that works in a ring.
>>>
>>> The poster child for operations that do not belong to a ring is division.
>>> For my example, I am using 4-bit integers because it makes the
>>> examples easy, but similar examples exist for any fixed precision.
>>>
>>> Consider 8 * 10 / 4
>>>
>>> in an infinite precision world the result is 20, but in a 4-bit
>>> precision world 8 * 10 wraps to 0 first, so the answer is 0.
>>>
>>> another example is to ask if
>>>
>>> -10 * 10 is less than 0?
>>>
>>> again you get a different answer with infinite precision.  I would argue
>>> that if I declare a variable of type uint32 and scale my examples I have
>>> the right to expect the compiler to produce the same result as the
>>> machine would.
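>>>
>>> To make this concrete, here is a small sketch (masking to 4 bits by
>>> hand stands in for a 4-bit machine, and I use 5 * 5 rather than
>>> -10 * 10 so the wraparound is visible at this width):
>>>
>>> #include <cstdio>
>>>
>>> /* Interpret the low 4 bits of x as a signed 4-bit value.  */
>>> static int as_s4 (int x) { return ((x & 0xf) ^ 8) - 8; }
>>>
>>> int main ()
>>> {
>>>   /* Division: infinite precision gives 20; a 4-bit machine wraps
>>>      8 * 10 to 0 first and computes 0 / 4 == 0.  */
>>>   printf ("%d vs %d\n", 8 * 10 / 4, as_s4 (8 * 10) / 4);
>>>
>>>   /* Comparison after an overflowing multiply: 25 is not negative,
>>>      but a 4-bit machine wraps 25 to -7, which is.  */
>>>   printf ("%d vs %d\n", 5 * 5 < 0, as_s4 (5 * 5) < 0);
>>>   return 0;
>>> }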
>>>
>>> While C and C++ may have enough wiggle room in their standards so that
>>> this is just an unexpected, but legal, result as opposed to being wrong,
>>> everyone will hate you (us) if we do this.  Furthermore, Java explicitly
>>> does not allow this (not that anyone actually uses gcj).  I do not know
>>> enough about Go, Ada and Fortran to say how it would affect them.
>>>
>>> In looking at the double-int class, the only operation that does not
>>> fit in a ring but is handled properly is shifting.  There we explicitly
>>> pass in the precision.
>>>
>>> The reason that we rarely see this kind of problem, even though
>>> double-int effectively implements 128-bit infinite precision, is that
>>> currently very little of the compiler actually uses infinite precision
>>> in a robust way.  In a large number of places, the code looks like:
>>>
>>> if (TYPE_PRECISION (TREE_TYPE (...)) < HOST_BITS_PER_WIDE_INT)
>>>     do something using inline operators;
>>> else
>>>     either do not do something or use const-double;
>>>
>>> Such code clears out most of these issues before the two passes that
>>> embrace infinite precision get a chance to do much damage.  However,
>>> my patch at the rtl level gets rid of most of this kind of code and
>>> replaces it with calls to wide-int, which currently uses only operations
>>> within the precision.  I assume that if I went down the infinite
>>> precision road at the tree level, all of this would come to the
>>> surface very quickly.  I prefer not to change my rep and not have to
>>> deal with this later.
>>>
>>> Add, subtract, multiply and the logicals are all safe.  But divide,
>>> remainder, and all of the comparisons need explicit precisions.  In
>>> addition, operations like clz, ctz and clrsb need precisions.  In total
>>> about half of the functions would need a precision passed in.  My
>>> point is that once you have to start passing in the precision for all
>>> of those operations, it seems to be cleaner to get the precision from
>>> the leaves of the tree as I currently do.
>>>
>>> Once you buy into the math-in-a-particular-precision world, a lot of
>>> the other issues that you raise are just settled.  Asking how to extend
>>> a value beyond its precision is like asking what the universe was like
>>> before the big bang.  It is just something you do not need to know.
>>>
>>> I understand that you would like to have functions like x + 1 work,
>>> and so do I. I just could not figure out how to make them have
>>> unsurprising semantics.  In particular, g++ did not seem to be happy
>>> with me defining two plus operators, one for each of signed and
>>> unsigned HWIs.  It seems like if someone explicitly added a wide_int
>>> and an unsigned HWI, they had a right to have the unsigned HWI not
>>> be sign-extended.  But if you can show me how to do this, I am happy
>>> to go down that road.
>>
>> I advocate the infinite precision signed representation as one solution
>> to avoid the issues that come up with your implementation (as I currently
>> have access to it), which has a representation with N bits of precision
>> encoded with M <= N bits and no sign information.  That obviously
>> leaves operations on numbers of that representation with differing
>> N undefined.  You define it by having coded the operations which as far
>> as I can see simply assume N is equal for any two operands and
>> the effective sign for extending the M-bits encoding to the common
>> N-bits precision is "available".  A thorough specification of both
>> the encoding scheme and the operation semantics is missing.
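>>
>> To make the discussion concrete, this is my reading of the encoding
>> (the field names and the fixed upper bound are my guesses, not taken
>> from your patch):
>>
>> #define HOST_WIDE_INT long
>> #define MAX_ELTS 4   /* some fixed upper bound on stored HWIs */
>>
>> /* N bits of logical precision, encoded in len <= MAX_ELTS HWIs.
>>    How the bits between len * HOST_BITS_PER_WIDE_INT and precision
>>    are recovered (sign- vs. zero-extension) is exactly the part
>>    that is left unspecified.  */
>> struct wide_int_rep
>> {
>>   unsigned precision;            /* N: precision in bits */
>>   unsigned len;                  /* number of HWIs actually stored */
>>   HOST_WIDE_INT val[MAX_ELTS];   /* low-order HWIs of the value */
>> };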
>
> This is a perfectly reasonable request.
>
>> I can side-step both of these issues nicely by simply using
>> an infinite precision signed representation and requiring the client to
>> explicitly truncate / extend to a specific precision when required.
>> I also leave open the possibility of having the _encoding_ always be
>> the same as an infinite precision signed representation but always
>> requiring an explicitly specified target precision for each operation
>> (which rules out the use of operator overloading).
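>>
>> A toy sketch of the first variant (the names and the use of __int128
>> as a stand-in for the infinite-precision encoding are mine, not a
>> proposed implementation):
>>
>> #include <cstdio>
>>
>> typedef __int128 iwint;   /* stand-in for the infinite signed encoding */
>>
>> /* Explicit client-side reduction to `prec' bits, zero-extended.  */
>> static iwint zext (iwint x, unsigned prec)
>> { return x & ((((iwint) 1) << prec) - 1); }
>>
>> /* Explicit client-side reduction to `prec' bits, sign-extended.  */
>> static iwint sext (iwint x, unsigned prec)
>> {
>>   iwint m = ((iwint) 1) << (prec - 1);
>>   return (zext (x, prec) ^ m) - m;
>> }
>>
>> int main ()
>> {
>>   /* add/mul stay exact; the client truncates or extends only where
>>      the source language semantics demand a finite precision.  */
>>   iwint p = (iwint) -10 * 10;                  /* exact: -100 */
>>   printf ("%d\n", (int) (sext (p, 32) < 0));   /* 1, as on the machine */
>>   return 0;
>> }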
>>
>> Citing your example:
>>
>>    8 * 10 / 4
>>
>> and transforming it slightly into a commonly used pattern:
>>
>>    (byte-size * 8 + bit-size) / 8
>
> Patterns like this are generally not encoded in either double-int or
> wide-int.  I do not think that anyone has taken the position that all math
> be done in wide math, only the math on the variables that appear in the
> programs.

Especially sizetype arithmetic - which appears in programs - is generally
done in double_int _exactly_ to cater for the above issue.  And that
assumes (usually without checking) that a double-int can encode
sizeof(size_t) * 8 + 3 bits of precision, which would not be true for
a 32-bit HWI host and a 64-bit target (which is why we have need_hwint64).
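
For concreteness, a small sketch (the value of byte_size is contrived
to make the wrap visible, and __int128 merely stands in for the wider
precision):

#include <cstdint>
#include <cstdio>

int main ()
{
  /* A byte count near the top of a 64-bit sizetype.  */
  uint64_t byte_size = UINT64_C (1) << 61;
  uint64_t bit_size = 3;

  /* In 64-bit precision byte_size * 8 wraps to 0, giving 0 ...  */
  uint64_t wrapped = (byte_size * 8 + bit_size) / 8;

  /* ... while carrying 64 + 3 bits (here: 128) gives the intended
     answer.  */
  unsigned __int128 exact =
    ((unsigned __int128) byte_size * 8 + bit_size) / 8;

  printf ("%llu vs %llu\n", (unsigned long long) wrapped,
          (unsigned long long) exact);   /* 0 vs 2305843009213693952 */
  return 0;
}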

>> then I argue that what people want here is this carried out in
>> _infinite_ precision!  Even if byte-size happens to come from
>> a sizetype TREE_INT_CST with 64bit precision.  So either
>> choice - having a fixed-precision representation or an
>> infinite-precision representation - can and will lead to errors
>> made by the programmer.  And as you can easily build a
>> finite precision wrapper around an infinite precision implementation
>> but not the other way around it's obvious to me what the
>> implementation should provide.
>>
>> Richard.
>
> Infinite precision does get rid of the issue that you have to specify the
> precision at the leaves.  However, for the vast majority of expression trees
> that get built, the leaves already have their precision in the type or the
> mode.  On the other hand, to keep the math sane, infinite precision
> requires either external truncation (ugly) or specification of the precision
> at many interior nodes of the expression tree (just as ugly as my putting
> it at the constant leaves).

Indeed the only "pretty" option is fully infinite precision computation.
But as you have a single operation per stmt most of the time, performing
one truncation/extension for the result isn't that bad.  And only supporting
explicit truncation/extension is the second most "pretty" option.  Certainly
the need to provide a precision when constructing wide_int::one () isn't
pretty either ...

> The other problem, which I invite you to use the full power of your C++
> sorcery on, is defining an operator so that wide-int +
> unsigned HWI is either rejected or properly zero-extended.  If you can do
> this, I will go along with your suggestion that the internal rep should be
> sign-extended.

That's very easy with template specialization (not with overloading as
you might have figured out).

/* A macro, as in hwint.h, so that "unsigned HOST_WIDE_INT" below is
   valid (it would not be with a typedef).  */
#define HOST_WIDE_INT long

/* Primary template: signed (and smaller) types are promoted to
   HOST_WIDE_INT, so storing one element sign-extends correctly.  */
template <typename T>
struct to_shwi
{
  static void set (HOST_WIDE_INT *s, HOST_WIDE_INT x)
    { s[0] = x; }
};

/* Specialization for unsigned HOST_WIDE_INT: additionally store a zero
   high element so the value is zero-extended, never sign-extended.  */
template <>
struct to_shwi<unsigned HOST_WIDE_INT>
{
  static void set (HOST_WIDE_INT *s, unsigned HOST_WIDE_INT x)
    { s[0] = x; s[1] = 0; }
};

gives a method to properly initialize the sign-extended HWI representation
from any integer type.  It needs additional specializations if initialization
from long long is supported and long long is bigger than HOST_WIDE_INT.

That is, I suppose your issue was stuff like

   wide_int operator+(unsigned HOST_WIDE_INT);
   wide_int operator+(HOST_WIDE_INT);

then you'd write that as

   template <class T>
   wide_int operator+(T);

and in the implementation use the to_shwi class (or a class providing
exactly what you need) that is specialized for the unsigned HOST_WIDE_INT
case (smaller operands are automatically promoted in the right way to
signed HOST_WIDE_INT).
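
Spelled out a bit more, building on the to_shwi definitions above (the
wide_int shell here is a stub of mine, not your class; only the dispatch
matters):

struct wide_int
{
  HOST_WIDE_INT val[2];

  template <typename T>
  wide_int operator+ (T x) const
  {
    HOST_WIDE_INT s[2];
    /* Sign- or zero-extends x into s according to its static type;
       for the signed case only s[0] is stored and the high part is
       implied by sign extension.  */
    to_shwi<T>::set (s, x);
    wide_int r = *this;
    /* ... a real implementation would add s to r with carry here ... */
    return r;
  }
};

So wide_int + 1 deduces T = int and sign-extends, while
wide_int + 1UL (an unsigned HOST_WIDE_INT) picks the specialization
and zero-extends.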

>  Saying that constants are always sign-extended seems OK, but
> there are a huge number of places where we pass unsigned HWIs as the
> second operand, and I do not want that to be a trap.  I went through a round
> of this where I did not post the patch because I could not make this work.
> And the number of places where you want to use an HWI as the second operand
> dwarfs the number of places where you want to use a small integer constant.

See above - this isn't an issue if you use templates for the "overloads".

Richard.

>>>
>>> Kenny
>
>
