Paul D. Anderson wrote:
Don wrote:

Paul D. Anderson wrote:
The implementation will comply with IEEE 754-2008. I just wanted to illustrate 
that precision can depend on the operation as well as the operands.

Paul
I'm not sure why you think there needs to be a precision that depends on the operation. IEEE 754-2008 has the notion of "Widento" precision, but AFAIK it's primarily there to support x87 -- it's pretty clear that the normal mode of operation is to use the maximum precision of the operands. Even when there is a Widento mode, it's only provided for the non-storage formats (i.e., 80-bit reals only).

I think it should operate the way int and long do:
typeof(x*y) is x if x.mant_dig >= y.mant_dig, else y.
What you might perhaps do is have a global setting for adjusting the default size of a newly constructed variable. But it would only affect constructors, not temporaries inside expressions.
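For illustration, that rule could look something like this for a family of fixed-size decimal types (the Decimal template and ProductType names below are placeholders, not part of any existing implementation):

struct Decimal(int mantDig)
{
    enum mant_dig = mantDig;   // mirrors .mant_dig on float/double/real
    // sign, coefficient and exponent fields would go here
}

// The result type is whichever operand has the greater precision,
// just as int op long yields long:
template ProductType(X, Y)
{
    static if (X.mant_dig >= Y.mant_dig)
        alias ProductType = X;
    else
        alias ProductType = Y;
}

unittest
{
    alias D7  = Decimal!7;
    alias D16 = Decimal!16;
    static assert(is(ProductType!(D7, D16) == D16));
}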

I agree. I seem to have stated it badly, but I'm not suggesting there is a 
precision inherent in an operation. I'm just considering the possibility that 
the user may not want the full precision of the operands for the result of a 
particular operation.

Java's BigDecimal class allows for this by including a context object as a 
parameter to the operation. For D we would allow for this by changing the 
context, where the precision in the context would govern the precision of the 
result.

So the precision of a new variable, in the absence of an explicit instruction, 
would be the context precision, and the precision of the result of an operation 
would be no greater than the context precision, but would otherwise be the 
larger of the precisions of the operands.

So if a user wanted to perform arithmetic at a given precision he would set the 
context precision to that value. If he wanted the precision to grow to match 
the largest operand precision (i.e., no fixed precision), he would set the 
context precision to be larger than any expected value. (Or maybe we could 
explicitly provide for this.)
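
Stated as code, the rule in the two paragraphs above amounts to something like this (the helper name is made up):

uint resultPrecision(uint precision1, uint precision2, uint ctxPrecision)
{
    import std.algorithm : max, min;
    // grow to the larger operand, but never beyond the context precision
    return min(ctxPrecision, max(precision1, precision2));
}

unittest
{
    // fixed-precision arithmetic: the context caps the result
    assert(resultPrecision(16, 34, 9) == 9);
    // effectively unbounded growth: a context precision larger than any
    // operand precision that will ever occur
    assert(resultPrecision(16, 34, uint.max) == 34);
}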

You could increase the precision of a calculation by changing the precision of one of the operands. For that to work, you would need a way to create a new variable which copies the precision from another one.
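
For example, assuming the precision is a run-time property of the variable (the names below are placeholders):

struct Decimal
{
    uint precision;
    // coefficient, exponent, sign ...

    this(uint precision) { this.precision = precision; }

    // a new value that carries the same precision as `other`
    static Decimal withPrecisionOf(Decimal other)
    {
        return Decimal(other.precision);
    }
}

// usage: make `sum` at least as precise as `total`
// auto sum = Decimal.withPrecisionOf(total);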


In any case, the operation would be carried out at the operand precision(s) and 
only rounded, if necessary, at the point of return.
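
As a toy illustration of that order of operations -- plain ulongs stand in for decimal coefficients, and the final step truncates rather than rounds, just to keep it short:

// keep at most p significant digits; a real implementation would apply
// the context's rounding mode instead of truncating
ulong toPrecision(ulong value, uint p)
{
    import std.conv : to;
    auto s = value.to!string;
    if (s.length <= p) return value;           // already within precision
    ulong scale = 10UL ^^ (s.length - p);
    return (value / scale) * scale;            // zero out the excess digits
}

ulong multiplyThenRound(ulong x, ulong y, uint ctxPrecision)
{
    ulong exact = x * y;                       // full operand precision (overflow ignored here)
    return toPrecision(exact, ctxPrecision);   // rounded only at the point of return
}

unittest
{
    assert(multiplyThenRound(123, 456, 3) == 56_000);   // exact product is 56088
}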

I hope that's a clearer statement of what I'm implementing. If I've still 
missed something important, let me know.

In any case, it seems to me that it doesn't affect the code very much. It's much easier to decide such things once you have working code and some real use cases.
