For me, the nice thing (if I understand this correctly) is that UNUMs let 
me know that there *was* roundoff error, whereas with the current IEEE binary 
*and* decimal standards, you have no way of telling.
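
For example, in ordinary IEEE binary arithmetic the result of a rounded 
operation is just another number, with nothing attached to the value itself 
saying that rounding happened (a quick Python illustration, not tied to any 
particular library):

    a = 0.1 + 0.2     # rounded: the exact sum is not representable in binary
    b = 0.3           # also rounded on input
    print(a == b)     # False, but neither stored value records that it is inexact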

On Wednesday, July 29, 2015 at 11:14:43 AM UTC-4, Steven G. Johnson wrote:
>
>
>
> On Wednesday, July 29, 2015 at 10:30:41 AM UTC-4, Tom Breloff wrote:
>>
>> Correct me if I'm wrong, but (fixed-size) decimal floating-point has most 
>> of the same issues as floating point in terms of accumulation of errors, 
>> right? 
>>
>
> What "issues" are you referring to?  There are a lot of crazy myths out 
> there about floating-point arithmetic.
>
> For any operation that you could perform exactly in fixed-point arithmetic 
> with a given number of bits, the same operation will also be performed 
> exactly in decimal floating-point with the same number of bits for the 
> significand. However, for the same total width (e.g. 64 bits), decimal 
> floating point sacrifices a few bits of precision in exchange for dynamic 
> scaling (i.e. the exponent), which gives exact representations for a vastly 
> expanded dynamic range. 
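>
> For example, a sum that would be exact in decimal fixed point (tenths) stays 
> exact in decimal floating point, but not in binary. A rough sketch in Python, 
> using its bundled decimal module rather than a fixed-width decimal64 type, 
> but the idea is the same:
>
>     from decimal import Decimal
>
>     print(0.1 + 0.2 == 0.3)   # False: binary floating point rounds all three values
>     print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))
>     # True: 0.1, 0.2, and 0.3 are exactly representable with a decimal significand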
>
> Furthermore, for operations that *do* involve roundoff error in either 
> fixed- or decimal floating-point arithmetic with a fixed number of bits, 
> the error accumulation is usually vastly better in floating point than 
> fixed-point.  (e.g. there is no equivalent of pairwise summation, with 
> logarithmic error growth, in fixed-point arithmetic.)
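>
> (Pairwise summation is roughly the following; a toy Python sketch, where a 
> real implementation would use a larger base case and avoid copying slices:)
>
>     def pairwise_sum(x):
>         # recursive cascade summation: roundoff grows roughly like O(log n)
>         # instead of the O(n) growth of a naive left-to-right running sum
>         if len(x) <= 8:
>             return sum(x)   # plain summation for small blocks
>         m = len(x) // 2
>         return pairwise_sum(x[:m]) + pairwise_sum(x[m:])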
>
> If you want no roundoff errors, ever, then you have no choice but to use 
> some kind of (slow) arbitrary-precision type, and even then there are 
> plenty of operations you can't allow: division, for example (unless you are 
> willing to use arbitrary-precision rationals with exponential complexity), 
> or square roots.
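>
> With Python's arbitrary-precision rationals, for instance (just an 
> illustration of the trade-off):
>
>     from fractions import Fraction
>
>     x = Fraction(1, 3) + Fraction(1, 7)   # exactly 10/21, no rounding
>     # but numerators and denominators keep growing as a computation proceeds,
>     # and there is no Fraction at all that equals sqrt(2)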
>
>  
>
