On Thursday, 30 July 2015 16:07:46 UTC+2, Steven G. Johnson wrote:
>
> The problem is that if you interpret an exact unum as the open interval 
> between two adjacent exact values, what you have is essentially the same as 
> interval arithmetic.  The result of each operation will produce intervals 
> that are broader and broader (necessitating lower and lower precision 
> unums), with the well known problem that the interval quickly becomes 
> absurdly pessimistic in real problems (i.e. you quickly and prematurely 
> discard all of your precision in a variable-precision format like unums).
>
> The real problem with interval arithmetic is not open vs. closed 
> intervals, it is this growth of the error bounds in realistic computations 
> (due to the dependency problem and similar).  (The focus on infinite and 
> semi-infinite open intervals is a sideshow.  If you want useful error 
> bounds, the important things are the *small* intervals.)
>
> If you discard the interval interpretation with its rapid loss of 
> precision, what you are left with is an inexact flag per value, but with no 
> useful error bounds. And I don't believe that this is much more useful than 
> a single inexact flag for a set of computations as in IEEE.
>

The thing is, these are *exactly* the criticisms Gustafson has of 
traditional interval arithmetic. In fact, as far as I can see he's even 
more critical of interval arithmetic than he is of floats. However, he 
claims that ubounds don't share the "absurd pessimism" problem. Supposedly, 
traditional interval arithmetic is forced to be more pessimistic about its 
bounds because of rounding and because it only uses closed endpoints 
instead of allowing open intervals, whereas unums are (supposedly) more 
precise about the information they have lost, and thus (supposedly) don't 
blow up as badly. Again, his claims, not mine. I'm not saying you're wrong, 
and I'm not even sure you disagree as much as you might think you do 
(although I'm pretty sure you wouldn't like the tone he uses when 
describing traditional methods).
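
Just to make the "blow up" concrete for anyone following along, here is a 
minimal plain-Python sketch of the dependency problem Steven describes. 
This is my own toy code, not Gustafson's Mathematica package or the pyunum 
port, and the helper names isub/imul are made up. Because each operation 
treats its operands as independent, x - x over an interval doesn't give 
[0, 0], and iterating an expression like the logistic map makes the 
closed-interval bounds balloon far beyond the true range:

def isub(a, b):
    # subtract closed intervals: [a0, a1] - [b0, b1] = [a0 - b1, a1 - b0]
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    # multiply closed intervals: take min/max over all endpoint products
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

x = (0.1, 0.2)
print(isub(x, x))   # (-0.1, 0.1), not (0.0, 0.0): the two x's are treated as independent

# iterate x <- r*x*(1-x); the true values stay within [0, 1], but the bounds don't
r, xi = (3.5, 3.5), (0.49, 0.51)
for i in range(5):
    xi = imul(r, imul(xi, isub((1.0, 1.0), xi)))
    print(i, xi)    # the width grows every step; after a few steps the interval is useless

Gustafson's claim, as I read it, is that ubounds keep this kind of growth 
tighter by tracking open/closed endpoints and inexactness explicitly, which 
is exactly the sort of thing that should be checkable against his code.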

I agree with the others about the grain of salt (unums/ubounds/uboxes 
*always* come out on top in his examples, which does make you wonder), but 
on the other hand: given that the Mathematica implementation of his methods 
is open source, his claims *should* be verifiable (it can be found under 
Downloads/Updates here 
<https://www.crcpress.com/The-End-of-Error-Unum-Computing/Gustafson/9781482239867>; 
Simon Byrne linked it earlier. I also found a Python port 
<https://github.com/jrmuizel/pyunum>).
