Personally, I'm just trying to figure out what the "secret sauce" is.

   - Are unum operations associative? If so, then how is that accomplished?
   It seems like the answer is "not really", at least not in the sense that
   one usually means it – i.e. that doing the operations in different orders
   produces the same results.


   - Is there some fundamental reason why unums are better than intervals
   when it comes to limiting divergence? The answer seems to be no – or at
   least that unums don't do anything that you couldn't also do to limit the
   divergence of intervals.

The ideas that do seem interesting are the inexact bit and variable
precision.

   - Using the inexact bit to represent either a value or an interval with
   the same type is clever, and I do like how it covers *all* of the real
   number line. On the other hand, you can represent an exact value with a
   closed interval and equal endpoints. (A rough sketch of the idea follows
   after these two points.)


   - Variable precision gives the type more flexibility than floats in much
   the same way that floats are more flexible than fixed-point numbers – it's
   a point even further on the flexibility versus complexity tradeoff. These
   days, probably a good tradeoff, given that we have more transistors than we
   know what to do with.
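
To make the ubit point concrete, here is a minimal sketch in Julia – purely
illustrative, the names Approx and bounds are made up and this isn't any
particular unum implementation:

    # An exact Float64 value, or the open interval between that value and
    # the next representable Float64, depending on the "ubit".
    struct Approx
        value::Float64
        inexact::Bool   # false: exactly `value`; true: the open interval
                        # (value, nextfloat(value))
    end

    # Interpret an Approx as a (lo, hi, open?) triple.
    bounds(x::Approx) =
        x.inexact ? (x.value, nextfloat(x.value), true) : (x.value, x.value, false)

    bounds(Approx(1.0, false))  # (1.0, 1.0, false) – the exact value 1.0
    bounds(Approx(1.0, true))   # (1.0, 1.0000000000000002, true) – an open interval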

Are these things enough to warrant changing how all the hardware everywhere
in the world does arithmetic? It's certainly worth implementing and seeing
how well it works, and Julia is a uniquely good language for doing such
experiments.

On Thu, Jul 30, 2015 at 4:50 PM, Tom Breloff <t...@breloff.com> wrote:

> So I see a few recurring themes in this discussion:
>
> 1) "Floats can do anything Unums can do if you make them big enough".
> I mostly agree with this... But that's a similar argument to saying that we
> could just represent a UInt64 by an immutable with 8 UInt8 fields. Sure you
> could do it, but it's not a very elegant solution.
>
> 2) "Unum intervals and float intervals are the same thing if they have the
> same precision." This I don't think I agree with, if there is an exact unum
> involved in the calc. I feel like incorporating this flag for exactness
> (the ubit) is the key point, and changes the results immensely.  You could
> just make an immutable with a Float64 and a Bool (the ubit) and
> mostly accomplish the same thing... So the Unum is just one way to
> accomplish this.
>
> 3) "we shouldn't explore alternatives to floats, because floats are what
> is currently optimized in hardware."  Really? Where's your adventurous
> spirit? There are some good ideas that, if embraced by a community of
> forward-thinking scientists, could most certainly be optimized in hardware
> someday.
>
> All this to say... I see promise in the concepts of flexible precision and
> exactness information, and I think they can be a more elegant medium for
> attacking some problems.
>
> If unums and floats were both optimized in hardware and equivalently fast,
> I would likely choose unums all the time. With a software implementation,
> it would depend on the application whether there's any value. Either way,
> I'm working on a prototype and you can decide for yourself if you see any
> value in it.
>
> On Thursday, July 30, 2015, Stefan Karpinski <ste...@karpinski.org> wrote:
>
>> It seems like you could apply all of these tricks to intervals to the
>> same effect, and it would still be faster since Float64 ops are implemented
>> in hardware. For example, given an Interval type, you can define
>> square(::Interval) so that the lower bound is 0; you can also define
>> dot(::Vector{Interval}, ::Vector{Interval}) cleverly, etc. (In fact, these
>> things seem better done at the language level than at the hardware level.)
>> Moreover, none of this really addresses the fundamental issue – no
>> systematic solution to the divergence problem is provided, just a
>> collection of hacks to make it slightly less bad in certain special
>> circumstances.
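>>
>> As a rough sketch of the square example (a hypothetical closed-endpoint
>> Interval type, ignoring directed rounding for brevity):
>>
>>     struct Interval
>>         lo::Float64
>>         hi::Float64
>>     end
>>
>>     function square(x::Interval)
>>         a, b = x.lo^2, x.hi^2
>>         # If the interval straddles zero, the true minimum of t^2 over x is 0.
>>         lo = (x.lo <= 0.0 <= x.hi) ? 0.0 : min(a, b)
>>         Interval(lo, max(a, b))
>>     end
>>
>>     square(Interval(-1.0, 2.0))  # Interval(0.0, 4.0); naive interval x*x gives [-2.0, 4.0]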
>>
>> On Thu, Jul 30, 2015 at 3:54 PM, Jason Merrill <jwmerr...@gmail.com>
>> wrote:
>>
>>> On Thursday, July 30, 2015 at 3:10:24 PM UTC-4, Job van der Zwan wrote:
>>>>
>>>> On Thursday, 30 July 2015 16:07:46 UTC+2, Steven G. Johnson wrote:
>>>>>
>>>>> The problem is that if you interpret an inexact unum as the open
>>>>> interval between two adjacent exact values, what you have is essentially
>>>>> the same as interval arithmetic.  The result of each operation will
>>>>> produce intervals that are broader and broader (necessitating lower and
>>>>> lower precision unums), with the well-known problem that the interval
>>>>> quickly becomes absurdly pessimistic in real problems (i.e. you quickly
>>>>> and prematurely discard all of your precision in a variable-precision
>>>>> format like unums).
>>>>>
>>>>> The real problem with interval arithmetic is not open vs. closed
>>>>> intervals; it is this growth of the error bounds in realistic computations
>>>>> (due to the dependency problem and similar).  (The focus on infinite and
>>>>> semi-infinite open intervals is a sideshow.  If you want useful error
>>>>> bounds, the important things are the *small* intervals.)
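>>>>>
>>>>> A toy illustration of the dependency problem, with intervals written as
>>>>> plain (lo, hi) tuples – the two occurrences of x are treated as if they
>>>>> were independent:
>>>>>
>>>>>     sub(a, b) = (a[1] - b[2], a[2] - b[1])   # naive interval subtraction
>>>>>     x = (1.0, 2.0)
>>>>>     sub(x, x)   # (-1.0, 1.0), even though x - x is exactly 0 for any x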
>>>>>
>>>>> If you discard the interval interpretation with its rapid loss of
>>>>> precision, what you are left with is an inexact flag per value, but with
>>>>> no useful error bounds. And I don't believe that this is much more useful
>>>>> than a single inexact flag for a set of computations as in IEEE.
>>>>>
>>>>
>>>> The thing is, these are *exactly* the criticisms Gustafson has of
>>>> traditional interval arithmetic. In fact, he's even more critical of
>>>> interval arithmetic than he is of floats, as far as I can see. However, he
>>>> claims that ubounds don't share the "absurd pessimism" problem. Supposedly,
>>>> traditional interval arithmetic by necessity has to be more pessimistic
>>>> about its boundaries due to rounding, and because it only uses closed
>>>> endpoints instead of allowing for open intervals. Unums instead are
>>>> (supposedly) more precise about the information loss they have, and thus
>>>> (supposedly) don't blow up as badly. Again, his claims, not mine. I'm not
>>>> saying you're wrong, or even sure that you disagree as much as you might
>>>> think you do (although I'm pretty sure you wouldn't like the tone he uses
>>>> when describing traditional methods).
>>>>
>>>> I agree with the others about the grain of salt (unums/ubounds/uboxes
>>>> *always* come out on top in his examples, which does make you wonder), but
>>>> on the other hand: given that the Mathematica implementation of his methods
>>>> is open source, his claims *should* be verifiable (it can be found under
>>>> Downloads/Updates here
>>>> <https://www.crcpress.com/The-End-of-Error-Unum-Computing/Gustafson/9781482239867>;
>>>> Simon Byrne linked it earlier. I also found a Python port
>>>> <https://github.com/jrmuizel/pyunum>).
>>>>
>>>
>>> If you inspect the specific examples of challenge problems that
>>> Gustafson gives in Chapter 14 of his book, the open vs. closed interval
>>> distinction doesn't actually make an important appearance. The main ways
>>> that ubounds do better than Mathematica's Intervals are
>>>
>>> 1. Fused operations allow getting around specific cases of the
>>> dependency problem. E.g. using squareu[x] instead of x*x allows putting 0
>>> as a lower bound of the result, and fdotu allows avoiding cancellation in
>>> dot products (and, as a special case, sums and differences). (A rough
>>> sketch of the fused-dot idea follows after this list.)
>>> 2. Sometimes the unums are allowed to have more bits in their mantissa
>>> than the bounds of the float Intervals.
>>> 3. The ubounds often use fewer bits in their representation (averaged
>>> over a whole calculation) than Interval alternatives.
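>>>
>>> As a rough sketch of the fused-dot idea for plain floats (not Gustafson's
>>> fdotu itself; a BigFloat accumulator just stands in for an exact
>>> accumulator, with a single rounding at the end):
>>>
>>>     function fdot(x::Vector{Float64}, y::Vector{Float64})
>>>         acc = big(0.0)
>>>         for (a, b) in zip(x, y)
>>>             acc += big(a) * big(b)   # each product is exact at BigFloat precision
>>>         end
>>>         Float64(acc)                 # round once at the end
>>>     end
>>>
>>>     x = [1e16, 1.0, -1e16]; y = [1.0, 1.0, 1.0]
>>>     sum(x .* y)   # 0.0 – the 1.0 is lost to cancellation
>>>     fdot(x, y)    # 1.0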
>>>
>>> Only the first two are relevant to correctness/precision. Number 3 is a
>>> performance issue that I don't want to discuss.
>>>
>>> Looking at the specific examples,
>>>
>>> * "Wrath of Kahan, 1"
>>> Intervals eventually blow up to [-Inf, Inf]. Ubounds also diverge (from
>>> the text, "If you keep going, the left endpoint falls below 6, and then
>>> diverges towards -Inf, but remember that a unum environment can
>>> automatically detect when relative width gets too high..."). They diverge
>>> slightly more slowly in this case because he has allowed 2^6=64 bits in the
>>> mantissa of the ubounds endpoints, whereas the double Interval bounds have
>>> only 52 bits in their mantissa.
>>>
>>> * "Wrath of Kahan, 2"
>>> In the unum computation, squaring operations are fused with squareu, but
>>> float Interval calculations are not fused. I also believe the check for the
>>> ubounds containing zero in e[z] is erroneous in a way that makes this
>>> example very deceiving, but I don't want to go into that detail here.
>>>
>>> * "Rump's royal pain"
>>> Uses fused powu and squareu operations, and allows up to 2^7=128 bits in
>>> the mantissa for ubounds, whereas double intervals are computed without
>>> fused operations and only allow 52 bits in their mantissa. The fused
>>> operations are critical here.
>>>
>>> * "Quadratic formula"
>>> Main advantage comes from using fused squareu operations, and allowing
>>> more bits in the mantissa of the unums than in the mantissa of the single
>>> precision floats. No comparison to float Intervals here.
>>>
>>> * "Bailey's numerical nightmare"
>>> The unum computation is first scaled, and then done almost entirely using
>>> fused dot products, which makes it an integer computation until the final
>>> division. The float Interval computation was never scaled, and the plain
>>> float computation was scaled but used no fused operations.
>>>
>>> * "Fast Fourier Transforms"
>>> I haven't read this example very closely so no comment.
>>>
>>> My conclusion is that in all of these specific test cases, Intervals
>>> with float endpoints could get answers that are just as precise/accurate,
>>> if they used the same fused operations, and if their mantissa size was as
>>> large as the maximum mantissa size allowed for the unums. Even so, this is
>>> a useful observation. Environments that provide Intervals should provide a
>>> useful set of fused operations over them to reduce the dependency problem
>>> in a usefully large set of cases. If open vs. closed endpoints are
>>> important, those could easily be added to any type of Interval
>>> representation at the cost of 2 extra bits per interval.
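>>>
>>> For instance (illustrative field names only, not an existing package):
>>>
>>>     struct FlaggedInterval
>>>         lo::Float64
>>>         hi::Float64
>>>         lo_open::Bool   # true: lo itself is excluded, i.e. (lo, ...
>>>         hi_open::Bool   # true: hi itself is excluded, i.e. ..., hi)
>>>     end
>>>
>>>     # The open interval between adjacent floats, like an inexact unum:
>>>     FlaggedInterval(1.0, nextfloat(1.0), true, true)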
>>>
>>> The performance claims might eventually be legitimate (with new
>>> hardware), but I can't personally evaluate them.
>>>
>>
>>
