Job:  I think you're on the right track.  Given current hardware
optimizations, you might as well create a mostly fixed-length
specification, but with the advantages of "truth" that the ubit provides.
I'm thinking through an implementation that uses generated functions and
parameterized Unum types, which provide as much precision/accuracy as you
need while filling up a standard bit size (8/16/32/64/128), very similar
to the "unpacked unums" that Gustafson describes.

On Wed, Jul 29, 2015 at 5:47 PM, Job van der Zwan <j.l.vanderz...@gmail.com>
wrote:

> On Thursday, 30 July 2015 00:00:56 UTC+3, Steven G. Johnson wrote:
>>
>> Job, I'm basing my judgement on the presentation.
>>
>
> Ah, OK, I was wondering. I feel like those presentations give a general
> impression, but don't really explain the details enough. And like I said,
> your critique overlaps with Gustafson's own critique of traditional
> interval arithmetic, so I wasn't sure whether you meant that you don't buy
> his suggested alternative, the ubox method, after reading the book, or
> were expressing scepticism based on earlier experience, without full
> knowledge of what his suggested alternative is.
>
> To be clear, it wasn't a "you should read the book" put-down - I hate
> comments like that; they destroy every meaningful discussion.
>
> The more I think about it, the more I think the ubit is actually the big
> breakthrough. As a thought experiment, let's ignore the whole
> flexible-bitsize bit and just take an existing float, but replace the last
> bit of the fraction with a ubit. What happens?
>
> Well... we give up one bit of *precision* in the fraction, but *our set
> of representations is still the same size*. We still have the same number
> of bit patterns as before! It's just that half of them are now exact
> (with one bit less precision), and the other half represent the open
> intervals between those exact numbers. That lets you represent the entire
> real number line accurately (though with limited precision, unless a
> value happens to equal one of the exact floats).
>
> Compare that to traditional floating-point numbers, which are all exact,
> but unless the true result of your calculation is exactly representable
> as a float (which is very rare), the answer is guaranteed to be off.
>
> Think about it this way: regular floats represent a finite subset of the
> rational numbers. More bits enlarge that finite subset. But add a ubit,
> and you get the entire real number line. With limited precision, and with
> the same finite number of representations as a float with the same number
> of bits, but still.
>
> I'd like to hear some feedback on the example I used earlier from people
> more versed in this topic than me:
>
>> ubound arithmetic tells you that 1 / (0, 1] = [1, inf), and that
>> [1, inf) / inf = 0
>
>
> Thanks to the ubit, the arithmetic is as simple as "divide by both
> endpoints to get the new interval endpoints". It's so simple that I have
> a hard time believing this was not possible with interval arithmetic
> before, but then again I don't know that much about the topic.
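The divide-by-both-endpoints rule can be sketched in a few lines of Python (my own toy for positive intervals, not the actual ubound machinery): since 1/x is decreasing on (0, inf), the endpoints swap under reciprocal, and each open/closed flag simply travels with its endpoint.

```python
import math

def recip_interval(lo, hi, lo_open, hi_open):
    """Reciprocal of an interval with 0 <= lo < hi <= inf.

    Toy sketch: 1/x is decreasing on (0, inf), so the endpoints swap
    and each keeps its own open/closed flag. 1/0 is taken as inf and
    1/inf as 0, matching the ubound examples quoted above.
    """
    new_lo = 0.0 if math.isinf(hi) else 1.0 / hi
    new_hi = math.inf if lo == 0.0 else 1.0 / lo
    # the flag that belonged to hi now describes the new lower endpoint
    return (new_lo, new_hi, hi_open, lo_open)
```

For example, `recip_interval(0.0, 1.0, True, False)` (i.e. the interval (0, 1]) gives `(1.0, inf, False, True)`, which reads as [1, inf), matching the quoted example.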
>
> So is it really true, as Gustafson claims, that interval arithmetic has
> so far always used *closed* intervals that accepted *rounded* answers?
> Because if that is true, and even if that is the only thing unums
> solve... well, then I'm sold :P
>
