Also, to add to Scott's list... there are precisely 2 representations of NaN,
and we could potentially use them as a replacement for Nullable.  That is,
the first NaN could represent 0/0 and similar, while the second NaN could
represent a missing value.  So instead of an "Array{Nullable{Float64},N}",
you would just have an "Array{Unum{4,7},N}".  isnan(u) and isnull(u) could be
mutually exclusive.
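As a rough sketch of the idea (in Python for portability, since the Unum type above is hypothetical; the payload values and predicate names here are my own, not any spec's): an IEEE-754 NaN carries unused fraction bits, so two distinct bit patterns can encode "result of an invalid operation" versus "missing value", and the two predicates stay mutually exclusive:

```python
import math
import struct

def float_to_bits(x: float) -> int:
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def bits_to_float(b: int) -> float:
    return struct.unpack("<d", struct.pack("<Q", b))[0]

# Low 51 bits of the fraction field (everything below the quiet-NaN bit).
PAYLOAD_MASK = (1 << 51) - 1

# Two distinct quiet-NaN bit patterns: payload 1 marks a computational
# NaN (0/0 and similar), payload 2 marks a missing value.
NAN_INVALID = bits_to_float(0x7FF8000000000001)
NAN_MISSING = bits_to_float(0x7FF8000000000002)

def is_invalid(x: float) -> bool:
    return math.isnan(x) and float_to_bits(x) & PAYLOAD_MASK == 1

def is_missing(x: float) -> bool:
    return math.isnan(x) and float_to_bits(x) & PAYLOAD_MASK == 2
```

Both values pass an ordinary isnan check, but is_invalid and is_missing never agree, which is exactly the isnan(u)-vs-isnull(u) separation described above.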

I foresee the potential for specialized arrays as well, which could strip
the "size" fields out of each unum and put that metadata in the array type,
keeping the format fixed within the array.  This allows constant-time
indexing and other niceties, while still allowing arbitrary-precision
intermediate calcs, with exactness tracking.
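To make that concrete, here's a toy container (Python again; FixedFormatArray and its bit-packing are illustrative, not part of any unum library): the per-element "size" metadata lives once in the container, every element occupies the same number of bits, and indexing becomes a constant-time offset computation.

```python
# A toy "fixed-format" array: the element format (here just a fixed
# width in bits, standing in for the unum's ess/fss size fields) is
# stored once in the container rather than per element, so indexing
# is a constant-time offset into a flat buffer.
class FixedFormatArray:
    def __init__(self, width_bits, values):
        assert width_bits % 8 == 0          # keep the sketch byte-aligned
        self.width = width_bits // 8        # bytes per element
        self.data = bytearray(len(values) * self.width)
        for i, v in enumerate(values):
            self[i] = v

    def __setitem__(self, i, v):
        self.data[i * self.width:(i + 1) * self.width] = \
            v.to_bytes(self.width, "little")

    def __getitem__(self, i):
        return int.from_bytes(
            self.data[i * self.width:(i + 1) * self.width], "little")
```

A real unum container would decode the fixed bit pattern instead of returning raw integers, but the indexing story is the same.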

On Wed, Jul 29, 2015 at 10:38 AM, Tom Breloff <t...@breloff.com> wrote:

> Scott:  Is your number format a public (open source) specification?  How
> does it differ from decimal floating point?
>
> On Wed, Jul 29, 2015 at 10:30 AM, Tom Breloff <t...@breloff.com> wrote:
>
>> Correct me if I'm wrong, but (fixed-size) decimal floating-point has most
>> of the same issues as binary floating point in terms of accumulation of
>> errors, right?  For certain cases, such as adding 2 prices together, I agree
>> that decimal floating point would work ($1.01 + $2.02 == $3.03), but for
>> those cases it's easier to represent the values as integers: (101 + 202 ==
>> 303; prec=2), which is basically what I do now.
>>
>> In terms of storage and complexity, I would expect that decimal floating-
>> point numbers are bloated compared to binary floats.  You're giving up speed
>> and memory in order to guarantee exact representation of base-10 values...
>> I could understand how this is occasionally useful, but I can't imagine
>> you'd want that in the general case.
>>
>> In terms of hardware support... obviously it doesn't exist today, but it
>> could in the future:
>> http://www.theplatform.net/2015/03/12/the-little-chip-that-could-disrupt-exascale-computing/
>>
>> Either way, I would think there's enough potential to the idea to at
>> least prototype and test, and maybe it will prove to be more useful than
>> you expect.
>>
>> On Wed, Jul 29, 2015 at 10:10 AM, Job van der Zwan <
>> j.l.vanderz...@gmail.com> wrote:
>>
>>> On Wednesday, 29 July 2015 16:50:21 UTC+3, Steven G. Johnson wrote:
>>>>
>>>> Regarding, unums, without hardware support, at first glance they don't
>>>> sound practical compared to the present alternatives (hardware or software
>>>> fixed-precision float types, or arbitrary precision if you need it). And
>>>> the "ubox" method for error analysis, even if it overcomes the problems of
>>>> interval arithmetic as claimed, sounds too expensive to use on anything
>>>> except for the smallest-scale problems because of the large number of boxes
>>>> that you seem to need for each value whose error is being tracked.
>>>>
>>>
>>> Well, I don't know enough about traditional methods to say whether they're
>>> really as limited as Gustafson claims in his book, or whether he's just
>>> cherry-picking. The same goes for the cost of using uboxes.
>>>
>>> However, ubound arithmetic tells you that 1 / (0, 1] = [1, inf), and
>>> that [1, inf) / inf = 0. The ubounds describing those interval results are
>>> effectively just a pair of floating-point numbers, plus a ubit to signal
>>> whether each endpoint is open or closed. That's a very simple thing to
>>> implement. I'm not sure there's any arbitrary-precision method that deals
>>> with this so elegantly - you probably know better than I do.
>>>
>>
>>
>
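Job's 1 / (0, 1] = [1, inf) example above is small enough to sketch directly (Python for brevity; Ubound and reciprocal are illustrative names under my own simplifying assumptions, not Gustafson's full unum arithmetic): two endpoints plus an open/closed flag per endpoint, with the reciprocal of a nonnegative interval swapping the endpoints.

```python
from dataclasses import dataclass

INF = float("inf")

# A minimal ubound-style interval: two floating-point endpoints, each
# with a flag (the "ubit") saying whether that endpoint is open.
@dataclass(frozen=True)
class Ubound:
    lo: float
    lo_open: bool
    hi: float
    hi_open: bool

    def __str__(self):
        left = "(" if self.lo_open else "["
        right = ")" if self.hi_open else "]"
        return f"{left}{self.lo}, {self.hi}{right}"

def reciprocal(u: Ubound) -> Ubound:
    """1 / u for an interval with 0 <= lo < hi: endpoints swap, and
    1/0 and 1/inf yield open endpoints at inf and 0 respectively."""
    lo = 0.0 if u.hi == INF else 1.0 / u.hi
    hi = INF if u.lo == 0.0 else 1.0 / u.lo
    lo_open = u.hi_open or u.hi == INF
    hi_open = u.lo_open or u.lo == 0.0
    return Ubound(lo, lo_open, hi, hi_open)

# 1 / (0, 1]  ->  [1.0, inf)
print(reciprocal(Ubound(0.0, True, 1.0, False)))
```

The point isn't the arithmetic itself but the representation cost: each result is just two floats and two bits, which supports the claim that this is simple to implement without hardware support.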
