The wiki should be active now.

John: welcome to the thread! I hope you'll find the time to review the
implementation I'm designing as well as contribute to the wiki.

On Fri, Jul 31, 2015 at 1:49 PM, John Gustafson <johngustaf...@earthlink.net> wrote:

> Here is how you can represent the square root of 2 with a finite number of
> symbols: "1.414…"
>
> The "…" means "There are more decimals after the last one shown, not all
> zeros and not all nines." If the trailing digits were all zeros, we would
> instead write "1.414" and it would be exact. If the trailing digits were
> all nines, we would write "1.415" and again it would be exact. "1.414…" is
> shorthand for the open interval (1.414, 1.415), which is not something you
> can express with traditional interval arithmetic since all the intervals
> are closed.
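>
> For concreteness, here is a toy sketch of that idea in Julia (the type
> and field names are illustrative, not from any real library): an exact
> decimal significand plus one flag for the trailing "…".
>
>     # An exact decimal plus the "…" flag. The flag means the true value
>     # lies strictly between this decimal and the next one up.
>     struct DecApprox
>         digits::Int    # significand as an integer, e.g. 1414
>         scale::Int     # decimal exponent, e.g. -3, so the value is 1414e-3
>         inexact::Bool  # the trailing "…"
>     end
>
>     value(d::DecApprox) = d.digits * exp10(d.scale)
>
>     function Base.show(io::IO, d::DecApprox)
>         if d.inexact   # print the open interval (this value, next value)
>             print(io, "(", value(d), ", ", (d.digits + 1) * exp10(d.scale), ")")
>         else
>             print(io, value(d))
>         end
>     end
>
>     DecApprox(1414, -3, true)   # displays as the open interval (1.414, 1.415)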
>
> This is how the ubit part of a unum works. It is appended to the fraction
> bit string, and if it is 1, then there is a "…" to show it is between two
> floats. If it is 0, then the fraction is exact. This allows you to "tile"
> the real number line with no redundancy; every real number is represented
> by exactly one unum, and there is no rounding. There is simply admission of
> inexactness, which becomes part of the number's self-description.
>
> Most of the woes of numerical analysis come from the belief that you have
> to replace every result with an exact number, even if it is incorrect. We
> call it "rounding," but it might better be termed "guessing." It is a form
> of error. The unum representation makes computing with real values as
> mathematically correct as computing with integers. Even if you dial the
> accuracy way, way down, say to a single-bit exponent and a single-bit
> fraction (this is possible!), the unums will not lie to you about a real
> value. They are simply less accurate, which is far preferable to being very
> precise but… wrong.
>
>
> On Wednesday, July 29, 2015 at 8:14:43 AM UTC-7, Steven G. Johnson wrote:
>>
>> On Wednesday, July 29, 2015 at 10:30:41 AM UTC-4, Tom Breloff wrote:
>>>
>>> Correct me if I'm wrong, but (fixed-size) decimal floating point has
>>> most of the same issues as binary floating point in terms of error
>>> accumulation, right?
>>>
>>
>> What "issues" are you referring to?  There are a lot of crazy myths out
>> there about floating-point arithmetic.
>>
>> For any operation that you could perform exactly in fixed-point
>> arithmetic with a given number of bits, the same operation will also be
>> performed exactly in decimal floating point with the same number of bits
>> for the significand. However, for the same total width (e.g. 64 bits),
>> decimal floating point sacrifices a few bits of precision in exchange for
>> dynamic scaling (i.e. the exponent), which gives exact representations
>> over a vastly expanded dynamic range.
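>>
>> A quick illustration (using the DecFP.jl package here, but any IEEE
>> 754-2008 decimal type behaves the same way):
>>
>>     using DecFP
>>     d64"0.10" + d64"0.20" == d64"0.30"   # true: all three are exact in Dec64
>>     0.10 + 0.20 == 0.30                  # false in binary Float64
>>     d64"1e300"                           # exact, far beyond what any 64-bit
>>                                          # fixed-point format could hold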
>>
>> Furthermore, for operations that *do* involve roundoff error in either
>> fixed- or decimal floating-point arithmetic with a fixed number of bits,
>> the error accumulation is usually vastly better in floating point than in
>> fixed point (e.g. there is no fixed-point equivalent of pairwise
>> summation, with its logarithmic error growth).
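>>
>> A minimal sketch of pairwise summation, for reference (the function
>> name is mine; Julia's built-in sum already works this way internally):
>>
>>     # Split the range in half, sum each half recursively, then add the
>>     # two partial sums; roundoff grows O(log n) instead of O(n).
>>     function pairwise_sum(x::AbstractVector{T}, lo = firstindex(x),
>>                           hi = lastindex(x)) where T<:AbstractFloat
>>         hi - lo < 8 && return foldl(+, (x[i] for i in lo:hi); init = zero(T))
>>         mid = (lo + hi) >>> 1
>>         return pairwise_sum(x, lo, mid) + pairwise_sum(x, mid + 1, hi)
>>     end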
>>
>> If you want no roundoff errors, ever, then you have no choice but to use
>> some kind of (slow) arbitrary-precision type, and even then there are
>> plenty of operations you can't allow: division, for example (unless you
>> are willing to use arbitrary-precision rationals, with their exponential
>> complexity), or square roots.
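>>
>> For instance, with Julia's built-in rationals (exact, but watch the
>> operand sizes grow with every step):
>>
>>     x = big(1) // 3               # Rational{BigInt}
>>     x / 7 + 2 // 11               # exactly 53//231; denominators multiply up
>>     sqrt(2 // 1)                  # no exact answer: falls back to Float64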
>>
