On Friday, 21 February 2014 at 09:04:40 UTC, Ivan Kazmenko wrote:
On Friday, 21 February 2014 at 05:21:53 UTC, Frustrated wrote:
I think though adding a "repeating" bit would make it even more
accurate so that repeating decimals within the bounds of maximum
bits used could be represented perfectly. e.g., 1/3 = 0.3333...
could be represented perfectly with such a bit and sliding fp
type. With proper cpu support one could have 0.3333... * 3 = 1
exactly.

I believe that repeating decimals, or better, repeating binary fractions, will hardly be more useful than a rational representation like p/q. The reason is: if we take a reciprocal 1/q and represent it as a binary, decimal, or other fixed-base fraction, the representation is indeed periodic, but the upper bound on the length of its period is as high as q-1, and this bound is frequently attained.

For example, at http://en.wikipedia.org/wiki/Binary_number#Fractions , note that the binary fraction for 1/11 has period 10, and for 1/13 the period is 12.

Thus the repeating representation of a fraction p/q will take up to q-1 bits when we store it as a repeating binary fraction, but only log(p)+log(q) bits when stored as a rational number (numerator and denominator).
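The period bound above can be checked directly: the period of 1/q in base 2 (for odd q) is the multiplicative order of 2 modulo q. A small sketch (my own illustration, not from the thread) that reproduces the 1/11 and 1/13 examples from the Wikipedia page:

```python
def binary_period(q):
    """Length of the repeating block of 1/q in base 2, for odd q > 1.

    This is the multiplicative order of 2 mod q: the smallest k
    with 2**k == 1 (mod q).
    """
    k, r = 1, 2 % q
    while r != 1:
        r = (r * 2) % q
        k += 1
    return k

print(binary_period(11))  # 10
print(binary_period(13))  # 12
print(binary_period(9))   # 6
```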


No, you are comparing apples to oranges: the q is not the same in both expressions.

The number of bits for p/q when p and q are stored separately is log[2](p) + log[2](q). But when p/q is stored as a repeating fraction (assuming it repeats), the cost is bounded by a fixed constant: the number of bits allotted to the repeating block.

So in some cases the rational expression is cheaper and in other
cases the repeating decimal is cheaper.

e.g., .1.... in theory could take 1 bit to represent wile 1/9
takes 1 + 4 = 5.

(The point is that the rational representation sometimes takes far more bits than the repeating representation, and other times it doesn't. Basically, if we use n bits to represent the repeating block, we can't represent repetitions that require more than n bits; in those cases a rational expression may be cheaper. If we use some type of compression, then rational expressions would, in general, be cheaper the larger n is.)
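To make the "sometimes cheaper, sometimes not" point concrete, here is a sketch (my own, not from the thread) that compares the two costs for 1/q: the bits needed for the numerator and denominator versus the length of the repeating binary block:

```python
def rational_bits(p, q):
    """Cost of storing p and q separately, in bits."""
    return p.bit_length() + q.bit_length()

def period_bits(q):
    """Bits in the repeating block of 1/q in base 2 (odd q > 1):
    the multiplicative order of 2 mod q."""
    k, r = 1, 2 % q
    while r != 1:
        r = (r * 2) % q
        k += 1
    return k

# 1/9:  rational costs 5 bits, repeating block costs 6 -> rational wins.
# 1/127: rational costs 8 bits, repeating block costs 7 -> repeating wins.
for q in (9, 11, 13, 127):
    print(q, rational_bits(1, q), period_bits(q))
```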

e.g.,

1/9 takes 5 bits to represent as a rational fraction (4 if we require the numerator to be 1).

0.111... takes 1 digit to represent in theory, but with a fixed block size of n bits, it takes n bits.
e.g.,

1/9 = 0.000111000111... in binary, i.e. the repeating block 000111 over 2^6 - 1 = 63 (7/63 = 1/9). Storing the block takes 6 bits (000111).
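This can be checked mechanically: a repeating binary block b of length k has the value b/(2^k - 1). A quick check (illustration only, not from the thread):

```python
from fractions import Fraction

# Repeating block 000111 (length 6) in binary:
block, k = 0b000111, 6

# Its value is block / (2**k - 1) = 7/63 = 1/9.
value = Fraction(block, 2**k - 1)
print(value)  # 1/9
```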

If n were large and fixed, then I think statistically it would be best to use rational expressions rather than repeating fractions. But rational expressions are not unique, and that may cause some problems with representation... unless the cpu implemented some algorithm for reduction. 1/9 = 10/90, but 10/90 takes far more bits to represent than 1/9, even though they represent the same repeating fraction and the repeating fraction's bit count is fixed.
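The reduction step is just dividing through by the gcd, which is what Python's `fractions.Fraction` does on construction; a sketch (my own, to illustrate the non-uniqueness point):

```python
from fractions import Fraction
from math import gcd

# 10/90 and 1/9 are the same rational, but the unreduced form is bigger.
g = gcd(10, 90)
print(g)               # 10
print(10 // g, 90 // g)  # 1 9

# Fraction reduces to lowest terms automatically, giving a unique form.
a = Fraction(10, 90)
print(a)               # 1/9
```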

In any case, if the fpu had a history of calculations, it could potentially store them in an expression tree and attempt to reduce them, e.g., (p/q)*(q/p) = 1. A history could also be useful for constants, in that multiplying several constants together could produce a new constant without introducing any new accumulation error.
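In software, exact rational arithmetic already gives the (p/q)*(q/p) = 1 identity with no rounding, which is where floats accumulate error. A sketch with Python's `fractions` module (my own illustration of the idea, not a hardware proposal):

```python
from fractions import Fraction

# Exact: the product reduces symbolically to 1, no rounding anywhere.
p, q = 355, 113
print(Fraction(p, q) * Fraction(q, p) == 1)  # True

# Floats, by contrast, accumulate representation error:
print(0.1 + 0.2 == 0.3)                                      # False
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```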
