On Friday, 21 February 2014 at 20:27:18 UTC, Frustrated wrote:
On Friday, 21 February 2014 at 09:04:40 UTC, Ivan Kazmenko wrote:
Thus a fraction p/q will take up to q-1 bits when we store it as a repeating decimal, but log(p)+log(q) bits when stored as a rational number (numerator and denominator).

No, you are comparing apples to oranges. The q is not the same in both equations.

Huh?  I believe my q is actually the same q in both.

What I am saying is, more formally:

1. Representing p/q as a rational number asymptotically takes on the order of log(p)+log(q) bits.

2. Representing p/q as a repeating binary fraction asymptotically takes on the order of log(p)+q bits in the worst case, and this worst case is common.

Make that a decimal fraction instead of a binary one if you wish; the statement remains the same.
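To make that concrete, here is a minimal Python sketch (the names and the choice of Python are mine, purely for illustration) that computes the cost of the repeating-binary form: the pre-period is the number of factors of 2 in the reduced denominator, and the period is the multiplicative order of 2 modulo what remains.

    from math import gcd

    def binary_expansion_cost(p, q):
        # Lengths (pre_period, period) of the fractional part of p/q in base 2.
        q //= gcd(p, q)            # reduce to lowest terms
        pre = 0
        while q % 2 == 0:          # factors of the base form the pre-period
            q //= 2
            pre += 1
        if q == 1:
            return pre, 0          # terminating expansion (degenerate period)
        k, r = 1, 2 % q            # period = multiplicative order of 2 mod q
        while r != 1:
            r = r * 2 % q
            k += 1
        return pre, k

    print(binary_expansion_cost(1, 7))   # (0, 3):  1/7 = 0.(001) in binary
    print(binary_expansion_cost(1, 11))  # (0, 10): the full q-1 worst case

For q = 11 the order of 2 is the full q-1 = 10, which is exactly the worst case above; primes like that are not rare.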

The number of bits for p/q when p and q are stored separately is log2(p) + log2(q). But when p/q is stored as a repeating decimal (assuming it repeats), then the cost is a fixed constant determined by the number of bits.

A rational represented as a fixed-base (binary, decimal, etc.) fraction always has a period. Sometimes the period is degenerate, that is, consists of a single zero: 1/2 = 0.100000... = 0.1 as a binary fraction. Sometimes there is a non-degenerate pre-period part before the period: 13/10 = 1.0100110011... = 1.0(1001) as a binary fraction, the "1.0" part being the pre-period and the "(1001)" part the period. The same goes for decimal fractions.
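To illustrate, a quick Python sketch (mine; assumes a non-negative fraction) that prints a Fraction in this pre-period/period notation via long division, where the first repeated remainder marks the start of the period:

    from fractions import Fraction

    def repeating_binary(fr):
        # Binary expansion of a non-negative Fraction, period in parentheses.
        whole, rem = divmod(fr.numerator, fr.denominator)
        digits, seen = [], {}
        while rem and rem not in seen:
            seen[rem] = len(digits)     # remember where this remainder occurred
            rem *= 2
            digits.append(str(rem // fr.denominator))
            rem %= fr.denominator
        if not rem:                     # terminating (degenerate) expansion
            return f"{whole:b}." + ("".join(digits) or "0")
        i = seen[rem]                   # the period starts at digit i
        return f"{whole:b}." + "".join(digits[:i]) + "(" + "".join(digits[i:]) + ")"

    print(repeating_binary(Fraction(1, 2)))    # 0.1
    print(repeating_binary(Fraction(13, 10)))  # 1.0(1001)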

So in some cases the rational expression is cheaper and in other
cases the repeating decimal is cheaper.

Sure, it depends on the p and q; I was talking about the asymptotic case. Say, for p and q uniformly distributed in [100..999], I believe the rational representation is much shorter on average. We can measure that if the practical need arises... :)
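For instance, a rough measurement sketch in Python (sampling instead of enumerating all pairs; repeating_bits recomputes the pre-period and period as in the earlier sketch):

    import random
    from math import gcd

    def repeating_bits(p, q):
        # Fractional-part cost of p/q as a repeating binary fraction:
        # pre-period length plus period length.
        q //= gcd(p, q)
        pre = 0
        while q % 2 == 0:
            q //= 2
            pre += 1
        if q == 1:
            return pre
        k, r = 1, 2 % q                # multiplicative order of 2 mod q
        while r != 1:
            r = r * 2 % q
            k += 1
        return pre + k

    rng = random.Random(2014)
    pairs = [(rng.randint(100, 999), rng.randint(100, 999)) for _ in range(10000)]
    avg_rat = sum(p.bit_length() + q.bit_length() for p, q in pairs) / len(pairs)
    avg_rep = sum(repeating_bits(p, q) for p, q in pairs) / len(pairs)
    print(avg_rat, avg_rep)   # expect the rational form to be far shorter on average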

In practice, one may argue that large prime denominators don't occur too often; well, for such applications, fine then.

If n were large and fixed, then I think statistically it would be best to use rational expressions rather than repeating decimals.

Yeah, that's similar to what I was trying to state.

But rational expressions are not unique, and that may cause some problems with representation... unless the CPU implemented some algorithms for reduction. 1/9 = 10/90, but 10/90 takes way more bits to represent than 1/9, even though they represent the same repeating decimal and the repeating decimal's bits are fixed.

Given a rational, bringing it to lowest terms is as easy (yet costly) as calculating the GCD.
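In Python terms, just to illustrate the reduction step:

    from math import gcd
    from fractions import Fraction

    p, q = 10, 90
    g = gcd(p, q)
    print(p // g, q // g)     # 1 9: 10/90 brought to lowest terms
    print(Fraction(10, 90))   # Fraction(1, 9): the library normalizes on construction

Fraction pays that GCD cost on every construction, which is the software version of the reduction such hardware would have to do.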

In any case, if the FPU had a history of calculations, it could potentially store the calculations in an expression tree and attempt to reduce them, e.g., p/q*(q/p) = 1. A history could also be useful for constants, in that multiplying several constants together could produce a new constant which would not introduce any new accumulation errors.

As far as I know, most compilers do that for constants known at compile time (constant folding). As for the run-time usage of constants, note that the pool of constants will still have to be stored somewhere, and addressed somehow, and this may cancel out the performance gains.
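Regarding the p/q*(q/p) = 1 example: exact rational arithmetic already gives such cancellations in software (an analogue only, not the hypothetical FPU feature, of course):

    from fractions import Fraction

    x = Fraction(355, 113)
    print(x * (1 / x) == 1)     # True: p/q * (q/p) cancels exactly
    print(0.1 + 0.2 == 0.3)     # False: fixed-width binary floats round
    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True: no error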

Ivan Kazmenko.
