I have to confess that I tend to work with
and think in terms of fixed point arithmetic.
If the source of error in floating point is
primarily the loss of guard bits (binary places)
when an intermediate number has an integer part
that is, say, 50 bits long, then I can well understand why the
analysis of errors differs fundamentally in the two cases.
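For concreteness, the guard-bit loss can be seen directly in IEEE double precision (a minimal Python sketch, independent of any particular transform code):

```python
import math

# Minimal sketch of guard-bit loss in IEEE double precision: the
# 53-bit significand means that once the integer part occupies ~51
# bits, only two binary places of fraction survive.
big = float(2 ** 50)
print(math.ulp(big))    # spacing of adjacent doubles at this magnitude

x = big + 0.3           # 0.3 must round to a multiple of that spacing
print(x - big)          # -> 0.25, the nearest representable offset
```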

Brian Beesley wrote:

> > It is the signed actual roundoff errors which I maintain are
> > normally distributed with mean zero.
> 
> 1. The signed actual roundoff errors are drawn from a discrete not a 
> continuous distribution.

With a fixed 16 (say) binary places of fractional part and a standard deviation
of, say, 2^-5, the distinction between a discrete (binomial) and a continuous
(normal) distribution is pedantic, to say the least.
Random walk arguments (as mentioned before) lead to a binomial distribution.
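A quick simulation makes the point (a hedged sketch: the parameters below, 2^-17 half-ulp steps and 2^14 of them, are purely illustrative and give a smaller standard deviation than the 2^-5 mentioned above; they are chosen so the walk runs quickly, not to model any real run):

```python
import math
import random

random.seed(1)

# Random-walk model of accumulated roundoff: each of n roundings
# contributes +/- step with equal probability, so the total is a
# (shifted, scaled) Binomial(n, 1/2).
n = 1 << 14            # number of roundings per accumulated sum
step = 2.0 ** -17      # half an ulp at 16 fractional bits
trials = 20_000

sums = []
for _ in range(trials):
    heads = bin(random.getrandbits(n)).count("1")  # Binomial(n, 1/2)
    sums.append((2 * heads - n) * step)            # signed walk endpoint

mean = sum(sums) / trials
sd = math.sqrt(sum((s - mean) ** 2 for s in sums) / trials)
theory = step * math.sqrt(n)                       # 2^-17 * 2^7 = 2^-10

print(f"empirical mean = {mean:+.2e}")
print(f"empirical sd   = {sd:.3e}  (binomial theory: {theory:.3e})")

# A normal with the same sd would put ~95.4% of the mass within 2 sd.
within2 = sum(abs(s) <= 2 * theory for s in sums) / trials
print(f"within 2 sd    = {within2:.3f}")
```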
> 
> 2. The Central Limit Theorem does not apply because the individual outliers 
> are exactly that, not some combination of independent data points drawn from 
> similar distributions.
> 
> 3. The difference between possible values varies in a non-uniform way - think 
> of this as graininess increasing as the effective number of guard bits drops.
> 
> 4. Occasionally the number of guard bits may drop so far that the graininess 
> of the round-off error "sample" drops to 0.125, 0.25, possibly 0.5 or even 
> worse 1.0 if we're really unlucky with the contributory values in "phase 
> space" and have been too optimistic with our choice of run length.
> 
> 5. Because of the graininess effect, the rarity of these events doesn't seem 
> to be amenable to arguments based on standard deviations from an assumed 
> underlying normal distribution. At the very best we have to deal with a curve 
> which periodically halves its height but doubles its length e.g. what appears 
> to be a safe 12 s.d. from the mean may in fact only be a distinctly dodgy 6 
> s.d. or a completely unsafe 3 s.d. away depending on how many steps in the 
> exponent have been crossed. (6 s.d. is dodgy because we are dealing with a 
> sample size of the order of p/20 elements in each iteration, i.e. p^2/20 in 
> total - with p ~ 30,000,000 we have to get all ~4.5*10^14 roundings correct.)

It sounds as if an analysis should determine the frequency with which the
number of guard bits drops to dangerous levels.
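Taking the quoted sample-size figures at face value (though p^2/20 with p ~ 30,000,000 actually works out to ~4.5*10^13 rather than 10^14; the conclusion is the same either way), a crude normal-tail estimate of the expected number of dangerous roundings would be:

```python
import math

# Expected count of roundings falling beyond k standard deviations,
# under the (optimistic) assumption that every rounding error is an
# independent sample from one fixed normal distribution.
p = 30_000_000
N = p * p // 20                      # total roundings over the whole test

for k in (3, 6, 12):
    tail = math.erfc(k / math.sqrt(2))   # two-sided P(|X| > k*sd)
    print(f"{k:2d} sd: expected exceedances ~ {N * tail:.1e}")
```

On this model 12 s.d. is comfortably safe while 6 s.d. is not, which is consistent with the quoted worry that crossing exponent steps can silently halve the effective margin.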

I must concede that this is not a problem in fixed point (IFF you have
enough bits to contain the integer part at all times!).
Do you agree that my conjecture about the normal distribution of errors
makes sense in fixed point?

David



_______________________________________________
Prime mailing list
[email protected]
http://hogranch.com/mailman/listinfo/prime
