>
> uint u1 = *[32 random bits]*;
> uint u2 = *[32 random bits]*;
> uint a = u1 >> 5, b = u2 >> 6;
> return (a * 67108864.0 + b) * (1.0 / 9007199254740992.0);
>
> Can anyone explain this magic? Is this correct?
>

This bit trick is no longer important to decode. I was trying to figure out
how to convert a UInt64 of random bits into a normalised double in the
range [0,1) in such a way that all 2^53 possible floating-point values could
be produced. Some websites suggested simply doing this:

ulong u = *[64 random bits]*;
double d = (u >> 11) * 1.11022302462515654e-16;   // * 2^-53
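
In full, the conversion looks roughly like this in C# (a minimal sketch; the
type and method names are mine, and I'm assuming RandomNumberGenerator from
System.Security.Cryptography on a recent .NET as a stand-in source of bits):

using System;
using System.Security.Cryptography;

static class UnitDoubles
{
    // Keep the top 53 of the 64 random bits and scale by 2^-53, the spacing
    // of doubles just below 1.0, giving a value in [0, 1).
    public static double ToUnitDouble(ulong bits)
    {
        return (bits >> 11) * (1.0 / 9007199254740992.0);   // 9007199254740992 = 2^53
    }

    // Stand-in source of 64 random bits (any decent generator would do).
    public static ulong NextBits()
    {
        var buffer = new byte[8];
        RandomNumberGenerator.Fill(buffer);
        return BitConverter.ToUInt64(buffer, 0);
    }
}

// Usage: double d = UnitDoubles.ToUnitDouble(UnitDoubles.NextBits());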

I was suspicious that subtle rounding problems might produce a defective
output range, but a quick experiment in LINQPad that fed limit values into
the calculation showed that the complete set of significand bit patterns is
generated at the limits. This is good evidence that the above
shift-and-multiply converts 53 of the 64 random bits into the complete range
of 2^53 possible doubles in [0,1).
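
The limit check amounts to something like this (a sketch in the spirit of the
LINQPad experiment, showing just the two 64-bit extremes rather than
everything I actually fed in):

using System;

class LimitCheck
{
    const double TwoPowMinus53 = 1.0 / 9007199254740992.0;   // 2^-53

    static void Main()
    {
        ulong lo = 0UL;                // all zero bits
        ulong hi = ulong.MaxValue;     // all one bits

        double dLo = (lo >> 11) * TwoPowMinus53;   // exactly 0.0
        double dHi = (hi >> 11) * TwoPowMinus53;   // (2^53 - 1) * 2^-53 = 1 - 2^-53

        Console.WriteLine(dLo);            // 0
        Console.WriteLine(dHi < 1.0);      // True: the top value stays below 1.0
        // Bit pattern of the top value: 3FEFFFFFFFFFFFFF, i.e. the largest
        // double below 1.0, so the significand comes out as all ones.
        Console.WriteLine(BitConverter.DoubleToInt64Bits(dHi).ToString("X"));
    }
}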

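For completeness, my reading of the quoted two-uint version (just decoding
the constants, so treat it as a sketch rather than a definitive answer):
67108864 is 2^26 and 9007199254740992 is 2^53, so it packs the top 27 bits
of u1 and the top 26 bits of u2 into one 53-bit integer and scales by 2^-53,
which is the same construction as above built from two 32-bit draws.

// Same idea as the quoted snippet, with the constants spelled out.
static double FromTwoUInt32(uint u1, uint u2)
{
    uint a = u1 >> 5;    // top 27 bits of u1
    uint b = u2 >> 6;    // top 26 bits of u2
    // a * 2^26 + b is a 53-bit integer in [0, 2^53); dividing by 2^53 maps it to [0, 1).
    return (a * 67108864.0 + b) * (1.0 / 9007199254740992.0);
}
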
*Greg K*
