Dennis Nichols <[EMAIL PROTECTED]> writes:

> At 4/13/01 03:42 AM, Peter Holm wrote:
> >mysql> select floor(23.49999999999999 + 0.5);   => 23
> >mysql> select floor(23.499999999999999 + 0.5);  => 24
> >Why are there different results?
> 
> Apparently the closest (most accurate) expression of the 
> first constant (23.49999999999999) as a floating point 
> number using binary arithmetic is less than 23.5 while 
> the closest expression of the second constant 
> (23.499999999999999) is 23.5 or greater. 

This has to do with IEEE floating-point format. Both of these are held
internally as "double"s (just as C programs usually do). An IEEE double is 64
bits long, with a 52-bit mantissa (the fraction bits; an implicit leading 1
brings it to 53 bits of precision). 2^(-52) is about 2.2*10^-16, which works
out to roughly 15-16 decimal digits of precision, with the 16th digit being
approximate.
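
If you want to check the format's limits on your own box, here's a tiny
standalone C snippet (nothing to do with MySQL's source, just <float.h>
and printf):

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* 52 stored fraction bits + the implicit leading 1 = 53 bits */
        printf("mantissa bits:  %d\n", DBL_MANT_DIG);  /* 53 on IEEE systems */
        printf("decimal digits: %d\n", DBL_DIG);       /* 15 */
        printf("epsilon:        %g\n", DBL_EPSILON);   /* ~2.22e-16 */
        return 0;
    }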

Now, if you look at 23.49999999999999 (the first number), it has 16
significant digits (don't forget the part before the decimal point - it's
*significant* digits we're talking about here, since the format normalizes
numbers so that the leading binary digit is always 1). That's right at the
limit of what a double can hold, and the closest representable double happens
to fall just below 23.5. The second number has 17 digits, more than a double
can distinguish, so the string-to-number converter picks the closest
representable double - which in this case means rounding it up to exactly
23.5.
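
You can watch the rounding happen with a few lines of plain C (just a sketch
using the compiler's own decimal-to-double conversion, not MySQL's code path):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double a = 23.49999999999999;   /* 16 significant digits */
        double b = 23.499999999999999;  /* 17 significant digits */

        /* On an IEEE-754 system, a comes out a hair under 23.5,
           while b converts to exactly 23.5. */
        printf("a = %.17g  floor(a + 0.5) = %g\n", a, floor(a + 0.5));  /* 23 */
        printf("b = %.17g  floor(b + 0.5) = %g\n", b, floor(b + 0.5));  /* 24 */
        return 0;
    }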

(OK, you *real* nitpickers, back off :-). I'm just doing approximate math
here, dredging up ancient memories of life in compiler- and CPU-land...)
--
Shankar Unni.
[EMAIL PROTECTED]
