On Thu, 19 Jan 2006 12:16:22 +0100, Gerhard Häring <[EMAIL PROTECTED]> wrote:
[...]
>
>floating points are always imprecise, so you wouldn't want them as an 
Please, floating point is not "always imprecise." A double has 64 bits,
and most bit patterns represent exact rational values. Other than
infinities and NaNs, you can't pick a bit pattern that doesn't have
a precise, exact rational value. BTW, you'd need a 64-bit CPU to get
range(-2**53, 2**53+1), but with 53 bits of available precision a float
(IEEE 754 double) can represent each integer in that range exactly (and
of course similar sets counting by 2 or 4 etc.)
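That boundary is easy to check interactively (a quick sketch in modern
Python; 2**53 is the usual exactness limit for a double):

```python
# Every integer with magnitude <= 2**53 is exactly representable
# as an IEEE 754 double; 2**53 + 1 is the first integer that isn't.
exact_limit = 2**53

assert float(exact_limit) == exact_limit            # exact
assert float(exact_limit - 1) == exact_limit - 1    # exact
assert float(exact_limit + 1) == float(exact_limit) # rounds back down: not exact
```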

You can't represent all arbitrarily chosen reals exactly as floats, that's true,
but that's not the same as saying that "floating points are always imprecise."
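Both halves of that point can be seen with the fractions module (available
in modern Pythons), which recovers the exact rational value a float
actually stores:

```python
from fractions import Fraction

# Fraction(x) for a float x gives the exact rational value stored in x.
assert Fraction(0.5) == Fraction(1, 2)    # 0.5 is represented exactly
assert Fraction(0.1) != Fraction(1, 10)   # 0.1 is stored as a nearby rational
# Fraction(0.1) is still a precise, exact rational -- just not 1/10.
```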

As a practical matter it is hard to track when floating point calculations
lose exactness (though UIAM there are IEEE 754 hardware features, such as the
"inexact" status flag, that can support that), so it is just easier to declare
all floating point values to be tainted with inexactness from the start, even
though it isn't so.
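Pure Python doesn't expose the hardware inexact flag, but one way to check
after the fact whether a particular operation was exact is to redo it in
exact rational arithmetic and compare (add_was_exact is a hypothetical
helper for illustration):

```python
from fractions import Fraction

def add_was_exact(a, b):
    """Return True if the float addition a + b rounded to an exact result."""
    return Fraction(a + b) == Fraction(a) + Fraction(b)

assert add_was_exact(1.5, 0.25)          # 1.75 is exactly representable
assert not add_was_exact(2.0**53, 1.0)   # result rounds: exactness was lost
```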

1.0 is precisely represented as a float. So is 1.5, and so are more other
values than you can count with an ordinary int ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list