On Sat, 21 Jan 2006 14:28:20 +1100, Steven D'Aprano <[EMAIL PROTECTED]> wrote:

>On Fri, 20 Jan 2006 04:25:01 +0000, Bengt Richter wrote:
>
>> On Thu, 19 Jan 2006 12:16:22 +0100, Gerhard Häring
>> <[EMAIL PROTECTED]> wrote:
>> [...]
>>>
>>>floating points are always imprecise, so you wouldn't want them as an 
>> Please, floating point is not "always imprecise." In a double there are
>> 64 bits, and most patterns represent exact rational values. Other than
>> infinities and NaNs, you can't pick a bit pattern that doesn't have
>> a precise, exact rational value. 
>
>Of course every float has a precise rational value.
>0.1000000000000000000001 has a precise rational value:
>
>1000000000000000000001/10000000000000000000000
>
Good, I'm glad that part is clear ;-)
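
(For the record, here's a quick sketch of how you can inspect that exact
rational value from Python itself -- this assumes a Python recent enough
to have float.as_integer_ratio() and the fractions module:

    # Show the exact rational value encoded by the float nearest 0.1.
    from fractions import Fraction

    num, den = (0.1).as_integer_ratio()
    print(num, den)        # 3602879701896397 36028797018963968 (== 2**55)
    print(Fraction(0.1))   # the same exact rational, as a Fraction
    print(Fraction(0.1) == Fraction(1, 10))   # False: nearby, not equal

The float behind the literal 0.1 has a perfectly definite rational value;
it just isn't 1/10.)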

>But that's hardly what people are referring to. The question isn't whether
"people"?
>every float is an (ugly) rational, but whether every (tidy) rational is a
>float. And that is *not* the case, simple rationals like 1/10 cannot be
>written precisely as floats no matter how many bits you use. 
See the next statement below. What did you think I meant?
>
>> You can't represent all arbitrarily chosen reals exactly as floats,
>> that's true, but that's not the same as saying that "floating points
>> are always imprecise."
>
>"Always" is too strong, since (for example) 1/2 can be represented
>precisely as a float. But in general, for any "random" rational value N/M,
>the odds are that it cannot be represented precisely as a float. And
>that's what people mean when they say floats are imprecise.
That's what *you* mean, I take it ;-) I suspect what most people mean is
that they don't really understand how floating point works in detail, and
they'd rather not think about it if they can substitute a simple
generalization that mostly keeps them out of trouble ;-)
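
(To put that generalization on firmer ground, here's a little sketch --
modern Python with the fractions module assumed -- showing that a reduced
N/M survives the round trip through a float only when M is a power of two
that fits in double precision:

    # Which rationals N/M are exactly representable as floats?
    from fractions import Fraction

    for n, m in [(1, 2), (3, 8), (7, 64), (1, 10), (1, 3)]:
        exact = Fraction(n, m) == Fraction(n / m)  # vs. the float's exact value
        print('%d/%d exact as a float: %s' % (n, m, exact))

1/2, 3/8 and 7/64 come out exact -- power-of-two denominators -- while
1/10 and 1/3 do not.)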

Besides, "cannot be represented precisely" is a little more subtle than numbers 
of bits.
E.g., one could ask, how does the internal floating point bit pattern for 
0.10000000000000001
(which incidentally is not the actual exact decimal value of the IEEE 754 bit 
pattern --
0.1000000000000000055511151231257827021181583404541015625 is the exact value)
*not* "represent" 0.1 precisely? E.g., if all you are interested in is one 
decimal fractional
digit, any float whose exact rational value is f where .05 <= f < 0.15 could be 
viewed as one
in a (quite large) set of peculiar error-correcting codes that all map to the 
exact value you
want to represent. This is a matter of what you mean by "represent" vs what is 
represented.
Float representations are just codes made of bits. If what you want is for 
'%5.2f'%f to
produce two reliably exact decimal fractional digits, you have a lot of choices 
for f. Chances
are f = 0.1 won't make for a surprise, which in some sense means that the float 
bits behind float('.1')
"represented" .1 exactly, even though they did so by way of an unambiguously 
associated nearby
but different mathematically exact value.
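
A quick illustration of that "error-correcting code" view (plain Python,
nothing exotic assumed):

    # Every float in a wide band around 0.1 prints as the same two
    # decimal digits, so for this purpose all of them "represent" 0.1.
    for f in [0.1, 0.10000000000000001, 0.1049,
              0.1000000000000000055511151231257827021181583404541015625]:
        print('%5.2f' % f)   # all four print " 0.10"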

BTW, equally important, IMO, to the precision of individual numbers is
what happens to the precision of the results of operations on inexactly
represented values. How do errors accumulate, and eventually cause
purportedly precise results to differ from mathematically exact results
by more than the advertised precision would seem to allow? This kind of
question leads to laws about when and how to round, and to definitions of
legal usage for conversion factors, e.g. for converting from one currency
to another, where the inverse conversion factor is not a mathematical
inverse.
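
A small sketch of both effects, using the stdlib decimal module (the
conversion rate here is made up, purely for illustration):

    from decimal import Decimal, ROUND_HALF_EVEN

    # Ten tiny representation errors accumulate into a visible one:
    total = sum([0.1] * 10)
    print(total == 1.0)    # False
    print(repr(total))     # 0.9999999999999999

    # Currency-style arithmetic: round by an explicitly chosen legal rule.
    rate = Decimal('0.9144')              # hypothetical conversion factor
    amount = (Decimal('19.99') * rate).quantize(Decimal('0.01'),
                                                ROUND_HALF_EVEN)
    print(amount)          # 18.28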

Practicality is beating up on purity all over the place ;-)

Regards,
Bengt Richter
