Richard D. Moores wrote:
> On Tue, Dec 15, 2009 at 23:30, Hugo Arts <[email protected]> wrote:
>> On Wed, Dec 16, 2009 at 5:12 AM, Richard D. Moores <[email protected]> wrote:
>>> Before I can go below I need to know if you are saying that the
>>> relevant doc is wrong. I took the original name for my function almost
>>> directly from it. Near the bottom of
>>> <http://docs.python.org/3.1/tutorial/floatingpoint.html#representation-error>
>>> we find "meaning that the exact number stored in the computer is equal
>>> to the decimal value
>>> 0.1000000000000000055511151231257827021181583404541015625." And
>>> coincidence or no, that's precisely what float2Decimal() returns for
>>> 0.1.
>> The docs are right. The function will give you the exact value of any
>> floating point number stored in memory.
> OK!
>> I think what Dave is trying to
>> say is that if you want to store the exact value 0.1 on your computer,
>> this function won't help you do that.
> Yes, I knew that.
>> It also won't help you avoid any
>> kind of rounding error, which is worth mentioning. If you want 0.1
>> represented exactly you'll need to use the Decimal module all the way
>> and avoid floating point entirely.
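
To make that difference concrete (just the standard decimal module, no
tricks):

from decimal import Decimal

print(Decimal("0.1"))           # exactly one tenth, as a Decimal
print(Decimal.from_float(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
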
>> Of course, if all you want is to get a better understanding of what's
>> actually stored on your computer when you try to store 0.1 in a float,
>> this can be a useful tool.
> Yes, that's what I wanted the function for, and with a name that would
> be easy to remember.
>> I also recommend the article "What Every
>> Computer Scientist Should Know About Floating-Point Arithmetic." It's
>> a very detailed explanation, though somewhat technical:
>> http://docs.sun.com/source/806-3568/ncg_goldberg.html
> I'm sleepy now, but will dig into it tomorrow.
> Thanks for your help, Hugo.
> Dick
Hugo is right, and I was a little bit wrong: not about decimal versus
binary floating point, but about how 3.1's from_float() method works.
It's new in 3.1, and I hadn't played with it enough. It adjusts the
precision of the generated Decimal value so that it *can* represent the
binary value exactly, so the doc is correct that the representation is
exact.
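
You can watch that precision adjustment happen. The context precision
only applies to arithmetic, not to the conversion itself, so from_float
stays exact even with a deliberately tiny context. A quick check:

import decimal

decimal.getcontext().prec = 6        # deliberately tiny working precision
d = decimal.Decimal.from_float(0.1)
print(d)     # still 0.1000000000000000055511151231257827021181583404541015625
print(+d)    # 0.100000 -- unary plus rounds to the context precision
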
Note also that it takes 55 significant digits to do that, which is part
of what I was trying to say. I was remembering the 53 bits of mantissa
in binary fp, and the concept holds: it takes about as many decimal
digits to write the value exactly as there were mantissa bits in the
binary fp value.
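
An easy way to count the digits, if you don't trust your eyes:

import decimal

d = decimal.Decimal.from_float(0.1)
print(len(d.as_tuple().digits))   # 55 significant digits for the 53-bit mantissa
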
If you really want to see what binary fp does, you might need to resort
to hex. Two methods of float are relevant, hex() and fromhex(). Check
out the following function I just wrote.
import decimal

float2decimal = decimal.Decimal.from_float

a = 0.1
print("internal repr of 0.1 is", a.hex())   # 0x1.999999999999ap-4

def test(stringval):
    floatval = float.fromhex(stringval)    # parse the hex form back into a float
    decimalval = float2decimal(floatval)   # exact decimal value of that float
    print(stringval, "--", floatval, "--", decimalval)

test(" 0x1.9999999999999p-4")
test(" 0x1.999999999999ap-4")   # This is 0.1, as best as float sees it
test(" 0x1.999999999999bp-4")
The output is (re-wordwrapped for email):
internal repr of 0.1 is 0x1.999999999999ap-4
0x1.9999999999999p-4 -- 0.1
-- 0.09999999999999999167332731531132594682276248931884765625
0x1.999999999999ap-4 -- 0.1
-- 0.1000000000000000055511151231257827021181583404541015625
0x1.999999999999bp-4 -- 0.1
-- 0.10000000000000001942890293094023945741355419158935546875
Notice that these are the closest values to 0.1 that can be represented
in a float: the one just below, and the first two above. You can't get
any values between them. When you print any of them, it shows a value
of 0.1, presumably due to rounding during conversion to a decimal
string. The algorithm used in that conversion has changed many times
over the evolution of CPython.
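
For the middle one, at least, you can verify that the short printed
string round-trips to the identical float:

x = float.fromhex("0x1.999999999999ap-4")
print(repr(x))               # 0.1
print(float(repr(x)) == x)   # True -- the string converts back to the same float
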
And in case it's not obvious, the stringval items are 14 hex digits (a
leading "1." plus 13 more, spelling out the 53-bit mantissa: one
implicit bit plus 52 stored fraction bits), then the letter "p" and a
signed binary exponent. Together with the sign bit and an 11-bit
exponent field, that is what gets encoded into the 64 bits of a float.
You can see such a value for any float you like by calling the method
hex() on the float object.
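
And if you want to see exactly how those pieces land in the 64 bits,
the struct module will pull the fields apart (sign bit, 11-bit biased
exponent, 52-bit stored fraction; the mantissa's leading 1 bit is
implicit):

import struct

bits = struct.unpack("<Q", struct.pack("<d", 0.1))[0]   # raw 64-bit pattern
sign = bits >> 63
exponent = ((bits >> 52) & 0x7FF) - 1023                # remove the bias
fraction = bits & ((1 << 52) - 1)
print(sign, exponent, hex(fraction))                    # 0 -4 0x999999999999a
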
Hugo: thanks for the article reference. I was pleased to see it was
inspired by a lecture by Professor Kahan. I haven't seen him in a couple
of decades. He was the brains behind much of Intel's 8087
implementation, which was more-or-less used as a reference
implementation by the IEEE standard. And of course he was active on the
standard. I do recall there were a few liberties that the 8087 took
that he wanted to be illegal in the standard, but generally it was
assumed that the standard needed a real silicon implementation in order
to succeed. I wish I had a tenth of his smarts, with respect to
floating point.
DaveA
_______________________________________________
Tutor maillist - [email protected]
To unsubscribe or change subscription options:
http://mail.python.org/mailman/listinfo/tutor