Re: Inverse confusion about floating point precision
On 9 May 2005 11:06:22 -0700, "Dan Bishop" <[EMAIL PROTECTED]> wrote:

>Skip Montanaro wrote:
>> I understand why the repr() of float("95.895") is "95.894999999999996".
>> What I don't understand is why if I multiply the best approximation to
>> 95.895 that the machine has by 10000 I magically seem to get the lost
>> precision back.  To wit:
>>
>> % python
>> Python 2.3.4 (#12, Jul 2 2004, 09:48:10)
>> [GCC 3.3.2] on sunos5
>> Type "help", "copyright", "credits" or "license" for more information.
>> >>> 95.895
>> 95.894999999999996
>> >>> 95.895 * 10000
>> 958950.0
>>
>> Why isn't the last result "958949.99999999996"?  IOW, how'd I get back the
>> lost bits?
>
>You were just lucky.
>
>The floating-point representation of 95.895 is exactly 6748010722917089
>* 2**-46.
>
>Multiplying by 10000 gives you 67480107229170890000 * 2**-46.  But
>floats can have only 53 significant bits, so this gets normalized to
>8237317776998399.658203125 * 2**-33 and rounded to 8237317776998400 *
>2**-33, which happens to be exactly equal to 958950.
>
>For analogy, consider a decimal calculator with only 3 significant
>digits.  On this calculator, 1/7=0.143, an error of 1/7000.
>Multiplying 0.143 by 7 gives 1.001, which is rounded to 1.00, and so
>you get an exact answer for 1/7*7 despite roundoff error in the
>intermediate step.

In bits, the above appears as

 >>> prb(95.895)
 '1011111.1110010100011110101110000101000111101011100001'
 >>> len(prb(95.895).split('.')[1])
 46
 >>> prb(95.895*2**46)
 '10111111110010100011110101110000101000111101011100001'
 >>> int(prb(95.895*2**46),2)
 6748010722917089L
 >>> int(prb(95.895*2**46),2)*10000
 67480107229170890000L
 >>> prb(int(prb(95.895*2**46),2)*10000)
 '111010100001111001011111111111111111111111111111111111010100010000'
 >>> prb(int(prb(95.895*2**46),2)*10000)[:53]
 '11101010000111100101111111111111111111111111111111111'
 >>> int(prb(int(prb(95.895*2**46),2)*10000),2)/2.**46
 958950.0
 >>> prb(int(prb(int(prb(95.895*2**46),2)*10000),2)/2.**46)
 '11101010000111100110'

Regards,
Bengt Richter
--
http://mail.python.org/mailman/listinfo/python-list
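[Bengt's `prb` helper isn't defined anywhere in the thread. A minimal sketch of an equivalent — the name and behavior are inferred from the session above, not Bengt's actual code, and it ignores negative numbers — might look like this in modern Python (no `L` suffix on ints):

```python
def prb(x):
    """Return the binary expansion of x as a string of bits.

    Ints come back without a binary point; floats get exactly as many
    fractional bits as the stored value needs (every finite float is a
    dyadic rational, so the loop below terminates).
    """
    if isinstance(x, float):
        # Doubling a float only bumps its exponent -- no rounding --
        # so repeat until the value is an integer, then place the point.
        frac_bits = 0
        while x != int(x):
            x *= 2
            frac_bits += 1
        bits = format(int(x), 'b')
        if frac_bits:
            bits = bits.rjust(frac_bits + 1, '0')
            return bits[:-frac_bits] + '.' + bits[-frac_bits:]
        return bits
    return format(x, 'b')
```

The doubling trick is what makes the fractional part exact: scaling by a power of two never discards bits, so the string shows the machine's stored value, not a decimal approximation of it.]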
Re: Inverse confusion about floating point precision
[Dan]
>Dan> The floating-point representation of 95.895 is exactly
>Dan> 6748010722917089 * 2**-46.

[Skip Montanaro]
> I seem to recall seeing some way to extract/calculate fp representation from
> Python but can't find it now.  I didn't see anything obvious in the
> distribution.

For Dan's example,

>>> import math
>>> math.frexp(95.895)
(0.74917968749999997, 7)
>>> int(math.ldexp(_[0], 53))
6748010722917089L
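[The frexp/ldexp round-trip above can be checked mechanically. A self-contained version, written for modern Python (so no trailing `L` on the integer):

```python
import math

x = 95.895
mantissa, exponent = math.frexp(x)           # x == mantissa * 2**exponent, 0.5 <= mantissa < 1
significand = int(math.ldexp(mantissa, 53))  # scale the mantissa up to a 53-bit integer
print(significand, exponent - 53)            # 6748010722917089 -46

# Reconstructing the float from the integer significand is exact:
# both factors are exactly representable and the product fits in 53 bits.
assert significand * 2.0 ** (exponent - 53) == x
```

This is the generic recipe for any finite double: `frexp` splits off the power of two, and scaling the mantissa by 2**53 exposes the integer significand.]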
Re: Inverse confusion about floating point precision
>> Why isn't the last result "958949.99999999996"?  IOW, how'd I get
>> back the lost bits?

Dan> You were just lucky.

Thanks for the response (and to Tim as well).

Dan> The floating-point representation of 95.895 is exactly
Dan> 6748010722917089 * 2**-46.

I seem to recall seeing some way to extract/calculate fp representation from
Python but can't find it now.  I didn't see anything obvious in the
distribution.

Thx,

Skip
Re: Inverse confusion about floating point precision
Skip Montanaro wrote:
> I understand why the repr() of float("95.895") is "95.894999999999996".
> What I don't understand is why if I multiply the best approximation to
> 95.895 that the machine has by 10000 I magically seem to get the lost
> precision back.  To wit:
>
> % python
> Python 2.3.4 (#12, Jul 2 2004, 09:48:10)
> [GCC 3.3.2] on sunos5
> Type "help", "copyright", "credits" or "license" for more information.
> >>> 95.895
> 95.894999999999996
> >>> 95.895 * 10000
> 958950.0
>
> Why isn't the last result "958949.99999999996"?  IOW, how'd I get back the
> lost bits?

You were just lucky.

The floating-point representation of 95.895 is exactly 6748010722917089
* 2**-46.

Multiplying by 10000 gives you 67480107229170890000 * 2**-46.  But
floats can have only 53 significant bits, so this gets normalized to
8237317776998399.658203125 * 2**-33 and rounded to 8237317776998400 *
2**-33, which happens to be exactly equal to 958950.

For analogy, consider a decimal calculator with only 3 significant
digits.  On this calculator, 1/7=0.143, an error of 1/7000.
Multiplying 0.143 by 7 gives 1.001, which is rounded to 1.00, and so
you get an exact answer for 1/7*7 despite roundoff error in the
intermediate step.
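[Dan's 3-significant-digit calculator can be simulated with Python's `decimal` module (a later stdlib addition; this is just a sketch of the analogy, not part of the original exchange):

```python
from decimal import Decimal, getcontext

getcontext().prec = 3              # a calculator with 3 significant digits

seventh = Decimal(1) / Decimal(7)  # rounds to 0.143, an error of 1/7000
print(seventh)                     # 0.143
print(seventh * 7)                 # 1.001 rounds back to 1.00 -- an "exact" answer
```

The second rounding cancels the first by luck, exactly as in the 95.895 * 10000 case.]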
Re: Inverse confusion about floating point precision
[Skip Montanaro]
> I understand why the repr() of float("95.895") is "95.894999999999996".
> What I don't understand is why if I multiply the best approximation to
> 95.895 that the machine has by 10000 I magically seem to get the lost
> precision back.  To wit:
>
> % python
> Python 2.3.4 (#12, Jul 2 2004, 09:48:10)
> [GCC 3.3.2] on sunos5
> Type "help", "copyright", "credits" or "license" for more information.
> >>> 95.895
> 95.894999999999996
> >>> 95.895 * 10000
> 958950.0
>
> Why isn't the last result "958949.99999999996"?

Because it's *still* not decimal arithmetic.  You have 53 significant
bits in the approximation to 95.895, and "958949.99999999996" is itself
a decimal approximation to the exact binary value stored (read the
Tutorial appendix on fp issues for more on that).

There are 14 significant bits in 10000.  The product thus has 53+14 =
67, or 53+14-1 = 66, significant bits, and has to be rounded to fit
back into 53 significant bits.  None of that happens in base 10.

> IOW, how'd I get back the lost bits?

It happened to round up.  Here's a simpler example, where it happens to
round down instead:

>>> .1
0.10000000000000001
>>> .1 * 10
1.0
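[Tim's "it happened to round up" can be verified with exact rational arithmetic (using the `fractions` module from later Pythons; a sketch, not something 2.3 had):

```python
from fractions import Fraction

stored = Fraction(95.895)                      # the exact binary value the machine holds
assert stored == Fraction(6748010722917089, 2**46)

exact_product = stored * 10000                 # computed exactly, with no rounding
assert exact_product < 958950                  # the true product falls just short...
assert 95.895 * 10000 == 958950.0              # ...but the float product rounds up to it
```

`Fraction(float)` converts without loss, so the comparisons above are exact rather than subject to another layer of floating-point rounding.]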
Inverse confusion about floating point precision
I understand why the repr() of float("95.895") is "95.894999999999996".
What I don't understand is why if I multiply the best approximation to
95.895 that the machine has by 10000 I magically seem to get the lost
precision back.  To wit:

% python
Python 2.3.4 (#12, Jul 2 2004, 09:48:10)
[GCC 3.3.2] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
>>> 95.895
95.894999999999996
>>> 95.895 * 10000
958950.0

Why isn't the last result "958949.99999999996"?  IOW, how'd I get back
the lost bits?

Thx,

Skip