16 decimal digits / log10(2) = 53.1508495182 bits.  Obviously, fractional 
bits don't exist, so 53 bits.  If you noted that the first non-zero digit 
is 4 and that the first digit after the 15 zeroes is 2, you saw a bit more 
information than 16 digits' worth: an extra bit, so to speak.  Where did 
the extra bit come from?  It came from the IEEE format's assumption that 
the top bit of the mantissa of a normalized floating point value must be 
1.  Since we know what it must be, there is no reason to spend an actual 
stored bit on it: a double stores only 52 fraction bits, and the assumed 
top bit is what brings the precision up to the full 53.
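
If you want to check the arithmetic yourself, here is a rough sketch using 
only the standard library.  It assumes your Python floats are IEEE 754 
doubles (true on essentially every current platform); the variable names 
are just for illustration.

import math
import struct
import sys
from decimal import Decimal

# 16 decimal digits expressed in bits: 16 / log10(2) ~= 53.15
print(16 / math.log10(2))        # 53.15084951819779...

# Python reports 53 bits of significand precision for its floats,
# which counts the implied top bit; only 52 bits are actually stored.
print(sys.float_info.mant_dig)   # 53

# Raw bit layout of 2/5: 1 sign bit, 11 exponent bits, 52 stored
# fraction bits (the leading 1 of the mantissa appears nowhere here).
bits = struct.unpack(">Q", struct.pack(">d", 2 / 5))[0]
print(f"{bits:064b}")

# The exact decimal value of the stored double, which is where the
# ...0222044604925... digits in the printout come from.
print(Decimal(2 / 5))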

Isn't floating point fun?

--- Joseph S.

-----Original Message-----
From: Pieter van Oostrum <piete...@vanoostrum.org> 
Sent: Sunday, April 19, 2020 7:49 AM
To: python-list@python.org
Subject: Re: Floating point problem

"R.Wieser" <address@not.available> writes:

> Souvik,
>
>> I have one question here. On using print(f"{c:.32f}") where c= 2/5 
>> instead of getting 32 zeroes I got some random numbers. The exact 
>> thing is 0.40000000000000002220446049250313 Why do I get this and not 
>> 32 zeroes?
>
> Simple answer ?   The conversion routine runs out of things to say.
>
> A bit more elaborate answer ? You should not even have gotten that many 
> zeroes after the 0.4.    The precision of a 32-bit float is about 7 digits. 
> That means that all you can depend on is the 0.4 followed by 6 more digits. 
> Anything further is, in effect, up for grabs.
>
Most Python implementations use 64-bit doubles (53 bits of precision). See 
https://docs.python.org/3.8/tutorial/floatingpoint.html
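
For comparison, a quick sketch of the difference (it round-trips the value 
through a C single-precision float via struct, so nothing beyond the 
standard library is needed; the names are just for illustration).  It shows 
the roughly 7 reliable digits of a 32-bit float next to the 53-bit double 
Python actually uses:

import struct

c = 2 / 5
# Pack as a 32-bit float and unpack again to see what single precision keeps.
as_single = struct.unpack(">f", struct.pack(">f", c))[0]
print(f"{as_single:.32f}")   # 0.40000000596046447753906250000000
print(f"{c:.32f}")           # 0.40000000000000002220446049250313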
--
Pieter van Oostrum
www: http://pieter.vanoostrum.org/
PGP key: [8DAE142BE17999C4]
