On Sun, Sep 12, 2021 at 1:07 AM Peter J. Holzer <hjp-pyt...@hjp.at> wrote:
> If you have any "decimals" (i.e decimal digits to the right of your
> decimal point) then the input values won't be exactly representable and
> the nearest representation will use all available bits, thus losing some
> precision with most additions.

That's an oversimplification, though - numbers like 12345.03125 can be
represented perfectly accurately, since the fractional part
(0.03125 = 2^-5) is a negative power of two.
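You can check this from Python itself: Fraction(float) gives the exact
mathematical value a float holds, so it shows whether a decimal literal
survived the conversion to binary intact. A quick sketch (the specific
literals are just illustrative):

```python
from fractions import Fraction

x = 12345.03125            # fractional part is 1/32, a power of two
print(Fraction(x))         # 395041/32 -- exactly 12345.03125

y = 12345.1                # 0.1 has no finite binary expansion
print(Fraction(y))         # a huge ratio over a power of two, not 123451/10
print(Fraction(y) == Fraction(123451, 10))  # False
```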

The perceived inaccuracy of floating point comes from the assumption
that a string of decimal digits is exact while the computer's
representation of it is not. If I put this in my code:

ONE_THIRD = 0.33333

then you know full well that it's not accurate, and that's nothing to
do with IEEE floating-point! The confusion comes from the fact that
one fifth (0.2) can be represented precisely in decimal, and not in
binary.
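The decimal module makes the base-dependence easy to see:
Decimal(float) shows the exact binary value a literal actually became,
while Decimal("...") works in base ten, where one fifth is exact but one
third still isn't. A small sketch:

```python
from decimal import Decimal

# The nearest 64-bit binary float to 0.2, written out exactly:
print(Decimal(0.2))
# 0.200000000000000011102230246251565404236316680908203125

# One fifth *is* exact in decimal arithmetic:
print(Decimal("0.2") * 5)       # 1.0

# But 0.33333 is an inexact stand-in for 1/3 in any base:
print(Decimal("0.33333") * 3)   # 0.99999
```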

Once you accept that "perfectly representable numbers" aren't
necessarily the ones you expect them to be, 64-bit floats become
adequate for a huge number of tasks. Even 32-bit floats are pretty
reliable for most tasks, although I suspect there's little reason
to use them now - I'd be curious to see whether there's any performance
benefit from restricting to the smaller format, given that most FPUs
probably have 80-bit or wider internal registers.
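For what it's worth, you can compare the two formats without leaving the
stdlib by round-tripping a Python float (a 64-bit double) through the
IEEE 754 single-precision layout with struct. A sketch (to_float32 is
just a helper name I made up):

```python
import struct

def to_float32(x):
    # Pack as a 32-bit IEEE 754 float, unpack back to a Python double;
    # any precision beyond the 24-bit significand is lost in the trip.
    return struct.unpack("f", struct.pack("f", x))[0]

print(to_float32(12345.03125))   # survives: 395041 * 2**-5 fits in 24 bits
print(to_float32(0.1))           # 0.10000000149011612 -- visibly rounded
```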

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list
