On 04/16/2015 01:03 AM, Jim Mooney wrote:
Why does Fraction interpret a number and a string so differently? They come
out the same, but it seems rather odd.

>>> from fractions import Fraction
>>> Fraction(1.64)
Fraction(7385903388887613, 4503599627370496)
>>> Fraction("1.64")
Fraction(41, 25)
>>> 41/25
1.64
>>> 7385903388887613 / 4503599627370496
1.64


When a number isn't an exact integer (and sometimes even when it is, if the integer is large enough), common computer number formats cannot store the number exactly. Naturally we know about irrational numbers, which cannot be stored exactly in any base. Pi, e, and the square root of two are three well-known examples.
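
A quick illustration at the interpreter (the exact digits shown assume the usual IEEE 754 doubles, which is what CPython's float uses on every common platform):

>>> import math
>>> math.sqrt(2) ** 2
2.0000000000000004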

But even rational numbers cannot be stored exactly unless their denominators happen to match the base you're using to store them. For example, 1/3 cannot be stored exactly in any common base. In decimal, it would be a repeating string of 3's, and wherever you stop writing threes, you've made an approximation:
    0.3333333333
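
You can see the same thing happen in binary from inside Python (again assuming IEEE 754 doubles; the numerator and denominator below are the exact bits of the nearest double to 1/3):

>>> from fractions import Fraction
>>> 1/3
0.3333333333333333
>>> Fraction(1/3)
Fraction(6004799503160661, 18014398509481984)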

Python defaults to using the float type, a binary floating-point representation that uses the special hardware available in most modern computers. And in fact, when you write a literal number in your source, it's converted to a float by the compiler, not stored as the digits you typed.

The number you specified in decimal, 1.64, can never be stored exactly in a finite number of binary bits, so the float you actually get is an approximation.
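
You can see the approximation by asking for more digits than repr normally shows (same IEEE 754 double assumption as above):

>>> format(1.64, '.20f')
'1.63999999999999990230'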

>>> from fractions import Fraction
>>> from decimal import Decimal

>>> y = 1.64

Conversion to float happens at compile time, so the value given to y is already approximate. The line above is roughly equivalent to the following:

>>> y = float("1.64")

>>> Fraction(y)
Fraction(7385903388887613, 4503599627370496)
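
Notice that the denominator is a power of two; that's the giveaway that the value came from a binary format:

>>> Fraction(y).denominator == 2**52
True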

If you instead convert the string form to Decimal, the number you entered is stored exactly.

>>> x = Decimal("1.64")

This value is stored exactly:

>>> x
Decimal('1.64')

>>> Fraction(x)
Fraction(41, 25)


Sometimes it's convenient to do the conversion in our head, as it were.
Since 1.64 is shorthand for 164/100, we can just pass those integers to Fraction, and get an exact answer again.

>>> Fraction(164, 100)
Fraction(41, 25)
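
That is exactly what the Fraction string constructor does for you: it parses the decimal text and builds the exact ratio, which is why the two spellings in your session behaved so differently. A quick check:

>>> Fraction("1.64") == Fraction(164, 100)
True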


Nothing about this says that Decimal is necessarily better than float. It appears better here because we enter values in decimal notation; to use float, those values have to be converted, and the conversion frequently loses a little precision. But Decimal is slower and takes more space, so most current languages use binary floating point instead.
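
Decimal has exactly the same kind of roundoff once the true value doesn't fit its base; it just happens in base ten, where it matches what you'd get by hand (the 28 digits below come from the decimal module's default context precision):

>>> Decimal(1) / Decimal(3)
Decimal('0.3333333333333333333333333333')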

I implemented the math on a machine 40 years ago where all user arithmetic was done in decimal floating point. I thought it was a good idea at the time because of a principle I called "least surprise": there were roundoff errors, but only in places where you'd get the same ones doing it by hand.

History has decided differently. When the IEEE committee first met, Intel already had its 8087 implemented, and many decisions were based on what that chip could and couldn't do. So that standard, IEEE 754 binary floating point, became the default that future implementations would follow, whatever the company.

--
DaveA
_______________________________________________
Tutor maillist  -  Tutor@python.org
To unsubscribe or change subscription options:
https://mail.python.org/mailman/listinfo/tutor
