Mark Dickinson wrote:
> On Sep 23, 7:31 pm, Terry Reedy <[EMAIL PROTECTED]> wrote:
>> Decimal is something of an anomaly in Python because it was written to
>> exactly follow an external standard, with no concessions to what would
>> be sensible for Python.  It is possible that that standard mandates that
>> Decimals not compare to floats.

> I don't think the standard says anything about interactions between
> Decimals and floats.

If there is not now, there could be in the future, and the decimal authors are committed to follow the standard wherever it goes. Therefore, the safe course, to avoid possible future deprecations due to doing too much, is to only do what is mandated.

> But there's certainly been a feeling amongst at least some of the
> developers that the job of Python's decimal module is to implement the
> standard and no more, and that extensions to its functionality belong
> elsewhere.

For the reason just stated. A slightly different take is this: the point of following the standard is that decimal code translates exactly both to and from decimal code in other languages. (And one reason for *that* is that one purpose of the standard is to implement legal and contractual requirements for financial calculations reliably.) Using Python-only extensions could break, or later deprecate, code translated away from Python.

> Regarding equality, there's at least one technical issue: the
> requirement that objects that compare equal hash equal.  How do you
> come up with efficient hash operations for integers, floats, Decimals
> and Fractions that satisfy this requirement?

For integral values, this is no problem.
>>> import decimal, fractions
>>> hash(1) == hash(1.0) == hash(decimal.Decimal(1)) == hash(fractions.Fraction(1)) == 1
True
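For non-integral values it takes more care, but a single value-based hash covering all the rational types is possible. Here is a sketch of one such scheme, modular hashing with a Mersenne-prime modulus (the approach CPython itself later adopted for its numeric types; the helper name is mine):

```python
from fractions import Fraction

M = (1 << 61) - 1  # a Mersenne prime; CPython uses this modulus on 64-bit builds

def rational_hash(p, q):
    # Hash of the exact rational p/q as p * q**-1 modulo M.
    # Because M is prime, the inverse exists whenever q % M != 0.
    inv = pow(q, M - 2, M)  # modular inverse via Fermat's little theorem
    return (abs(p) * inv % M) * (1 if p >= 0 else -1)

# Any two representations of the same rational value hash alike:
f = Fraction(1, 2)
print(rational_hash(f.numerator, f.denominator) == rational_hash(5, 10))  # True
```

Since every int, float, Fraction, and finite Decimal is an exact rational, one function like this can make equal values of all four types hash equal.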

> For other arithmetic operations: should the sum of a float and a
> Decimal produce a Decimal or a float?  Why?  It's not at all clear to
> me that either of these types is 'higher up' the numerical tower than
> the other.

Floats and fractions have the same issue, and there the fraction is converted to a float. I can think of two reasons: float operations are faster, and floats are typically thought of as inexact, so since the result is likely to be inexact (rounded), float is the more appropriate type to express that. Anyone who disagrees with that choice for their application can explicitly convert the float to a fraction.
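To illustrate with the fractions module:

```python
from fractions import Fraction

# Mixed Fraction/float arithmetic coerces the Fraction to a float:
print(type(Fraction(1, 3) + 0.5))        # <class 'float'>

# An application that wants an exact result converts the float instead
# (0.5 is exactly representable, so Fraction(0.5) loses nothing):
print(Fraction(1, 3) + Fraction(0.5))    # 5/6
```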

Decimals can also be converted to floats (they, too, have a __float__ method). But unlike with fractions, the conversion must be explicit, via float(decimal), rather than implicit as it is with ints and fractions.
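Concretely:

```python
from decimal import Decimal

d = Decimal('1.1')

# Implicit mixing is refused outright:
try:
    d + 0.5
except TypeError:
    print('no implicit Decimal/float arithmetic')

# The explicit route works, yielding an ordinary float:
print(float(d) + 0.5)
```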

Someone *could* write a PyDecimal wrapper that would do implicit conversion and thereby more completely integrate decimals with other Python numbers, but I doubt that saving transitivity of equality will be sufficient motivation ;-).
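For the curious, such a wrapper could start along these lines. This is a hypothetical sketch handling only addition; "PyDecimal" is just the name used above:

```python
from decimal import Decimal

class PyDecimal(Decimal):
    """Hypothetical Decimal subclass that, like Fraction, silently
    coerces to float when mixed with a float."""

    def __add__(self, other):
        if isinstance(other, float):
            return float(self) + other       # the Fraction-style choice
        result = Decimal.__add__(self, other)
        return result if result is NotImplemented else PyDecimal(result)

    __radd__ = __add__                       # addition is commutative

print(type(PyDecimal('1.1') + 0.5))          # float
print(type(0.5 + PyDecimal('1.1')))          # float again, via __radd__
print(PyDecimal('1.1') + Decimal('0.9'))     # still a Decimal: 2.0
```

The other arithmetic and comparison methods would need the same treatment, which is exactly the kind of extension that, per the above, belongs outside the decimal module itself.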

Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list
