Guido van Rossum wrote:
> in some "intuitive complexity" sense an int is a simpler type than a
> float and a float is a simpler type than a Decimal
I don't think this analogy holds. In a mathematical sense, ints are a subset of the reals, but binary and decimal floats are just alternative approximate representations of reals, with neither one inherently preferable to the other.

One could argue that since all binary floats are exactly representable in decimal, but not vice versa, decimal should be regarded as the wider type. But even that doesn't hold when you have a limited number of decimal digits available, which you always do at any given moment with the Decimal type. And even when there are enough digits, an exact conversion mightn't be what you really want.

This problem doesn't arise with int->float conversion -- there is only one obvious way of chopping an int to fit.

--
Greg
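The behaviour being described can be seen directly with the stdlib `decimal` module -- a sketch illustrating the point, not code from the original post:

```python
from decimal import Decimal, getcontext

# Every binary float is exactly representable in decimal, so
# Decimal(float) is an exact conversion...
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# ...but that exact value is usually not what you really want;
# the decimal you probably meant is the short literal:
print(Decimal('0.1'))
# 0.1

# And the number of decimal digits is limited at any given moment
# by the context precision (default 28), so arithmetic results are
# rounded to fit:
getcontext().prec = 6
print(Decimal(1) / Decimal(7))
# 0.142857

# By contrast, int->float conversion has only one obvious way of
# chopping to fit: round to the nearest representable float.
print(float(2**53 + 1) == float(2**53))
# True
```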