On Mon, Mar 22, 2010 at 1:56 PM, Raymond Hettinger
<raymond.hettin...@gmail.com> wrote:
>
> On Mar 22, 2010, at 10:00 AM, Guido van Rossum wrote:
>
> Decimal + float --> Decimal
>
> If everybody associated with the Decimal implementation wants this I
> won't stop you; as I repeatedly said my intuition about this one (as
> opposed to the other two above) is very weak.
>
> That's my vote.
I've been lurking on this thread so far, but let me add my +1 to this
option.  My reasoning is that Decimal is a "better" model of Real than
float, and mixed operations should not degrade the result.  "Better"
can mean different things to different people, but to me the tie
breaker is the support for contexts.  I would not want precision to
suddenly change in the middle of a calculation because I typed 1.0
instead of 1.

This behavior will also be familiar to users of other "enhanced"
numeric types such as NumPy scalars.  Note that in the older Numeric
package it was the other way around, but after considerable discussion
the behavior was changed.

_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
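[Editorial note: the contexts point above can be made concrete with a small
sketch using the current `decimal` module.  The mixed-arithmetic behavior it
shows is CPython's as released (a TypeError), not the coercion being proposed
in this thread.]

```python
from decimal import Decimal, localcontext

# Decimal arithmetic honors the active context's precision;
# float carries no such knob, which is the tie breaker cited above.
with localcontext() as ctx:
    ctx.prec = 4
    print(Decimal(1) / Decimal(3))   # -> 0.3333 (4 significant digits)

# In released CPython, mixed Decimal/float arithmetic refuses to guess
# a coercion direction and raises instead:
try:
    Decimal("1.1") + 1.0
except TypeError:
    print("Decimal + float raises TypeError")
```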