Mark Dickinson added the comment:
> I think *both* proposals are sensible. Fraction already has .from_decimal
> (using Decimal), so .to_decimal (also using Decimal) is sensible.
Well, there's a difference: conversion from Decimal to Fraction is
well-defined, with a unique, unambiguous result (excluding non-convertibles
like infinities and nans); in particular, the value of any Decimal is exactly
representable as a Fraction, so there's little information loss. (There *is*
still some information loss, since Decimal('1.00') and Decimal('1.0') both
convert to the same fraction, for example.)
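A quick illustration of that information loss, using the existing
Fraction.from_decimal:

    from decimal import Decimal
    from fractions import Fraction

    # The trailing zero in '1.00' carries exponent information that no
    # fraction can preserve; both inputs end up as the same Fraction(1, 1).
    assert Fraction.from_decimal(Decimal('1.0')) == Fraction(1, 1)
    assert Fraction.from_decimal(Decimal('1.00')) == Fraction(1, 1)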
On the other hand, not every Fraction is exactly representable as a Decimal, so
the result of conversion from Fraction to Decimal needs information about how
many decimal places to produce, what rounding mode to use, what the ideal
exponent should be in the case of exact results, etc. I think Zachary's idea
of supporting a context argument, and using the current context if none is
supplied, is the way to go here. The division should end up using an ideal
exponent of 0, which doesn't seem unreasonable.
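To make the context dependence concrete, here's a small illustration using
only the existing decimal API (the context supplies both the precision and
the rounding mode):

    from decimal import Context, Decimal, ROUND_HALF_EVEN, ROUND_UP
    from fractions import Fraction

    f = Fraction(1, 3)
    num, den = Decimal(f.numerator), Decimal(f.denominator)

    # The same fraction gives different Decimals under different contexts.
    Context(prec=5, rounding=ROUND_HALF_EVEN).divide(num, den)  # Decimal('0.33333')
    Context(prec=5, rounding=ROUND_UP).divide(num, den)         # Decimal('0.33334')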
To the patch: It looks fine, as far as it goes. It needs tests. To avoid the
repetition of the division code, I'd suggest doing something like:
    if context is None:
        context = getcontext()
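Spelled out a bit more, the whole method could then reduce to a single divide
call; the sketch below is just an illustration of the shape (to_decimal and
its context parameter are the proposed additions, not existing API):

    from decimal import Decimal, getcontext
    from fractions import Fraction

    def to_decimal(f, context=None):
        """Sketch of the proposed Fraction.to_decimal(context=None)."""
        if context is None:
            context = getcontext()
        # One divide call, so the division code isn't repeated: the context
        # supplies the precision and rounding mode, and exact results get
        # the ideal exponent of 0 (e.g. Fraction(1, 4) -> Decimal('0.25')).
        return context.divide(Decimal(f.numerator), Decimal(f.denominator))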
Yes, supporting __format__ is going to be a bit more work. :-)
----------
_______________________________________
Python tracker <[email protected]>
<http://bugs.python.org/issue15136>
_______________________________________