[Random832 <random...@fastmail.com>]
> My suggestion was for a way to make it so that if an exact result is
> exactly representable at any precision you get that result, with
> rounding only applied for results that cannot be represented exactly
> regardless of precision.

That may have been the suggestion in your head ;-), but - trust me on
this - it took a long time to guess that from what you wrote.  Again,
you could have saved us all a world of trouble by giving concrete
examples.

What you wrote just now adds another twist:  apparently you DO want
rounding in some cases ("results that cannot be represented exactly
regardless of precision"). The frac2dec function I suggested
explicitly raised a ValueError in that case instead, and you said at
the time that function would do what you wanted.
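
For reference, here's a minimal sketch of such a function - my
reconstruction, not necessarily the exact code from earlier in the
thread.  It succeeds exactly when the denominator is of the form
2**a * 5**b, and raises ValueError otherwise:

>>> from fractions import Fraction
>>> from decimal import Decimal
>>> def frac2dec(f):
...     # Exact Fraction -> Decimal conversion, independent of context
...     # precision; ValueError if no finite decimal expansion exists.
...     n, d = f.numerator, f.denominator
...     e2 = e5 = 0
...     while d % 2 == 0:
...         d //= 2
...         e2 += 1
...     while d % 5 == 0:
...         d //= 5
...         e5 += 1
...     if d != 1:
...         raise ValueError("not exactly representable in decimal")
...     e = max(e2, e5)
...     # Scale the numerator so the denominator becomes 10**e; the
...     # string constructor is exact regardless of context precision.
...     return Decimal(f"{n * 2**(e - e2) * 5**(e - e5)}E-{e}")
...
>>> frac2dec(Fraction(3, 8))
Decimal('0.375')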

> It seems like this is incompatible with the design of the decimal
> module, but it's ***absolutely*** untrue that "if an exact result is
> exactly representable, then that's the result you get", because *the
> precision is not part of the representation format*.

Precision certainly is part of the model in the standards the decimal
module implements. The decimal module slightly extends the standards
by precisely defining what happens for basic operations when mixing
operands of different precisions:  the result is as if computed to
infinite precision, then rounded once at the end according to the
context rounding mode, to the context precision.
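
For example, with everything else at defaults (operands mine, chosen
just to show the single final rounding):

>>> import decimal
>>> from decimal import Decimal as D
>>> decimal.getcontext().prec = 4
>>> D('1.2345678') + D('1.1')  # exact sum 2.3345678, rounded once at the end
Decimal('2.335')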

> What threw me off is the fact that there is a single type that represents
> an unlimited number of digits, rather than separate types for each precision.
> I don't think that feature is shared with IEEE,

Right, the standards don't define mixed-precision arithmetic, but
`decimal` does.

> and it creates a philosophical question of interpretation [the answer
> of which we clearly disagree on] of what it means for a result to be
> "exactly representable", which doesn't exist with the IEEE
> fixed-length formats.

No, it does have an answer here too:  for virtually every operation
apart from the constructor, "exactly representable" refers to the
context's precision setting.  If you don't believe that, look at how
and when the "inexact" flag gets set.  The very meaning of the
"inexact" flag is that "the infinitely precise result was not exactly
representable (i.e., rounding lost some information)".  It's
impossible to divorce the meaning of "inexact" from the context
precision.
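
For example, using a throwaway context so the flag state is easy to see:

>>> import decimal
>>> from decimal import Decimal as D
>>> ctx = decimal.Context(prec=4)
>>> ctx.multiply(D('1.234'), D('1'))   # fits in 4 digits exactly
Decimal('1.234')
>>> ctx.flags[decimal.Inexact]
False
>>> ctx.multiply(D('1.2345'), D('1'))  # needs 5 digits, so it's rounded
Decimal('1.234')
>>> ctx.flags[decimal.Inexact]
True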

> At this point, doing the math in Fractions and converting back and forth
> to Decimal as necessary is probably good enough, though.

I still don't get it. The frac2dec function I wrote appeared to me to
be 99.9% useless, since, as already explained, there's nothing you can
_do_ with the result that doesn't risk rounding away its digits.  For
that reason, this feels like an extended "XY problem" to me.

>>> Incidentally, I also noticed the procedure suggested by the
>>> documentation for doing fixed point arithmetic can result in
>>> incorrect double rounding in some situations:
>>> >>> (D('1.01')*D('1.46')).quantize(TWOPLACES) # true result is 1.4746
>>> Decimal('1.48')

>> Are you using defaults, or did you change something?  Here under
>> 3.9.0, showing everything:

> This was with the precision set to 4, I forgot to include that.

A rather consequential omission ;-)

> With default precision the same principle applies, but it needs much
> longer operands to demonstrate. The issue arises when, in the true
> result, the last digit within the context precision is 4, the digit
> after it is 6, 7, 8, or 9, and the digit before it is odd. The 4 is
> rounded up to 5, and that 5 is then used to round up the previous digit.
>
> Here's an example with more digits - it's easy enough to generalize.
>
> >>> ctx.prec=28
> >>> (D('1.00000000000001')*D('1.49999999999996')).quantize(D('.00000000000001'))
> Decimal('1.49999999999998')
> >>> ctx.prec=29
> >>> (D('1.00000000000001')*D('1.49999999999996')).quantize(D('.00000000000001'))
> Decimal('1.49999999999997')
>
> The true result of the multiplication here is 1.4999999999999749999999999996,
> which requires 29 digits of precision.
>
> [and, no, it's not just values that look like 999999 and 000001, but
> a brute force search takes much longer for 15-digit operands than
> 3-digit ones]

Understood. The docs are only answering "Once I have valid two place
inputs, how do I maintain that invariant throughout an application?".
They don't mention double rounding, and I don't know whether the doc
author was even aware of the issue.
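
For concreteness, a brute-force search over 3-digit operands in the
spirit of what you describe might look like this (my sketch, not code
from the thread; note that it mutates the current context's precision
while running):

>>> import decimal, itertools
>>> from decimal import Decimal as D
>>> TWOPLACES = D('0.01')
>>> def find_double_rounding():
...     # Compare quantizing after a prec-4 multiply against quantizing
...     # the exact product; return the first pair that disagrees.
...     ctx = decimal.getcontext()
...     for a, b in itertools.product(range(100, 200), repeat=2):
...         x, y = D(a).scaleb(-2), D(b).scaleb(-2)
...         ctx.prec = 4
...         double = (x * y).quantize(TWOPLACES)
...         ctx.prec = 28  # plenty to hold the exact product
...         single = (x * y).quantize(TWOPLACES)
...         if double != single:
...             return x, y, double, single
...
>>> find_double_rounding()
(Decimal('1.01'), Decimal('1.46'), Decimal('1.48'), Decimal('1.47'))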

I argued with Mike Cowlishaw about it when the decimal spec was first
being written, pushing back against his claim that it naturally
supported fixed-point (as well as floating-point) decimal arithmetic,
precisely because of possible "double rounding" surprises when trying
to use the spec to _emulate_ fixed point.

You can worm around it by using "enough" extra precision, but I don't
recall the precise bounds needed for decimal.  For binary arithmetic
and the operations + - * /, if you compute to p bits first and then
round again back to q bits, then provided p >= 2*q + 2 you always get
the same result as if you had rounded the infinitely precise result to
q bits directly.

But the decimal spec takes a different approach, which Python's docs
don't explain at all:  the otherwise-mysterious ROUND_05UP rounding
mode.  Quoting from the spec:

    http://speleotrove.com/decimal/damodel.html
    ...
    The rounding mode round-05up permits arithmetic at shorter
    lengths to be emulated in a fixed-precision environment without
    double rounding. For example, a multiplication at a precision of 9
    can be effected by carrying out the multiplication at (say) 16
    digits using round-05up and then rounding to the required length
    using the desired rounding algorithm.

In your original example, 1.01 * 1.46 rounds to 4-digit 1.474 under
ROUND_05UP, and then `quantize()` can be used to round that back to 1,
2, or 3 digits under any rounding mode you like.
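
Concretely:

>>> with decimal.localcontext() as ctx:
...     ctx.prec = 4
...     ctx.rounding = decimal.ROUND_05UP
...     r = D('1.01') * D('1.46')
>>> r  # truncated, and the last digit isn't 0 or 5, so no bump
Decimal('1.474')
>>> r.quantize(D('0.01'))  # back in the default ROUND_HALF_EVEN context
Decimal('1.47')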

Or, with your last example,

>>> with decimal.localcontext() as ctx:
...     ctx.rounding = decimal.ROUND_05UP
...     r = D('1.00000000000001')*D('1.49999999999996')
>>> r
Decimal('1.499999999999974999999999999')
>>> r.quantize(D('.00000000000001'))
Decimal('1.49999999999997')

> ...
> I think for some reason I'd assumed the mantissa was represented as a
> binary number, since the .NET decimal format [which isn't
> arbitrary-precision] does that. I should probably have looked over the
> implementation more before jumping in.

As I recall, the pure-Python implementation used Python ints for
mantissas.  Last I looked, libmpdec uses a vector of 64-bit (C) ints,
effectively using base 10**19 (each 64-bit int is "a digit" in
range(10**19)).
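
A toy illustration of the base-10**19 idea in pure Python (nothing to
do with libmpdec's actual code):

>>> B = 10**19  # each 64-bit int holds one "digit" in range(10**19)
>>> n = 12345678901234567890123456789
>>> limbs = []
>>> while n:
...     limbs.append(n % B)  # least-significant "digit" first
...     n //= B
...
>>> limbs
[1234567890123456789, 1234567890]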


>>> I'd like a way to losslessly multiply or divide a decimal by a power of ten
>>> at least... a sort of decimal equivalent to ldexp.

>> Again, without concrete examples, that's clear as mud to me.

> Er, in this case the conversion of fraction to decimal *is* the
> concrete example; it's a one-for-one substitution for the use of the
> string constructor: ldexp(n, -max(e2, e5)) in place of
> D(f"{n}E-{max(e2, e5)}").

OK.  Note that the _actual_ spelling of ldexp in the decimal module is
"scaleb".  But, like virtually all other operations, scaleb() rounds
back to context precision.
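
For example:

>>> import decimal
>>> from decimal import Decimal as D
>>> decimal.getcontext().prec = 4
>>> D('1.2345').scaleb(2)  # exact result 123.45 needs 5 digits, so it's rounded
Decimal('123.4')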