On Sat, 15 May 2021 at 05:54, Steven D'Aprano <st...@pearwood.info> wrote:
>
> On Sat, May 15, 2021 at 01:57:29AM +1000, Chris Angelico wrote:
>
> > Decimal literals have a number of awkward wrinkles, so I'd leave them
> > aside for now;
>
> I'm surprised at this.
>
> Decimal literals have come up at least twice in the past, with a general
> consensus that they are a good idea. This is the first time I've seen
> anyone suggest fraction literals (that I recall). The use of decimal
> literals has a major and immediate benefit to nearly everyone who uses
> Python for calculations: the elimination of rounding error from base-10
> literals like `0.1` and the internal binary representation.

Fraction also has this benefit, but in a much more complete way,
since *all* arithmetic can be exact. The discussion above has
mentioned the idea that you could write a fraction like `1/10F` or
`1F/10`, but I would also want to be able to do something like `0.1F`
to use a decimal literal to represent an exact rational number.
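
To be concrete, here is roughly what I imagine those hypothetical
spellings would map to in terms of today's fractions.Fraction (the F
suffix is of course not real syntax):

>>> from fractions import Fraction
>>> Fraction(1, 10)       # what 1/10F or 1F/10 would presumably construct
Fraction(1, 10)
>>> Fraction('0.1')       # and what 0.1F would mean: an exact 1/10
Fraction(1, 10)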

In my experience, direct language support for exact rationals is more
common than for decimal floating point. Computing things exactly is a
widely applicable feature, whereas wanting a different kind of
rounding error is a more niche application. You mention that in some
cases Decimal gives exactness, but if you follow the idea of exact
arithmetic to its conclusion you end up with something like Fraction
rather than something like Decimal.
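
A small illustration of what following exactness to its conclusion
means in practice, using the types we already have (default decimal
context assumed):

>>> from fractions import Fraction as F
>>> from decimal import Decimal as D
>>> F(1, 3) * 3                 # rational arithmetic never loses information
Fraction(1, 1)
>>> D(1) / D(3) * D(3)          # decimal floating point still rounds on division
Decimal('0.9999999999999999999999999999')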

I suspect that the fact that Python has a more extensively developed
decimal module has led many of us who are familiar with Python to
think that decimal arithmetic is somehow more useful or more commonly
needed or used than rational arithmetic.

> What are these awkward wrinkles for decimal literals?

They are not insurmountable, but the main issues to resolve stem from
the fact that Python's decimal implementation is a multiprecision
library (not all decimal implementations are, but Python's is). The
precision is controlled by a global context and affects all
operations. The simple way to implement decimal literals would be to
say that 1.001d means the same as Decimal('1.001'), but then what
about -1.001d?

The minus sign is not part of the literal but rather a unary
operation, so whereas the Decimal(...) constructor is always exact, a
unary minus is an "operation" whose behaviour is context dependent,
e.g.:

>>> from decimal import Decimal as D, getcontext
>>> getcontext().prec = 3
>>>
>>> D('0.9999')
Decimal('0.9999')
>>> D('-0.9999')
Decimal('-0.9999')
>>> -D('0.9999')
Decimal('-1.00')
>>> +D('0.9999')
Decimal('1.00')

That would mean that a simple statement like x = -1.01d could assign
different values depending on the context. Maybe with the new parser
it is easier to change this so that a unary +/- can be part of the
literal. The context dependence here also undermines other benefits of
literals, like the possibility of constant-folding an expression such
as 0.01d + 1.00d (depending on the context this might compute
different values, raise an exception or set flags).
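
To make the constant-folding problem concrete, the same addition can
already give different results, or raise, depending on the ambient
context (a rough interactive illustration; the exact traceback text
may differ):

>>> from decimal import Decimal as D, getcontext, Inexact
>>> getcontext().prec = 3
>>> D('0.01') + D('1.00')
Decimal('1.01')
>>> getcontext().prec = 2
>>> D('0.01') + D('1.00')
Decimal('1.0')
>>> getcontext().traps[Inexact] = True
>>> D('0.01') + D('1.00')
Traceback (most recent call last):
  ...
decimal.Inexact: [<class 'decimal.Inexact'>]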

The other question is which other languages to line up with. For
example, C might gain a _Decimal128 type, and hardware support for
calculations in these fixed-width decimal formats might even become
widespread. Python would be able to leverage those if it defined its
decimals in the same way, but that would mean departing from the
multiprecision model that the decimal module currently has.

These and other questions about how to implement decimal literals are
not necessarily hard to overcome but it naturally leads to ideas like:

> The implementation would not support the full General Decimal Arithmetic 
> specification -- for that,
> users would continue to use the decimal module.

Then you end up with a kind of hybrid where there are two different
kinds of decimal type. Presumably that means that the decimal literals
aren't really there for power users, because they need to use the
decimal module with all of its extra features. If the decimal literals
aren't for users of the decimal module, then who are they for?

I started trying to write a PEP about decimal literals some time ago,
but what I found was that most arguments I could come up with for
decimal literals were really arguments for using decimal floating
point instead of binary floating point in the first place. In other
words, if Python did not currently have a float type then I would
advocate for using decimal as the *default* float type for literals
like `0.1`. It is problematic that the only non-integer literals in
Python are decimal literals that are converted into binary floating
point format, when the vast majority of possible non-integer decimal
literals like `0.12345` cannot be represented exactly in binary. Many
humans understand decimal rounding much more easily than binary
rounding, so using decimal rounding for non-integer arithmetic would
make Python itself more understandable and intuitive.
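
As a quick reminder of the mismatch, the literal 0.1 already denotes a
binary approximation, which Decimal can display exactly:

>>> from decimal import Decimal
>>> Decimal(0.1)    # the binary float that the literal 0.1 actually produces
Decimal('0.1000000000000000055511151231257827021181583404541015625')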

We already have float, though, which uses binary floating point and
is what ordinary decimal literals like 0.1 produce. In that context,
adding Decimal literals like 0.1d does not bring as much benefit.
Really it just lowers the bar a little for less-experienced users to
use the decimal module correctly and not make mistakes like D(0.1).
Maybe that's worth it, or maybe not. For novices it would be great to
make 0.1 + 0.2 just do the right thing, but if they need to know that
they should write 0.1d + 0.2d then the unintuitive behaviour is still
there to trip them up by default. For experienced users the literals
don't really add that much, and most code that uses the decimal module
in anger probably does not have that many literals anyway.
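
For reference, this is the trap that would remain the default
behaviour, next to what a hypothetical 0.1d + 0.2d would presumably
evaluate to:

>>> 0.1 + 0.2
0.30000000000000004
>>> from decimal import Decimal as D
>>> D('0.1') + D('0.2')     # what a hypothetical 0.1d + 0.2d would give
Decimal('0.3')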

Yes, 0.1d is a bit nicer than D('0.1') and slightly less error-prone
but how much of a benefit is that in practice?

Is it actually worth the churn or the increased complexity of having
an entirely new decimal type?

If it is necessary to have new fixed-width decimal types then probably
the first step to decimal literals is implementing those rather than
writing a PEP.

These same arguments could be made for Fraction literals, but in my
experience rational arithmetic has more widespread use than decimal
arithmetic, and it is also much easier to implement: it doesn't need
multiple types or context-dependent behaviour etc. One thing I would
like is for Fraction to be reimplemented in C and made much faster.
Having literals would be nice, but Fraction would need to at least be
a builtin first.
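
By way of contrast with the Decimal examples above, a couple of lines
showing that Fraction needs no context at all:

>>> from fractions import Fraction as F
>>> F(1, 3) / F(7, 9)       # division is exact; there is no precision to configure
Fraction(3, 7)
>>> -F(9999, 10000)         # unary minus never rounds, unlike Decimal above
Fraction(-9999, 10000)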

--
Oscar
_______________________________________________
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/JV4G7NQIXE4TGZB53PPH6EBMQE7KXFNA/
Code of Conduct: http://python.org/psf/codeofconduct/

Reply via email to