On Thu, Mar 5, 2020 at 8:27 PM Steve Barnes <gadgetst...@live.co.uk> wrote:
>
> One of the lovely things about Python is that we have the capability to avoid
> issues such as the vagaries of the floating point type with libraries such as
> decimal and fractions. This is wondrous to me but comes with an issue that I
> suspect is limiting its usage. That issue is that you have to litter your
> code with constructors of those types, e.g.
>
> from decimal import *  # I know why this is bad but it comes straight from the examples
> a = Decimal("0.1")  # Needs to be a string to avoid being temporarily a float
> b = Decimal("0.2")  # Ditto
> c = Decimal("0.3")  # Ditto
>
> a + b == c  # Magic, this works!
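[For reference, the quoted behaviour is easy to reproduce. Note in particular why the string constructor matters: building a Decimal from the float 0.1 faithfully captures the float's binary rounding error.]

```python
from decimal import Decimal

# Plain floats: 0.1 and 0.2 are binary approximations, so their sum
# is not exactly the binary approximation of 0.3.
assert 0.1 + 0.2 != 0.3

# Decimals built from strings represent the decimal values exactly.
a = Decimal("0.1")
b = Decimal("0.2")
c = Decimal("0.3")
assert a + b == c

# Building a Decimal from a float preserves the float's error exactly.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```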
As an aside, this complaint comes up in basically every language, and it's a consequence of the literal 0.1 not actually meaning one tenth. Nobody would bat an eyelid if you showed that, say:

x = 0.3333333  # one third
y = 0.6666666  # two thirds
x + y != 1.0

because those have obviously been rounded (they are NOT complete representations of those fractions). It's only with fifths and tenths that people are surprised, and only because they don't understand that computers work in binary :)

That said, though, there are a number of good reasons for wanting to change the interpretation of literals. But by the time you get to executing the module code, it's too late - the literals get parsed and compiled into the code object, and they're already the values they are ultimately going to be.

There are two approaches that would plausibly work for this. It's possible to redefine literal parsing using a future directive (as in "from __future__ import unicode_literals"), but those have to be defined entirely by the language. To do it in custom code, you'd instead need to run some code before your module is parsed - for example, an import hook. This would have to NOT be done by default - you'd have to enable it on a per-module basis - because any change like this would break a lot of things. (Even if you keep it just to your own module, using Decimal can create bizarre situations - such as the average of two numbers not lying between them.)

Ultimately, I think the best solution in current Python is to just import Decimal as D and use the shorter name.

Hmm, is there a PEP regarding Decimal literals? I couldn't find one, although there is PEP 240 regarding rational literals. Maybe it's time to write up a rejected PEP explaining exactly what the problems are with Decimal literals. From memory, the problems are (a) it'd effectively require the gigantic decimal module to be imported by default, and (b) contexts don't work with literals.
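[To illustrate the "too late" point above: the constant pool of the compiled code object already holds an ordinary float before any module code runs, so nothing executed at import time can change what 0.1 meant in the source.]

```python
# Compile a one-line module; the literal is parsed during compilation.
code = compile("x = 0.1", "<demo>", "exec")

# The code object's constant pool already contains a float, not the
# source text "0.1" - so no runtime hook can reinterpret the literal.
assert any(isinstance(c, float) and c == 0.1 for c in code.co_consts)
```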
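[The "average of two numbers is not between them" oddity mentioned above can be demonstrated with a deliberately small context precision - prec=2 here is purely for illustration; the default is 28, where the same effect needs much longer numbers.]

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 2  # artificially low precision to make the effect visible
    x = Decimal("5.1")
    y = Decimal("5.3")
    # 5.1 + 5.3 = 10.4, which rounds to 10 at two significant digits,
    # so the "average" comes out as 5 - below both inputs.
    avg = (x + y) / 2
    assert avg < min(x, y)
```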
But since Decimal literals are such an obvious solution to most of the above problems, I think it'd be good to have a document saying why it won't help.

ChrisA
_______________________________________________
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/NPSSZNLXU5HIOPAAHXKF2YFMFZ6N5RAA/
Code of Conduct: http://python.org/psf/codeofconduct/