[Tim suggesting that I'm clueless and dazzled by sparkling lights]
> There seems to be an unspoken "wow that's cool!" kind of belief
> that because Python's Decimal representation is _potentially_
> unbounded, the constructor should build an object big enough to
> hold any argument exactly (up to the limit of available memory).
> And that would be appropriate for, say, an unbounded rational
> type -- and is appropriate for Python's unbounded integers.
I have no such thoughts but do strongly prefer the current design. I recognize that it allows a user to specify an input at a greater precision than the current context (in fact, I provided the example). The overall design of the module and the spec is to apply context to the results of operations, not to their inputs. In particular, the spec recognizes that contexts can change and, rather than specifying automatic or implicit context application to all existing values, it provides the unary plus operation so that such an application is explicit.

The use of extra digits in a calculation is not invisible, as the calculation will signal Rounded and Inexact (if non-zero digits are thrown away). One of the original motivating examples was "schoolbook" arithmetic, where the input string precision is incorporated into the calculation. IMO, input truncation/rounding is inconsistent with that motivation. Likewise, input rounding runs contrary to the basic goal of eliminating representation error.

With respect to integration with the rest of Python (everything beyond the spec but needed to work with it), I suspect that altering the Decimal constructor is fraught with issues, such as the string-to-decimal-to-string roundtrip becoming context dependent. I haven't thought it through yet but suspect that it does not bode well for repr(), pickling, shelving, etc. Likewise, I suspect that traps await multi-threaded or multi-context apps that need to share data. Also, adding another step to the constructor is not going to help the already disastrous performance.

I appreciate efforts to make the module as idiot-proof as possible. However, that is a pipe dream. By adopting and exposing the full standard instead of the simpler X3.274 subset, using the module is a non-trivial exercise and, even for experts, is a complete PITA. Even a simple fixed-point application (money, for example) requires dealing with quantize(), normalize(), rounding modes, signals, etc.
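To make the point concrete, here is a short sketch (my illustration, not part of the original argument; the exact digits assume the default ROUND_HALF_EVEN rounding and a deliberately small precision) of how the exact constructor, explicit unary plus, the signals, and Context.create_decimal fit together:

```python
# Illustrative sketch: the constructor is exact; context applies to the
# results of operations, and to inputs only explicitly, via unary plus
# or Context.create_decimal.  Assumes default ROUND_HALF_EVEN rounding.
from decimal import Decimal, Inexact, Rounded, localcontext

with localcontext() as ctx:
    ctx.prec = 5  # five significant digits for operation results

    # The constructor preserves every input digit, even beyond prec.
    d = Decimal("1.234567890")
    assert str(d) == "1.234567890"

    # Unary plus applies the current context explicitly.
    ctx.clear_flags()
    plus = +d
    assert str(plus) == "1.2346"

    # The extra digits are not discarded invisibly: rounding signals
    # Rounded, and Inexact because non-zero digits were thrown away.
    assert ctx.flags[Rounded] and ctx.flags[Inexact]

    # Callers who want context-dependent construction can be explicit:
    created = ctx.create_decimal("1.234567890")
    assert str(created) == "1.2346"

    # Unnormalized results keep their exponent, so a zero can print
    # as "0.00" rather than "0" -- one of the fixed-point headaches.
    total = Decimal("0.30") - Decimal("0.10") - Decimal("0.20")
    assert str(total) == "0.00" and total == 0
```

Note that outside the localcontext() block the default 28-digit context is restored; the values themselves are unaffected, which is exactly the sharing behavior at issue for multi-context apps.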
By default, outputs are not normalized, so it is difficult even to recognize what a zero looks like. Just getting output without exponential notation is difficult. If someone wants to craft another module to wrap around and candy-coat the Decimal API, I would be all for it. Just recognize that the full spec doesn't have a beginner mode -- for better or worse, we've simulated a hardware FPU.

Lastly, I think it is a mistake to make a change at this point. The design of the constructor survived all drafts of the PEP, comp.lang.python discussion, python-dev discussion, all early implementations, sandboxing, the Py2.4 alpha/beta, cookbook contributions, and several months in the field. I say we document a recommendation to use Context.create_decimal() and get on with life.

Clueless in Boston

P.S. With 28-digit default precision, the odds of this coming up in practice are slim (when was the last time you typed in a floating-point value with more than 28 digits; further, if you had, would it have ruined your day if your 40 digits were not first rounded to 28 before being used?). IOW, the bug tracker lists hundreds of bigger fish to fry without having to change a published API (pardon the mixed metaphor).

_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev