[Greg Ewing]
> I don't see it's because of that. Even if D(whatever)
> didn't ignore the context settings, you'd get the same
> oddity if the numbers came from somewhere else with a
> different precision.
Most users don't change context precision, and in that case there is no
operation defined in the standard that can _create_ a decimal "with
different precision". Python's Decimal constructor, however, can
(Python's Decimal constructor performs an operation that's not in the
standard -- it's a Python-unique extension to the standard).

> I'm very uncomfortable about the whole idea of a
> context-dependent precision. It just seems to be
> asking for trouble.

If you're running on a Pentium box, you're using context-dependent
precision a few million times per second. Most users will be as
blissfully unaware of decimal's context precision as you are of the
Pentium FPU's context precision.

Most features in fp standards are there for the benefit of experts.
You're not required to change context; those who need such features
need them desperately, and don't care whether you think they should
<wink>. An alternative is a God-awful API that passes a context object
explicitly to every operation. You can, e.g., kiss infix "+" goodbye
then. Some implementations of the standard do exactly that.

You might want to read the standard before getting carried off by gut
reactions:

    http://www2.hursley.ibm.com/decimal/
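To make the constructor point concrete, a quick sketch using the
decimal module (printed values assume the default rounding mode):

    from decimal import Decimal, getcontext

    # Context precision governs the results of operations, not of
    # construction -- the exact constructor is the Python extension.
    getcontext().prec = 4

    d = Decimal("1.2345678901")
    print(d)     # 1.2345678901 -- all digits kept; prec is ignored
    print(+d)    # 1.235 -- even unary plus rounds to context precision

Under the default context nobody notices any of this; it only matters
once somebody fiddles prec, and those who fiddle it know why.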
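And as for the explicit-context alternative, here's what the two
spellings look like side by side (Context.add is the method form the
module supplies):

    from decimal import Decimal, Context

    ctx = Context(prec=6)
    a = Decimal("1.1111111")
    b = Decimal("2.2222222")

    # Explicit-context style: every operation names the context.
    print(ctx.add(a, b))    # 3.33333

    # Infix style: "+" implicitly uses the current thread's context.
    print(a + b)            # rounded to getcontext().prec digits

Passing ctx everywhere is workable, but you can see why infix "+"
doesn't survive it.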