> Why should you? It only gives you 28 significant digits, while 64-bit
> float (as in the 32-bit version of Python) gives you 53 significant
> digits. Also note that on x86 the FPU uses 80-bit registers. And then
> Decimal executes over 1500 times slower.

64-bit floating point only gives you 53 binary bits, not 53 digits.
That's approximately 16 decimal digits. And anyway, Decimal can be
configured to support more than 28 digits.
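Precision is just a setting on the arithmetic context (this is the
stdlib decimal module; 28 is merely the default), so a session along
these lines should give you 50 digits:

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 50
>>> Decimal(1) / Decimal(3)
Decimal('0.33333333333333333333333333333333333333333333333333')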
> >>> from timeit import Timer
> >>> t1 = Timer('(1.0/3.0)*3.0 - 1.0')
> >>> t2 = Timer('(Decimal(1)/Decimal(3))*Decimal(3)-Decimal(1)',
> ...            'from decimal import Decimal')
> >>> t2.timeit()/t1.timeit()
> 1621.7838879255889
>
> If that's not enough to forget about Decimal, take a look at this:
>
> >>> (Decimal(1)/Decimal(3))*Decimal(3) == Decimal(1)
> False
> >>> ((1.0/3.0)*3.0) == 1.0
> True

Try ((15.0/11.0)*11.0) == 15.0. Decimal is actually returning the
correct result. Your example was just lucky.

Decimal was intended to solve a different class of problems. It
provides predictable arithmetic using "decimal" floating point.
IEEE-754 provides predictable arithmetic using "binary" floating
point.

casevh
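P.S. Here is the luck running the other way. The outputs below are
what I'd expect from a quick session with the stdlib decimal module at
its default 28-digit precision: binary float misses on 15/11, while
Decimal's division rounding error happens to cancel on the way back.

>>> (15.0/11.0)*11.0 == 15.0
False
>>> from decimal import Decimal
>>> (Decimal(15)/Decimal(11))*Decimal(11) == Decimal(15)
True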
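P.P.S. An example of the class of problems Decimal is actually for:
inputs that are decimal fractions to begin with (money, mostly). 0.1
has no exact binary representation, but its decimal representation is
trivially exact:

>>> 0.1 + 0.2 == 0.3
False
>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.2') == Decimal('0.3')
True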