> Should I simply run the results of all calculations through something
> like this:
>
> from __future__ import division
> ...
> ...
> s=(int(round(s, 2)*100))/100
>
> Or should I be using Decimal on all money calculations?

Firstly - that does not magically fix the imprecision in floating-point
numbers. If it did, it would already be built into the interpreter, no?
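A quick sketch of why the round-then-truncate trick isn't safe: the
intermediate multiplication still happens in floating point, so it can
actually *introduce* an error (the 0.29 below is just an illustrative value):

```python
# round() gives back a float, and multiplying it by 100 is still
# a float operation, so the result can land just below the integer
# you expect and int() truncates it the wrong way.
s = 0.29
s = int(round(s, 2) * 100) / 100
# round(0.29, 2) == 0.29, but 0.29 * 100 == 28.999999999999996,
# so int() truncates to 28 and s ends up as 0.28, not 0.29.
```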

I think that Decimal is the way to go here, but you do have another option.
Whenever you take in a number, remove the decimal point and store it as an
integer. Do all of your calculations with integers. Every time you have to
display a total, convert it then (but don't overwrite the variable! Convert
a temporary copy).
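That scheme might look something like this (a minimal sketch; the names and
the `format_dollars` helper are mine, not from the original post):

```python
# Store money as integer cents; arithmetic on ints is exact.
price_cents = 1999               # $19.99 stored as 1999 cents
quantity = 3
total_cents = price_cents * quantity   # exactly 5997, no float involved

def format_dollars(cents):
    # Convert a temporary copy for display only;
    # the stored value stays an integer.
    return "$%d.%02d" % divmod(cents, 100)

print(format_dollars(total_cents))   # $59.97
```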

Obviously this is a tough way to go in an easy language like Python. It's
the sort of solution I'd consider in C (I might just write it, too..).
That's why I encouraged Decimal.
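For completeness, here is roughly what the Decimal route looks like (the
prices and the 5.5% rate are made-up examples; note that you build Decimals
from strings, not floats, or you inherit the float error you were avoiding):

```python
from decimal import Decimal, ROUND_HALF_UP

price = Decimal("19.99")      # from a string, so the value is exact
rate = Decimal("0.055")       # 5.5%

# quantize() rounds to a fixed number of decimal places explicitly.
tax = (price * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
total = price + tax           # Decimal arithmetic stays exact

print(total)                  # 21.09
```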

If you're interested in the integer representation of floats, like this
particular efficiency & precision demon (me!), then you will have to work
out ways to multiply and divide using pseudo-floats... Not too difficult.
Say you want to multiply an integer amount by, say, 5.5%. Scale 5.5 up by
ten to get 55, multiply the total by 55, then divide by 1000. In that
order: multiply before you divide, so you don't lose precision. Of course
you will still have problems. For example, do it over and over and you can
overflow your integer, but never mind. Am I rambling? Oops.
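In code, that pseudo-float percentage looks like this (my own example
values; Python's ints don't overflow, so the overflow caveat applies to
fixed-width integers in languages like C):

```python
# 5.5% of an amount held in integer cents:
# scale 5.5 up by ten to 55, multiply, then divide by 100 * 10 = 1000.
total_cents = 19999                      # $199.99
fee_cents = total_cents * 55 // 1000     # multiply first, divide last
# 19999 * 55 == 1099945; 1099945 // 1000 == 1099 cents, i.e. $10.99
# (the // truncates; dividing first would throw away the cents entirely)
```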

HTH,
tiger12506

_______________________________________________
Tutor maillist  -  Tutor@python.org
http://mail.python.org/mailman/listinfo/tutor