Hi, just a thought: if you *always* work with "floats" that have exactly two decimals, you are in fact working with integers, but you are representing them as floats, which is an awkward fit for the internal (binary) representation.
So why not work with int(round(float * 100)) instead? That way you only have to take care of roundoff when dividing:

    int (+|-|*) int  ->  int (exact)
    int / int        ->  int quotient, with int % int as the leftover

Integers are nice, me like integers.

/per9000
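P.S. A quick, untested sketch of what I mean, working in whole cents (to_cents and fmt are just names I made up for illustration):

    def to_cents(s):
        # parse a string like "12.34" into integer cents
        dollars, _, frac = s.partition(".")
        return int(dollars) * 100 + int(frac.ljust(2, "0")[:2])

    def fmt(cents):
        # format integer cents back as a two-decimal string
        return "%d.%02d" % divmod(cents, 100)

    a = to_cents("19.99")
    b = to_cents("0.05")

    print(fmt(a + b))     # 20.04 -- addition stays exact
    print(fmt(a * 3))     # 59.97 -- so does multiplying by an int
    q, r = divmod(a, 3)   # division is the only place roundoff shows up
    print(fmt(q), "with", r, "cent(s) left over")   # 6.66 with 1 cent(s) left over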