Thanks to all for the various replies. They have all helped me to
refine my ideas on the subject. These are my latest thoughts.

Firstly, the Decimal type exists; it clearly works well, and it was
written by people much cleverer than me, so I would need a good reason
not to use it. Speed could be a good reason, provided I am sure that
any alternative is 100% accurate for my purposes.

My approach is based on expressing a decimal number as a combination
of an integer and a scale, where scale means the number of digits to
the right of the decimal point.

Therefore 0.04 is integer 4 with scale 2, 1.1 is integer 11 with scale
1, and -123.456 is integer -123456 with scale 3. I am pretty sure that
any decimal number with a finite number of decimal places can be
accurately represented in this form.
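To show what I mean, here is a throwaway helper (not part of the class
itself, just an illustration) that renders such a pair back to a
decimal string exactly, without going anywhere near a float -

    def to_string(integer, scale):
        # Render integer / 10**scale using only string operations.
        sign = '-' if integer < 0 else ''
        digits = str(abs(integer)).rjust(scale + 1, '0')
        if scale:
            return sign + digits[:-scale] + '.' + digits[-scale:]
        return sign + digits

    print(to_string(4, 2))        # 0.04
    print(to_string(11, 1))       # 1.1
    print(to_string(-123456, 3))  # -123.456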

All arithmetic is carried out using integer arithmetic, so although
there may be rounding differences, there will not be the spurious
differences thrown up by trying to use floats for decimal arithmetic.
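For example, the classic float surprise disappears once the digits are
held as integers -

    # Plain floats - the binary representation introduces a
    # spurious difference.
    print(0.1 + 0.2)          # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)   # False

    # The same values as integers at scale 1 - exact.
    print(1 + 2 == 3)         # True, i.e. 0.3 == 0.3 exactly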

I use a class called Number, with two attributes - an integer and a
scale. My first attempt required both of these to be provided every
time an instance was created. Then I realised that this would cause
loss of precision if I chained a series of instances together in a
calculation. The constructor can now accept any of the following forms
(sketched in code after the list) -

1. A number (either an int or a float) and a scale. It scales the
number up by the scale factor and rounds it to the appropriate
integer.

2. Another Number instance. It takes the integer and scale from the
other instance.

3. An integer, with no scale. It uses the integer, and assumes a scale
of zero.

4. A float in string format (e.g. '1.1') with no scale. It uses the
number of digits to the right of the decimal point as the scale, and
scales the number up to the appropriate integer.
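In rough outline, the constructor logic looks like this (a simplified
sketch, not my exact code) -

    class Number:
        """Decimal as an integer plus a scale (simplified sketch)."""

        def __init__(self, value, scale=None):
            if isinstance(value, Number):         # form 2: another Number
                self.integer, self.scale = value.integer, value.scale
            elif isinstance(value, str):          # form 4: e.g. '1.1'
                whole, _, frac = value.partition('.')
                self.integer = int(whole + frac)  # '1.1' -> 11
                self.scale = len(frac)            # digits right of the point
            elif scale is None:                   # form 3: plain integer
                self.integer, self.scale = int(value), 0
            else:                                 # form 1: number plus scale
                self.integer = int(round(value * 10 ** scale))
                self.scale = scale

    n = Number('1.1')
    print(n.integer, n.scale)           # 11 1
    print(Number(-123.456, 3).integer)  # -123456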

For addition, subtraction, multiplication and division, the 'other'
number can be any of forms 2, 3, or 4 above. The result is a new
Number instance. The scale of the new instance is based on the
following rules (sketched in code after them) -

For addition and subtraction, the new scale is the greater of the two
scales on the left and right hand sides.

For multiplication, the new scale is the sum of the two scales on the
left and right hand sides.

For division, I could not think of an appropriate rule, so I just hard-
coded a scale of 9, which should give sufficient precision for any
calculation I am likely to encounter.
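In code, the three rules come out something like this - again a
simplified sketch, building on the Number sketch above, with raw() as
a helper that exists only for the sketch -

    def raw(integer, scale):
        # Build a Number from already-scaled parts (sketch-only helper).
        n = Number(0)
        n.integer, n.scale = integer, scale
        return n

    def add(a, b):
        # Addition/subtraction rule - align to the greater scale.
        scale = max(a.scale, b.scale)
        return raw(a.integer * 10 ** (scale - a.scale)
                   + b.integer * 10 ** (scale - b.scale), scale)

    def mul(a, b):
        # Multiplication rule - integers multiply, scales add.
        return raw(a.integer * b.integer, a.scale + b.scale)

    def div(a, b, scale=9):
        # Division - hard-coded result scale of 9. Float division is
        # good enough for a sketch; real code should round using
        # integer arithmetic to stay exact for very large values.
        num = a.integer * 10 ** (scale + b.scale - a.scale)
        return raw(round(num / b.integer), scale)

    x = add(Number('1.1'), Number('0.04'))
    print(x.integer, x.scale)   # 114 2, i.e. 1.14
    z = div(Number('1.1'), Number('0.04'))
    print(z.integer, z.scale)   # 27500000000 9, i.e. 27.5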

My Number class is now a bit more complicated than before, so the
performance is not as good as it was, but I am still seeing a four-fold
speed improvement over the Decimal type, so I will continue running
with my version for now.

My main concern is that my approach may be naive, and that I will run
into situations that I have not catered for, resulting in errors. If
this is the case, I will drop this like a hot potato and stick to the
Decimal type. Can anyone point out any pitfalls I might be unaware of?

I will be happy to show the code for the new Number class if anyone is
interested.

Thanks

Frank