On 4/27/24 18:46, Jon Elson via cctalk wrote:
> On 4/27/24 17:02, ben via cctalk wrote:
>> Did any one need REAL BCD math like the Big Boys had?
>>
>>
> No, this is a fallacy.  Binary arithmetic is as "accurate" as decimal. 
> Handling VERY large numbers in floating point loses some precision, but
> any computer can do multiple-word binary quite well.  And the obvious
> example: division in decimal can still end up with remainders.
> Back in the day, banks were terribly worried about defalcation by the
> guys who maintain the daily interest program.  The classic story is the
> guy who adjusts the code to take those fractional cents that get rounded
> back to the bank and sends 10% to their own account.  Now, there are so
> many really serious ways fraudsters can steal from banks and their
> customers that nobody is too worried about that sort of inside job.

The issue comes about because money is based on decimal fractional
units.  Were we to have 128 cents to the dollar, there would be no
problem.  But one-tenth in decimal is a repeating fraction in base 2,
so no binary float can represent it exactly.
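
A minimal C sketch of the effect, assuming ordinary IEEE 754 binary
doubles (not tied to any particular machine); ten dimes don't sum to
exactly one dollar:

    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;
        int i;

        for (i = 0; i < 10; i++)        /* add one dime, ten times */
            sum += 0.10;

        /* Prints 0.99999999999999988898, not 1.0 exactly. */
        printf("%.20f\n", sum);
        printf("%s\n", sum == 1.0 ? "equal" : "not equal");
        return 0;
    }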

There were two ways of addressing this.

The first is to do arithmetic in scaled integers and post-scale the
result.  Thus, $1.00 is kept as 100 internally (COBOL would have it as
pic 999..9V99).
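
A minimal C sketch of the scaled-integer approach (the variable names
are mine, purely for illustration):

    #include <stdio.h>

    int main(void)
    {
        /* $1.00 is kept as 100 internally; the math is exact. */
        long balance = 100;              /* $1.00  */
        long deposit = 1995;             /* $19.95 */

        balance += deposit;

        /* Post-scale only when printing. */
        printf("$%ld.%02ld\n", balance / 100, balance % 100);
        return 0;
    }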

The second is to do the arithmetic in decimal (BCD) throughout.  When
I wrote the math package for SuperCalc, the demand was that the
arithmetic be done in BCD, so that's what I did.  CBASIC back in the
8080 days did its math in decimal floating point, as did a number of
other BASICs.  I led a team that produced a business BASIC for the 8085
and x86--math was decimal.
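
For illustration only, a minimal C sketch of packed-BCD addition (two
digits per byte); the digit corrections are roughly what the 8080's
DAA instruction does in hardware.  This is not SuperCalc's actual
code:

    #include <stdio.h>

    /* Add two packed-BCD bytes plus a carry-in; each byte holds two
       decimal digits.  Leaves the carry-out in *carry. */
    unsigned char bcd_add(unsigned char a, unsigned char b, int *carry)
    {
        int lo = (a & 0x0F) + (b & 0x0F) + *carry;
        int hi = (a >> 4) + (b >> 4);

        if (lo > 9) { lo -= 10; hi++; }  /* decimal-adjust low digit  */
        *carry = (hi > 9);
        if (hi > 9) hi -= 10;            /* decimal-adjust high digit */
        return (unsigned char)((hi << 4) | lo);
    }

    int main(void)
    {
        int carry = 0;
        unsigned char sum = bcd_add(0x47, 0x85, &carry);  /* 47 + 85 */
        printf("%d%02X\n", carry, sum);  /* prints 132 */
        return 0;
    }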

Indeed, FPGAs have been used for decimal math recently;
cf. https://www.hindawi.com/journals/ijrc/2010/357839/

The NEC V-series x86-compatible CPUs implement decimal (packed-BCD)
string instructions.

There's still a healthy suspicion of binary math in the finance world.

--Chuck
