On 6/30/2014 4:57 AM, Don wrote:
Many people seem to have the bizarre idea that floating point is less accurate
than integer arithmetic. As if storing a value into a double makes it instantly
"fuzzy", or something.
In fact, provided that the precision is large enough, every operation that is
exact in integers is exact in floating point as well.
And if you perform a division using integers, you've silently lost precision.
So I'm not sure what benefit you'd gain by eschewing floating point.
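
(As a quick illustration of that division point, a minimal sketch with made-up values: integer division silently drops the remainder, while the same division done in double keeps it.)

// Minimal sketch, hypothetical values: long division truncates silently,
// while the same division in double is exact here, since 3.5 is representable.
public class DivisionDemo {
    public static void main(String[] args) {
        long a = 7, b = 2;
        long intQuotient = a / b;               // 3 -- the remainder is discarded
        double fpQuotient = (double) a / b;     // 3.5 -- nothing lost
        System.out.println(intQuotient + " vs " + fpQuotient);  // prints "3 vs 3.5"
    }
}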

1. 64 bit longs have more precision than 64 bit doubles, which carry only a 53 bit significand (see the sketch below these two points).

2. My business accounts have no notion of fractional cents, so there's no reason to confuse the bookkeeping with them.
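
(A minimal sketch of point 1, with a hypothetical balance: the first long past 2^53 no longer survives a round trip through double.)

// Minimal sketch, hypothetical value: 2^53 + 1 fits in a 64-bit long but has
// no exact double representation, so converting out and back loses one cent.
public class LongVsDouble {
    public static void main(String[] args) {
        long cents = 9_007_199_254_740_993L;   // 2^53 + 1, imagine a balance in cents
        double d = cents;                      // nearest double is 9_007_199_254_740_992.0
        long back = (long) d;
        System.out.println(cents == back);     // prints false -- one cent silently lost
    }
}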

I understand that for purposes of calculating interest, you'd definitely want the intermediate answers to be in floating point. But when posting to an account, you want cents.
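
(A minimal sketch of that split, with hypothetical figures: the interest is computed in double, but only a whole number of cents is posted.)

// Minimal sketch, hypothetical figures: intermediate interest math in double,
// rounded to whole cents at the moment it is posted to the account.
public class InterestPosting {
    public static void main(String[] args) {
        long balanceCents = 1_234_567L;                            // $12,345.67 held as integer cents
        double annualRate = 0.0425;                                // 4.25% per year, made up
        double interestCents = balanceCents * annualRate / 12.0;   // one month's interest, fractional cents
        long postedCents = Math.round(interestCents);              // 4372 -- what actually hits the account
        System.out.printf("computed %.4f, posted %d cents%n", interestCents, postedCents);
    }
}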

And these days, dealing with trillions of dollars, one is getting awfully close to the max precision of doubles :-)
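
(For a rough sense of how close, assuming the ledger is kept in whole cents: doubles represent every integer exactly only up to 2^53, which comes to about $90 trillion.)

// Rough bound: 2^53 cents is the last point at which a double can still
// represent every whole-cent amount exactly -- roughly 90 trillion dollars.
public class CentLimit {
    public static void main(String[] args) {
        double maxExactCents = Math.pow(2, 53);                          // 9,007,199,254,740,992
        System.out.printf("~%.1f trillion dollars%n", maxExactCents / 100 / 1e12);
    }
}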
