On Saturday, 9 February 2019 at 03:03:41 UTC, H. S. Teoh wrote:
On Sat, Feb 09, 2019 at 02:12:29AM +0000, Murilo via Digitalmars-d-learn wrote:
Why is it that in C, when I assign the number 99999912343000007654329925.7865 to a double, it prints 99999912342999999470108672.0000, while in D it prints 99999912342999999000000000.0000? Apparently both languages lose some precision (which is part of converting decimal into binary), but why are the results different?

It's not only the decimal-to-binary conversion: a double has only 64 bits in which to store information, and your number has far more digits than can possibly fit. One bit holds the sign and 11 hold the exponent, leaving 52 bits (plus an implicit leading bit) for the significand, so a `double` can really only store about 15-17 decimal digits. Anything beyond that simply doesn't exist in a `double`, so asking to print that many digits will inevitably produce garbage trailing digits. If you round both outputs above to about 15 significant digits, you'll see that they are essentially equal.
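
Here's a minimal D sketch of that effect (the exact trailing digits will vary with the formatting implementation, so don't expect the output to match character for character):

```d
import std.stdio;

void main()
{
    // The literal has 26 significant decimal digits; a double stores
    // only about 15-17 of them.
    double d = 99999912343000007654329925.7865;

    // %.4f asks for far more digits than the double ever held, so
    // everything past roughly the 17th significant digit is an
    // artifact of the formatting routine, not stored data.
    writefln("%.4f", d);
}
```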

As for why the output differs, that's probably just an artifact of the different printing algorithms the two runtimes use to format the number. There may also be a small difference around or after the 15th digit because D implicitly converts to `real` (which on x86 is the 80-bit x87 representation) for some operations and rounds it back, while C operates only on the 64-bit double.
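
One way to convince yourself it's only the printing that differs is to compare the raw bits and then limit the output to digits the double actually stores. A rough D sketch (assuming both compilers parse the literal to the same IEEE 754 value, which you can verify against a C program on the same machine):

```d
import std.stdio;

void main()
{
    double d = 99999912343000007654329925.7865;

    // Reinterpret the double as its underlying 64-bit pattern; if C
    // and D store the same bits, only the decimal rendering differs.
    writefln("bits: 0x%016x", *cast(ulong*) &d);

    // 15 significant digits stays within what a double really holds;
    // at this precision the C and D outputs should agree.
    writefln("%.15g", d);
}
```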


Now, to change the subject a little: all FPs in D turn out to be printed differently than they are in C, and C's output comes out a little more precise than D's. Is this really supposed to happen?

