On Nov 27, 2010, at 05:25, Simen kjaeraas wrote:
Don <nos...@nospam.com> wrote:

The difference was discovered through the unit tests for the
mathematical Special Functions that will be included in the next
compiler release. The discrepancy was caught only because of several
features of D:

- built-in unit tests (which encourage tests to be run on many machines)

- built-in code coverage (the tests include extreme cases, simply
because I was trying to increase the code coverage to high values)

- D supports the hex format for floats. Without this feature, the
discrepancy would have been blamed on differences in the
floating-point conversion functions in the C standard library. (A
short sketch of these features in use follows this list.)
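
A minimal sketch of how those three features combine (a hypothetical
test case, assuming core.math.yl2x, the x87 fyl2x intrinsic, and
std.math.LN2; the operand is the one discussed further down; compile
with dmd -unittest -cov):

    import std.math  : LN2;   // natural log of 2
    import core.math : yl2x;  // y * log2(x), so yl2x(x, LN2) == ln(x)

    unittest
    {
        // The hex-float literal pins down the operand's exact bit pattern,
        // so a cross-machine difference cannot be blamed on decimal-to-binary
        // conversion in the C standard library.
        real x = 0x1.0076fc5cc7933866p+40L;

        // An extreme-value case of the sort added while pushing up coverage.
        assert(yl2x(x, LN2) > 0);
    }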

This experience reinforces my belief that D is an excellent language
for scientific computing.

This sounds like a great sales argument. Gives us some bragging rights. :p


Thanks to David Simcha and Dmitry Olshansky for help in tracking this
down.

Great job!

Now, which of the results is correct, and have AMD and Intel been informed?


Intel is correct.

  yl2x(0x1.0076fc5cc7933866p+40L, LN2)
   == log(9240117798188457011/8388608)
   == 0x1.bba4a9f774f49d0a64ac5666c969fd8ca8e...p+4
                         ^
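
To check this on your own CPU, a minimal sketch (again assuming
core.math.yl2x and std.math.LN2; the %a format prints the result as a
hex float, so the differing last bit is visible directly):

    import core.math : yl2x;    // x87 fyl2x: y * log2(x)
    import std.math  : LN2;     // ln(2), so yl2x(x, LN2) == ln(x)
    import std.stdio : writefln;

    void main()
    {
        real x = 0x1.0076fc5cc7933866p+40L;
        // The last bit of the 80-bit result differs between the two FPUs.
        writefln("%a", yl2x(x, LN2));
    }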

