Re: Phobos unit testing uncovers a CPU bug

2010-11-27 Thread KennyTM~

On Nov 27, 10 05:25, Simen kjaeraas wrote:

Don nos...@nospam.com wrote:


The difference was discovered through the unit tests for the
mathematical Special Functions which will be included in the next
compiler release. Discovery of the discrepancy happened only because
of several features of D:

- built-in unit tests (encourages tests to be run on many machines)

- built-in code coverage (the tests include extreme cases, simply
because I was trying to increase the code coverage to high values)

- D supports the hex format for floats. Without this feature, the
discrepancy would have been blamed on differences in the
floating-point conversion functions in the C standard library.

This experience reinforces my belief that D is an excellent language
for scientific computing.


This sounds like a great sales argument. Gives us some bragging rights. :p



Thanks to David Simcha and Dmitry Olshansky for help in tracking this
down.


Great job!

Now, which of the results is correct, and have AMD and Intel been informed?



Intel is correct.

  yl2x(0x1.0076fc5cc7933866p+40L, LN2)
   == log(9240117798188457011/8388608)
   == 0x1.bba4a9f774f49d0a64ac5666c969fd8ca8e...p+4
                            ^
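One way to double-check which result is correctly rounded is to redo the computation at high precision. The sketch below is not from the thread; it uses Python's decimal module as an independent check, reconstructing both 80-bit extended results exactly from the hex literals above:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60

# x = 0x1.0076fc5cc7933866p+40 == 9240117798188457011 / 2**23
x = Decimal(9240117798188457011) / Decimal(2**23)
exact = x.ln()  # yl2x(x, LN2) == ln(x), here to 60 decimal digits

# The two 80-bit extended results, reconstructed exactly
# (a 64-bit significand scaled so the value is m/2**64 * 2**4):
intel = Decimal(0x1bba4a9f774f49d0a) / Decimal(2**60)
amd   = Decimal(0x1bba4a9f774f49d0c) / Decimal(2**60)

# Intel's result is the closer of the two, i.e. correctly rounded
assert abs(exact - intel) < abs(exact - amd)
```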




Re: Phobos unit testing uncovers a CPU bug

2010-11-27 Thread Dmitry Olshansky

On 26.11.2010 23:02, Don wrote:
The code below compiles to a single machine instruction, yet the 
results are CPU manufacturer-dependent.


import std.math;

void main()
{
 assert( yl2x(0x1.0076fc5cc7933866p+40L, LN2)
== 0x1.bba4a9f774f49d0ap+4L); // Pass on Intel, fails on AMD
}
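For readers unfamiliar with the instruction: the underlying fyl2x computes y * log2(x), so with y = LN2 the call above is simply ln(x). A Python analogue of the definition (an illustration only, not the x87 code path):

```python
import math

def yl2x(x, y):
    # Same definition as the x87 fyl2x instruction: y * log2(x)
    return y * math.log2(x)

# With y = ln(2), yl2x reduces to the natural logarithm
x = 1234.5678
assert math.isclose(yl2x(x, math.log(2)), math.log(x), rel_tol=1e-15)
```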

The results for yl2x(0x1.0076fc5cc7933866p+40L, LN2) are:

Intel:  0x1.bba4a9f774f49d0ap+4L
AMD:    0x1.bba4a9f774f49d0cp+4L

The least significant bit is different. This amounts to only a 
fraction of a bit of error (that is, it's hardly important for 
accuracy; for comparison, sin and cos on x86 lose nearly sixty bits 
of accuracy in some cases!). Its only importance is that it is an 
undocumented difference between manufacturers.
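The size of the discrepancy can be read straight off the two hex literals. The x87 extended format has a 64-bit significand (1 integer bit + 63 fraction bits), so of the 16 hex fraction digits printed, the last carries only 3 significant bits: its low bit is always zero, and a difference of 2 in that digit is exactly one unit in the last place. A trivial check (a sketch, not from the thread):

```python
# Full 64-bit significands of the two results, as integers
intel = 0x1bba4a9f774f49d0a
amd   = 0x1bba4a9f774f49d0c

# The last hex digit of an x87 extended significand holds only 3
# significant bits, so a step of 2 there is exactly one ulp.
assert amd - intel == 2
```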


The difference was discovered through the unit tests for the 
mathematical Special Functions which will be included in the next 
compiler release. Discovery of the discrepancy happened only because 
of several features of D:


- built-in unit tests (encourages tests to be run on many machines)

- built-in code coverage (the tests include extreme cases, simply 
because I was trying to increase the code coverage to high values)


- D supports the hex format for floats. Without this feature, the 
discrepancy would have been blamed on differences in the 
floating-point conversion functions in the C standard library.
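The point about hex floats generalizes beyond D: a hexadecimal literal pins down the bit pattern exactly, so no decimal-conversion rounding can mask or mimic a discrepancy. Python's float.hex/float.fromhex (an analogue of D's hex float literals, shown here only for illustration) round-trip exactly:

```python
x = 0.1                       # not exactly representable in binary
s = x.hex()                   # '0x1.999999999999ap-4'
assert float.fromhex(s) == x  # the hex form round-trips bit-for-bit
```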


This experience reinforces my belief that D is an excellent language 
for scientific computing.


Thanks to David Simcha and Dmitry Olshansky for help in tracking this 
down.

Glad to help!
I was genuinely intrigued, because no more than a few weeks ago I 
discussed with a friend of mine the possibility of differences in FP 
calculations between AMD and Intel.
You see, his scientific app yielded different results when run at 
home and at work, which is a frustrating experience. It was exactly the 
same binary, written in Delphi (no C run-time involved, and so on), and 
the environment was pretty much the same... I suggested checking the CPU 
vendors just in case... and of course, they were different.


In the meantime, I sort of ported the test case to M$ C++ inline asm 
and posted it on the AMD forums; let's see what they have to say.
http://forums.amd.com/forum/messageview.cfm?catid=319&threadid=142893&enterthread=y


--
Dmitry Olshansky



Re: Phobos unit testing uncovers a CPU bug

2010-11-27 Thread Kagamin
Don Wrote:

 The great tragedy was that an early AMD processor gave much more accurate 
 sin and cos than the 387. But people complained that it was different from 
 Intel! So their next processor duplicated Intel's hopelessly wrong trig 
 functions.

The same question goes to you: why do you call this a bug?


Re: Phobos unit testing uncovers a CPU bug

2010-11-27 Thread Lionello Lunesu

On 28-11-2010 5:49, Dmitry Olshansky wrote:

On 26.11.2010 23:02, Don wrote:

The code below compiles to a single machine instruction, yet the
results are CPU manufacturer-dependent.

[snip]

Glad to help!

[snip]

In the meantime, I sort of ported the test case to M$ C++ inline asm
and posted it on the AMD forums; let's see what they have to say.
http://forums.amd.com/forum/messageview.cfm?catid=319&threadid=142893&enterthread=y




http://forums.amd.com/forum/messageview.cfm?catid=29&threadid=135771
This post also talks about a fyl2x bug. Wonder if it's the same bug.

L.


Re: Phobos unit testing uncovers a CPU bug

2010-11-27 Thread Don

Kagamin wrote:

Don Wrote:

The great tragedy was that an early AMD processor gave much more accurate 
sin and cos than the 387. But people complained that it was different from 
Intel! So their next processor duplicated Intel's hopelessly wrong trig 
functions.


The same question goes to you: why do you call this a bug?


The Intel CPU gives the correct answer, but AMD's is wrong. They should 
both give the correct result.