On Sat, 13 Apr 2013 02:07:39 -0400, Jeremy DeHaan <dehaan.jerem...@gmail.com> wrote:

I have a function that calculates a point on a circle based on a specified radius and total number of points. The only point in question is the first one: I get different values depending on whether the code is compiled in Release or Debug mode.

Here is some code:

Vector2f getPoint(uint index)
{
    import std.math : cos, sin;

    static const(float) pi = 3.141592654f;

    // Angle of this point; index 0 starts at the top of the circle (-pi/2).
    float angle = index * 2 * pi / m_pointCount - pi / 2;

    float x = cos(angle) * m_radius;
    float y = sin(angle) * m_radius;

    return Vector2f(m_radius + x, m_radius + y);
}

Vector2f is simply a struct that has 2 floats.
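Presumably something along these lines (the actual definition isn't shown, so this is just an assumption about its shape):

struct Vector2f
{
    float x;
    float y;
}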

In Debug mode this works as expected. Let's say the radius is 50: getPoint(0) returns a vector that prints X: 50 Y: 0. In Release mode, for some reason, the same call returns a vector that prints X: 50 Y: 4.77673e-14. Now, 4.77673e-14 is a crazy small number that might as well be 0, but why the difference?

I suspect the issue is floating point error. On x86 hardware, the FPU computes internally with higher-precision 80-bit floating point values. When a result is stored back into a 32-bit float, the extra precision is rounded away.
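Here's a self-contained sketch of that effect (it plugs in the values from your example: index 0, so angle works out to -pi / 2, and radius 50; D's real maps to the FPU's 80-bit extended precision on x86):

import std.stdio;
import std.math : sin;

void main()
{
    static const(float) pi = 3.141592654f;

    // getPoint(0) computes angle = 0 * 2 * pi / m_pointCount - pi / 2,
    // which is just -pi / 2.
    float angle = -pi / 2;

    float truncated = sin(angle);             // rounded back to 32 bits
    real  extended  = sin(cast(real) angle);  // kept at extended precision

    writeln(50 + truncated * 50); // prints 0
    writeln(50 + extended * 50);  // prints something like 4.77673e-14
}

The float pi constant is slightly larger than the true pi, so sin(-pi / 2) is a hair above -1. At 32-bit precision that rounds to exactly -1, giving Y: 0; at extended precision the tiny difference survives and shows up as 4.77673e-14.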

In Debug mode, without optimization, the compiler does exactly what the code says: it stores each intermediate result and uses the truncated stored value on the next line. With optimizations turned on, the compiler may take shortcuts that let it keep the higher-precision value in a register and use it directly on the next line.
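If you need the two builds to agree, one option is to not rely on exact zeros at all. A sketch (snapToZero is a hypothetical helper, not anything from a library):

import std.math : fabs;

// Hypothetical helper: collapse results within eps of zero to exactly zero.
float snapToZero(float v, float eps = 1e-6f)
{
    return fabs(v) < eps ? 0.0f : v;
}

// Usage inside getPoint:
//     float y = snapToZero(sin(angle) * m_radius);

Alternatively, compare coordinates against expected values with a tolerance instead of ==.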

-Steve
