Here's hoping this doesn't get marked as a duplicate of 323, since the summary contains the keywords "floating-point" and "error."  :)

With the following trivial program, which simply performs 20 subtractions, a logic error occurs during a comparison against a floating-point value: the test o < 0.05 evaluates as true when o == 0.05.

This is reproducible on multiple processors.  I've tried it with gcc 3.3 on Mac OS X 10.3 (PowerPC G4) as well as gcc 3.3 on Red Hat 9.0 (i686) and run into the same result.  (For kicks, I did try -ffloat-store, as suggested in the 323 thread, but it had no effect.)  The problem occurs at all optimization levels I tried.

#include <stdio.h>

int main(void)
{
    float o = 1.0;

    while (1)
    {
        printf("o: %f\n", o);
        if (o < 0.05) break;  /* expected to break only once o reaches 0.0 */

        o -= 0.05;
    }

    printf("final o: %f\n", o);
    return 0;
}

-- 
           Summary: Floating-point error with simple subtraction.
           Product: gcc
           Version: 3.3
            Status: UNCONFIRMED
          Severity: normal
          Priority: P2
         Component: c
        AssignedTo: unassigned at gcc dot gnu dot org
        ReportedBy: tob at idlehands dot net
                CC: gcc-bugs at gcc dot gnu dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19177
