https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99234

            Bug ID: 99234
           Summary: Regression: wrong result for 1.0/3.0 when
                    -fno-omit-frame-pointer -frounding-math used together
           Product: gcc
           Version: 10.2.1
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: c++
          Assignee: unassigned at gcc dot gnu.org
          Reporter: vz-gcc at zeitlins dot org
  Target Milestone: ---

Please see the following test case minimized by cvise:
---------------------------------- >8 --------------------------------------
// When compiled with
//
//  g++ -Wall -O2 -fno-omit-frame-pointer -frounding-math test.cpp
//
// using x86_64-w64-mingw32-g++ version 10-win32 20210110 (GCC) from Debian
// g++-mingw-w64-x86-64-win32 10.2.1-6+24.1 package or gcc version 10.2.0
// (Rev6, Built by MSYS2 project) this test case computes a wrong result for
// the second division below.
#include <cmath>
#include <cstdio>
#include <cstdlib> // for std::strtod
#include <stdexcept>
#include <string>

double numeric_io_cast(const std::string& from)
{
    char* rendptr;
    double value = std::strtod(from.c_str(), &rendptr);
    if('\0' != *rendptr)
        throw std::logic_error("");

    return value;
}

double numeric_io_cast(char const* from)
{
    return numeric_io_cast(std::string(from));
}

int main(int, char* [])
{
    std::printf("1.0 / 3.0 = %f\n", 1.0 / 3.0); // => 1.0 / 3.0 = 0.333333

    try {
        numeric_io_cast("");
        numeric_io_cast("1e");
    } catch(std::logic_error const&) {
    }

    std::printf("1.0 / 3.0 = %f\n", 1.0 / 3.0); // => 1.0 / 3.0 = 0.000000

    return 0;
}
---------------------------------- >8 --------------------------------------

The comments in the code show the output produced by MinGW gcc 10.2. The same
code worked fine with various versions of gcc 8 and also works with native
Linux gcc 10.2.1. It always works with the 32-bit compiler (which produces
completely different code).

Also, just in case it is helpful (even though I'm afraid it isn't...), the
problem disappears with the 10.x compilers mentioned above if any of the
following is true:

1. You use -O0 or -O3 (but it gives the same results with -O1 and -O2).
2. You don't use -fno-omit-frame-pointer.
3. You don't use -frounding-math.
4. You use 2.0, 4.0 or 8.0 instead of 3.0 (but it fails similarly with 5.0, 7.0
etc).
5. You throw something other than std::logic_error (or a class derived from it,
such as std::invalid_argument); i.e. throwing a simple class, even one deriving
from std::exception, makes the problem disappear.
6. You remove the (clearly unnecessary!) numeric_io_cast() overload taking
char* and rely on implicit conversion doing the same thing.

Looking at the generated code, the problem seems relatively clear: the compiler
assumes the xmm register containing the result is preserved across the call,
when in fact it's clobbered. But I have no idea where to start looking in gcc
to find out why it assumes this and how to fix it.

I'd also like to note that IMO this is a pretty bad bug, because it silently
results in completely wrong behaviour in code which worked perfectly well
before and, of course, it took quite some time before we actually found out
what was going on and why the tests suddenly started failing in "impossible"
ways after upgrading to MinGW 10.
