The test case:

    #include <iostream>
    using namespace std;

    int main()
    {
        unsigned int test = 1067320345;
        cout << reinterpret_cast<float &>(test) << endl;
        cout << "reinterpret_cast<float &>(test): "
             << reinterpret_cast<float &>(test) << endl;
    }

When compiled using -g or -O1, the correct output is produced:

    1.2345
    reinterpret_cast<float &>(test): 1.2345

When compiled using -O2, the first reinterpret_cast<> produces an
incorrect result. The actual value produced depends on the OS:

    -NaN
    reinterpret_cast<float &>(test): 1.2345

I tested this on Solaris 8 and on Linux Red Hat WS3, with identical
results. My conclusion is that we're looking at an optimizer bug.

--
           Summary: reinterpret_cast<> yields different (and incorrect)
                    results when using -O2
           Product: gcc
           Version: 4.0.1
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: c++
        AssignedTo: unassigned at gcc dot gnu dot org
        ReportedBy: jason dot elbaum at gmail dot com

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25355