https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110182

            Bug ID: 110182
           Summary: GCC generates incorrect results for simple Eigen Casts
                    / Subtractions at -O2 or above for a 3 dimensional
                    vector
           Product: gcc
           Version: 13.1.1
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: c++
          Assignee: unassigned at gcc dot gnu.org
          Reporter: lakin at vividtheory dot com
  Target Milestone: ---

Created attachment 55286
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=55286&action=edit
The minimal reproducing example

I am attaching a minimal example program for which gcc produces an incorrect
result. The program attempts to represent a 1D matrix of doubles as two
single-precision (float) matrices: one holding the closest float value and one
holding the remainder.
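
For illustration only (this is a sketch of the kind of split being described,
not the attached reproducer; the names and values here are made up), the
double-to-float decomposition looks roughly like this with Eigen:

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
        // Source data in double precision.
        Eigen::Vector3d d(0.1, 0.2, 0.3);

        // Closest single-precision values.
        Eigen::Vector3f hi = d.cast<float>();

        // Remainder left after rounding to float, also stored as float.
        Eigen::Vector3f lo = (d - hi.cast<double>()).cast<float>();

        std::cout << "hi:\n" << hi << "\nlo:\n" << lo << "\n";
    }

The reported miscompilation involves a cast/subtraction sequence of this kind
once the optimizer is enabled.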

gcc 11 produces correct results at -O1 and -O2, but incorrect results at -O3.
gcc 12 and 13 produce incorrect results at -O2 and -O3, but correct results at
-O1.
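
Assuming the attached file is saved as repro.cpp and the Eigen 3 headers are in
the usual system location (both the file name and include path below are
placeholders), a compile/run line like the following should exercise the
problem:

    g++-13 -O2 -I/usr/include/eigen3 repro.cpp -o repro && ./repro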

I've attached the source file, and am more than happy to provide more
information or intermediate outputs. The issue can also be reproduced on
godbolt: https://godbolt.org/z/j7P5of7xo

More info here:
https://gist.github.com/lakinwecker/9ef9dbde94c018a33f4c33822c6d93ad
