http://gcc.gnu.org/bugzilla/show_bug.cgi?id=43716
--- Comment #45 from Jakub Jelinek &lt;jakub at gcc dot gnu.org&gt; 2012-12-18 10:23:59 UTC ---
I've bisected this and the bug went away (or has gone latent) with
http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=189915

There is a huge number of changes in the *.optimized dumps, mainly D.NNNNN DECL_UID numbers but also SSA_NAME versions. If I abstract from those, the only real change is in a comparison:

-  x_15 = MIN_EXPR <x_572, x_317>;
+  x_382 = MIN_EXPR <x_572, x_317>;
...
-  x_89 = MIN_EXPR <x_17, x_552>;
+  x_95 = MIN_EXPR <x_17, x_372>;
   x_320 = aaa11.v0011;
-  x_384 = MAX_EXPR <x_89, x_320>;
-  aaa13.dt = x_384;
-  if (prephitmp.1593_1033 > 2)
+  x_15 = MAX_EXPR <x_95, x_320>;
+  aaa13.dt = x_15;
+  if (prephitmp.1593_1034 > 2)
...
<bb 100>:
-  if (x_342 < x_384)
+  if (x_15 > x_342)

(x_342 is the same thing in both cases; the < vs. > flip is probably the result of canonicalizing the comparison based on SSA_NAME versions.)

Anyway, is it really worth keeping this open as P1 on a questionable testcase (questionable mainly in whether the testcase assumes IEEE 754 semantics, which would make -ffast-math invalid for it) when the problem is just latent (and it is unclear whether it is a compiler issue at all)?