http://gcc.gnu.org/bugzilla/show_bug.cgi?id=53501
Alexander Monakov <amonakov at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2012-05-29
                 CC|                            |amonakov at gcc dot gnu.org
      Known to work|                            |4.3.2
            Summary|incorrect loop optimization |[4.5/4.6/4.7/4.8 Regression]
                   |with -O2                    |scev introduces signed
                   |                            |overflow
     Ever Confirmed|0                           |1

--- Comment #1 from Alexander Monakov <amonakov at gcc dot gnu.org> 2012-05-29 16:00:01 UTC ---
Confirmed, thanks for the report. This is a regression relative to 4.3 on all
active branches (I haven't tested the latest trunk, though).

GCC optimizes out the second loop when VRP proves that its upper bound k is
negative. This happens because scev-cprop performs final value replacement: it
computes the final value of k after the first loop from the scalar evolution
{2, +, 2} over n-1 latch executions as

  ((int) ((unsigned int) n.0_28 + 2147483647) + 1) * 2

(which is a very roundabout way of computing n*2, but that's beside the
point ;) )

AFAICT the signed type is forced in chrec_apply:

      /* "{a, +, b} (x)" -> "a + b*x".  */
      x = chrec_convert_rhs (type, x, NULL);
      res = chrec_fold_multiply (TREE_TYPE (x), CHREC_RIGHT (chrec), x);
      res = chrec_fold_plus (type, CHREC_LEFT (chrec), res);

Apropos, I find it quite strange that fold-const folds (n + 0xffffffff) * 2
into (n + 0x7fffffff) * 2, as opposed to n*2 + 0xfffffffe. Stopping my
investigation here.