------- Comment #14 from tony3 at GarlandConsulting dot us 2010-01-27 20:03 -------
Yes, I'm now aware that gcc "meets the minimal requirements" of the C++ standard. That isn't my point. My point is whether what it does is acceptable behavior given that it produces no warnings or errors. I'm suggesting that it should either:
1. Be modified to work with all enumeration values (in or out of range) that fit within a byte. This would make the compiler OPERATE THE SAME WAY in the lion's share of situations.

2. Issue a warning or error when the programmer accidentally relies upon a value outside the defined enumeration range, period. (Certainly when the enumeration happens to have 3, 7, 15, 31, 63, 127, etc. as its largest value, but preferably in all cases.)

The issue is that gcc, in its present operation, does not help the programmer and is in fact misleading. It should either be practical and support what many programmers expect, no matter what the optimization level, or it should flag the construct as bad code and refuse to compile it at any optimization level. The situation where it silently accepts and runs the code in the 99% case, but just as silently converts it into an infinite loop in the 1% case, is simply a recipe for trouble. (A sketch of the kind of loop in question appears below.)

So does the C++ standard say it is acceptable for the compiler to drop support for an out-of-range enumeration value without the programmer having any idea it has happened, while supporting out-of-range enumeration values in other situations? In other words, if gcc is so provably correct according to the standard in refusing to support an out-of-range-by-one enumeration value, why does it accept the code and produce the expected behavior at lower optimization levels? Couldn't the fact that it accepts the code without complaint in the majority of cases be considered a bug?

-- 
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=42810
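To make the scenario concrete, here is a minimal sketch of the kind of loop under discussion (illustrative only; the names Color, Red, Green, Blue, Alpha, and count_colors are my own and not taken from the test case attached to this report):

enum Color { Red, Green, Blue, Alpha };   // largest enumerator is 3 (a 2^n - 1
                                          // value), so the value range of the
                                          // enumeration is [0, 3]

int count_colors()
{
    int n = 0;
    // The exit test only fails once 'c' holds 4, a value outside the
    // enumeration's range.  At -O0 the loop terminates as the programmer
    // expects; with optimization the compiler may assume c <= Alpha always
    // holds and turn this into an infinite loop.
    for (Color c = Red; c <= Alpha; c = static_cast<Color>(c + 1))
        ++n;
    return n;                             // expected to return 4
}

At lower optimization levels this returns 4 as expected; at higher levels the loop may never terminate, which is exactly the silent behavior change described above.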