------- Comment #13 from mmitchel at gcc dot gnu dot org  2010-04-20 20:43 -------
I think this optimization is valuable in some cases, so I think this is a
question of defaults, rather than of behavior per se.  While it may be useful
for some security-related applications not to eliminate the checks, it is
clearly useful in other contexts to eliminate them.  "Optimizing away bounds
checking" is certainly not in and of itself a bug.

In some sense, this is similar to strict-aliasing, or signed-does-not-overflow
optimization.  There's a lot of code out there that depends on something that's
not portable.  It's not useful to break that code, but it is useful to be able
to optimize.  (One case where I think this optimization is likely to be
valuable is in switch/if statements where you're cascading through values; it
can allow you to eliminate the catch-all case at the end, which is certainly a
win for code size.)
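
As a rough illustration of that switch case (the enum and function names here
are made up, not from the bug itself): if the compiler may assume the enum
value is always one of its enumerators, the catch-all path can be dropped.

  enum Color { Red, Green, Blue };

  const char *name(Color c) {
    switch (c) {
      case Red:   return "red";
      case Green: return "green";
      case Blue:  return "blue";
    }
    /* If the compiler may assume c is always one of the three enumerators,
       this catch-all (and the code behind it) can be optimized away;
       otherwise it must be kept to cover out-of-range values cast into
       Color.  */
    return "unknown";
  }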

I think the standard should say that you get an indeterminate value when you
convert an out-of-range input to an enum (and that such a value may be outside
the bounds of the enum range), but that the program is not allowed to go and
reformat your disk.  Ada has a notion of errors like this; the damage is
bounded.  This doesn't need to be in the dereference-a-NULL-pointer category of
behavior, but neither should it have any guaranteed semantics, including that
the value is masked down.
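
A small sketch of what that wording would mean in practice (the enum and the
example value are invented for illustration):

  enum E { A, B, C };   // enumerators 0, 1, 2

  bool is_a(int i) {
    E e = static_cast<E>(i);   // e.g. i == 17: out of range for E
    /* Under the proposed rule, e holds an indeterminate value that may lie
       outside E's range; the program may not do arbitrary damage, but there
       is also no guarantee that the value has been masked down into E's
       range.  */
    return e == A;
  }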

So, I think we should have a C++ front-end option that says "interpret enums
strictly".  I think it should be off by default for compatibility with existing
code.
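
For what it's worth, a hypothetical command line for such an option might look
like this (the spelling -fstrict-enums is just an assumed name, not something
this comment fixes):

  g++ -O2 -fstrict-enums test.cc   # assume enums only hold in-range values
  g++ -O2 test.cc                  # default: existing code keeps working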


-- 

mmitchel at gcc dot gnu dot org changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |mmitchel at gcc dot gnu dot
                   |                            |org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=43680
