Another general point is that conceptually this is not
an optimization issue at all.

The programmer writes code that is undefined according
to the standard.

Whatever expectation the programmer has for this code
is based on either a fundamental misunderstanding of
the semantics of C, or there is a simple bug.

When you write such code, the compiler may do anything,
whether or not optimization is turned on. It's not a
security risk that the compiler fails to provide some
kind of behavior that the standard does not define.

The security risk is in writing undefined code; such
code has a (potentially serious) bug. Sure, this is
indeed a case where switching from one compiler to
another, one architecture to another, one version
of a particular compiler to another, one set of
compilation switches to another etc can change the
behavior. It's even possible for such code to behave
differently on the same day with none of these
conditions changed, depending e.g. on what was
run before or is running at the same time.

If you write in a language which is not 100% safe
in this regard, you do have to worry about safety
considerations caused by undefined code, and depend
on proofs, code reviews, tools etc to ensure that
your code is free of such defects. That's worrisome,
but any bugs in supposedly secure/safe code are
worrisome, and there is no real reason to single
out the particular class of bugs corresponding to
use of undefined semantics in C.

To me, the whole notion of this vulnerability node
is flawed in that respect. You can write a lengthy
and useful book on pitfalls in C that must be
avoided, but I see no reason to turn such a book
into a CERT advisory, let alone pick out a single
arbitrary example on a particular compiler!
