On Tuesday, 29 July 2014 at 10:40:33 UTC, John Colvin wrote:
> On Tuesday, 29 July 2014 at 09:40:27 UTC, Marc Schütz wrote:
>>     assert(a >= 0);
>>     return a < 0;
>>
>> is equivalent to
>>
>>     assert(a >= 0);
>>     return true;
>>
>> but only in non-release mode. In release mode, this effectively
>> becomes
>>
>>     return a < 0;
>>
>> which is _not_ equivalent to
>>
>>     return true;
>>
>> I believe this is what Ola is protesting about, and I agree with
>> him. Such optimizations must only happen if the check stays.
>
> You mean assert(a < 0) or assert(!(a >= 0)), right?

Ah, yes, of course. Alternatively, `return false;`.

> In a correct program (a necessary but not sufficient condition for
> which is to not violate its asserts) it is the same. The program is
> in error if a >= 0, whether the assert is compiled in or not.
> Running in debug mode can simply mean "check my assumptions".
Yes, it only breaks if the program is incorrect. The difference
is that it asserts in non-release mode, and just goes on and
produces garbage in release mode.
Now, it's of course a valid standpoint to say that your program
is going to break anyway, because your assumptions (that you
expressed by `assert`) were wrong. But on the other hand, you
additionally inserted checks (`a < 0`) to test for it. The above
examples are artificial, but in reality these additional checks
could be located in an external library, and could have been
written by a different author.
It might not be such a good idea to circumvent the checks. In
this sense, asserts would indeed make the library code - which
might have been completely safe by itself - suddenly unsafe.
Of course, this is only relevant if the compiler first optimizes,
and then removes the asserts (in release mode). If it removes the
asserts first, and optimizes after, everything is fine.