On Friday, 1 August 2014 at 21:50:59 UTC, Jonathan M Davis wrote:
On Friday, 1 August 2014 at 20:30:19 UTC, Daniel Gibson wrote:
On 01.08.2014 at 22:16, eles wrote:
On Friday, 1 August 2014 at 17:43:27 UTC, Timon Gehr wrote:
On 08/01/2014 07:19 PM, Sebastiaan Koppe wrote:
The debug and the release build may be subjected to different
input and hence traverse different traces of abstract states.
It is not valid to say that an assertion will never fail just
because it hasn't failed yet.
Yes, but it is the same for C apps. There, you have no
assertions in the release build, the release build is optimized
(I imagine very few would use -O0 on it...), and then the
segfault happens.
But there, the checks are not optimized away because of the assert:
assert(x != NULL);
if(x != NULL) { ... }
In C, the if check won't be optimized away in NDEBUG builds; in
equivalent D code it would be, because the assert would make the
compiler assume that x can never be NULL at that point.
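To make that concrete, here is a minimal D sketch (find is just an
illustrative helper; the elision assumes the proposed -release
semantics where asserts become optimizer assumptions):

int* find(int[] haystack, int needle)
{
    foreach (ref x; haystack)
        if (x == needle)
            return &x;   // points into the caller's array
    return null;         // not found
}

void zero(int[] data)
{
    int* p = find(data, 42);
    assert(p !is null);  // -release: compiled out and, under the
                         // proposed semantics, taken as a guarantee
    if (p is null)       // C under NDEBUG keeps this check;
        return;          // D could legally delete it, early return and all,
    *p = 0;              // so a null p would be dereferenced here
}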
And why is that a problem? By definition, if an assertion
fails, your code is in an invalid state,
Only in an ideal world. In practice, the condition in the
assertion could itself be incorrect; it could be a leftover from
an earlier refactoring, for instance.
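For instance (hypothetical code, purely to illustrate the hazard):

struct Config { string name; }

Config* defaultConfig()
{
    static Config def = Config("default");
    return &def;
}

// After a refactoring this returns null for unknown names;
// originally it always returned a valid pointer.
Config* getConfig(string name)
{
    static Config known = Config("app");
    return name == "app" ? &known : null;
}

void load(string name)
{
    Config* cfg = getConfig(name);
    assert(cfg !is null);      // stale: written before the refactoring
    if (cfg is null)           // recovery path added later; an optimizer
        cfg = defaultConfig(); // that trusts the assert may delete it
    // ... use cfg ...
}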
and by compiling out assertions, you're basically assuming that
they all pass, so your code's already screwed if the assertion
would have failed had it been compiled in. I don't see how
having the compiler then use that assertion for optimizations
really costs you anything. Worst case, it just makes already
invalid code more invalid. You're screwed regardless if the
assertion would have failed. And if it would have succeeded,
then you just got an efficiency boost thanks to the assertion.
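For example, something like this (a sketch; it assumes the
optimizer actually exploits the assert):

// In @safe code the three reads below are bounds-checked; an
// optimizer that takes the assert as a fact can prove those
// checks dead and drop them.
int sumFirst3(const(int)[] a) @safe
{
    assert(a.length >= 3);
    return a[0] + a[1] + a[2];
}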
Thinking about it, I'm actually wondering if I should use
assertions _more_ so that the compiler might be able to do
better optimizations in -release. The extra cost in non-release
builds could be worth that extra boost in -release, and as far
as correctness goes, it never hurts to have more assertions.
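E.g. an assertion written purely for the optimizer's benefit
(hypothetical sketch):

int opcodeCost(int op)
{
    assert(op >= 0 && op < 3); // documents the valid range and, under
                               // the proposed semantics, could let the
                               // optimizer emit a jump table with no
                               // range test
    switch (op)
    {
        case 0: return 1;
        case 1: return 3;
        case 2: return 5;
        default: assert(0);    // unreachable if the assert above holds
    }
}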
- Jonathan M Davis