On 11/02/2013 01:56 PM, bearophile wrote:
> Timon Gehr:
>
>> Well, if there is a formal proof of correctness, checking for overflow
>> at runtime is as pointless as limiting oneself to a language without
>> undefined behaviour in its basic semantics.
>
> Perhaps you remember the funny quote by Knuth: "Beware of bugs in the
> above code; I have only proved it correct, not tried it."

> Proofs of correctness are usually limited to certain subsystems, ...

As long as additional ad-hoc techniques for error avoidance are fruitful, the formal proof of correctness does not cover enough cases. (Of course, one may not be able to construct such a proof.)
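To make the point concrete (a sketch in C, since the thread fixes no particular language): a runtime check is exactly such an ad-hoc safety net, catching cases a partial proof does not cover.

```c
#include <limits.h>

/* Checked signed addition: returns nonzero on overflow and writes the
   sum through *out only when it is representable. The pre-check avoids
   ever evaluating an overflowing a + b, which would itself be undefined
   behaviour in C. */
int checked_add(int a, int b, int *out)
{
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b))
        return 1; /* would overflow */
    *out = a + b;
    return 0;
}
```

GCC and Clang also expose this as `__builtin_add_overflow`; the explicit form above assumes nothing beyond standard C.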

> This is a common strategy for systems written in Spark-Ada, and this
> is done even by NASA (and indeed was the cause of one famous crash of
> a NASA probe).

Well, I find it funny to consider a methodology adequate in hindsight when, by your own account, it has resulted in a crash. Were the techniques advocated by Walter thoroughly applied in that project?

> And generally in high integrity systems you don't want to use a language
> with undefined behaviour in its basic constructs because such language
> is harder to test for the compiler writer too. If you take a look at
> blogs that today discuss the semantics of C programs you see there are
> cases where the C standard is ambiguous and the GCC/Intel/Clang
> compilers behave differently.
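For a concrete instance (illustrated in C; the blogs in question are not named in the thread): unsigned wraparound is fully defined, while signed overflow is undefined, and optimizers exploit the latter. That is precisely where GCC, Intel and Clang can diverge on the same source.

```c
#include <limits.h>

/* Well defined: unsigned arithmetic wraps modulo 2^N (C11 6.2.5p9),
   so this reliably returns 0 on every conforming compiler. */
unsigned wraps_to_zero(void)
{
    return UINT_MAX + 1u;
}

/* Undefined: signed overflow (C11 6.5p5). A compiler may assume
   x + 1 never overflows and fold the comparison to "always 1", so
   calling this with x == INT_MAX can behave differently across
   compilers and optimisation levels. */
int looks_always_true(int x)
{
    return x + 1 > x;
}
```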

None of those compilers is proven correct.

> This is a fragile foundation you don't want to build high integrity
> software on.

Therefore, obviously.

> It took decades to write a certified C compiler.


No, it took four years. CompCert was started in 2005.
