Timon Gehr:

Well, if there is a formal proof of correctness, checking for overflow at runtime is as pointless as limiting oneself to a language without undefined behaviour in its basic semantics.

Perhaps you remember the funny quote by Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it."

Proofs of correctness are usually limited to certain subsystems, because writing a proof for the whole system is usually too hard, takes too much time/money, or because you have to interface with modules written by subcontractors. Often there is an "outer world" you are not including in your proof. This is a common strategy for systems written in SPARK Ada, and it is used even by NASA (and a faulty interface between such modules was indeed the cause of one famous crash of a NASA probe). So you rely on the language and on simpler means to ensure the safety of the interfaces between the sections of the program.
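To make the idea concrete, here is a minimal sketch (not from any real project) of such a boundary check in C: `verified_divide` stands in for a hypothetical routine whose correctness has been proved under a precondition, and the wrapper re-checks that precondition at runtime because the callers live in the unproven "outer world".

#include <stdio.h>

/* Hypothetical routine whose correctness has been formally proved
   under the precondition divisor != 0. */
static int verified_divide(int dividend, int divisor)
{
    return dividend / divisor;
}

/* Boundary wrapper: the "outer world" (subcontractor code, sensor
   readings, parsed input) is not covered by the proof, so the
   precondition is re-checked at runtime before entering the
   verified core.  Returns 1 on success, 0 on rejection. */
static int safe_divide(int dividend, int divisor, int *result)
{
    if (divisor == 0)
        return 0; /* reject instead of trusting unproven callers */
    *result = verified_divide(dividend, divisor);
    return 1;
}

int main(void)
{
    int r;
    if (safe_divide(10, 2, &r))
        printf("%d\n", r);
    return 0;
}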

Another purpose of the run-time tests is to guard against random bit flips, caused by radiation, cosmic rays, interference, thermal noise, etc. Such run-time tests are present in the probes on Mars, because even space-hardened electronics sometimes errs (and relying on other back-up means alone is not enough, as I explained in the previous post).
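One common shape for such a guard is the duplicate-and-compare technique: keep a redundant, complemented copy of each critical value and verify the pair on every read. This is a generic sketch of the idea in C, not code from any actual probe:

#include <stdint.h>
#include <stdio.h>

/* A critical value stored twice: plain and bitwise-complemented.
   A single bit flip in either copy breaks the invariant
   value == ~shadow. */
typedef struct {
    uint32_t value;
    uint32_t shadow; /* always kept equal to ~value */
} hardened_u32;

static void hardened_set(hardened_u32 *h, uint32_t v)
{
    h->value = v;
    h->shadow = ~v;
}

/* Returns 1 and writes *out if the two copies still agree,
   0 if corruption was detected. */
static int hardened_get(const hardened_u32 *h, uint32_t *out)
{
    if (h->value != (uint32_t)~h->shadow)
        return 0; /* bit flip detected; caller can re-read or reset */
    *out = h->value;
    return 1;
}

int main(void)
{
    hardened_u32 h;
    uint32_t v;
    hardened_set(&h, 42);
    if (hardened_get(&h, &v))
        printf("ok: %u\n", v);
    return 0;
}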

And generally in high integrity systems you don't want to use a language with undefined behaviour in its basic constructs, because such a language is harder to test, for the compiler writer too. If you take a look at the blogs that discuss the semantics of C programs today, you see there are cases where the C standard is ambiguous and the GCC/Intel/Clang compilers behave differently. That is a fragile foundation you don't want to build high integrity software on. It took decades to get a certified C compiler (CompCert).
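A tiny, standard illustration of the problem (not from the original post): signed integer overflow is undefined behaviour in C, so a compiler is allowed to assume it never happens, and optimized and unoptimized builds of the same source can print different answers.

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int x = INT_MAX;
    /* Signed overflow is undefined behaviour.  GCC and Clang at -O2
       may fold (x + 1 > x) to 1, while an unoptimized build may wrap
       to INT_MIN and print 0.  Same source, different answers. */
    printf("%d\n", x + 1 > x);
    return 0;
}

For code that must detect the overflow instead, GCC and Clang provide `__builtin_add_overflow`, which performs the addition with defined semantics and reports whether it overflowed.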

Bye,
bearophile
