On Wednesday, 8 June 2016 at 00:39:54 UTC, Walter Bright wrote:
> On the contrary, I think a formal proof would be very valuable. I am just skeptical of the notion that a proof is automatically correct. I've read about mistakes being found in many published mathematical proofs. I read somewhere that Hilbert made many mistakes in his proofs, even though the end results turned out correct.

Well, you cannot prevent errors in the requirements, but you can eliminate errors in the proof; so if the requirements are too complex, you get a bad deal. The theorem prover is separate from the proof verifier. It works like this:

1. A human specifies the requirements (e.g. assert(...) in D; see the sketch after this list).

2. The theorem prover takes the program + requirements + strategies (prodding the prover along the right track) and emits a loooong formal proof in a standard format.

3. The proof is handed to N independently implemented verifiers that check it.
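
To illustrate step 1, here is a minimal sketch (my own example, not from an actual prover setup) of requirements written as contracts and asserts in D; the prover's job would be to discharge every one of them statically:

size_t indexOfMax(const(int)[] a)
in { assert(a.length > 0); }         // requirement: non-empty input
out (r) { assert(r < a.length); }    // requirement: result is a valid index
body
{
    size_t best = 0;
    foreach (i; 1 .. a.length)
    {
        if (a[i] > a[best])
            best = i;
        assert(best < a.length);     // invariant the prover must establish
    }
    return best;
}

Note that checking the emitted proof is far simpler than finding it in the first place, which is what makes N independent verifier implementations feasible.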

But that is impractical for a typical user-created program. You only want to do that once, for your backend or your type system etc.

--

What you can do instead is, as you've stated before, transform your source code into a simpler form and verify that it can only lead to situations that are provably safe.

The advantage of this is that you can also prove specific cases of pointer arithmetic safe (say, over a fixed-size array on the stack), thus reducing the need for @trusted.
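
For instance (a hypothetical case, sketched by me): the loop below does raw pointer arithmetic, which D treats as @system, yet the array bounds are statically known, so a checker could prove every dereference in range and accept the function as safe without an @trusted escape hatch:

int sumFixed() @system    // pointer arithmetic, but provably in bounds
{
    int[4] buf = [1, 2, 3, 4];
    int total = 0;
    // p stays within [buf.ptr, buf.ptr + 4): provable from the fixed size
    for (int* p = buf.ptr; p < buf.ptr + buf.length; ++p)
        total += *p;
    return total;
}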

The disadvantage is that it would slow down the compiler and make it more complicated. So why have it in the compiler and not as a separate program?

Make it a separate program so that it works on uninstantiated code and can prove libraries correctly marked @safe before they are uploaded to repositories etc.

If @safe does not affect code gen, why have it in the compiler?
