On Sunday, 10 August 2014 at 13:47:45 UTC, Timon Gehr wrote:
No, if you prove false that means either that it is nonterminating

I.e. everything afterwards is unreachable.

Depends on your take on it. If you support non-deterministic computations (e.g. Ada's "select") then non-termination is treated like bottom. So you can have

infinite_recursion() or terminates() => terminates()

So unreachable in terms of machine-language, sure, but not in terms of execution flow.
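For a concrete reading of that, here is a rough thread-based sketch (my own construction, not Ada's "select" itself) where the non-terminating branch is absorbed by the terminating one:

    import core.thread;

    // Rough sketch of a select-like nondeterministic choice: run both
    // branches and commit to whichever finishes.  The infinite branch is a
    // daemon thread, so the program as a whole still terminates --
    // non-termination behaves like bottom and is absorbed.
    void infinite_recursion() { while (true) { } }  // never terminates
    int  terminates()         { return 42; }

    void main()
    {
        auto t = new Thread(&infinite_recursion);
        t.isDaemon = true;            // don't wait for this branch at exit
        t.start();
        assert(terminates() == 42);   // the terminating branch wins
    }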

Do you agree we indeed have equivalence for those programs with an empty precondition?

You mean "false" or "true"?

or that it cannot be proved by the assumptions, hence the assumptions
need strengthening.
...

'false' is the strongest statement. Why would you need to strengthen assumptions?

If you succeed in proving the postcondition to be false, then the postcondition does not hold and you have to strengthen the precondition, or else the triplet is invalid/incorrect. If the precondition is defined as false, then the postcondition is not required to hold: it is not covered by the proof/verification/specification, and the triplet is valid/provably correct.

So if the precondition is defined as false, the program is a priori proven correct with respect to the specification, i.e. the specification does not require any guarantees about the result.
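In D contract syntax that could look roughly like this (my own illustration): with an in-contract of 'false' no call is covered, so the body is never obliged to establish the out-contract and the triplet is vacuously valid:

    // Illustration only: a 'false' precondition covers no calls, so the
    // triplet {false} body {r == 42} is vacuously valid.
    int anything(int x)
    in      { assert(false); }    // precondition 'false': no call is covered
    out (r) { assert(r == 42); }
    body    { return x; }         // never obliged to return 42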

If the precondition is actually assumed to hold for all executions of the program, a precondition of 'false' means that the program never executes.

That is the caller's responsibility: it is up to the caller to uphold the precondition in order to get guaranteed results.

But you could have multiple pairs of preconditions/postconditions. E.g. for a generic function you could have different pairs for int/float/double, or a separate pair for termination, etc. Or one set that specifies what is covered by unit tests, what is vetted by review, and what is totally unknown… You could have different types of guarantees for a library, and users could take advantage of them based on the requirements of the application (high security vs computer game).
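A rough sketch of what I mean, using D contracts (types and bounds are only illustrative):

    // One generic function with a different pre/postcondition pair per
    // instantiation: an overflow guard for int, a not-NaN guard for floats.
    T twice(T)(T x)
    in
    {
        static if (is(T == int))
            assert(0 <= x && x <= int.max / 2);   // int pair: no overflow
        else
            assert(x == x);                       // float pair: not NaN
    }
    out (result)
    {
        assert(result == 2 * x);                  // shared postcondition
    }
    body
    {
        return cast(T)(2 * x);
    }

    unittest
    {
        assert(twice(21) == 42);
        assert(twice(1.5) == 3.0);
    }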

Maybe you don't assume this, but internally to the logic, this is what happens: You verify the postcondition under the assumption of the precondition. I.e. in this case you verify the postcondition of the program under the assumption that it never executes.

It is not always possible to prove termination, so that would be unfortunate. It would be more useful to be able to restrict the precondition to what you cover by unit-tests. Or to have multiple sets of pre/postconditions (proven-by-machine vs vetted-by-review).

Then, if you can prove that the caller fulfills the precondition, you are able to optimize based on the pre/postcondition pair and assume the postcondition when it is machine-verified.

That would give new optimization possibilities.

That does not mean that you should never be able to use a function outside the range verified by machine proof, but then you need to assert the results at runtime if you want verified correctness.
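Concretely, something like this sketch (the function and the range are hypothetical): inside the machine-verified range you rely on the contract; outside it, the caller asserts the result itself:

    import std.math : abs, sqrt;

    // Hypothetical: the contract below is machine-verified only for
    // x in [0, 100]; outside that range the caller re-checks at runtime.
    double sqrtVerified(double x)
    in      { assert(0 <= x && x <= 100); }              // the verified range
    out (r) { assert(abs(r * r - x) <= 1e-9 * (x + 1)); }
    body    { return sqrt(x); }

    double sqrtAnywhere(double x)
    {
        if (0 <= x && x <= 100)
            return sqrtVerified(x);                      // covered by the proof
        auto r = sqrt(x);                                // outside the range:
        assert(abs(r * r - x) <= 1e-9 * (x + 1));        // assert the result
        return r;
    }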

Of course you may still execute such a program, but the ⦃ ⦄ assertions become irrelevant to this execution.

Not really, they could still trap.

Note that an optimizer needs to be able to use deduced facts for program transformations, or it is useless. To an optimizer, a contradiction means 'unreachable'.

No, to an optimizer a contradiction means "all bets are off for the postcondition to hold":

program(x) != executable(x)

That is why __assume(P) must always hold on any reachable path (e.g. any path that the optimizer considers to be a target for optimization).

But __assume is not a regular precondition; it has a counterpart in the optimization invariant (postcondition):

program(x) == executable(x), for x not creating contradictions with __assume(…)

The optimizer is presuming that anything goes as long as this optimization invariant can be upheld. This is not a postcondition specified by the programmer at the meta-level. It is implicit.

And the optimizer does not check the implicit precondition, at all…
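To make the difference concrete, a sketch ('assume' is hypothetical here, not a D primitive): if the assumption is contradicted on a path the optimizer considers reachable, the transformation it licenses makes executable(x) diverge from program(x):

    // Sketch only -- 'assume' is hypothetical, shown as a comment.
    int get(int[] a, size_t i)
    {
        // assume(i < a.length);        // hint: "this never happens"
        if (i >= a.length) return -1;   // an optimizer trusting the hint
                                        // may delete this reachable test
        return a[i];
    }
    // A caller that does reach get(a, 10) with a.length == 3 then runs an
    // executable whose behaviour no longer matches the program as written.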

to deduce those facts. Do you agree that _if_ 'assume' is to be seen as an optimizer hint, then 'assume(false)' means 'assume unreachable'?

No. I think the optimizer should only use assumptions when they are guaranteed to hold by the caller or the language axioms. Or, in rare cases, by other machine-verifiable axioms (specifications for hardware registers, etc.).

Of course it does, that is why Hoare Logic and SSA exist.

They exist by virtue of being convenient formalisms for some applications, but they can be embedded into e.g. set theory just fine.

Not sure what you mean by this and how this relates to imperative programming.

Deduction lacks a notion of time.
...

What is 'a notion of time' and how does HL provide it?

It doesn't, which is why it trips up when you don't prove termination. But HL does not do program transformations.

The assert(Q); statement is equivalent to

if(¬Q) abort;

Not if assert() is considered meta-level. However, since a triplet is tied to an entity/function/module, it would be OK to throw an exception, as long as it isn't caught within the module covered by the pre/postcondition triplet and the check has no side effects.

If you don't reach the postcondition it does not matter.
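Roughly, the two readings side by side (my own sketch):

    import core.stdc.stdlib : abort;

    // The imperative reading: assert(q) is just a rewrite to if(!q) abort.
    void assertAsStatement(bool q)
    {
        if (!q) abort();
    }

    // The meta-level reading: a violation may escape as an exception, as
    // long as nothing inside the module covered by the triplet catches it
    // and the check itself has no side effects.
    void assertAsAnnotation(bool q)
    {
        if (!q) throw new Exception("contract violated");
    }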

Assert is not an imperative statement,

Yes, it may be seen to be.

That would be an aberration, because the specification should either be independent of the implementation or generate the implementation…

If the specification is considered part of the program then you will run into problems if there are contradictions.

assert(P); is a statement and hence it needs to be given a semantics, e.g. an axiomatic semantics or by rewriting, as above.

Well, in C it is a statement by implementation, but it is always meant to have no side effects, and one should be able to compile with NDEBUG defined with no difference in program behaviour. So, even though it is implemented as a macro, it has always had the status of an annotation.
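The same constraint applies in D with -release: an assert whose condition has a side effect breaks the annotation reading, because compiling the checks out changes behaviour. A trivial sketch:

    int counter;

    bool bump() { return ++counter > 0; }   // has a side effect

    void bad()
    {
        assert(bump());   // with asserts compiled out (NDEBUG in C,
                          // -release in D) counter is never incremented,
                          // so program behaviour silently changes
    }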

(BTW: This is not about being right or wrong. It is about understanding each other's statements in the right context.)

I think the whole game changes when Walter wants to optimize based on the annotations… Then you need to be very careful unless you want to amplify bugs rather than improve the overall performance of the program (speed + correct behaviour).

So then the distinction between specification and implementation, proven and vetted, becomes even more important.

Since D has unit tests as a formalism, it would be a shame not to use them to verify contracts in a way that could provide safe optimization possibilities, rather than relying on programmers being able to write provably correct code in a predictable manner, which just ain't gonna happen.
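For instance, something along these lines (hypothetical function and range): the unittest exercises the in/out contracts over the whole range the library claims to cover, which is the kind of vetting an optimizer could, in principle, be allowed to lean on:

    // Hypothetical example: the covered range is documented as [-1000, 1000].
    int clampToByte(int x)
    in      { assert(-1000 <= x && x <= 1000); }
    out (r) { assert(0 <= r && r <= 255); }
    body    { return x < 0 ? 0 : (x > 255 ? 255 : x); }

    unittest
    {
        // Runs the in/out contracts for every value in the covered range.
        foreach (x; -1000 .. 1001)
            clampToByte(x);
    }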
