On Monday, 16 June 2014 at 13:12:33 UTC, Timon Gehr wrote:
> My point was that no implementation of @safe whatsoever can make it _equivalent_ to memory safety (i.e. @safe will never single out precisely those programs that do not corrupt memory). It will always only approximate memory safety. This is not a bug, it's just a fact.

Out of curiosity, what is "memory safety" defined to be? Does it include running out of stack?

It should not be a problem if @safe rejects some memory safe programs. It still verifies that the programs it accepts are memory safe ("if", as opposed to "if and only if"). You can verify a subset even if you cannot define two crisp sets that discriminate between all possible programs.
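
To make the "if, not iff" point concrete, here is a minimal D sketch (exact diagnostics depend on compiler version and flags): @safe verifies the code it accepts, while a harmless operation like pointer arithmetic is rejected outright, even when the result is never dereferenced:

@safe void demo()
{
    int[] a = [1, 2, 3];
    a[1] = 42;           // accepted: bounds-checked access, verified safe

    // int* p = a.ptr;   // rejected under @safe: .ptr sidesteps bounds checks
    // auto q = p + 1;   // pointer arithmetic is banned even though q is
                         // never dereferenced, so this variant is memory
                         // safe but still falls outside the verified subset
}

void main() @safe { demo(); }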

Btw, Rice's theorem is based on the halting problem for TMs… so it suffers from the same issues as everything else in theoretical CS when it comes to practical situations. Whether generated IR contains unsafe instructions is trivially decidable. Since you can define an IR in a way that discriminates between unsafe and safe instructions, you can also decide that the safe subset is verifiably memory safe.
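
As a sketch of why that check is decidable (the opcode names are invented for illustration), verifying an instruction stream reduces to a linear scan over a finite instruction set:

import std.algorithm : any;

enum Op { load, store, add, ptrArith, rawCast } // hypothetical IR opcodes

bool isUnsafe(Op op) pure @safe
{
    final switch (op)
    {
        case Op.load, Op.store, Op.add: return false;
        case Op.ptrArith, Op.rawCast:   return true;
    }
}

bool verifiablySafe(const(Op)[] code) pure @safe
{
    return !code.any!isUnsafe; // one pass; no halting-problem machinery needed
}

unittest
{
    assert( verifiablySafe([Op.load, Op.add, Op.store]));
    assert(!verifiablySafe([Op.load, Op.ptrArith]));
}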

> […] to agree on. In this case this is witnessed by the fact that you do not seem to want to ban undefined behaviour from @safe code, which I can't agree with.

Then just define the undefined behaviour as yielding the integer result of an unspecified black-box function of the input. Integer overflow should yield the same value for the same operands on all ALUs. I don't understand how this relates to memory safety.
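
For instance (a minimal sketch; blackBoxAdd is my name for it), addition with wraparound is a fixed, deterministic function of its operands, and evaluating it never touches memory:

int blackBoxAdd(int a, int b) pure nothrow @nogc @safe
{
    // Same operands, same result, on any two's-complement ALU; the casts
    // through uint make the wraparound explicit and well-defined.
    return cast(int)(cast(uint)a + cast(uint)b);
}

unittest
{
    assert(blackBoxAdd(int.max, 1) == int.min); // deterministic, not a safety hole
}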
