On Friday, 6 February 2015 at 15:10:18 UTC, Steven Schveighoffer wrote:
into suspect the whole function. So marking a function @safe, and having it mean "this function has NO TRUSTED OR SYSTEM CODE in it whatsoever," is probably the right move, regardless of any other changes.

But wouldn't that break calling a @safe function with a @trusted function reference as a parameter? Or did I misunderstand what you meant here?
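For context, here is a minimal D sketch of the situation in question (function names are made up): under today's rules a @trusted function pointer converts implicitly to a @safe one, so a stricter "no trusted code whatsoever" meaning for @safe would have to decide what happens to calls like this.

```d
// A @safe function that accepts a @safe function pointer.
void apply(void function() @safe fn) @safe
{
    fn();
}

// A manually vetted function; its body is not machine-checked.
void helper() @trusted
{
    // (imagine some pointer arithmetic here, verified by hand)
}

void main() @safe
{
    // OK today: @trusted implicitly converts to @safe for
    // function pointers. Would this still compile under the
    // proposed stricter meaning of @safe?
    apply(&helper);
}
```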

And... what happens if you bring in a new architecture that requires a @trusted implementation of a library function that is @safe on other architectures?
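A sketch of that scenario (the function and the platform split are hypothetical): the same API is @safe where the compiler can verify it, but needs a hand-vetted @trusted body elsewhere, so "@safe means no trusted code anywhere" would hold on one platform and not the other.

```d
version (X86_64)
{
    // Verifiable on this architecture: a plain constant.
    size_t cacheLineSize() @safe { return 64; }
}
else
{
    // Hypothetical platform API that must be called on other
    // architectures; the call itself is @system.
    extern (C) size_t query_cache_line() @system;

    // Same signature, but the body needs manual review.
    size_t cacheLineSize() @trusted { return query_cache_line(); }
}
```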

1. A way to say "this function needs extra scrutiny"
2. Mechanical verification as MUCH AS POSSIBLE, and especially for changes to said function.

Yes, we can do 2 manually if necessary. But having a compiler that never fails to point out certain bad things is so much better than not having one.

I am not sure if it is worth the trouble. If you are going to conduct a semi-formal proof, then you should not have a mechanical sleeping pillow that makes you sloppy. ;-)

Also if you do safety reviews they should be separate from the functional review and only focus on safety.

Maybe it would be interesting to have an annotation like @notprovenyet, so that you could have regular reviews during development and then scan the source code for @trusted functions that need a safety review before a release is permitted? That way you don't have to do the safety review for every single mutation of the @trusted function.
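The release-gate scan described above could be as simple as a grep. A sketch, assuming the hypothetical marker is spelled as a D UDA `@("notprovenyet")` on the @trusted functions awaiting review (the file and function names below are invented for the demo):

```shell
# Demo: create a sample source file carrying the hypothetical marker,
# then scan the tree for @trusted functions still awaiting review.
mkdir -p src
cat > src/scanner.d <<'EOF'
@("notprovenyet")
void parse() @trusted { /* pointer arithmetic, not yet reviewed */ }
EOF

# List files that must get a safety review before release.
grep -rln '@("notprovenyet")' --include='*.d' src
```

A CI job could fail the release build whenever this grep produces any output.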

Maybe it should have been called "@manually_proven_safe" instead, to
discourage use...

@assume_safe would probably be the right moniker since that's what we use elsewhere. But it's water under the bridge now...

Yeah, it was merely the psychological effect: one might hesitate to actually type in "I have proven this" without thinking twice about it. "I trust this code" is an easy claim... ;-)
