> While restricting the code you can write at the compiler level is tempting for improving safety, I don't think it has ever been really successful regarding adoption (Ada being a prime example of that).
Nearly every language does this, especially statically-typed ones. What are type checks, if not a compiler restriction to improve safety? Or function prototypes, uninitialized-variable errors, or checked exceptions? BCPL is the only systems programming language I can think of without safety nets, and as time went on, languages added more and more of them. I don't see this as "restricting the code you can write at the compiler level"; I see it as the compiler helping you by keeping track of potentially dangerous things.

**So here's a slightly different idea:** do safety analysis the way Nim's (currently experimental) side-effect analysis works. Instead of requiring an `unsafe` block at the site where the unsafe operation happens, the compiler just tags the function as unsafe, along with every function that calls it. This has no effect until you explicitly tag a function as "safe". Then the compiler complains if the safe function calls anything that is (directly or indirectly) unsafe, flagging that call. At that point you either look for an appropriate place to wrap an `actuallySafe` block, signifying that sufficient checks are in place, or you decide to take the `safe` attribute off the function.
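
For anyone unfamiliar with the Nim mechanism I'm comparing to, here's a rough sketch (`safe`/`unsafe`/`actuallySafe` above are my hypothetical names, not existing features). In Nim, `func` (or the `{.noSideEffect.}` pragma) is the explicit "pure" tag, effects are inferred for everything else, and a `{.cast(noSideEffect).}:` block plays roughly the role I'd want `actuallySafe` to play:

```nim
proc log(msg: string) =
  # echo does I/O, so the compiler infers that `log` has side effects;
  # no annotation on `log` is needed for that.
  echo msg

func add(a, b: int): int =
  # `func` is the explicit "tag": the compiler now checks that nothing
  # called here (directly or indirectly) has side effects.
  a + b

func addLogged(a, b: int): int =
  # Calling `log` directly here would be rejected, because a func must
  # be free of side effects. The cast block says "trust me, this use is
  # fine", much like the `actuallySafe` block in the proposal.
  {.cast(noSideEffect).}:
    log("adding " & $a & " and " & $b)
  a + b

echo add(1, 2)        # 3
echo addLogged(2, 3)  # logs, then prints 5
```

The point is that neither `log` nor its callers need any annotation; only the function you explicitly mark makes the compiler start complaining, and you then decide between vouching for the call or dropping the mark.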