Well, whatever happens, I really hope I don't end up having to test for null before every library call just because the non-nullable type system cannot establish on its own that a pointer is non-null and therefore flags the call as a compile-time error.

That would be annoying.
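For what it's worth, a flow-sensitive checker avoids most of that annoyance: one explicit test narrows the type, and no further null tests are needed afterwards. A minimal sketch, using TypeScript's strictNullChecks as a stand-in (libLength and use are made-up names):

```typescript
// Hypothetical library function that requires a non-null argument.
function libLength(s: string): number {
  return s.length;
}

// With a flow-sensitive checker, a single explicit test narrows the type:
function use(maybe: string | null): number {
  if (maybe === null) {
    return 0; // the null case is handled once, here
  }
  // From this point on the checker knows `maybe` is `string`,
  // so the library call needs no further null test.
  return libLength(maybe);
}
```

So the pain only materializes if the checker is flow-insensitive and forces a test (or a cast) at every call site.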

In my opinion, a constraint system should go hand in hand with higher-level symbolic optimization (think Maxima etc.), but I think more polish is needed on what is already in the language first…

What could be useful is whole-program analysis that moves checks into the calling code where possible, because then hopefully the backend would remove some of the checks, and it might make more sense to leave them in for more than debugging. If each function has two entry points, that could probably work out fine, provided the backends can handle it (which they probably can't if they are C-centric).
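The two-entry-point idea could be sketched roughly like this (a hand-written illustration, not any existing compiler's output; atChecked, atUnchecked, and sum are made-up names):

```typescript
// Unchecked entry: assumes the caller (or a whole-program analysis)
// has already established the precondition 0 <= i < xs.length.
function atUnchecked(xs: number[], i: number): number {
  return xs[i];
}

// Checked entry: the default target for callers where nothing is known.
function atChecked(xs: number[], i: number): number {
  if (i < 0 || i >= xs.length) {
    throw new RangeError(`index ${i} out of bounds`);
  }
  return atUnchecked(xs, i);
}

// A call site where the bound is locally evident could be retargeted
// (by the compiler) to the unchecked entry, hoisting the check away:
function sum(xs: number[]): number {
  let total = 0;
  for (let i = 0; i < xs.length; i++) {
    total += atUnchecked(xs, i); // check provably redundant here
  }
  return total;
}
```

A C-centric backend sees only one symbol per function, which is why it would struggle to pick between the two entries without extra cloning or outlining.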

Hm, this is probably one of the few times in my life that I have felt an urge to revisit Ole-Johan Dahl's book Verifiable Programming... The treatment of types was pretty nice though, IIRC (proving correspondence between the formal type definitions used in correctness proofs and their implementations, or something like that).