On Wednesday, 17 December 2014 at 05:28:35 UTC, Walter Bright wrote:
On 12/16/2014 5:37 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dl...@gmail.com> wrote:
The current proposal is either too limiting or not limiting enough.

I'm afraid I don't understand at all what you wrote.

Well, look at it this way: D1 took the 1990s-style C++ application programming model and made it more convenient with a GC. Having a GC memory model, with stack/heap allocations as optimizations, can keep the language simple. D2 is more of a library author's language, but without the GC you need to deal with different kinds of ownership.

If you toss out the GC and want the same level of safety and generality, you either:

1. need to track ownership and add dynamic runtime checks
2. provide a full-blown proof system (not realistic)
3. provide means for the programmer to "help" the semantic analysis

What you want is a combination of 1 and 3, but you don't need to keep them separate. Since refcounting is expensive, you don't want it everywhere. For stack-allocated objects you get an implicit "refcount" from the nesting of scopes; you only need to track scope depth.

If the compiler knows that a returned object is stack allocated, then it can check it by comparing the object's address against the stack frame address, since the stack grows downwards in an ordered fashion. And in most cases it should be able to elide the check.
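
To make the address comparison concrete, here is a rough sketch of the kind of check the compiler could emit. This is purely illustrative, assuming a downward-growing stack as on x86; mayOutliveCallee is a made-up helper, not anything D has today:

    bool mayOutliveCallee(const(void)* p)
    {
        int marker;  // a local roughly marks the top of this frame
        // With a downward-growing stack, addresses above the callee's locals
        // belong to a caller's frame, the heap, or globals, so a reference to
        // them cannot dangle once this frame is popped.
        return cast(size_t) p > cast(size_t) &marker;
    }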

If you also maintain information about the order (stack depth) of the stack-allocated objects a function receives through reference parameters (for every function call), then you can safely combine the stack-allocated objects you receive through parameters into aggregates, e.g. inserting a stack-allocated element into a container.
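
As a purely illustrative sketch of the depth rule (hypothetical types, not a language proposal): inserting a reference into a container is only fine when the referenced object sits at the same or an outer scope depth, i.e. it outlives the container:

    struct Node { int value; }

    struct List
    {
        Node*[] items;
        void insert(Node* n) { items ~= n; }
    }

    void example()
    {
        Node a;              // depth 1
        List list;           // depth 1
        list.insert(&a);     // fine: a is at the same depth as list

        {
            Node b;          // depth 2, deeper than list
            // list.insert(&b); // would dangle once this inner scope ends
        }
    }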

Since D is increasingly becoming a library author's language, you would probably also benefit more from strengthening the template support with a templated ref type rather than "scope ref". You could define protocols (e.g. predefined UDAs) that tell the compiler what key properties a library type provides, such as ownership. Then use that for a more generic approach to optimization and escape analysis, and let the ownership-reference types be library types. If semantic analysis gets rid of the dynamic aspects of a reference (such as reference counting), then the compiler should be able to use the same function body, and in other cases just add an inferred wrapper to a generic function body. That way the code bloat can be limited.
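
For instance (the names here are invented for illustration; nothing like this exists in the standard library), a library could tag its reference types with a predefined UDA describing their ownership, and the compiler or a tool could introspect it:

    import std.traits : getUDAs;

    enum Ownership { unique, refCounted, borrowed }
    struct ownership { Ownership kind; }   // the hypothetical "protocol" UDA

    @ownership(Ownership.unique)
    struct UniquePtr(T)
    {
        private T* payload;
        @disable this(this);               // no copies, so ownership stays unique
    }

    // A generic function (or an analysis pass) can read the declared ownership:
    static assert(getUDAs!(UniquePtr!int, ownership)[0].kind == Ownership.unique);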

If all references to external resources (like the heap) in a stack-allocated object are owned by the object, then you can treat them as a whole. But you need the compiler to understand what ownership is if it is to guarantee memory safety in the kinds of situations where memory safety is known to be difficult to achieve.
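
A minimal sketch of what "owned by the object" means in practice (plain RAII, nothing new): the stack object and the heap memory it points to live and die as one unit:

    import core.stdc.stdlib : malloc, free;

    struct OwnedBuffer
    {
        private ubyte* data;
        private size_t length;

        this(size_t n)
        {
            data = cast(ubyte*) malloc(n);
            length = n;
        }

        @disable this(this);        // no aliasing copies; the owner is unique

        ~this()
        {
            free(data);             // the heap resource dies with the stack object
        }
    }

    void use()
    {
        auto buf = OwnedBuffer(64); // stack object owning a heap allocation
    }                               // end of scope frees both together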

The alternative is to state that @safe does not cover non-GC memory, and that one should stick to vetted library code when writing @safe code that does not use the GC. I actually find that acceptable for D2. In D2 you would be better off spending the effort on finding ways to automatically turn GC allocations into stack allocations as optimizations. Then you can find a more general and powerful ownership-oriented approach for non-GC applications in a new major version of D.

It is not realistic that people will annotate with "scope ref", and compiler inference could make library updates break application code if you get false negatives. Function signatures are contracts; they ought to be explicit for libraries. So having library functions act as "scope ref" without it being explicit in the signature is not necessarily a good idea.
