On Mon, Jan 21, 2013 at 09:27:20PM +0100, F i L wrote:
[...]
> H. S. Teoh wrote:
[...]
> >3) It is still unsafe: you can have a dangling reference to owned
> >memory, because the owner pointer goes out of scope and the memory
> >gets deallocated, but there can still be references lingering around
> >somewhere.
>
> This is why there would need to be a lot of magic happening in the
> compiler. When the compiler can't know, for sure, whether a reference
> that was assigned to a local-scope var is *still* referencing that var
> at the end of the scope (pretty common), then it asks before setting
> it to null. If the compiler can't determine, at all, which refs will
> be pointing to a particular local-var, it falls back to regular
> ref-counting (for the var in question).
[...]
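The fallback scheme described above maps naturally onto distinct pointer
types. A minimal sketch in Rust (used purely for illustration, not as a D
proposal), where each tier of the escalation is an explicit type:

```rust
use std::rc::Rc;

fn stack_only() -> usize {
    // Tier 1: provably non-escaping data lives and dies on the stack.
    let buf = [0u8; 64];
    buf.len()
}

fn single_owner() -> Box<Vec<i32>> {
    // Tier 2: a unique "owner" pointer; the heap allocation is freed
    // deterministically when the sole owner goes out of scope.
    Box::new(vec![1, 2, 3])
}

fn shared() -> (Rc<String>, Rc<String>) {
    // Tier 3: data with multiple live references falls back to
    // reference counting.
    let s = Rc::new(String::from("shared"));
    (Rc::clone(&s), s) // strong count is now 2
}

fn main() {
    assert_eq!(stack_only(), 64);
    assert_eq!(*single_owner(), vec![1, 2, 3]);
    let (a, _b) = shared();
    assert_eq!(Rc::strong_count(&a), 2);
}
```

Note that `Rc` alone leaks cyclic structures, which is exactly why a scheme
like this still needs a tracing GC (or weak references) as the final tier.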
Hmm. So you're essentially saying that the compiler will do compile-time
analysis of variable usage and, based on that, choose the simplest memory
management model for each variable? That's not a bad idea. You could have
a series of increasingly complex management techniques, and the compiler
would choose the least complex one that it can statically prove safe. So
if something can be proven not to escape the current scope, it can even
be stack-allocated (or, if it's big, put on the manual heap and deleted
on scope exit). If it is returned with no other references to it, then
it can be a single-reference pointer (i.e. your owner type). If it is
shared through multiple references, use a reference-counted pointer
(keeping in mind that plain refcounting can't reclaim cyclic structures).
If all else fails, fall back to the GC.

The problem is that quite often the compiler will not be able to prove
much about the data structure, esp. when you have non-trivial
manipulation of pointers -- achieving that level of analysis on
arbitrary code requires solving the halting problem -- so it will
probably fall back to the GC quite often, depending on what the code
does.

T

-- 
My program has no bugs! Only undocumented features...