Thanks for all the good discussion on this topic! After reading these posts, a refinement of my original question is: "Is the proposed destructor-based memory management, as described, intrinsically slower than the GC, or is it just a matter of implementation and engineering effort?"
This is obviously not a cut-and-dried question. There are many factors, but it sounds like the biggest intrinsic issue is the escape analysis / pessimistic destructor insertion. That is a hard problem; it is the same kind of issue Rust tries to solve with the borrow checker, and it sits at the heart of the philosophical divide between GC and non-GC approaches to memory management.

> If I can get a unified solution for resource management that costs 5-10%
> performance (before I optimized it to use custom allocators etc.) I'm willing
> to pay the price. Others may not.

Correct me if I'm wrong here: ideally, some _better_ static escape analysis algorithm would eliminate the performance cost of the destructor runtime, but it seems much of that cost can be worked around by using custom allocators. That is not nearly as _automatic_ or _magical_ as a GC, but it simplifies the language and makes the runtime semantics much more consistent. Destructors also make custom allocators much easier to implement and use in general, which I really appreciate. But I am biased.

> The tracing GC could be replaced with atomic reference counting with a cycle
> collector. That's of course still a GC but one that plays as nice with
> deterministic destruction as possible.

Rust has pretty much exactly this with its `Rc<T>` type (its atomic variant is `Arc<T>`), so that is definitely a reasonable compromise with precedent.

* * *

@Udiknedormin: AFAIK, Rust has nothing like a tracing GC, so I'm not sure what you are referring to. The `Rc<T>` type is the closest thing Rust has to a GC; is that what you are referring to?
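For concreteness, here is a minimal Rust sketch of the `Rc<T>` behavior mentioned above: the count goes up and down deterministically, and the destructor runs at a predictable point rather than at some future collection cycle. (The `Node` type is just an illustrative example, not anything from the proposal.)

```rust
use std::rc::Rc;

// A type with an observable destructor.
struct Node {
    name: &'static str,
}

impl Drop for Node {
    fn drop(&mut self) {
        println!("dropping {}", self.name);
    }
}

fn main() {
    let a = Rc::new(Node { name: "shared" });
    let b = Rc::clone(&a); // bumps the refcount to 2
    println!("count = {}", Rc::strong_count(&a)); // prints: count = 2

    drop(b); // refcount back to 1; destructor does NOT run yet
    println!("count = {}", Rc::strong_count(&a)); // prints: count = 1

    // When `a` goes out of scope here, the count hits 0 and
    // `Drop::drop` runs immediately and deterministically,
    // unlike a tracing GC's delayed collection.
}
```

Note that `Rc<T>` alone leaks reference cycles (there is no cycle collector), which is why the quoted proposal adds one on top of plain reference counting.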
