On Tuesday, 15 July 2014 at 23:02:19 UTC, Araq wrote:
On Tuesday, 15 July 2014 at 21:11:24 UTC, H. S. Teoh via Digitalmars-d wrote:
On Tue, Jul 15, 2014 at 09:03:36PM +0000, Araq via Digitalmars-d wrote:
>
>The only way to *really* guarantee 100% predictable memory
>reclamation is to write your own. Except that we all know how
>scalable and bug-free that is. Not to mention, when you need to
>deallocate a large complex data structure, *somebody* has to do the
>work -- either you do it yourself, or the reference counting
>implementation, or the GC. No matter how you cut it, it's work that
>has to be done, and you have to pay for it somehow; the cost isn't
>going to magically disappear just because you use reference counting
>(or whatever other scheme you dream up).
>

Actually it completely disappears in a copying collector since only
the live data is copied over ...

Nope, you pay for it during the copy. Read the linked paper; it explains
the duality of tracing and reference counting. Whether you trace
references from live objects or from dead objects, the overall
computation is equivalent, and the cost is effectively the same. Once
you've applied the usual optimizations, it's just a matter of time/space
tradeoffs.
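The trade-off both sides are arguing about can be seen in a toy semispace copying collector. This is a minimal illustrative sketch (all names are invented for this example, not from any real runtime): collection never visits unreachable objects, so dead data costs nothing at collection time, but you still do work proportional to every live object you copy.

```python
# Toy semispace copying collector: at collection time, work is
# proportional to the *live* set; dead objects are simply never visited.

class Obj:
    def __init__(self, refs=()):
        self.refs = list(refs)   # outgoing references
        self.forward = None      # forwarding pointer once copied

def collect(roots):
    """Copy everything reachable from roots into a fresh to-space."""
    to_space = []
    copies = 0

    def copy(obj):
        nonlocal copies
        if obj.forward is not None:      # already moved, reuse forwarding pointer
            return obj.forward
        new = Obj()
        obj.forward = new
        to_space.append(new)
        copies += 1
        new.refs = [copy(r) for r in obj.refs]
        return new

    new_roots = [copy(r) for r in roots]
    return new_roots, to_space, copies

# One live object, 100_000 unreachable ones the collector never touches:
live = Obj()
dead = [Obj() for _ in range(100_000)]   # no root reaches these
_, survivors, copies = collect([live])
print(copies)  # 1
```

Here `copies` comes out as 1 regardless of how much garbage exists, which is Araq's point; Teoh's point is that the copy itself is still paid work, proportional to live data, so the cost shifts rather than vanishes.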

This is wrong on so many levels... Oh well, I don't care. Believe what you want.

Please enlighten us.
