On Wednesday, 16 July 2014 at 18:24:11 UTC, H. S. Teoh via Digitalmars-d wrote:
>On Wed, Jul 16, 2014 at 06:11:54PM +0000, Araq via Digitalmars-d wrote:
>>On Wednesday, 16 July 2014 at 16:57:18 UTC, Kagamin wrote:
>>>On Tuesday, 15 July 2014 at 20:44:35 UTC, H. S. Teoh via Digitalmars-d wrote:
>>>>Not to mention, when you need to deallocate a large complex data
>>>>structure, *somebody* has to do the work -- either you do it
>>>>yourself, or the reference counting implementation, or the GC.
>>>
>>>The first and the last options are still prominent.
>>
>>A copying GC copies the live data, so deallocation of a large complex
>>data structure is free in this scenario. The same holds if you use a
>>manually managed memory region for the data structure and then
>>deallocate the region via some batch operation. But hey, this simple
>>fact must be wrong because some guy read a single paper about GCs
>>that doesn't even cover this point.
>
>Have you even read the paper? What you just said is exactly what the
>paper describes. There are two ends of the spectrum of memory
>reclamation algorithms: at one end you trace "matter" (live objects),
>and at the other end you trace "antimatter" (dead objects). They are
>just duals of each other, and optimized GC/RC algorithms tend to
>approach the middle ground, with time/memory tradeoffs as an
>adjustable parameter.

The paper focuses on RC vs. "tracing". My point is that "tracing" vs.
"copying" is another tradeoff. Here is a mark&sweep algorithm:
- Trace the live objects.
- For each dead object: deallocate it.
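
To make the cost model concrete, here is a minimal, untested D
sketch. Everything in it (Obj, heap, roots, the fixed fan-out) is
made up for illustration; a real collector finds pointers via type
info or conservative scanning:

import core.stdc.stdlib : free, malloc;

struct Obj
{
    bool marked;
    Obj*[2] children;   // fixed fan-out, just to keep the sketch short
}

Obj*[] heap;    // every object ever allocated (hypothetical bookkeeping)
Obj*[] roots;   // the root set

Obj* allocObj()
{
    auto o = cast(Obj*) malloc(Obj.sizeof);
    *o = Obj.init;
    heap ~= o;
    return o;
}

void mark(Obj* o)
{
    if (o is null || o.marked)
        return;
    o.marked = true;            // phase 1: trace the live objects
    foreach (c; o.children)
        mark(c);
}

void sweep()
{
    Obj*[] survivors;
    foreach (o; heap)
    {
        if (o.marked)
        {
            o.marked = false;
            survivors ~= o;
        }
        else
            free(o);            // phase 2: touch and free each dead object
    }
    heap = survivors;
}

void collect()
{
    foreach (r; roots)
        mark(r);
    sweep();                    // cost proportional to the whole heap
}

The point to notice is that sweep() visits every object, dead or
alive.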
Here is a copying GC:
- Trace and copy the live objects.
- There is no deallocation step; the old region is free for further
use.
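
And a sketch of the copying variant, under the same made-up Obj
layout (recursive evacuation instead of Cheney's scan pointer, and
the semispace swap is left out, purely for brevity):

struct Obj
{
    Obj* forwarded;    // set once this object has been evacuated
    Obj*[2] children;
}

enum spaceSize = 1 << 20;
ubyte[spaceSize] fromSpace;   // old semispace (allocation happened here)
ubyte[spaceSize] toSpace;     // new semispace (live data is copied here)
size_t allocPtr;              // bump pointer into toSpace

Obj* evacuate(Obj* o)
{
    if (o is null)
        return null;
    if (o.forwarded !is null)         // already copied: reuse the copy
        return o.forwarded;
    auto copy = cast(Obj*)(toSpace.ptr + allocPtr);
    allocPtr += Obj.sizeof;
    *copy = *o;                       // only live bytes are ever touched
    o.forwarded = copy;
    foreach (ref c; copy.children)
        c = evacuate(c);              // redirect the copy's pointers
    return copy;
}

void collect(ref Obj*[] roots)
{
    allocPtr = 0;
    foreach (ref r; roots)
        r = evacuate(r);
    // No sweep and no per-object free: dead objects are never visited,
    // and fromSpace as a whole becomes the next allocation space.
}

Note what is absent: the dead objects' memory is never even read.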
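
For completeness, the manually managed region mentioned above is the
same idea without any tracing at all; again just an illustrative
sketch, not a production allocator:

import core.stdc.stdlib : free, malloc;

struct Region
{
    ubyte* base;
    size_t used;
    size_t capacity;
}

Region regionCreate(size_t capacity)
{
    return Region(cast(ubyte*) malloc(capacity), 0, capacity);
}

void* regionAlloc(ref Region r, size_t size)
{
    assert(r.used + size <= r.capacity, "region exhausted");
    auto p = r.base + r.used;   // bump allocation, no per-node header
    r.used += size;
    return p;
}

void regionFree(ref Region r)
{
    free(r.base);               // one batch operation frees every node
    r = Region.init;
}

However many nodes the data structure has, tearing it down is a
single free().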