At 12:23 AM 11/5/2001 -0800, Brent Dax wrote:
>Michael L Maraist:
># On Sunday 04 November 2001 02:39 pm, Dan Sugalski wrote:
>My understanding is that we will pretty much only allocate PMCs out of
>the arena and any buffers are allocated out of the GC region.  (I could
>be wrong, of course...)

That's dead on. Requiring all the PMCs to be in arenas makes some GC stuff 
easier, as well as making better use of the cache in normal operations.
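For concreteness, here is a minimal sketch of what arena allocation of fixed-size PMCs can look like. All the names (`PMC`, `Arena`, `PMC_Pool`, the field layout, the arena size) are illustrative assumptions, not Parrot's actual structures:

```c
#include <stdlib.h>

/* Hypothetical PMC layout -- the real Parrot struct differs. */
typedef struct PMC {
    struct PMC *next_free;  /* threads the pool's free list when dead */
    void       *data;
    int         flags;
} PMC;

#define ARENA_SIZE 256

typedef struct Arena {
    struct Arena *prev;     /* arenas form a chain */
    PMC           slots[ARENA_SIZE];
} Arena;

typedef struct {
    Arena *arenas;          /* every arena ever allocated */
    PMC   *free_list;       /* dead PMCs ready for reuse */
} PMC_Pool;

/* Grab a new arena and thread every slot onto the free list. */
static void pool_grow(PMC_Pool *pool) {
    Arena *a = malloc(sizeof(Arena));
    a->prev = pool->arenas;
    pool->arenas = a;
    for (int i = 0; i < ARENA_SIZE; i++) {
        a->slots[i].next_free = pool->free_list;
        pool->free_list = &a->slots[i];
    }
}

/* O(1) allocation: pop the free list, growing the pool on demand. */
PMC *new_pmc(PMC_Pool *pool) {
    if (!pool->free_list)
        pool_grow(pool);
    PMC *p = pool->free_list;
    pool->free_list = p->next_free;
    return p;
}
```

Because every PMC lives in some arena, a sweep can walk the arena chain to find them all, and consecutive allocations tend to land near each other in memory, which is where the cache benefit comes from.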

># First of all, how are arrays of arrays of arrays handled?

You'll have a PMC whose data pointer points to a buffer of PMCs. That works 
fine--we'll tromp through the buffer of PMC pointers and put them on the 
list of live PMCs and GC them as needed.

># First of all, this will mean that the foreign access
># data-structure will grow
># VERY large when PMC arrays/ hashes are prevalent.  What's worse, this
># data-structure is stored within the core, which means that there is
># additional burden on the core memory fragmentation / contention.
>
>No...foreign access (if I understand correctly, once again) is just a way
>of saying 'hey, I'm pointing at this', a bit like incrementing a
>refcount.

Yep. Only used for real foreign access.

># Additionally, what happens when an array is shared by two
># threads (and thus
># two interpreters).  Whose foreign access region is it stored
># in?  My guess is
># that to avoid premature freeing, BOTH.

The owner's (i.e. the thread that created it). Only the owner of a PMC can 
GC the thing.
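The ownership rule can be sketched very simply: tag each PMC with the interpreter that created it, and have a collector skip anything it doesn't own. Field and function names are illustrative assumptions:

```c
/* Sketch of owner-only collection. `Interp` stands in for the real
 * interpreter structure; only the owner may reclaim the PMC. */
typedef struct Interp Interp;

typedef struct {
    Interp *owner;   /* the thread/interpreter that created this PMC */
    int     live;
} PMC;

/* Returns 1 if `interp` is allowed to collect `p`. */
int can_collect(Interp *interp, PMC *p) {
    return p->owner == interp;
}
```

This keeps each interpreter's collection runs independent: no locking is needed to decide whether a PMC is collectible, only to touch shared data itself.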

># My suggestion is to not use a foreign references section; or
># if we do, not
># utilize it for deep data-structure nesting.

We aren't using it for deep nesting, so we're fine there.

># Beyond this, I think I see some problems with not having PMCs
># relocatable.
># While compacting the object-stores that are readily resized
># can be very
># valuable, the only type of memory leak this avoids is
># fragmentation-related.

Yep, that's one of the benefits of a compacting collector, along with fast 
allocation on average.

># The PMCs themselves still need to be tested against memory
># leaks.

That's why the dead object detection phase exists, to see which PMCs are in 
use. Unused PMCs get reclaimed, and the space taken by their contents reused.
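A minimal sweep sketch of that phase, again with illustrative names: once a trace has set `live` on every reachable PMC, anything unmarked in an arena goes back on the free list and the space its contents took becomes reusable.

```c
#include <stdlib.h>

/* Hypothetical PMC for the sweep sketch. */
typedef struct PMC {
    int         live;       /* set by the trace, cleared here */
    void       *data;       /* buffer contents, reclaimed here */
    struct PMC *next_free;
} PMC;

/* Sweep one arena of `n` PMCs; returns the new free-list head. */
PMC *sweep(PMC *slots, int n, PMC *free_list) {
    for (int i = 0; i < n; i++) {
        if (slots[i].live) {
            slots[i].live = 0;       /* reset for the next GC run */
        } else {
            free(slots[i].data);     /* contents' space is reused */
            slots[i].data = NULL;
            slots[i].next_free = free_list;
            free_list = &slots[i];
        }
    }
    return free_list;
}
```

Note the two stages stay separate, as described below: detection decides which PMCs are dead, and reclamation (here, the `free`/free-list part) is what actually recovers the memory.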

>#  Now I'm
># still in favor of some form of reference counting; I think
># that in the most
># common case, only one data-structure will reference a PMC and
># thus when it
># goes away, it should immediately cause the deallocation of
># the associated
># object-space (sacrificing a pittance of run-time CPU so that
># the GC and free
># memory are relaxed).

Don't underestimate the amount of CPU time taken up by reference counting. 
It's amortized across the run of the program, yes, but not at all 
insignificant. It's also horribly error prone, as most XS module authors 
will tell you. (Assuming they even notice their own leaks, which many don't.)
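To make the cost concrete, here is what refcounting obliges every pointer copy to do. The names are hypothetical (not Perl 5's actual `SvREFCNT_*` macros): each copy pays an increment, each overwrite a decrement, and a single missed decrement leaks the object for the life of the program.

```c
/* Illustrative refcounted object -- not Perl 5's real SV layout. */
typedef struct {
    int   refcount;
    void *data;
} RefObj;

/* Every new reference must pay this... */
void refcount_inc(RefObj *o) { o->refcount++; }

/* ...and every dropped reference must pay this. Forgetting a single
 * refcount_dec in some error path is the classic XS leak.
 * Returns 1 when the count hits zero and the object would be freed. */
int refcount_dec(RefObj *o) {
    if (--o->refcount == 0) {
        /* free(o->data); free(o);  -- immediate reclamation */
        return 1;
    }
    return 0;
}
```

The upside the quoted text points at is real: when the last reference drops, reclamation is immediate. The downside is that the bookkeeping is spread over every assignment in the program and over every extension author's C code.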

># But I hear that we're not relying on an
># integer for
># reference counting (as with perl5), and instead are mostly
># dependent on the
># GC.

You're conflating dead object detection with GC. Don't--the two things are 
separate and if you think of them that way it makes things clearer.

># Well, if we use a copying GC, but never move the PMC,
># then how are we
># freeing these PMCs?

The dead object detection phase notes them. They're destroyed if necessary, 
then thrown back in the interpreter's PMC pool.

[Fairly complex GC scheme snipped]

That was clever, but too much work. The PMC, buffer header, and memory 
pools are interpreter-private, which eliminates the need for locking in 
most cases. Only the thread that owns a PMC will need to collect it or its 
contents.

For all intents and purposes, an interpreter can consider its pools of 
memory and objects private, except in the relatively rare shared case. (And 
sharing is going to be mildly expensive, which is just too bad--our 
structures are too complex for it not to be) The standalone-interpreter 
model makes GC straightforward, and all we really need to do to expand it 
to a multiple interpreter model is:

*) Make sure the off-interpreter references of shared PMCs do active cleanup 
properly
*) Make sure shared PMCs allocate memory in ways we can reasonably clean up 
after.



                                        Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                         have teddy bears and even
                                      teddy bears get drunk
