Let me ask a somewhat obvious question here.

Why is deterministic destruction needed?

The most often-used example is that of objects holding external resources
like filehandles or network sockets. Let me accept that argument for the
duration of this email, but please feel free to bring up other reasons
that deterministic destruction is needed.

For the most part, the programmer should be perfectly aware of when a
deterministically-destructed object needs to be destroyed. 90% of the cases
involve the object sitting on the stack and going out of scope. The
remaining 10%, in my mind, are the ones where the programmer passes a
filehandle to some code which will do stuff with the filehandle later in
the program, and so needs to hold a reference to it.

This tells me that if we make an attribute stack_collected, the user could
use that when they are sure they are done with the filehandle.
{
  my $fh is stack_collected = new IO::FileHandle(..);
  print $fh whatever;
} # $fh is collected here


The other reason for ref-counted objects (I think) is to avoid bumping into
certain system limits, like 64 filehandles, etc. This mirrors the
situation with headers, where we have a limited pool of headers and try
to avoid allocating new ones.

If we are able to define a new type of precious resource, we can make the
GC handle it efficiently. On allocation of a new PMC of type
PRECIOUS_filehandle, we can check how many PRECIOUS_filehandles exist,
and if there's no room to allocate any more, we can trigger a DOD run to
attempt to free some up.
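
In very rough C, the allocation path could look something like the sketch
below. None of these names are real Parrot API; precious_filehandle_count,
PRECIOUS_FILEHANDLE_LIMIT, and trigger_dod_run() are just stand-ins for the
check-then-collect idea.

#include <stdlib.h>

/* Hypothetical cap on how many filehandle PMCs may be live at once,
   standing in for whatever limit the OS actually imposes. */
#define PRECIOUS_FILEHANDLE_LIMIT 64

static int precious_filehandle_count = 0;

/* Stand-in for a DOD (dead object detection) pass: a real VM would walk
   the root set, sweep unreachable filehandle PMCs, close their
   descriptors, and decrement the count for each one reclaimed. */
static void trigger_dod_run(void) {
    /* ... mark live objects, sweep dead ones ... */
}

/* Allocate a "precious" filehandle PMC. DOD only runs when the resource
   pool is exhausted, so code that never touches precious resources pays
   nothing extra. */
static void *new_precious_filehandle(void) {
    if (precious_filehandle_count >= PRECIOUS_FILEHANDLE_LIMIT) {
        trigger_dod_run();                 /* try to reclaim some */
        if (precious_filehandle_count >= PRECIOUS_FILEHANDLE_LIMIT)
            return NULL;                   /* still none free: fail */
    }
    precious_filehandle_count++;
    return malloc(1);                      /* placeholder for a real PMC */
}

The point is that the collection cost is paid only on the allocation path
of a resource that is actually scarce, not by every object in the system.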

This particular system would allow us to avoid over-allocating certain
system resources like filehandles and network sockets, while not placing a
burden on code that doesn't use such precious resources.

Is there still a need for deterministic destruction, even in light of the
alternative approaches mentioned above?

Thanks,
Mike Lambert

