I'm by no means a compiler designer, so please forgive my clumsiness with
jargon.

I would imagine that it would be hard to know how many heap-allocated
objects (closures, etc.) a program will create over the course of its run.
How long they live seems like a much more tractable problem: in most cases
an object's lifespan can be bounded by the frame in which it was created.
When it can't, a simple set of rules (plus the fact that the compiler
understands the program it is compiling) could handle the remaining cases.
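As a rough illustration of that frame-bound-lifetime idea (in Rust, which I'm using only as a sketch here, not as anything BitC-specific):

```rust
// A value created in a frame cannot outlive that frame; borrows of it
// must end before the frame does.
fn frame_bound() -> usize {
    let local = String::from("lives only in this frame");
    let view = &local; // fine: this borrow ends with the frame
    view.len()
    // Returning `view` itself would be rejected at compile time:
    // "cannot return value referencing local variable `local`".
}
```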

There are situations where a garbage collector is not desirable, but I would
prefer to avoid the errors of explicit memory management.
With this in mind, would it be possible to strike a compromise between full
GC and no memory management at all by using a pool allocator that is scoped
to a particular frame?

This pool, as pools go, would be created with some amount of free memory
and could replenish itself if necessary, up to a maximum. This should make
the program execute faster, but it could also increase memory usage
considerably if managed incorrectly.
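A minimal sketch of such a pool (the names `Pool`, `alloc`, and the handle-based interface are my own invention for illustration, not anything from BitC):

```rust
// A frame-scoped pool: it starts with some reserved capacity, grows
// ("replenishes") on demand, and refuses allocations past a hard cap.
struct Pool<T> {
    items: Vec<T>, // backing storage; the Vec grows as needed
    max: usize,    // maximum number of objects the pool may hold
}

impl<T> Pool<T> {
    fn with_capacity(initial: usize, max: usize) -> Self {
        Pool { items: Vec::with_capacity(initial), max }
    }

    // Allocate from the pool, returning a handle; None once the cap is hit.
    fn alloc(&mut self, value: T) -> Option<usize> {
        if self.items.len() >= self.max {
            return None;
        }
        self.items.push(value);
        Some(self.items.len() - 1)
    }

    fn get(&self, handle: usize) -> &T {
        &self.items[handle]
    }
}
// When the owning frame returns, the Pool is dropped and every object
// allocated from it is freed in one step -- no per-object bookkeeping.
```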

The challenge is ensuring that closures don't escape their "scope". I was
thinking that perhaps a lambda could be returnable but not copyable, and
would be scoped to the calling function's frame. I vaguely recall reading
somewhere that when building in no-GC mode, BitC will disallow certain
features in order to avoid heap allocation. Would those cases be covered by
such a scheme?
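To make the "closures can't escape" restriction concrete, here is how it plays out in Rust (again just an analogy, not a claim about BitC's semantics):

```rust
fn apply(f: &dyn Fn() -> i32) -> i32 {
    f()
}

fn caller() -> i32 {
    let data = vec![1, 2, 3];
    // The closure borrows `data`, so it is tied to this frame.
    let sum = || data.iter().sum::<i32>();
    // Passing it *down* the stack is fine...
    apply(&sum)
    // ...but returning `sum` itself would not compile, because it
    // references `data`, a local of this frame. No heap allocation
    // is ever needed for the closure's environment.
}
```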

-- 
We can't solve problems by using the same kind of thinking we used when we
created them.
   - A. Einstein
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev