On Mon, Oct 14, 2013 at 11:53 PM, Bennie Kloosteman <[email protected]> wrote:

> Jonathan wrote:
>
>> "My main interest in reference counting is the ability to bound the
>> space/delay tradeoff on dead objects turning into re-allocatable space"
>
> Why is this trade-off important? I don't see it as an issue with most
> GCs. I can understand system resources, but memory I don't, unless it's
> due to fragmentation in some systems; but requiring more memory does the
> same.

Because that seems to be the metric that determines how rapidly the heap
has to grow in order for mutator performance to be sustained.
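
To put rough numbers on that (illustrative, not measured): if the mutator
allocates at A bytes/second and a dead object waits T seconds on average
before its space can be reused, the heap has to carry roughly A * T bytes of
garbage on top of the live set just to keep the mutator from stalling. At
100 MB/s of allocation and a two-second wait, that is ~200 MB of headroom no
matter how small the live set is.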

On the one hand, we have a lot of papers in the literature that seem to say
"well, we don't suck too badly if the heap is 2x what we really need, and
we hum right along at 4x". On the other, we have a lot of devices out there
with a total DRAM footprint of one or two gigabytes trying to run multiple
applications. I suspect we all agree that paging is bad, though SSDs are
certainly improving that.

And then we have more deeply embedded devices where the DRAM
overprovisioning is measured in kilobytes or megabytes.

If we want to have a sensible performance story on *any* of these machines,
we really need to push hard to get the required heap footprint as close to
(1 * actual need) as we can. There seem to be three ways to do that:

1. Change the programs structurally
2. Give the programs ways to be more expressive about their use
patterns, so that we can reclaim more effectively (a sketch of what I
mean follows this list).
3. Reduce the length of time that unreferenced objects are in limbo.
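
On (2), the kind of thing I have in mind is region/arena-style allocation,
where the program tells the allocator that a group of objects share a
lifetime. Roughly, in C++17 terms (just a sketch to make the point; std::pmr
here is a stand-in, not a claim about what BitC would offer):

    #include <memory_resource>
    #include <vector>

    struct Node { int value; };

    void handle_request() {
        // The program declares that everything built while handling one
        // request shares a lifetime by allocating it from one arena.
        std::pmr::monotonic_buffer_resource arena;
        std::pmr::vector<Node> nodes(&arena);
        for (int i = 0; i < 1000; ++i)
            nodes.push_back(Node{i});
        // ... process nodes ...
    }   // arena destroyed: all of that storage is re-allocatable at once,
        // with no per-object bookkeeping and no tracing work.

Because the reclamation point is stated by the program, none of those
objects spend any time waiting for a collector to notice them.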

By "limbo" I mean the period of time between the object being unreferenced
and the object's storage being re-allocatable for some other purpose.. All
other things being equal, the longer that time is, the more pressure we
face to grow the heap.
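
This is where reference counting earns its keep. Again in C++ terms, purely
as a sketch of the timing rather than a design proposal:

    #include <memory>
    #include <vector>

    struct Buffer {
        std::vector<char> bytes;
        explicit Buffer(size_t n) : bytes(n) {}
    };

    void refcounted() {
        auto buf = std::make_shared<Buffer>(1 << 20);   // 1 MiB
        // ... last use of buf ...
        buf.reset();  // count hits zero: the megabyte is freed right here,
                      // so its limbo time is essentially zero and the
                      // allocator can reuse the space immediately.
    }

    // Under a tracing collector the same buffer would sit in limbo from the
    // point it became unreachable until the next collection cycle reclaimed
    // it, and all of that dead space counts against the heap budget in the
    // meantime.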


shap
