On Thu, 9 Sep 2021, Iain Duncan wrote:

Thanks Elijah, that's very helpful. Here is what I'm taking away from what 
you said; if you don't mind correcting me anywhere I'm wrong, that would be 
lovely:
- we can lock out the GC for the duration of a piece; this is reasonable
- but to do so, we should definitely establish the upper bound of the heap size 
we need and pre-allocate it
- we really don't want to get this wrong, because doubling the heap partway 
through will (probably) cause an underrun, as everything in the old heap gets 
copied to the new one
- once the heap is big, only a restart of the interpreter will bring it back 
down again

Yes


One thing I'm not clear on: is a GC run necessarily a lot slower with a large 
heap, even if most of the heap is not in use? Or is the bottleneck the number 
of live objects the GC goes trawling through?
I guess another way of putting that is: is there any real disadvantage to 
over-allocating the heap size, aside from using up RAM?

This is where my GC knowledge is showing its limits!

I think GC time is proportional to the amount of heap space you actually use. 
But that depends not only on how much you cons, but also on how fragmented the 
heap is (since the s7 GC is non-compacting), which is application-dependent.
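For anyone following along, the pre-allocate-then-lock pattern above can be 
sketched in s7 itself. This is only a sketch: it assumes the settable 
`(*s7* 'heap-size)` and the `(gc #f)` / `(gc #t)` switch from the s7 
documentation, and the heap-size figure is a made-up placeholder, not a 
recommendation — you'd measure your own upper bound.

```
;; Sketch (s7): grow the heap up front, then hold off the GC for the
;; duration of a piece. The 8M-cell figure is a hypothetical bound.
(set! (*s7* 'heap-size) (* 8 1024 1024)) ; pre-allocate; heap-size only grows
(gc #f)   ; lock out the GC while the piece runs
;; ... real-time work: everything consed here stays live until GC resumes ...
(gc #t)   ; re-enable between pieces
(gc)      ; optionally force a full collection now, while we can afford it
```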

 -E
_______________________________________________
Cmdist mailing list
[email protected]
https://cm-mail.stanford.edu/mailman/listinfo/cmdist
