On Thursday, 9 January 2014 at 14:19:41 UTC, Ola Fosheim Grøstad wrote:
> On Thursday, 9 January 2014 at 13:51:09 UTC, Paulo Pinto wrote:
>> That could possibly be achieved with a generational parallel GC.

> Isn't the basic assumption in a generational GC that most freed objects have a short life span and were allocated since the last collection? Wasn't there also an assumption that the majority of inter-object pointers stay within the same generation? So that you partition the objects into "train cars" and only have a few pointers going between cars? I haven't looked at the original paper in a long time...

That was just a suggestion. There are plenty of incremental GC algorithms to choose from.
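
For concreteness, the bookkeeping behind that generational assumption looks roughly like this: a write barrier records old-to-young pointers in a remembered set, so a minor collection scans only the young generation plus that set. A minimal C sketch; the names and the single-field object are illustrative, not any particular collector's implementation:

    #include <stddef.h>

    /* Illustrative object header; real collectors pack this into a few bits. */
    typedef struct Obj {
        int generation;        /* 0 = young, 1 = old */
        struct Obj *field;     /* a single pointer field, for simplicity */
    } Obj;

    #define REMEMBERED_MAX 1024
    static Obj   *remembered[REMEMBERED_MAX]; /* old objects pointing into the young gen */
    static size_t remembered_len;

    /* Write barrier: every pointer store goes through here. When an old
       object is made to point at a young one, remember the old object so
       a minor collection need not scan the whole old generation. */
    static void write_field(Obj *owner, Obj *value)
    {
        owner->field = value;
        if (value && owner->generation > value->generation
                  && remembered_len < REMEMBERED_MAX)
            remembered[remembered_len++] = owner;
    }

A minor collection then treats the remembered entries as extra roots; the generational hypothesis is exactly the bet that this set stays small.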


> Anyway, if that is the assumption, then it is generally not true for programs written for real time. Temporary objects are then allocated in pools or on the stack. Objects that are freed tend to come from timers, events, or things with a natural lifespan (like enemies in a computer game).

There are real-time GCs controlling missile tracking systems.

Personally I find them a bit more real time than computer games: in a game you might miss a few rendering frames, whereas a GC-induced delay in a missile tracking system might turn out a bit ugly.
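
For reference, the pool allocation mentioned above is typically just a fixed-size free list: O(1) allocate and free, no fragmentation, and nothing for a collector to do. A minimal C sketch; the sizes and names are made up:

    #include <stddef.h>

    #define SLOT_SIZE  64
    #define SLOT_COUNT 256

    /* Each slot is either live payload or a link in the free list. */
    typedef union Slot { union Slot *next; char payload[SLOT_SIZE]; } Slot;

    static Slot  pool[SLOT_COUNT];
    static Slot *free_list;

    static void pool_init(void)
    {
        for (size_t i = 0; i + 1 < SLOT_COUNT; i++)
            pool[i].next = &pool[i + 1];
        pool[SLOT_COUNT - 1].next = NULL;
        free_list = &pool[0];
    }

    static void *pool_alloc(void)
    {
        Slot *s = free_list;
        if (s) free_list = s->next;
        return s;              /* NULL when the pool is exhausted */
    }

    static void pool_free(void *p)
    {
        Slot *s = p;
        s->next = free_list;
        free_list = s;
    }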


> I also dislike the idea of the GC locking down cores when it doesn't have to, so I don't think a parallel collector is particularly useful; it would just put more pressure on the memory bus. I think it is sufficient to have a simple GC that only scans disjoint subsets of the heap (for that kind of application), so yes: partitioned by type, or better, by reachability, but not by generation.

> If the GC's behaviour is predictable, then the application can be designed not to trigger bad behaviour from the get-go.
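
One way to picture the disjoint-subset idea above (a purely illustrative C sketch, not an actual collector design): give each type its own arena, so a scan of one subset never touches memory belonging to another:

    #include <stddef.h>

    /* One arena per type: a pass over all Enemy objects walks a single
       contiguous block and never reads Bullet memory. */
    typedef struct Enemy  { struct Enemy *target; int hp; } Enemy;
    typedef struct Bullet { float x, y; }                   Bullet;

    #define ARENA_CAP 1024
    static Enemy  enemy_arena[ARENA_CAP];
    static Bullet bullet_arena[ARENA_CAP];
    static size_t enemy_count, bullet_count;

    /* Scan only the Enemy subset; the other arenas stay untouched. */
    static void scan_enemies(void (*mark)(void *))
    {
        for (size_t i = 0; i < enemy_count; i++)
            if (enemy_arena[i].target)
                mark(enemy_arena[i].target);
    }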


Sure, GC usage should not hinder the application's performance.

However, unless you target systems without an OS, the OS will do whatever it wants with the existing cores anyway.

I have never seen much control beyond setting affinities.
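
On Linux that affinity control looks like this (glibc-specific; the scheduler still decides when a thread runs, affinity only restricts where it may run; pin_to_core is an illustrative helper name):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    /* Pin the calling thread to one CPU core (Linux/glibc only). */
    static int pin_to_core(int core)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        return pthread_setaffinity_np(pthread_self(), sizeof set, &set);
    }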

--
Paulo
