On Sunday, 29 December 2013 at 21:39:52 UTC, Walter Bright wrote:
> Since you can control if and when the GC runs fairly simply, this is not any sort of blocking issue.

I agree, it is not a blocking issue. It is a cache-trashing issue. So unless the GC is cache-friendly I am concerned about using D for audio-visual apps. Granted, GC would be great for managing graphs in application logic (game AI, music structures etc). So I am not anti-GC per se.

Let's assume 4 cores, 4 MB of level-3 cache, and 512+ MB of AI/game-world data structures. Let's assume that 50% of CPU time is spent on graphics, 20% on audio, 10% on texture/mesh loading/building, 10% on AI, and 10% is headroom (OS etc.).

Ok, so assume you have 4 threads for simplicity:

thread 1, no GC: audio realtime hardware
thread 2/3, no GC: OpenGL "realtime" (designed to keep the GPU from starving)
thread 4, GC: texture/mesh loading/building and game logic

Thread 4 is halted during GC, but threads 1-3 keep running, consuming 70% of the CPU. Threads 1-3 are tuned to keep most of their working set in level-3 cache.
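To make the 70% figure explicit, here is the budget from the scenario above, split by which threads the GC can actually stop (a quick sanity check; the thread-to-budget mapping is taken from the list above):

```python
# CPU budget from the scenario: which work keeps running while the GC halts thread 4?
budget = {
    "graphics (threads 2/3)": 0.50,
    "audio (thread 1)":       0.20,
    "loading (thread 4)":     0.10,
    "AI (thread 4)":          0.10,
    "headroom (OS etc.)":     0.10,
}

# Thread 4 is the only GC-attached thread, so a collection stalls loading + AI.
stalled = budget["loading (thread 4)"] + budget["AI (thread 4)"]
running = budget["graphics (threads 2/3)"] + budget["audio (thread 1)"]

print(f"stalled during GC: {stalled:.0%}")  # 20%
print(f"still running:     {running:.0%}")  # 70%
```

So on paper the pause itself only costs the 20% doing loading and AI; the real question is what the collection does to everyone else's cache, which is the next point.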

However, when the GC kicks in it will start to stream 512+ MB over the memory bus at a fast pace. If there is one pointer per 32 bytes, you touch every possible cache line. So the memory bus is under strain, and the scan pollutes level-3 cache, which wipes out the look-up tables used by threads 1 and 2; those then have to be loaded back into the cache over the memory bus... threads 1-3 miss their deadlines, you get some audio-visual defects, and the audio/graphics systems compensate by reducing the load, cutting down on audio-visual features. After a while the audio-visual system detects that the CPU is under-utilized and turns the high-quality features back on. But I feel there is a high risk of getting disturbing, noticeable glitches; if this happens every 10 seconds it is going to be pretty annoying.
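Putting rough numbers on this (a back-of-the-envelope sketch; the 64-byte cache line and the ~10 GB/s effective scan bandwidth are my assumptions, not part of the scenario):

```python
# Back-of-the-envelope: what a mark phase over the game-world heap does to L3.
# Assumed (not from the scenario): 64-byte cache lines, ~10 GB/s scan bandwidth.
MiB = 2**20

heap_bytes = 512 * MiB   # AI/game-world data structures
l3_bytes   = 4 * MiB     # shared level-3 cache
line_bytes = 64          # assumed cache-line size
scan_bw    = 10e9        # assumed effective bytes/s while marking

# One pointer per 32 bytes => every 64-byte line holds pointers,
# so the mark phase touches every cache line in the heap.
lines_touched = heap_bytes // line_bytes
l3_overwrites = heap_bytes // l3_bytes
scan_ms       = heap_bytes / scan_bw * 1000

print(f"cache lines touched: {lines_touched:,}")     # 8,388,608
print(f"L3 wiped ~{l3_overwrites}x per collection")  # ~128x
print(f"scan time: ~{scan_ms:.0f} ms")               # ~54 ms
```

Even with optimistic bandwidth, one collection streams over a hundred L3's worth of data through the cache, which is exactly why the look-up tables of threads 1-3 get evicted.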

I think you need to take care and have a cache-friendly GC strategy tuned for real time. It is possible, though; I don't deny it.
