As others have already mentioned, garbage collection introduces a degree of non-determinism, and people working in low-level areas prefer to manage memory deterministically, often because they assume that non-deterministic memory handling makes code slow.

For example, rendering in a game can be paused by the GC, prolonging a frame. The conclusion drawn is that the code runs slowly, but it doesn't. In fact, a tracing GC tries to do the opposite: to make the whole process faster, it releases memory in batches, often yielding faster execution in the long run, but not in the short run, where it is essential not to introduce any pauses.

So, in the end, it isn't a debate about performance but rather one of determinism versus non-determinism.

Non-determinism has the potential to make code run faster.
For instance, hardly anyone really uses the proposed threading model in Rust built on owned values, where only the thread that owns a value may mutate it, and other threads wanting to mutate that value have to wait. This creates a deterministic order of thread execution in which each thread can only proceed after the previous thread releases its work. The model is often praised in Rust, but its potential seems to be only a theoretical one. As a result, you most often see atomic reference counting (`Arc`) cluttering Rust codebases.
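To make the contrast concrete, here is a minimal sketch using only Rust's standard library: one function moves a value into a single owning thread (no locking needed), the other shows the `Arc<Mutex<T>>` pattern that codebases commonly reach for when several threads must mutate shared state. The function names are my own illustration, not from any particular codebase.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Ownership transfer: the vector is moved into the spawned thread,
// so no other thread can touch it and no locking is required.
fn sum_in_owning_thread(data: Vec<i32>) -> i32 {
    thread::spawn(move || data.iter().sum()).join().unwrap()
}

// Shared mutation: Arc<Mutex<T>> lets several threads increment
// the same counter, at the cost of reference counting and locking.
fn count_with_arc(threads: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || *counter.lock().unwrap() += 1)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(sum_in_owning_thread(vec![1, 2, 3]), 6);
    assert_eq!(count_with_arc(4), 4);
}
```

The first function is the ownership model the post describes; the second is the `Arc` clutter it complains about.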

The other point is the increased memory footprint: a runtime memory manager takes responsibility for (de)allocation, which is impossible to afford on some memory-constrained systems.

However, why provide just a one-size-fits-all solution when there are plenty of GC algorithms for different kinds of problem domains?
Why not offer more than one, as is the case in Java?
The advantage is that the GC algorithm can be chosen after the program has been compiled, so you can reuse the same program with different GC algorithms.
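On the JVM this selection happens at launch time via a flag, not at compile time. The flags below are real HotSpot options (availability varies by JDK version); `app.jar` is just a placeholder name:

```shell
# Same bytecode, different collector chosen when the program starts:
java -XX:+UseSerialGC   -jar app.jar   # simple stop-the-world collector
java -XX:+UseParallelGC -jar app.jar   # throughput-oriented
java -XX:+UseG1GC       -jar app.jar   # balanced, default in modern JDKs
java -XX:+UseZGC        -jar app.jar   # low-pause, for latency-sensitive work
```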
