Nothing will get done until someone decides to put in the effort to fix the problem. D's biggest drawback at this point is the GC, and one would think that, with all the smart people around here, someone would have solved it by now.

We need a solution that allows one to "plug and play" different allocation strategies. One shouldn't have to rely on a particularly bad GC implementation if one doesn't want to (but as of now one is forced to). Stop-the-world GC is just plain crap for anything beyond the most basic apps. Any app that requires consistent performance is going to have issues with it, simply because much better strategies exist. Something along these lines:
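
A minimal sketch of what "plug and play" could mean in user code; the Allocator interface and both implementations below are hypothetical names made up for illustration, not an existing D API:

interface Allocator
{
    void[] allocate(size_t bytes);
    void deallocate(void[] block);
}

class MallocAllocator : Allocator
{
    import core.stdc.stdlib : malloc, free;

    void[] allocate(size_t bytes)
    {
        auto p = malloc(bytes);
        if (p is null) return null;
        return p[0 .. bytes];
    }

    void deallocate(void[] block)
    {
        free(block.ptr);
    }
}

class GCAllocator : Allocator
{
    void[] allocate(size_t bytes)
    {
        return cast(void[]) new ubyte[](bytes);
    }

    void deallocate(void[] block)
    {
        // nothing to do: the collector reclaims it eventually
    }
}

Code that takes an Allocator neither knows nor cares which strategy is behind it, so swapping the GC for malloc (or a pool, a region, etc.) is a one-line change at the call site.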


The ability to avoid the GC is of utmost importance for real-time apps, regardless of what some want people to think. Right now it can't be done, because D itself uses the GC internally, so this has to be worked around. @nogc/@gc has been brought up several times... it seems like a start and can't hurt. For example:
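
A sketch of how such an attribute could work, assuming @nogc simply makes the compiler reject any GC allocation inside the function, so heap work has to go through the C runtime instead:

import core.stdc.stdlib : malloc, free;

@nogc void work(size_t n)
{
    auto buf = cast(int*) malloc(n * int.sizeof);
    if (buf is null) return;
    scope(exit) free(buf);

    foreach (i; 0 .. n)
        buf[i] = cast(int) i;

    // auto arr = new int[](n); // would be rejected under @nogc
}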

IMO the best way to deal with the situation is to use manual memory management with a GC backend.

E.g., pretend the GC is not there, just like the good ol' days. If you forget to deallocate something, maybe you get a warning from a sufficiently intelligent compiler, and if the GC is turned on it cleans up behind the scenes anyway. If you need performance, turn it off entirely or disable it temporarily:
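
The "disable it temporarily" part already works with today's runtime: GC.disable and GC.enable in core.memory pause and resume automatic collections, so the hot path below runs without collection pauses while the GC remains the cleanup backend afterwards.

import core.memory : GC;

void hotPath()
{
    GC.disable();            // pause automatic collections
    scope(exit) GC.enable(); // restore the safety net on exit
                             // (druntime may still collect if memory runs out)

    foreach (i; 0 .. 10_000)
    {
        auto tmp = new ubyte[](256); // GC-allocated, but never collected here
        // ... use tmp; anything "forgotten" is reclaimed by a later collection
    }
}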

This is the best of both worlds and lets one move easily from one extreme to the other. One can even implement a pattern that lets the user select either extreme, as in the sketch below.
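
A hypothetical policy pattern that selects the extreme at compile time; the names GCPolicy, ManualPolicy, and Box are made up for illustration, not library types:

import core.stdc.stdlib : malloc, free;

struct GCPolicy
{
    static T* make(T)() { return new T; }
    static void dispose(T)(T* p) { } // no-op: the collector owns it
}

struct ManualPolicy
{
    static T* make(T)() { return cast(T*) malloc(T.sizeof); }
    static void dispose(T)(T* p) { free(p); }
}

struct Box(T, Policy = GCPolicy)
{
    T* payload;
    void create()  { payload = Policy.make!T(); }
    void release() { Policy.dispose!T(payload); payload = null; }
}

A user who is happy with the GC writes Box!int and can forget to call release; one who wants manual management writes Box!(int, ManualPolicy) and pairs every create with a release.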

One can already do this in D in user code, but since the D core does not follow this approach, it doesn't do one much good unless one is willing to avoid slices, Phobos, etc...

I think it would go a long way to establish a standard for memory allocation in D along these lines. All new code would be written against that standard, and old code would be updated.

