On Monday, 3 February 2014 at 23:00:23 UTC, woh wrote:
ur right I never thought of that, I bet all them game devs never thought of it either, they so dumb. I bet they never tried to use a GC, what fools! Endless graphs of traced objects, oh yes oh yes! It only runs when I allocate, oh what a fool I've been, please castigate me harder!

Also, people should consider that Apple (unlike C++ game devs) did not have a tradition of contempt for GC. In fact, they tried GC *before* they switched to ARC. The pro-GC camp always likes to pretend that the anti-GC camp is just ignorant, rejecting GC out of prejudice rather than experience, but Apple rejected GC based on experience.

GCed Objective-C did not allow them to deliver the user experience they wanted (on mobile) because of the associated latency issues, so they switched to automatic reference counting. It is not in question that ref counting sacrifices throughput (compared to an advanced GC), but for interactive, user-facing applications latency is much more important.
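To make the tradeoff concrete: with ref counting, every copy and every destruction of a reference pays a small, bounded bookkeeping cost (real implementations like ARC additionally pay for atomic counter updates), but memory is released at a deterministic point instead of whenever a collector decides to run. Here is a hand-rolled, single-threaded sketch in D; the RcBuffer, Block, and create names are made up for illustration, and this is neither ARC nor Phobos' std.typecons.RefCounted:

import core.stdc.stdlib : malloc, free;

struct RcBuffer
{
    private struct Block { size_t refs; ubyte[256] payload; }
    private Block* block;

    static RcBuffer create()
    {
        RcBuffer b;
        b.block = cast(Block*) malloc(Block.sizeof);
        b.block.refs = 1;
        return b;
    }

    // Throughput cost: a counter update on every copy...
    this(this) { if (block) ++block.refs; }

    // ...and on every destruction; but the free happens right here,
    // at a known point, never inside a collection pause.
    ~this()
    {
        if (block && --block.refs == 0)
            free(block);
    }
}

void main()
{
    auto a = RcBuffer.create();
    {
        auto b = a;   // postblit bumps the count to 2
    }                 // b goes out of scope, count drops back to 1
}                     // a goes out of scope, count hits 0, freed immediately

The per-operation cost is tiny but paid constantly, on every assignment, which is where the throughput loss against a good tracing GC comes from; the gain is that there is never a stop-the-world pause.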

You can do soft real-time with GC as long as the GC is incremental (D's is not) and you rely heavily on object reuse. That is what I am doing with LuaJIT right now, and the frame rates are nice and constant indeed. However, you pay a high price for that: object reuse means writing additional code and makes things more complex and error-prone, which is why your average app developer does not do it, and should not have to.
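Object reuse usually boils down to pooling: allocate everything up front, recycle slots, and keep the per-frame path from ever handing the allocator (or the GC) anything new to track. A minimal sketch of the idea in D, since that is the language under discussion (the Particle and ParticlePool names are made up; my actual code is Lua on LuaJIT):

import std.stdio;

// A particle we reuse instead of reallocating every frame.
struct Particle
{
    float x = 0, y = 0, vx = 0, vy = 0;
    bool alive = false;
}

// Fixed-size pool: the only allocation happens once, up front,
// so the per-frame path creates no new garbage for the GC to trace.
struct ParticlePool
{
    Particle[] slots;

    this(size_t capacity)
    {
        slots = new Particle[capacity]; // single allocation at startup
    }

    // Hand out a dead slot instead of allocating a new object.
    Particle* acquire()
    {
        foreach (ref p; slots)
            if (!p.alive) { p.alive = true; return &p; }
        return null; // pool exhausted; the caller decides what to do
    }

    void release(Particle* p) { p.alive = false; }
}

void main()
{
    auto pool = ParticlePool(1024);

    // Per-frame work touches only pre-allocated memory.
    foreach (frame; 0 .. 3)
    {
        if (auto p = pool.acquire())
        {
            p.x = frame;
            writeln("frame ", frame, ": particle at x = ", p.x);
            pool.release(p);
        }
    }
}

Even this toy shows the price: the pool needs a size policy, an exhaustion strategy, and discipline about pairing acquire with release, which is exactly the extra code and extra bug surface Joe Coder should not have to deal with.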

Apple had to come up with a solution which does not assume that developers will be careful about allocations. The performance of the apps in the iOS App Store is ultimately part of the user experience, so ARC is the right solution: it means that the average iOS app written by Joe Coder will not have latency issues, or at least fewer latency issues than with any GC-based solution.

I think it is an interesting decision for the D development team to make. Do you want a language which can achieve low latency *if used carefully*, or one which sacrifices maximum throughput for fewer latency issues in the common case?

I see no obvious answer to that. I have read that D has recently been used for a server system at Facebook, and ref counting usually degrades performance in that area. It is no coincidence that Java shines on the server as a high-performance solution while being a synonym for a dog-slow memory hog on the desktop, and mighty unpopular there because of it. The whole Java ecosystem, from the VM to the libraries, is optimized for enterprise server use cases: for throughput, scalability, and robustness, not for making responsive GUIs (and low latency in general) or for low memory use.

If D wants to be the new Java, GC is the way to go, but no heap-allocation-happy GCed language will ever challenge C/C++ on the desktop.

Which reminds me of another major company that backpedaled on GC based on experience: Microsoft. Do you remember the talk back when .NET and C# were new? Microsoft wanted that to be the technology stack of the future, with "managed code" everywhere and C/C++ becoming "legacy". However, C# ended up being little more than Microsoft's Java, shoveling enterprise CRUD in the server room. Nowadays Microsoft is hosting "Going Native" conferences, declaring its present and future dedication to C++ (again), and it based the new WinRT on ref counting, not GC.