On Tue, 26 Jan 2016 21:15:07 +0000, rsw0x wrote:

> On Tuesday, 26 January 2016 at 20:40:50 UTC, Chris Wright wrote:
>> On Tue, 26 Jan 2016 19:04:33 +0000, rsw0x wrote:
>>> GC in D is a pipedream, if it wasn't, why is it still so horrible?
>>> Everyone keeps dancing around the fact that if the GC wasn't
>>> horrible, nobody would work around it.
>>
>> Rather, if everyone believed the GC was awesome in all circumstances,
>> nobody would work around it. It could be awesome in all
>> circumstances, but if people believe otherwise, they'll try working
>> around it. It could be generally awesome but bad for a certain use
>> case, in which case people who need to support that use case will
>> need to work around it.
>>
>> In this case, I think it's a marketing issue, not a technical one.
>> D's being marketed as an alternative to C++, and existing C++ users
>> tend to believe that any garbage collector is too slow to be usable.
>
> In any case where you attempt to write code in D that is equal in
> performance to C++, you must avoid the GC.
Over on d.learn there was a post by H.S. Teoh today about how a struct deserializer she wrote used techniques that absolutely require a GC in order to work without leaking memory everywhere. The version of the code without this technique was an order of magnitude slower. You could probably implement something with similar performance in C++, but you'd be writing a custom string type with reference counting, and it would be terribly annoying to use.

But the big reason this is a marketing problem rather than a technical one is that I doubt most people care enough about performance, most of the time, to need to avoid garbage collection. The success of Unity3D shows that you can write games in garbage-collected languages, and games have at least moderate latency requirements.

If you're doing real-time programming, you do need to be very careful about the GC. If your entire application is real-time, then you can't use a GC at all. If only portions of it require low latency, which, I'm told, is common, you can use @nogc in strategic places, call GC.disable and GC.enable where appropriate, and enjoy the best of both worlds.

A bit of digging suggests that real-time programs have trouble with malloc being too slow or having unpredictable latency, and there are a few specialized allocators for real-time systems. So this isn't so much a problem with GC in particular; it's more a problem with general-purpose allocators.
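To make the deserializer point concrete, here's a minimal sketch of the kind of GC-backed technique involved (this is not Teoh's actual code; `fieldsOf` and the comma-separated format are invented for illustration). Every field is a zero-copy slice of the input buffer, and the GC keeps the buffer alive as long as any slice survives, so there's no copying and no manual lifetime tracking:

```d
import std.string : indexOf;

// Hypothetical sketch: split an input buffer into fields by slicing.
// Each slice aliases the original buffer rather than copying it; the
// GC keeps the buffer alive for as long as any slice is reachable.
string[] fieldsOf(string buffer)
{
    string[] fields;
    while (buffer.length)
    {
        auto comma = buffer.indexOf(',');
        if (comma < 0)
        {
            fields ~= buffer;          // last field: rest of the buffer
            break;
        }
        fields ~= buffer[0 .. comma];  // zero-copy slice into the buffer
        buffer = buffer[comma + 1 .. $];
    }
    return fields;
}

unittest
{
    assert(fieldsOf("a,bb,ccc") == ["a", "bb", "ccc"]);
}
```

Doing the same thing without a GC means either copying every field or tracking the buffer's lifetime by hand (e.g. with reference counting), which is exactly the annoying C++ string type described above.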
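Roughly what the mixed approach looks like in practice (the function names here are made up, not from any real codebase): @nogc is checked at compile time, so the hot path provably cannot trigger a collection, while GC.disable/GC.enable from core.memory suppress collections around a latency-sensitive section without giving up the GC elsewhere:

```d
import core.memory : GC;

// @nogc is enforced by the compiler: any GC allocation inside this
// function (array append, 'new', GC closures, ...) is a compile
// error, so calling it can never trigger a collection.
@nogc void mixAudioFrame(float[] left, const float[] right)
{
    foreach (i, ref sample; left)
        sample = (sample + right[i]) * 0.5f;
}

void frameLoop(float[] left, float[] right)
{
    GC.disable();              // no collections during the hot section
    scope (exit) GC.enable();  // collections resume on the way out

    mixAudioFrame(left, right);
    // Ordinary GC-using code elsewhere in the program is unaffected;
    // calling GC.collect() at a known-idle point afterwards keeps the
    // eventual pause out of the latency-sensitive window.
}
```

Note that GC.disable only suppresses collections; allocations still succeed (and still come from the GC heap), so a long hot section should also avoid allocating, which is what @nogc enforces.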