On Monday, 27 February 2012 at 04:17:24 UTC, Andrew Wiley wrote:
On Sun, Feb 26, 2012 at 11:05 AM, Paulo Pinto <pj...@progtools.org> wrote:
On 26.02.2012 at 17:34, so wrote:

On Sunday, 26 February 2012 at 15:58:41 UTC, H. S. Teoh wrote:

Would this even be an issue on multicore systems where the GC can run concurrently, as long as the stop-the-world parts stay below some given threshold?


If it is possible to guarantee that, I don't think anyone would bother
with manual MM.


Well, some game studios seem to be quite happy with XNA, which implies using
a GC:

http://infinite-flight.com/if/index.html


I don't really see why you keep bringing up these examples. This is a performance issue, which means you can certainly ignore it and things will still work, just not as well. I've seen 3D games in Java, but they always suffer from an awkward pause at fairly regular intervals. This is why the AAA shops are still writing most of their engines in C++.
You will always be able to find examples of developers who simply
chose to ignore the issue for one reason or another.

To make it clear, I'm not trying to antagonize you here. I agree that
GC is in general a superior technical solution to manual memory
management, and given the research going into GC technology, I'm
fairly sure it's a good idea in the long term.

However, I disagree with your statement that "the main issue is that the GC needs to be optimized, not that manual memory management is
required."
Making a GC that runs fast enough to make this sort of thing a non-issue is currently so hard that such collectors are only usable in certain niche situations. That will probably change, but over the course of several years. Manual memory management, however, is here now and dead simple to use as long as the programmer understands the semantics. Programming in that model is harder, but
not nearly as bad as, say, thread-based concurrency with race
conditions and deadlocks. Manual memory management is much simpler to
deal with than many other things programmers already take on
voluntarily.
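
To make that concrete, here is a minimal sketch (my own illustration, not code from any project mentioned in this thread; the Particle struct is purely hypothetical) of what manual memory management looks like in D today: allocate from the C heap, release deterministically, and the collector never touches the memory.

import core.stdc.stdlib : malloc, free;

struct Particle
{
    float x, y, z;
    float lifetime;
}

void main()
{
    // Allocate on the C heap; the collector never scans or pauses for this.
    auto p = cast(Particle*) malloc(Particle.sizeof);
    assert(p !is null, "out of memory");
    *p = Particle(0, 0, 0, 5.0f);

    // Deterministic release at scope exit: no collection, no pause.
    scope (exit) free(p);

    // ... use p during the frame ...
}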
When you want your realtime application to behave in a certain way, would you rather spend months or years working on the GC and program in an awkward, restricted style to deal with the issue, or use manual memory management *now* and accept a slightly more difficult programming model? Cost/benefit-wise, GC just doesn't make a lot of sense in this sort of scenario unless you have a lot of resources to
burn or a specific reason to choose a GC-mandatory platform.
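
For contrast, the "work around the GC" style usually amounts to something like the sketch below (again just an illustration, assuming druntime's core.memory.GC API; gameLoop, updateFrame, render and running are hypothetical names): keep allocations out of the hot loop, defer collections while frames are running, and collect only at explicit safe points.

import core.memory : GC;

bool running = true;

void updateFrame() { /* per-frame logic; must avoid GC allocations */ }
void render()      { /* draw the frame */ }

void gameLoop()
{
    GC.disable();              // collections are deferred while frames run
    scope (exit) GC.enable();

    while (running)
    {
        updateFrame();
        render();
        running = false;       // stop after one frame, just for illustration
    }
}

void main()
{
    gameLoop();
    GC.collect();              // pay the pause at an explicit safe point
}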

Again, I'm not saying GC is bad; I'm saying that in this area, the cost/benefit ratio doesn't justify spending your time improving the GC to make things work. For everyone else, GC is great, and I
applaud David Simcha's efforts to improve D's GC performance.

It does take years, but please note that the referenced papers are already several years old; some are from 2005-6. That doesn't mean D shouldn't support manual memory management, but claiming that GC doesn't work for real-time is a [religious] myth. The cost of the research was clearly paid years ago, and the algorithms have already been documented and tested.

OT: one of the papers was written at my university.
