I keep bringing up these issues because I am a firm believer that people who fight against a GC are fighting a lost battle.

It is like back in the 80s, when people were fighting against Pascal or C in favour of Assembly, or in the 90s, when they were fighting against C++ in favour of C.

Now C++ is even used for operating systems: BeOS, Mac OS X drivers, COM/WinRT.

Sure, a systems programming language needs some form of manual memory management for "exceptional situations", but 90% of the time you will be allocating either reference-counted or GCed memory.
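
To make that split concrete, here is a minimal D sketch (just an illustration; the function name and buffer size are made up): the common case goes through the GC, the exceptional case is managed by hand.

import core.stdc.stdlib : malloc, free;

void processRequest()
{
    // Common case: let the GC own the memory.
    auto names = new string[](100);

    // Exceptional case: a buffer whose lifetime we control by hand,
    // invisible to the collector.
    auto buf = cast(ubyte*) malloc(4096);
    if (buf is null) return;
    scope(exit) free(buf);

    // ... use names and buf ...
}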

What will you do when the major OSs force a systems programming language with GC or reference counting on you? That is already slowly happening in mainstream OSs with GC and ARC on Mac OS X and WinRT on Windows 8, as well as in the Oberon, Spin, Mirage, Home, Inferno and Singularity research OSs.

Create your own language to allow you to live in the past?

People who refuse to adapt to the times stay behind; those who adapt find ways to profit from the new reality.

But as I said before, that is my opinion, and as a mere human I am also prone to error. Maybe my ideas regarding memory management in systems languages are plain wrong; the future will tell.

--
Paulo

On Monday, 27 February 2012 at 04:17:24 UTC, Andrew Wiley wrote:
On Sun, Feb 26, 2012 at 11:05 AM, Paulo Pinto <pj...@progtools.org> wrote:
On 26.02.2012 17:34, so wrote:

On Sunday, 26 February 2012 at 15:58:41 UTC, H. S. Teoh wrote:

Would this even be an issue on multicore systems where the GC can run concurrently? As long as the stop-the-world parts are below some given
threshold.


If it is possible to guarantee that, I don't think anyone would bother with manual MM.


Well, some game studios seem to be quite happy with XNA, which implies using
a GC:

http://infinite-flight.com/if/index.html


I don't really see why you keep bringing up these examples. This is a performance issue, which means you can certainly ignore it and things will still work, just not as well. I've seen 3D games in Java, but they always suffer from an awkward pause at fairly regular intervals. This is why the AAA shops are still writing most of their engines in C++. You will always be able to find examples of developers who simply chose to ignore the issue for one reason or another.

To make it clear, I'm not trying to antagonize you here. I agree that
GC is in general a superior technical solution to manual memory
management, and given the research going into GC technology, I'm sure
that long term it's probably a good idea.

However, I disagree with your statement that "the main issue is that the GC needs to be optimized, not that manual memory management is
required."
Making a GC that can run fast enough to make this sort of thing a non-issue is currently so hard that it can only be done in certain niche situations. That will probably change, but over the course of several years. Manual memory management, however, is here now and dead simple to use so long as the programmer understands the semantics. Programming in that model is harder, but not nearly as bad as, say, thread-based concurrency with race conditions and deadlocks. Manual memory management is much simpler to deal with than many other things programmers already take on voluntarily.
When you want your realtime application to behave in a certain way, would you rather spend months or years working on the GC and program in an awkward style to deal with the issue, or use manual memory management *now* and deal with the slightly more difficult programming model? Cost/benefit wise, GC just doesn't make a lot of sense in this sort of scenario unless you have a lot of resources to burn or a specific reason to choose a GC-mandatory platform.
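
For a concrete (if simplified) picture of that model, a per-frame arena is the kind of thing I mean; this is just a sketch in D, not code from any real engine:

import core.stdc.stdlib : malloc, free;

// A fixed-size per-frame arena: one malloc at startup, a bump pointer that
// is reset every frame, one free at shutdown. No collector ever runs.
struct FrameArena
{
    ubyte* base;
    size_t capacity;
    size_t used;

    static FrameArena create(size_t bytes)
    {
        FrameArena a;
        a.base = cast(ubyte*) malloc(bytes);
        a.capacity = bytes;
        return a;
    }

    void* alloc(size_t bytes)
    {
        if (used + bytes > capacity) return null;  // caller handles overflow
        auto p = base + used;
        used += bytes;
        return p;
    }

    void reset()   { used = 0; }     // called once per frame
    void release() { free(base); }   // called once at shutdown
}

Each frame allocates out of the arena and calls reset() when it ends, so every allocation is a pointer bump and there is never a pause you didn't schedule.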

Again, I'm not saying GC is bad, I'm saying that in this area, the cost/benefit ratio doesn't say you should spend your time improving the GC to make things work. For everyone else, GC is great, and I
applaud David Simcha's efforts to improve D's GC performance.

