On 08.01.2013 17:12, Benjamin Thaut wrote:
On 08.01.2013 16:46, H. S. Teoh wrote:
So how much experience do you have with game engine programming to
make such statements?
[...]

Not much, I'll admit. So maybe I'm just totally off here. But the last
two sentences weren't specific to game code, I was just making a
statement about software in general. (It would be a gross
misrepresentation to claim that only 10% of a game is performance
critical!)


T


So, to give a little background about me: I'm currently doing my
master's degree in informatics, focused on media-related programming
(e.g. games, applications with other visual output, mobile apps, etc.).

Besides my studies I'm working at Havok, the biggest middleware company
in the gaming industry. I've been working there for about a year. I also
have contacts with people working at Crytek.

My impression so far: no one writing a triple-A game title or engine is
even remotely interested in using a GC. Game engine programmers will do
almost anything to get better performance on a given platform. Really
elaborate work is done just to gain 1% more performance. Because of
that, a GC is the very first thing every serious game engine programmer
will kick out. You have to keep in mind that most games run at 30 FPS.
That means you have only 33 ms to do everything: rendering, simulating
physics, running the game logic, handling network input, playing
sounds, streaming data, and so on. Some games even aim for 60 FPS,
which makes it even harder, as you then have only 16 ms to compute
everything. Everything is performance critical if you try to achieve
that.

I also know that Crytek used Lua for game scripting in Crysis 1. It was
one of the reasons they never managed to get it onto the consoles (PS3,
Xbox 360). In Crysis 2 they removed all the Lua game logic and rewrote
everything in C++ to get better performance.

Doing pooling with a GC enabled still wastes a lot of time, because
when pooling is used, almost all data survives a collection anyway
(most of it lives in pools). So when the GC runs, most of its work is
wasted: it scans instances that are going to survive regardless.
Pooling is just another form of manual memory management, and I don't
find it a valid argument for using a GC.

Also, my own little test case (a game I wrote for university) has shown
that I get a 300% improvement by not using a GC. When I started writing
the game I was convinced that one could make a game work with a GC at
only a small performance cost (5%-10%). I had already heavily optimized
the game using background knowledge of how the GC works. I even did
some manual memory management for memory blocks that were guaranteed
not to contain any pointers to GC data.
Despite all this, I got a 300% performance improvement after switching
to pure manual memory management and removing the GC from druntime.

If D wants to get into the gaming space, there has to be a GC-free
option. Otherwise D will not even be considered when programming
languages are evaluated.

Kind Regards
Benjamin Thaut


Without dismissing your experience in game development, I think your experience was colored by the quality of D's GC implementation.

After all, there are Java VMs driving missiles and ship battle systems, which have even tighter timing requirements.

--
Paulo
