Paul G. Allen wrote:

So, the way Linux does it is not bad at all, but the way programmers fail to initialize the memory as soon as it's allocated *is* bad. Allocate the memory and initialize it when it's allocated, not later on when you *might* use it. That way, it's there up front, before the long computation, and there are no surprises halfway through. (This is why the C or C++ compiler warns about uninitialized objects.)

You do embedded software, right? And you *still* make that statement?

Programmers like you are the reason that all of the Moore's law gains go directly into the trash, programs start up like sludge, and "Please wait..." screens are the norm.

It is *not* always necessary to initialize large hunks of memory, and doing so can, in fact, turn a very nice O(log n) computation that the computer can happily chew through into an O(n) computation that moves like molasses.
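
To make that concrete (my own sketch, not something from this thread): the classic Briggs/Torczon "sparse set" trick is one way to use a universe-sized array without ever clearing it. The sparse array below is deliberately left uninitialized; an entry only counts as a member if the dense/sparse cross-check agrees, so resetting the whole set is O(1) instead of an O(n) memset. (Valgrind and friends will flag the read of uninitialized memory -- that's expected here, and the trick assumes plain unsigned integers with no trap representations.)

#include <stdio.h>
#include <stdlib.h>

struct sparse_set {
    size_t *sparse;   /* universe-sized, deliberately never initialized */
    size_t *dense;    /* members, in insertion order */
    size_t  count;    /* number of members */
    size_t  universe; /* capacity of sparse[] */
};

int ss_init(struct sparse_set *s, size_t universe)
{
    s->sparse = malloc(universe * sizeof *s->sparse);  /* no memset */
    s->dense  = malloc(universe * sizeof *s->dense);
    s->count  = 0;
    s->universe = universe;
    return (s->sparse && s->dense) ? 0 : -1;
}

int ss_contains(const struct sparse_set *s, size_t x)
{
    size_t i = s->sparse[x];           /* may be garbage: that's fine */
    return i < s->count && s->dense[i] == x;
}

void ss_insert(struct sparse_set *s, size_t x)
{
    if (!ss_contains(s, x)) {
        s->dense[s->count] = x;
        s->sparse[x] = s->count++;
    }
}

void ss_clear(struct sparse_set *s)
{
    s->count = 0;                      /* O(1): no giant memset */
}

int main(void)
{
    struct sparse_set s;
    if (ss_init(&s, 1u << 20))         /* ~1M-entry universe, never cleared */
        return 1;
    ss_insert(&s, 123456);
    printf("%d %d\n", ss_contains(&s, 123456), ss_contains(&s, 7)); /* 1 0 */
    ss_clear(&s);
    printf("%d\n", ss_contains(&s, 123456));                        /* 0 */
    free(s.sparse);
    free(s.dense);
    return 0;
}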

In addition, when I am allocating a whopping chunk of memory, I'm generally *about to index it and put something in it myself*. Like computed data. Or bitvector-tracked objects. Or ... And your silly initialization loop just blew out all of my cache lines for any data I had previously.
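
Here's the pattern I'm complaining about, as a hypothetical sketch (my names, not anyone's real code): the buffer gets filled with computed data immediately, so the up-front clear does nothing except stream all n entries through the cache one extra time and evict whatever was hot before it.

#include <stdlib.h>
#include <string.h>

double *table_with_pointless_clear(size_t n)
{
    double *t = malloc(n * sizeof *t);
    if (!t)
        return NULL;
    memset(t, 0, n * sizeof *t);       /* touches all n entries ... */
    for (size_t i = 0; i < n; i++)     /* ... and so does this, right after */
        t[i] = (double)i * 0.5;
    return t;
}

double *table(size_t n)
{
    double *t = malloc(n * sizeof *t);
    if (!t)
        return NULL;
    for (size_t i = 0; i < n; i++)     /* every entry written before any read */
        t[i] = (double)i * 0.5;
    return t;
}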

Finally, if you didn't always get zero-initialized memory, you'd actually notice the buffer and sentinel underruns and overruns, because the uninitialized values would bite, and you'd have to *fix* them rather than shrug, "Hey, it runs on Linux. What's your problem?"
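
A hypothetical example of that trap (not code from anyone in this thread): the copy loop below forgets the terminating NUL. On a fresh zero-filled page from the kernel the string still looks fine; on recycled, non-zero heap memory, strlen() and printf() walk right off the end.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *copy_name_buggy(const char *src)
{
    size_t len = strlen(src);
    char *dst = malloc(len + 1);       /* room reserved for the NUL ... */
    if (!dst)
        return NULL;
    for (size_t i = 0; i < len; i++)   /* ... but it is never written */
        dst[i] = src[i];
    return dst;                        /* missing: dst[len] = '\0'; */
}

int main(void)
{
    /* Dirty a same-sized chunk first so the next malloc() of that size is
     * likely (not guaranteed -- allocator-dependent) to return recycled,
     * non-zero memory instead of a pristine zero page. */
    char *junk = malloc(strlen("sentinel") + 1);
    if (junk) {
        memset(junk, 'X', strlen("sentinel") + 1);
        free(junk);
    }

    char *name = copy_name_buggy("sentinel");
    if (name) {
        /* May print "sentinel", may print trailing garbage, may crash:
         * it depends entirely on what was in the recycled block. */
        printf("copied: \"%s\"\n", name);
        free(name);
    }
    return 0;
}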

Initialize memory when it makes sense.

Yeah, sometimes you need to initialize memory up front. Sometimes it doesn't matter, so do it anyhow (the computation is O(n log n), so the O(n) memory clear is irrelevant). However, the time when you need to allocate huge chunks is normally *exactly* the time when you actually need to *think* about it.

-a


--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list
