If you do this and use this style for a significant percentage of your whole program (say, more than 10-15% of it, unless the program is less than 5000 lines long), then you are writing C++ in D, and I think it's better for you to avoid D and to use C++ or C or asm :-)

No, I think that even using a limited subset of D would be quite liberating compared with the hoops I have to jump through in my C++ code. The new C++ standard will help some, but C++ isn't evolving fast enough.

You can try to write D programs that are faster than the C++ ones, instead. A +-20% performance difference is not that important; you can aim for a bigger difference.

I always aim for high performance, but it may prove a challenge with the D compiler/technology in its current state. I know this is improving and will continue to do so.

Try using a more specialized allocator, like pools, arenas, and so on; they can be much better than nedmalloc. I have seen programs get about twice as fast doing this.

The main thing that I do on the memory side is to try to keep my memory use compact and contiguous, which is why I use arrays so much.
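To illustrate the pool/arena idea mentioned above, here is a minimal bump-pointer arena sketch in C++ (purely illustrative, not code from my project; the names are made up):

#include <cstddef>
#include <cstdlib>

// Minimal bump-pointer arena: one contiguous block, no per-object free.
class Arena {
    char*       buffer;
    std::size_t capacity;
    std::size_t used;
public:
    explicit Arena(std::size_t bytes)
        : buffer(static_cast<char*>(std::malloc(bytes))), capacity(bytes), used(0) {}
    ~Arena() { std::free(buffer); }

    // Hand out the next aligned chunk; returns 0 when the arena is exhausted.
    void* allocate(std::size_t size, std::size_t align = sizeof(void*)) {
        std::size_t start = (used + align - 1) & ~(align - 1);
        if (start + size > capacity) return 0;
        used = start + size;
        return buffer + start;
    }

    // Release everything in one shot instead of freeing objects one by one.
    void reset() { used = 0; }
};

Because every allocation comes out of one contiguous block and everything is released together with reset(), related data stays packed, which is where the locality win over a general-purpose allocator comes from.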

I use dynamic arrays quite a lot, and if I switched all of these to use D's built-in GC-based arrays, I think I would see a tremendous performance drop.

I am a bit suspicious of this. GC scans can slow things down a little, but I'm not seeing this as a big problem so far. You can test and benchmark some of your theories. A problem I have seen is caused by the imprecise nature of the GC: false pointers keeping dead objects alive.

One thing that I noticed while benchmarking is that when I remove the call to nedfree in my overloaded delete operator, application performance suffers by 17%. You might expect performance to improve if memory is never freed; instead, locality of reference suffers. Thus, even if D's GC never ran a single collection cycle, it would still be 17% slower than nedmalloc. A moving GC would help this situation, but that's not the case with D.
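In case it helps, here is a minimal sketch of the kind of overload I'm talking about (not my actual code; the nedmalloc.h include name and the plain nedmalloc/nedfree calls are assumptions, and depending on how the header is configured they may instead live in the nedalloc namespace):

#include <new>
#include <cstddef>
#include "nedmalloc.h"   // assumed header name

void* operator new(std::size_t size)
{
    void* p = nedmalloc(size);
    if (!p)
        throw std::bad_alloc();
    return p;
}

void operator delete(void* p) throw()
{
    if (p)
        nedfree(p);   // dropping this call is the "never free" experiment above
}

The array forms (operator new[] / operator delete[]) would be forwarded the same way.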

Thanks for the pointers!

-Craig
