Yigal Chripun wrote:
On 29/09/2009 00:31, Nick Sabalausky wrote:
"Yigal Chripun"<yigal...@gmail.com>  wrote in message
news:h9r37i$tg...@digitalmars.com...


These aren't just marginal performance gains, they can easily be up to
15-30% improvements, sometimes 50% and more. If this is too complex or
the risk is too high for you then don't use a systems language :)

Your approach makes sense if you are implementing, say, a calculator.
It doesn't scale to larger projects. Even C++ has overhead compared to
assembly, yet you are writing performance-critical code in C++, right?


It's *most* important on larger projects, because it's only on big systems
where small inefficiencies actually add up to a large performance drain.

Try writing a competitive real-time graphics renderer or physics simulator
(especially for a game console, where you're severely limited in your choice
of compiler - if you even have a choice), or something like Pixar's renderer,
without *ever* diving into asm, or at least low-level "unsafe" code. And
when it inevitably hits some missing optimization in the compiler and runs
like shit, try explaining to the dev lead why it's better to beg the
compiler vendor to add the optimization you want and wait around hoping they
finally do so, instead of just throwing in that inner optimization in the
meantime.

You can still leave the safe/portable version in there for platforms for
which you haven't provided a hand-optimization. And unless you didn't know
what you were doing, that inner optimization will still be small and highly
isolated. And since it's so small and isolated, not only can you still throw
in tests for it, but it's not as much harder as you'd think to verify
correctness. And if/when your compiler finally does get the optimization you
want, you can just rip out the hand-optimization and revert back to that
"safe/portable" version that you had still left in anyway as a fallback.



I think you took my post to an extreme; I actually do agree with the above description.

What you just said was basically:
1. write the portable/safe version
2. profile to find bottlenecks that the tools can't optimize, and optimize only those, while still keeping the portable version.

My objection was to what I felt was Jeremie's description of writing code from the get-go in a low-level, hand-optimized way, instead of what you described in your own words:

That wasn't what I said; I don't hand-optimize everything at a low level. I do profiling first. Only a few parts *known* to me to require optimization (e.g. matrix multiplication) are written in SSE from the beginning with a high-level fallback; there just happen to be a lot of them :)

What I argued against was your view that today's software is too big and complex to bother optimizing.

