Jeremie Pelletier wrote:
Don wrote:
Jeremie Pelletier wrote:
Yigal Chripun wrote:
On 29/09/2009 16:41, Jeremie Pelletier wrote:

What I was arguing about was your view that today's software is too big and complex to bother optimizing.


that is not what I said.
I was saying that hand-optimized code needs to be kept to a minimum and used only for visible bottlenecks, because the risk of introducing unsafe low-level code is greater in bigger, more complex software.

What's wrong with taking a risk? If you know what you're doing, where is the risk, and if not, how will you learn? If you write your software correctly, you can add countless assembly optimizations and never compromise the security of the entire thing, because these optimizations are isolated; if it crashes there, you have only a narrow area to debug.

There are some parts where hand optimizing is almost useless, like network I/O: latency is already so high that faster code won't make a difference.

And sometimes the optimization doesn't even need assembly; it just requires a different high-level construct or a different algorithm. The first optimization is to get the most efficient data structures and algorithms for the task, and THEN, if you can't optimize it any further, you dig into assembly.
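
For example (a made-up sketch, nothing from any real code): swapping repeated linear scans for an associative array buys far more than any amount of asm on the scan itself:

// O(n) per lookup: the usual first attempt
bool containsSlow(int[] haystack, int needle)
{
    foreach (x; haystack)
        if (x == needle)
            return true;
    return false;
}

// O(1) average per lookup: build the set once, query it many times
bool containsFast(bool[int] set, int needle)
{
    return (needle in set) !is null;
}

// building the set is a one-liner:
// bool[int] set;
// foreach (x; haystack) set[x] = true;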

People seem to think assembly is something magical and incredibly hard; it's not.

Jeremie

Also, if you're using asm on something other than a small, simple loop, you're probably doing something badly wrong. Therefore, it should always be localised, and easy to test thoroughly. I don't think local extreme optimisation is a big risk.
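
For instance (hypothetical names, just a sketch): keep the tuned routine next to the obvious reference implementation and hammer both in a unittest:

unittest
{
    uint seed = 12345;
    foreach (len; 1 .. 64)
    {
        auto a = new uint[](len);
        foreach (ref x; a)
        {
            seed = seed * 1664525 + 1013904223; // tiny LCG, deterministic
            x = seed;
        }
        // sumOptimized would be the asm version, sumNaive the obvious loop
        assert(sumOptimized(a) == sumNaive(a));
    }
}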

That's also how I do it once I find the ideal algorithm. I've never had any problems or seen any risk with this technique; I did see some good performance gains, however.

Greater risks come from using more complicated algorithms. Brute-force algorithms are always the easiest ones to get right <g>.

I'm not sure I agree with that. Those algorithms are pretty isolated and really easy to write unittests for, so I don't see where the risk is when writing more complex algorithms; it's obviously harder, but not riskier.

By "riskier" I mean "more chance of containing an error".

I'm partly basing this on my recent experience with writing BigInt. The low-level asm routines are easy to get right, and it's easy to tell when you've got them wrong. They do brute-force stuff, like schoolbook O(n^2) multiplication, and importantly, _there are no special cases_, because it needs to be fast. But the higher-level O(n^1.3) multiplication algorithms are full of special cases, and that's where the bugs are.
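
The schoolbook case is essentially just this (a simplified sketch over base-2^32 digits, not the actual BigInt code); every digit pair gets exactly the same treatment, carry and all:

uint[] mulSchoolbook(const(uint)[] x, const(uint)[] y)
{
    auto r = new uint[](x.length + y.length); // zero-initialized
    foreach (i, xi; x)
    {
        ulong carry = 0;
        foreach (j, yj; y)
        {
            // 32x32 -> 64 bit multiply, plus what's already in r, plus carry
            ulong t = cast(ulong)xi * yj + r[i + j] + carry;
            r[i + j] = cast(uint)t;
            carry = t >> 32;
        }
        r[i + y.length] = cast(uint)carry;
    }
    return r;
}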







