On Tue, 21 Jun 2016 16:42:02 +0100, Simon Hobson wrote:
> An interesting task would be to look at the various algorithms offered, work 
> out how the compiler is likely to handle them when turning into code (or even 
> look at the code generated), and then work out the efficiency of the 
> different methods in terms of CPU cycles/character.
> Of course, it's possible that different methods may come out best depending 
> on the amount of whitespace to be removed.

And results will differ depending on platform, toolchain, 
compiler version, optimizer setting, and so on.
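To pick up the quoted whitespace example: the "clear and simple" baseline that any benchmark should be measured against is a plain two-cursor, in-place compaction. This is only a sketch of my own (function name and in-place semantics are my assumption, not anything from the thread):

```c
#include <ctype.h>

/* Remove all whitespace characters from s, in place.
 * One read cursor, one write cursor; each character is
 * examined exactly once. Returns s for convenience. */
char *remove_whitespace(char *s)
{
    char *r = s;  /* read cursor  */
    char *w = s;  /* write cursor */

    while (*r) {
        if (!isspace((unsigned char)*r))
            *w++ = *r;
        r++;
    }
    *w = '\0';
    return s;
}
```

An optimizing compiler will typically turn this obvious loop into quite decent code on its own, which is rather the point of the advice below.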

Another aspect beyond mere runtime efficiency is this: 

Source code is written for humans, not machines! Clarity and 
simplicity help minimize maintenance cost, and can thus 
easily outweigh a slight runtime penalty. Anyone who has had 
to spend a considerable amount of time trying to grok what 
some "clever", "manually optimized" code is supposed to do 
knows what I'm talking about. 

This is especially embarrassing when your past self was the 
wise guy who wrote that "clever" code a few months ago. (Been 
there, done that, got the t-shirt, wore it out.)

Some "soft" but generally helpful rules to keep in mind:

* Use clear, simple, and generally well-understood algorithms. 

* Write easy-to-read, easy-to-maintain code. If that means 
  using another helper variable, or a goto, or the like - so be it! 

* Do not try to outsmart the people who wrote the optimizing 
  compiler. Many of the old "brilliant" hacks turn out to be 
  counterproductive on modern machines and/or implementations.

* 1. Make it run.  2. Make it run correctly.  3. Make it run 
  faster only if it really is the bottleneck in the total system.

My 2ct, FWIW.

Regards
Urban
_______________________________________________
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng