bearophile wrote:
Jeremie Pelletier:
Again, that's a lazy view on programming. High level constructs are useful to isolate small and simple algorithms which are implemented at low level.

Software is inherently multi-scale. In probably 90-95% of a program's code, 
micro-optimizations aren't necessary, because those operations run only once 
in a while. But it often happens that certain loops run an enormous number of 
times, so even small inefficiencies inside them lead to low performance. 
That's why profiling helps.
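
As a concrete illustration (a minimal sketch of my own, the function names are 
invented), DMD's built-in profiler can show which part actually dominates the 
run time: compile with "dmd -profile", run the program, then read the 
generated trace.log.

import std.stdio;

double cheapSetup()        // called once, not worth micro-optimizing
{
    return 1.5;
}

double hotLoop(double x)   // the hot spot the profiler will point at
{
    double sum = 0;
    foreach (i; 0 .. 10_000_000)
        sum += x * i;
    return sum;
}

void main()
{
    writeln(hotLoop(cheapSetup()));
}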

This can be seen in how HotSpot (and modern dynamic-language JITs) work: 
usually virtual calls like the ones you find in a D program are quick, and 
they don't slow down code. Yet if a dynamic call prevents the compiler from 
performing a critical inlining, or such a dynamic call sits in the middle of 
critical code, it can lead to a slower program. That's why I see Java code run 
10-30% faster than D code compiled with LDC, not because of the GC and memory 
allocations, but simply because LDC isn't smart enough to inline certain 
virtual methods.
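
To make the inlining point concrete, here is a small sketch (the class and 
function names below are invented, purely for illustration): the call in 
sumAreas goes through the vtable, so unless the compiler can prove the dynamic 
type it can neither devirtualize nor inline it, while in sumAreasFinal the 
target is known statically.

class Shape
{
    double area() { return 0; }   // virtual by default: dispatched via the vtable
}

final class Circle : Shape
{
    double r = 1.0;
    override double area() { return 3.141592653589793 * r * r; }
}

double sumAreas(Shape[] shapes)
{
    double total = 0;
    foreach (s; shapes)
        total += s.area();   // dynamic call inside the loop; if it can't be
                             // devirtualized, its body can't be inlined
    return total;
}

double sumAreasFinal(Circle[] circles)
{
    double total = 0;
    foreach (c; circles)
        total += c.area();   // Circle is final, so the target is known
                             // statically and the call can be inlined
    return total;
}

void main()
{
    Shape[] shapes;
    Circle[] circles;
    foreach (i; 0 .. 4)
    {
        auto c = new Circle;
        shapes ~= c;   // implicit upcast of the element
        circles ~= c;
    }
    import std.stdio : writeln;
    writeln(sumAreas(shapes), " ", sumAreasFinal(circles));
}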

Certainly agreed on virtual calls: on my machine, I timed a simple example as executing 65 interface calls per microsecond, 85 virtual calls per microsecond, and 210 non-member function calls per microsecond. So you should almost never worry about the cost of interface calls, since they're so cheap, even though they are about 3.2 times slower than non-member function calls.
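
For reference, here is roughly the kind of micro-benchmark I mean (a sketch 
only; the type and function names are mine, the absolute numbers vary a lot 
with the machine, the compiler, and the flags, and with aggressive 
optimization the calls may be removed entirely).

import std.datetime.stopwatch : benchmark;
import std.stdio : writeln;

interface ICounter { int next(); }

class Counter : ICounter
{
    int n;
    int next() { return ++n; }
}

int freeNext(ref int g) { return ++g; }   // plain non-member function

Counter c;
ICounter ic;
int n;

void interfaceCall() { ic.next(); }   // call through the interface
void virtualCall()   { c.next(); }    // virtual call through a class reference
void plainCall()     { freeNext(n); } // non-member function call

void main()
{
    c = new Counter;
    ic = c;

    enum reps = 10_000_000;
    auto times = benchmark!(interfaceCall, virtualCall, plainCall)(reps);

    writeln("interface:  ", times[0]);
    writeln("virtual:    ", times[1]);
    writeln("non-member: ", times[2]);
}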

In most cases, the body of a method is a lot more expensive than the method call, so even when optimizing, it won't often benefit you to use free functions rather than class or interface methods.
