This is a good point. Further to that, keep in mind locality of reference, i.e. the performance impact of data being pushed out of the caches. Using machine-word-sized variables for the few variables that really need high performance can give a small speed-up, but using them extensively can increase the program's data size, raising the frequency of cache misses and causing large slow-downs.
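
As a rough illustration of the footprint difference (a minimal sketch; the cache-line arithmetic in the comments assumes the common 64-byte line size), compare the memory taken by the same element count at different widths:

    import std.stdio;

    void main()
    {
        enum n = 1024;
        byte[n] small; // 1 KiB: about 16 cache lines at 64 bytes/line
        long[n] wide;  // 8 KiB: about 128 cache lines for the same count

        writefln("byte[%s]: %s bytes", n, small.sizeof);
        writefln("long[%s]: %s bytes", n, wide.sizeof);
    }

Whether the narrower type wins depends on the access pattern: for a handful of hot scalars the machine word size is fine, but for large arrays the working-set reduction usually dominates.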


On 01/05/11 19:28, Dmitry Olshansky wrote:
On 30.04.2011 19:34, Mariusz Gliwiński wrote:
Hello,
I'm trying to learn high-performance real-time programming.

One of the things I wonder about is:
Should I use int/uint for all standard arithmetic operations, or
int/short/byte depending on the actual case?
I believe this question has the following subquestions:
* Arithmetic computation performance
* Memory access time

My current compiler is DMD, but I'm interested in GDC as well.

Lastly, one more question:
Could someone recommend any books/resources with this kind of
information and tips that could be applied to D? I'd like to defer my
own experiments with generated assembly and profiling, since I suppose
people have already published general rules that I could apply to my
programming.

Thanks,
Mariusz Gliwiński
I find Agner Fog's guides on optimization for x86 the best source on
such architecture specific matters.
http://www.agner.org/optimize/

Citing the relevant part from the C++ optimization guide (on integers):
>Integers of smaller sizes (char, short int) are only slightly less
>efficient. In most cases, the compiler will convert these types to
>integers of the default size when doing calculations, and then use
>only the lower 8 or 16 bits of the result. You can assume that the
>type conversion takes zero or one clock cycle. In 64-bit systems,
>there is only a minimal difference between the efficiency of 32-bit
>integers and 64-bit integers, as long as you are not doing divisions.
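
To see that promotion in action in D (a minimal sketch; D applies the usual C-style integer promotions, so adding two shorts yields an int):

    import std.stdio;

    void main()
    {
        short a = 1000;
        short b = 2000;
        auto c = a + b;               // operands promoted to int before the add
        writeln(typeof(c).stringof);  // prints "int"
        // short d = a + b;           // compile error: the int result cannot
        //                            // implicitly narrow back to short
    }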


