On 4/30/2011 10:34 AM, Mariusz Gliwiński wrote:
> Hello,
> I'm trying to learn high-performance real-time programming. One of
> the things I wonder about is: should I use int/uint for all standard
> arithmetic operations, or int/short/byte depending on the actual case?
> I believe this question has the following subquestions:
> * Arithmetic computation performance
> * Memory access time
> My current compiler is DMD, but I'm interested in GDC as well.
> Lastly, one more question: could someone recommend any books or
> resources with this kind of information and tips that could be applied
> to D? I'd like to defer my own experiments with generated assembly and
> profiling, since I suppose people have already published general rules
> that I could apply to my programming.
> Thanks,
> Mariusz Gliwiński
My experience with this pattern of thinking is to use the largest data
type that makes sense, unless a profiler tells you that you need to do
something different. However, if you become obsessive about having 'the
perfectly sized integer types' for the code, it is easy to fall into the
trap of over-using unsigned types 'because the value can never be
negative'. Unsigned 8- and 16-bit values usually have a good reason to
be unsigned, but for 32- and 64-bit values it makes a lot less sense
most of the time.
When working with non-x86 platforms, other problems are usually much
more severe: more expensive thread-synchronization primitives, lack of
efficient variable bit-shifting (where the number of bits shifted is
determined at run time), non-existent branch prediction, or
floating-point code silently promoting to emulated double-precision
arithmetic on hardware that can only do single-precision floating point
natively, etc.