On Saturday, 17 May 2025 at 13:09:40 UTC, Jonathan M Davis wrote:
On Friday, May 16, 2025 1:19:41 PM Mountain Daylight Time H. S. Teoh via Digitalmars-d-learn wrote:
Welcome to your first encounter with why I hate D's narrow integer promotion rules.

It's because C (and I'm pretty sure the CPU as well) promotes integer types smaller than 32 bits to 32 bits to do arithmetic on them. So, in both C and D, the result is int. In general, C code is supposed to have the same semantics in D, or it's not supposed to compile, and Walter is insistent that the behavior of arithmetic follow C's (the one exception I'm aware of being that D defines what happens with overflow whereas C does not).

The result of unary operations should be of the same type as the source.
The result of binary operations should be of the common type.

It's ok to actually perform the calculation in the machine word size (if that's large enough), but the result should be converted back to the common type, even if that conversion is lossy.

The number of places where this would produce a result different from C is very small, and most of those are bugs anyway.

If the programmer doesn't want to lose accuracy, they should use wider types to start with.
