I was surprised by the behavior of division. The resulting type of the division in the example below is uint, and the value is incorrect. I would expect that when one of the operands is signed, the result would be a signed type:
int a = -6;
uint b = 2;
auto c = a / b;          // c is uint, value 2147483645
int d = a / b;           // int, 2147483645
auto e = a / cast(int)b; // int, -3 (ok)

I have had problems with mixing int and uint for a long time, so I tested a few more expressions. Here are the results:

auto f = a - b; // uint, 4294967288
auto g = a + b; // uint, 4294967292
auto h = a < b; // bool, false
auto i = a > b; // bool, true

Recently, while hunting a bug in templated code, I created a templated function for operator <, which requires both arguments to be either signed or unsigned. Fortunately, such a function was quite easy to write in D; if it weren't possible, I don't know whether I would ever have found where the ints and uints were coming from...

import std.traits : isSigned, isUnsigned;

// true when both types have the same signedness
bool sameSign(A, B)()
{
    return (isUnsigned!(A) && isUnsigned!(B)) || (isSigned!(A) && isSigned!(B));
}

// "less than" that rejects mixed signed/unsigned operands at compile time
bool lt(A, B)(A a, B b)
{
    static assert(sameSign!(A, B)());
    return a < b;
}

Could somebody please tell me why this behavior, when mixing signed and unsigned, is preferred over one that computes the correct result? If it cannot be changed, would it at least be possible for the compiler to emit an error or warning when such an incorrect calculation could occur? If it is possible in D code to require same-signed types for a function, it is definitely possible for the compiler to require an explicit cast in such cases.
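As far as I understand, what happens mechanically is that D follows C's "usual arithmetic conversions": when an int and a uint meet in one expression, the int operand is implicitly converted to uint before the operation runs, so -6 becomes 4294967290. A minimal check of that reading:

void main()
{
    int a = -6;
    uint b = 2;
    // -6 reinterpreted as uint is 2^32 - 6:
    assert(cast(uint) a == 4294967290);
    // ...and dividing that by 2 gives exactly the "incorrect" value above:
    assert(cast(uint) a / b == 2147483645);
}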
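For completeness, here is a sketch of a sign-aware comparison that computes the mathematically correct result instead of rejecting mixed operands; safeLt is just an illustrative name of mine, and the sketch assumes both operands have the same width, as int and uint do here:

import std.traits : isSigned, isUnsigned;

// Sketch: mathematically correct "less than" for mixed signedness,
// assuming equal-width operands such as int and uint.
bool safeLt(A, B)(A a, B b)
{
    static if (isSigned!(A) && isUnsigned!(B))
        return a < 0 || cast(B) a < b;   // any negative a is less than any b
    else static if (isUnsigned!(A) && isSigned!(B))
        return b >= 0 && a < cast(A) b;  // a can only be less if b is non-negative
    else
        return a < b;                    // same signedness: built-in < is correct
}

unittest
{
    int a = -6;
    uint b = 2;
    assert(!(a < b));      // built-in: a wraps to 4294967290
    assert(safeLt(a, b));  // sign-aware: -6 < 2 as expected
}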