Steven Schveighoffer wrote:
are there any good cases besides this that Walter has? And even if there are, we are not talking about silently mis-interpreting it. There is precedent for making valid C code an error because it is error prone.


Here's where I'm coming from with this. The problem is that CPU integers are 2's complement and a fixed number of bits. We'd like to pretend they work just like the whole numbers we learned about in 2nd grade arithmetic. But they don't, and we can't fix it so they do. I think it's ultimately fruitless to try to make them behave other than what they are: fixed-size arrays of 2's complement bits.

So, we wind up with oddities like overflow, wrap-around, -int.min==int.min. Heck, we *rely* on these oddities (subtraction depends on wrap-around). Sometimes, we pretend these bit values are signed, sometimes unsigned, and we mix together those notions in the same expression.

There's no way to not mix up signed and unsigned arithmetic.

Trying to build walls between signed and unsigned integer types is an exercise in utter futility. They are both 2's complement bits, and it's best to treat them that way rather than pretend they aren't.

As for -x in particular, - is not negation. It's complement and increment, and produces exactly the same bit result for signed and unsigned types. If it is disallowed for unsigned integers, then the user is faced with either:

   (~x + 1)

which not only looks weird in an arithmetic expression, but then a special case for it has to be wired into the optimizer to turn it back into a NEG instruction. Or:

   -cast(int)x

That blows up when x happens to be a ulong: the cast silently truncates it to 32 bits before the negation. Whoops. It blows up even worse if x turns out to be a struct with overloaded opNeg and opCast: suddenly the opCast gets selected. Oops.

We could use a template:

    -MakeSignedVersionOf(x)

and have to specialize that template for every user-defined type, but, really, please no.
