On Mon, 15 Feb 2010 19:29:27 -0500, Michel Fortin <michel.for...@michelf.com> wrote:

On 2010-02-15 18:33:11 -0500, "Steven Schveighoffer" <schvei...@yahoo.com> said:

I should clarify: using - on an unsigned value should work, it just should
not be assignable to an unsigned type.  I guess I disagree with the
original statement of this post (that it should be disabled altogether),
but I think the compiler should reject something that is
99% of the time an error.
 i.e.
 uint a = -1; // error
uint b = 5;
uint c = -b; // error
int d = -b; // ok
auto e = -b; // e is type int

But should this work?

uint a = 0-1;
uint c = 0-b;
auto e = 0-b; // e is type int?

Through the integer promotion rules, these all work. They are essentially negation, but they are not the unary operator: they are binary subtraction. They could only be flagged after the optimizer rewrote them into a negation, and because optimization cannot change the semantic meaning of code, they have to remain allowed.

That is, typeof(uint - uint) is uint, no matter how you do it.

Unary negation is a different operator.
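
For illustration, here is a minimal sketch (assuming the behaviour of a current D compiler, not the proposed rule) showing that binary subtraction involving a uint is itself typed uint and simply wraps around:

import std.stdio : writeln;

void main()
{
    uint b = 5;
    auto e = 0 - b;              // int literal promoted: uint - uint => uint
    writeln(typeof(e).stringof); // prints "uint"
    writeln(e);                  // prints 4294967291, i.e. 2^32 - 5
}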


uint zero = 0;
uint a = zero-1;
uint c = zero-b;
auto e = zero-b; // e is type int?

No different from your first examples: e is of type uint, since uint - uint = uint.

This rule has good intentions, but it brings some strange inconsistencies. The current rules are much easier to predict, since they always behave the same whether you have a variable, a literal, or a constant expression.

There are plenty of strange inconsistencies in all aspects of computer math, but unary negation of an unsigned value to get another unsigned value is one of those inconsistencies that is 99% of the time not what the user expected, and it is easily flagged as an error.

For example, there is no possible way a person unfamiliar with computers (or, for that matter, most programmers who have not run into this) would believe that

b = 5;
a = -b;

would result in a being some large positive number. It's just totally unexpected, and totally avoidable.
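
To make the surprise concrete, here is a minimal sketch (again assuming current compiler behaviour, not the proposed rule) of exactly that case:

import std.stdio : writeln;

void main()
{
    uint b = 5;
    auto a = -b;                 // today a is typed uint, not int
    writeln(typeof(a).stringof); // prints "uint"
    writeln(a);                  // prints 4294967291, not -5
}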

-Steve
