On Tue, 16 Feb 2010 01:10:33 -0500, Walter Bright <newshou...@digitalmars.com> wrote:

> Steven Schveighoffer wrote:
>> For example, there is no possible way a person unfamiliar with computers
>
> That's a valid argument if you're writing a spreadsheet program. But programmers should be familiar with computers, and most definitely should be familiar with 2's complement arithmetic.

What I meant by that statement is that the behavior goes against common sense -- when it doesn't have to. It only makes sense to advanced programmers who understand the inner workings of the CPU, and even then, advanced programmers easily make mistakes.

When the result of an operation is an error 99.999% of the time (in fact, the exact percentage is (T.max-1)/T.max * 100), disallowing it is worth the cost of making the rare valid uses illegal.

This is no different in my mind from requiring comparison of an object to null to use !is instead of !=. If you remember, the compiler was dutifully doing exactly what the user wrote, but in almost all cases, the user really meant !is.

To re-iterate, I do *not* think unary - for unsigned types should be disabled. But I think the expression:

x = -(exp)

where x is an unsigned type and exp is an unsigned type (or a literal that can be interpreted as unsigned), should be an error. The only case where it works properly is when exp is 0.

Note that you can still allow the equivalent, more explicit form:

x = 0 - (exp)

This is not unary negation; it follows the rules of subtraction, which do not disallow wrapping past zero.

> Similarly, if you do much with floating point, you should be familiar with "What Every Computer Scientist Should Know About Floating-Point Arithmetic"
>
> http://docs.sun.com/source/806-3568/ncg_goldberg.html

Yes, but I'm not talking about normal math with unsigned types. I'm talking about a corner case where it is almost always an error. The case I'm talking about is equivalent to doing:

x = x / 0;

for floating point. One could argue that this should be statically disallowed, because it's guaranteed to be an error. This doesn't mean that:

x = x / y;

should be disallowed because y *might* be zero.

-Steve
