On 12/11/2012 3:44 PM, foobar wrote:
Thanks for proving my point. After all, you are a C++ developer, aren't you? :)

No, I'm an assembler programmer. I know how the machine works, and C, C++, and D map onto that, quite deliberately. It's one reason why D supports the vector types directly.


Seriously though, it _is_ a trick and a code smell.

Not to me. There is no trick or "smell" to anyone familiar with how computers 
work.


I'm fully aware that computers use 2's complement. I'm also aware of the fact
that the type has an "unsigned" label all over it. You see it right there in
that 'u' prefix of 'int'. An unsigned type should semantically entail _no sign_
in its operations. You are calling a cat a dog and arguing that dogs bark?
Yeah, I completely agree with that notion, except we are still talking about
_a cat_.

Andrei and I have endlessly talked about this (he argued your side). The inevitable result is that signed and unsigned types *are* conflated in D, and have to be, otherwise many things stop working.

For example, p[x]. What type is x?

Integer signedness in D is not really a property of the data; it is only how one happens to interpret the data in a specific context.
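
A minimal sketch of that conflation (the values are made up, and it assumes a plain @system main so taking the address of an array element is allowed):

import std.stdio;

void main()
{
    // int implicitly converts to uint in D, so the same bit pattern is reused.
    uint mask = -1;          // all bits set: 4294967295
    writeln(mask);

    // Indexing with a *signed* offset is routine and has to keep working.
    int[] data = [10, 20, 30, 40];
    int* p = &data[2];
    int x = -1;              // a negative index simply steps backwards
    writeln(p[x]);           // prints 20
}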


To answer your question: yes, I would enforce overflow and underflow checking
semantics. Any negative result assigned to an unsigned type _is_ a logic error.
You can claim that:
uint a = -1;
is perfectly safe and has a well-defined meaning (well, for C programmers, that
is), but what about:
uint a = b - c;
What if that calculation results in a negative number? What should the compiler
do? Well, there are _two_ equally plausible interpretations:
a. The overflow was intended, as in the mask = -1 case; or
b. The overflow is a _bug_.

The user should be made aware of this and should decide how to handle it. It
should _not_ be handled implicitly by the compiler, letting bugs go unnoticed.

I think C# solved this _way_ better than C/D.

C# has overflow checking off by default. It is enabled either with a checked { } block or with a compiler switch. I don't see that as "solving" the issue in any elegant or natural way; it's more of a clumsy hack.

But also consider that C# does not allow pointer arithmetic, or array slicing. Both of these rely on wraparound 2's complement arithmetic.
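
A rough D sketch of both behaviors; b and c are made-up values, and the explicit check uses druntime's core.checkedint (assuming a reasonably recent compiler), which is about the closest thing here to C#'s checked block:

import std.stdio;
import core.checkedint : subu;

void main()
{
    uint b = 3, c = 5;

    // Default D/C behavior: the subtraction wraps modulo 2^32.
    uint a = b - c;
    writeln(a);              // 4294967294

    // Explicit checking, roughly what a C# checked block buys you:
    bool overflow = false;
    uint r = subu(b, c, overflow);
    writeln(overflow);       // true -- the caller decides what that means

    // Slice lengths come out of the same unsigned arithmetic.
    int[] data = [1, 2, 3, 4, 5];
    auto tail = data[2 .. $];
    writeln(tail.length);    // 3 (a size_t difference: $ - 2)
}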


Another data point would be (S)ML, a compiled language that requires _explicit
conversions_ and has a very strong type system. Its programs compile to
efficient native executables, and the strong typing lets both the compiler and
the programmer reason better about the code. Thus programs are more correct and
can be better optimized by the compiler. In fact, several languages are
implemented in ML because of its stronger guarantees.

ML has been around for 30-40 years, and has failed to catch on.
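
For reference only, the explicit-conversion style described above can be approximated in D with std.conv.to, which refuses a value that doesn't fit at run time; this is a sketch of the style, not of how (S)ML itself works:

import std.conv : to, ConvOverflowException;
import std.stdio;

void main()
{
    int b = 3, c = 5;        // made-up values
    try
    {
        // Explicit, checked conversion: a negative result is rejected.
        uint a = to!uint(b - c);
        writeln(a);
    }
    catch (ConvOverflowException e)
    {
        writeln("negative result does not fit in uint");
    }
}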
