I have found a Reddit discussion from a few days ago:
http://www.reddit.com/r/programming/comments/cdwz5/the_perils_of_unsigned_iteration_in_cc/

It contains this comment, which I quote (I have no idea if it's true), plus follow-ups:

>At Google using uints of all kinds for anything other than bitmasks or other 
>inherently bit-y, non computable things is strongly discouraged. This includes 
>things like array sizes, and the warnings for conversion of size_t to int are 
>disabled. I think it's a good call.

I have expressed similar ideas here:
http://d.puremagic.com/issues/show_bug.cgi?id=3843

Unless someone explains to me why I am wrong, I will keep thinking that using 
unsigned words to represent lengths and indexes, as D does, is wrong and 
unsafe, and that using signed words (I think C# uses ints for that purpose) 
would be a better design choice for D.

In a language as numerically unsafe as D (silly C-derived conversion 
rules, fixed-size numbers used everywhere by default, no runtime checks for 
numerical overflow), the usage of unsigned numbers is justified only inside 
bit vectors, bitwise operations, and a few other similar situations.

If D wants to be "a systems programming language. Its focus is on combining the 
power and high performance of C and C++ with the programmer productivity of 
modern languages like Ruby and Python.", it must recognize that numerical 
safety is one of the far-from-secondary things that make languages like Ruby 
and Python more productive.

Bye,
bearophile
