On Mon, 04 Jul 2011 16:16:04 -0400, eles <e...@eles.com> wrote:

>> Yes, non-release mode is slower, but we are probably talking about a
>> very significant slowdown here.  A lot of integer math happens in D.
>
> What about testing only for user-selected variables/types?

That's fine; just define a new type that does this, so it's user-selected. The ultimate goal is to define something like:

struct checkedInt { ... } // throws on overflow

debug(checkOverflows)
  alias checkedInt cint;
else
  alias int cint; // default to no overflow checking

Now you use cint anywhere you want to selectively enable overflow checking. I don't know if the language is good enough to allow the definition of checkedInt to behave just like a built-in int.
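
A minimal sketch of what the checkedInt body might look like, assuming D2's opBinary operator overloading; the widen-then-check approach and the exception message are my own choices here, and a complete version would also need assignment operators, comparisons, and conversions:

struct checkedInt
{
    int value;

    // Do the arithmetic in a wider type, then verify the result
    // still fits in an int before narrowing back.
    checkedInt opBinary(string op)(checkedInt rhs)
        if (op == "+" || op == "-" || op == "*")
    {
        long r = mixin("cast(long) value " ~ op ~ " rhs.value");
        if (r < int.min || r > int.max)
            throw new Exception("integer overflow");
        return checkedInt(cast(int) r);
    }
}

Used through the cint alias above, the checking then costs nothing unless you compile with debug=checkOverflows.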


>> I think a much more effective fix for the language would be to make
>> slice length a signed type.  Then you simply eliminate 99% of integer
>> overflows (it's very rare that a signed integer overflows, but not so
>> unlikely that an unsigned one does).
>
> I do not like that (Java-like) path too much. Why lose half of the
> length range?

I'm not saying it's the best solution, but it does solve a very common problem. The largest issue with overflow is for unsigned types: most integers, signed or not, are closer to zero than to their maximum, so a signed integer protects against the most common case of overflow -- when the value goes below zero. Unless I'm solving coding puzzles, I rarely have cases where an integer exceeds its maximum.
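
As a hypothetical illustration of the kind of bug meant here (the variables are made up), unsigned subtraction silently wraps around instead of going negative:

import std.stdio;

void main()
{
    auto a = new int[3];
    auto b = new int[5];

    // a.length and b.length are size_t (unsigned), so 3 - 5 wraps
    // around to a huge positive value instead of -2, and the
    // comparison silently gives the wrong answer.
    if (a.length - b.length > 0)
        writeln("branch taken: the unsigned subtraction wrapped");
}

With a signed length, the subtraction would simply be -2 and the comparison would behave as expected.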

Note that for 64-bit D, this effectively becomes a moot point -- a length of 2^63-1 is larger than any possible memory configuration you can currently have.

There is also the possibility of defining a large_array type which does use size_t for the length.
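
Such a large_array could be as simple as a struct pairing a pointer with a size_t length; the name and layout here are only an assumption, and a real version would need slicing, appending, and so on:

struct large_array(T)
{
    T* ptr;
    size_t length; // keeps the full unsigned range for the length

    ref T opIndex(size_t i)
    {
        assert(i < length, "index out of bounds");
        return ptr[i];
    }
}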

Anyway, this doesn't mean I think signed lengths are better than unsigned lengths. It just means I think solving overflows using signed lengths is a better option than solving them using overflow detection and exception throwing.

-Steve
