> Note that I've proposed a solution elsewhere to use c* (i.e. cint,
> clong, cuint, culong, etc.) to mean "checked" integral, then you can
> alias it to int or alias it to your struct type depending on debug
> flags you pass to dmd.  This should be a reasonable solution, I
> might actually use it in certain cases, if it's that easy.

I think it is a good proposal. However, the c* names conflict with
the prefix already used for the complex types (creal, cfloat, etc.),
so better names are needed. Let's say suint, sulong ("safe"), or some
other scheme.
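
A minimal sketch of what such a type could look like, following the
struct-plus-alias idea from the quoted text (SafeInt, sint and the
CheckedInts version identifier are names I made up for illustration,
not part of any actual proposal):

// Hypothetical checked type; only "+" is shown, the other operators
// would be analogous.
struct SafeInt
{
    int value;

    SafeInt opBinary(string op : "+")(SafeInt rhs) const
    {
        int r = value + rhs.value;   // D ints wrap on overflow
        // Signed addition overflowed iff both operands have the same
        // sign but the result's sign differs.
        if (((value ^ r) & (rhs.value ^ r)) < 0)
            throw new Exception("integer overflow");
        return SafeInt(r);
    }
}

// Checked in instrumented builds, a plain int otherwise:
version (CheckedInts)
    alias sint = SafeInt;
else
    alias sint = int;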

> >> I don't want runtime errors thrown in code that I didn't intend
> >> to throw.  Most of the time, overflows will not occur, so I don't
> >> want to go through all my code and have to throw these
> >> decorations up where I know it's safe.
> >
> > The idea is to add two switches to DMD that activate the integral
> > overflow checks (one for signed and one for signed and unsigned).
> > If you compile your code without those, the runtime tests will not
> > happen.
> See the solution I posted.  This can be done with a custom type and
> an alias.  It just shouldn't take over builtin types.

Not take over (replace) existing types, but provide alternative types
(see "suint" above). The idea is to put those types into the standard
language. Only then are they universally available and will take off.

> Are you kidding?  An integer operation is a single instruction.
> Testing for overflow requires another instruction.  Granted the test
> may be less than the operation, but it's going to slow things down.
> We are not talking about outputting strings or computing some
> complex value.  Integer math is the bread-and-butter of cpus.  It's
> what they were made for, to compute things.  Herein lies the problem
> -- you are going to be slowing down instructions that take up a
> large chunk of the code.  It doesn't take benchmarking to see that
> without hardware support, testing all integer math instructions is
> going to cause significant slowdown (I'd say at least 10%, probably
> more like 25%) in all code.

In fact, providing "safe" (i.e. overflow-checked) types will not
force you to use them. If you don't like them, or you consider that
you don't need to slow down your debug build (the release build
should run at the same speed), then you have the choice of using the
classic (let's say "unsafe") types.
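
To make the disputed cost concrete, here is roughly what the check
itself amounts to in D, where signed overflow wraps in two's
complement rather than being undefined (the function names are mine,
purely for illustration):

// Unchecked add: a single machine add.
int addUnchecked(int a, int b)
{
    return a + b;
}

// Checked add: the same add plus one overflow test; on x86 the test
// can compile down to inspecting the overflow flag (a "jo" branch).
int addChecked(int a, int b)
{
    int r = a + b;
    // Overflow occurred iff both operands share a sign and the
    // result's sign differs from it.
    if (((a ^ r) & (b ^ r)) < 0)
        assert(0, "integer overflow");
    return r;
}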

> Comparing them to array bounds tests isn't exactly conclusive,
> because array bounds tests prevent a much more insidious bug --
> memory corruption.  With overflow, you may have a subtle bug, but
> your code should still be sane (i.e. you can debug it).  With memory
> corruption, the code is not sane, you can't necessarily trust
> anything it says after the corruption occurs.

But you can use tools such as valgrind to detect memory corruption,
while you cannot use analysis tools to detect *unintended* overflows.

> Typically with an overflow bug, I have bad behavior that immediately
> shows.  With memory corruption, the actual crash may occur long
> after the corruption occurred.

That does not mean that "dormant" bugs like overflows cannot exist,
and in the long run they can be just as dangerous: for example, they
may also lead to memory corruption if a "positive" index overflows
and becomes negative. Besides, yes, memory corruption is a dangerous
bug, but the fact that it has been addressed does not mean that other
sources of bugs should now be neglected. Finally, "typically" is a
bit subjective here; it depends on each one's experience.
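
A minimal sketch of that failure mode, using a raw pointer so that no
bounds check intervenes (the code is illustrative, not from the
discussion):

import core.stdc.stdlib : malloc;

void main()
{
    int* buf = cast(int*) malloc(16 * int.sizeof);
    int idx = int.max;
    ++idx;          // silently wraps to int.min: the "positive"
                    // index is now negative
    buf[idx] = 42;  // out-of-bounds write through a raw pointer:
                    // memory corruption (or a crash), with no
                    // overflow error ever raised
}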
