2008/4/25 Prateek Saxena <[EMAIL PROTECTED]>:
> On Thu, Apr 24, 2008 at 2:20 PM, Ralph Loader <[EMAIL PROTECTED]> wrote:
>  > > I am very interested in seeing how this optimization can remove
>  >  > arithmetic overflows.
>  >
>  >  int foo (char * buf, int n)
>  >  {
>  >         // buf+n may overflow if the programmer incorrectly passes
>  >         // a large value of n.  But recent versions of gcc optimise
>  >         // to 'n < 100', removing the overflow.
>  >         return buf + n < buf + 100;
>  >  }
>
>  This clearly is insecure coding. The optimization to replace "buf + n
>  < buf + 100" with "n < 100" assumes something about the value of buf.
>  I assume that the location of "buf" could change arbitrarily across
>  platforms depending on the memory layout.
>  I ran foo as follows, getting different outputs:
>

From an algebraic perspective "buf + n < buf + 100" should be the same
as "n < 100". There should really be a feature in C to detect the
overflow/carry case, so that overflows and wrap-arounds can be handled
explicitly.
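
For unsigned arithmetic, at least, wrap-around is well defined, so it
can be detected after the fact with a single comparison; a minimal
sketch (the helper name is mine):

/* Returns 1 if the unsigned addition b + c wrapped around. */
int uadd_wrapped (unsigned int b, unsigned int c, unsigned int *a)
{
    *a = b + c;      /* well defined: wraps modulo UINT_MAX + 1 */
    return *a < b;   /* wrapped exactly when the sum is smaller than b */
}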

For signed arithmetic, though, this is probably a fundamental problem with C.
If one takes the following:

int a,b,c;
a = b + c;

How does one detect if there was a signed int overflow?
Programmers are forced to jump through all sorts of hoops to do this check.
Fortunately there is an assembler instruction to do just this on most CPUs,
e.g. jo, jc and js on x86 (jump on overflow, carry and sign).
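
In portable C the check has to be made before the addition even
happens, since the overflow itself is undefined behaviour; roughly
like this (a sketch using limits.h; the helper name is mine):

#include <limits.h>

/* Returns 1 if computing b + c would overflow the range of int. */
int add_would_overflow (int b, int c)
{
    if (c > 0 && b > INT_MAX - c)
        return 1;   /* would overflow upwards */
    if (c < 0 && b < INT_MIN - c)
        return 1;   /* would overflow downwards */
    return 0;
}
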
It would be nice to be able to write this sort of C code.

int a,b,c;
a = b + c;
if (a overflowed) {
    handle_overflow();
}

One cannot simply code a special function to test the carry/overflow
flags, as the compiler might reorder the surrounding instructions and
the flag state would then no longer correspond to the addition being
checked.
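
To keep the addition and the flag test together one has to drop down
to compiler-specific inline assembly; an x86/GCC sketch (non-portable,
and the helper name is mine):

/* x86 only: fuse the add and the overflow-flag read into a single asm
   statement so the compiler cannot reorder anything in between. */
static int add_overflowed (int b, int c, int *result)
{
    int sum = b;
    unsigned char overflowed;
    __asm__ ("addl %2, %0\n\t"
             "seto %1"
             : "+r" (sum), "=q" (overflowed)
             : "r" (c)
             : "cc");
    *result = sum;
    return overflowed;
}

/* Usage: if (add_overflowed (b, c, &a)) handle_overflow (); */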

Why has the C language lacked this sort of functionality for so long?

James
