Re: US-CERT Vulnerability Note VU#162289

2008-05-12 Thread Robert C. Seacord

Ian,

Sounds great, thanks.  I'll work with Chad to get the vul note updated 
accordingly.


rCs


Robert C. Seacord [EMAIL PROTECTED] writes:

  

Once a new version or patch is available that will warn users that
this optimization is taking place, I will recommend that we change the
work around from "Avoid newer versions of gcc" to "Avoid affected
versions of gcc" and/or recommend that users download the patch /
revision.



The behaviour of pointer overflow has now changed as of the following
(as yet unreleased) versions:

gcc 4.2.4
gcc 4.3.1
gcc 4.4.0

and all subsequent versions (4.2.x where x >= 4, 4.3.y where y >= 1,
4.z where z >= 4).

The optimization under discussion is for comparisons between P + V1
and P + V2, where P is the same pointer and V1 and V2 are variables of
some integer type.  The C/C++ language standards permit this to be
reduced to a comparison between V1 and V2.  However, if V1 or V2 are
such that the sum with P overflows, then the comparison of V1 and V2
will not yield the same result as actually computing P + V1 and P + V2
and comparing the sums.
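
A minimal sketch of the comparison being folded (illustrative only;
the function and variable names here are invented):

#include <stdio.h>

/* With the optimization enabled, the comparison below may be
   reduced to (v1 < v2), which differs from the unoptimized
   result whenever p + v1 or p + v2 wraps around. */
int in_order(char *p, int v1, int v2) {
    return p + v1 < p + v2;
}

int main(void) {
    char buf[16];
    printf("%d\n", in_order(buf, 3, 12));   /* prints 1 */
    return 0;
}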

The new behaviour as of the above releases is that this optimization
is performed by default at -O2 and above, including -Os.  It is not
performed by default at -O1 or (of course) -O0.  The optimization may
be enabled for -O1 with the -fstrict-overflow option.  The
optimization may be disabled for -O2 and above with the
-fno-strict-overflow option.

When the optimization is enabled, cases where it occurs may be
detected by using -Wstrict-overflow=N where N >= 3.  Note that using
this warning option is likely to yield a number of false positive
reports--cases where this or other overflow optimizations are being
applied, but where there is no actual problem.
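
For example, a check like the one below can be flagged (a sketch;
the exact command lines and diagnostic wording vary by gcc version):

/* wrapcheck.c -- hypothetical file name.
   gcc -O2 -Wstrict-overflow=3 -c wrapcheck.c
     may warn that the comparison is simplified on the assumption
     that pointer overflow does not occur;
   gcc -O2 -fno-strict-overflow -c wrapcheck.c
     disables the optimization instead. */
int check(char *buf, int len) {
    if (buf + len < buf)   /* relies on undefined pointer wrap */
        return 1;
    return 0;
}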

Please see the gcc manual for more information about these options.

Ian
  




Re: US-CERT Vulnerability Note VU#162289

2008-04-24 Thread Robert C. Seacord

Neil,

I'm not sure I understand what you mean by the following:

A program that does not satisfy this constraint is erroneous, and many compilers take advantage of this constraint to optimize code more effectively. 

Just because a program contains undefined behavior does not mean that it is 
erroneous.  It simply gives the compiler latitude in how to handle the 
undefined behavior while still conforming.

One possibility is that GCC could handle these constructs in a consistent manner.  That is, GCC clearly implements modwrap semantics. Given this, I think the behavior exhibited in this case is inconsistent.  If, on the other hand, GCC implemented saturation semantics, it would make perfect sense to optimize out this check.  


I'll review your comments below and consider additional changes to the text of 
the vul note.  What we would really like to see is a diagnostic for this condition, 
ideally enabled by -Wall.  Once a version of the compiler is available that 
provides such a diagnostic, I will recommend we change the US-CERT 
Addendum to recommend that developers upgrade to that version of the compiler, 
and compile using -Wall or whatever the appropriate flag turns out to be.

I am getting tired of the personal/organizational attacks.  If you expect a 
response, please keep your comments professional.

rCs




David Miller wrote:-

  

From: Joe Buck [EMAIL PROTECTED]
Date: Wed, 23 Apr 2008 08:24:44 -0700



If CERT is to maintain its reputation, it needs to do better.  The warning
is misdirected in any case; given the very large number of compilers that
these coding practices cause trouble for, you need to focus on the bad
coding practices, not on unfair demonization of new GCC releases.
  

In my opinion CERT's advisory has been nothing but an unfair FUD
campaign on compilers, and GCC specifically, and has seriously
devalued CERT's advisories, in general, which were already of low
value to begin with.

It looks similar to a news article run by a newspaper that is losing
money and has no real news to write about, yet they have to write
about something.

The worst part of this fiasco is that GCC's reputation has been
unfairly harmed in one way or another, and there is nothing CERT can
do to rectify the damage they've caused.



I'm appalled that the original description hasn't been corrected.  The
text reads:

  Some C compilers optimize away pointer arithmetic overflow tests that
  depend on undefined behavior without providing a diagnostic (a warning).
  Applications containing these tests may be vulnerable to buffer
  overflows if compiled with these compilers.

  I. Description
  In the C language, given the types:

char *buf;
int len;

  some C compilers will assume that buf+len >= buf. 


which is an entirely bogus description of the problem.  That this
incorrect description of the state of affairs has been left to
stand only shows that CERT, and those responsible for this advisory,
have completely failed to understand what the real issue is here.
Further, the fact that the advisory stands in this erroneous
form, despite it having been pointed out to them many times over
the past weeks on this forum at least, seriously undermines their
credibility in the eyes of any informed observer.

At a minimum the wording should be something more like:

  In the C language, given an object OBJ and a pointer BUF into OBJ,

char *buf;
int len;

  the C standard requires that the result of

buf + len

  must point within, or one byte beyond, the object OBJ.  A program
  that does not satisfy this constraint is erroneous, and many
  compilers take advantage of this constraint to optimize code more
  effectively.  Unfortunately much existing code is not well written
  and sometimes erroneous in this regard, and hence may not behave
  as originally intended when compiled with optimizations enabled.
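
To make the constraint concrete, a minimal sketch (the names and
values here are invented):

char obj[100];          /* an object OBJ */
char *buf = obj;        /* a pointer BUF into OBJ */

void g(void) {
    char *p1 = buf + 100;    /* defined: one byte beyond OBJ */
    /* char *p2 = buf + 200;    undefined: outside OBJ, so a
       compiler may assume it never happens and simplify any
       check built on such a sum */
    (void)p1;
}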

Neil.
  




Re: Security vulnerability or security feature?

2008-04-24 Thread Robert C. Seacord

Ralph,

Comments below.


(a) Arithmetic overflows have historically been a significant source of
security vulnerabilities.
  

agreed.

(b) Recent versions of gcc (along with other compilers) contain an
optimisation that can *REMOVE* arithmetic overflows.
  
I am very interested in seeing how this optimization can remove 
arithmetic overflows.


If you can send me an example of source code and instructions on how to 
build, I would certainly be happy to promote this feature of gcc on our 
secure coding web site.



Why is Cert advising people to avoid an optimisation that can ---
realistically, although probably rarely --- remove security
vulnerabilities?
  
If you are referring to VU#694123, this refers to an optimization that 
removes checks for pointer arithmetic wrapping.  The optimization doesn't 
actually eliminate the wrapping behavior; this still occurs.  It does, 
however, eliminate certain kinds of checks (that depend upon undefined 
behavior).


Thanks,
rCs


Re: Security vulnerability or security feature? VU#162289

2008-04-24 Thread Robert C. Seacord

Ralph,

Thanks for your further explanation of this optimization.  Here is what 
I understand.  Please correct me if I am wrong on any of these points:


1. The description in VU#162289 misrepresents the problem as a length 
check.  It is actually a check for wrap. 

2. The optimization in this code depends on the assumption that wrapping 
will not occur, or at the very least, that wrapping is undefined 
behavior so the compiler is not required to generate code for this 
condition.


3. Both expressions (buf+len < buf) and (buf + n < buf + 100) can be 
optimized assuming the subexpressions cannot wrap.


4. Both expressions cannot be optimized assuming modulo behavior, for 
example if buf were cast to uintptr_t, because there are values of buf 
and n where buf + n < buf + 100 evaluates to false and n < 100 
evaluates to true.


5. Both code examples are examples of poor code that could be written 
better. 


The test for wrap would be better written as:

if ((uintptr_t)buf+len < (uintptr_t)buf)

And your expression should just be written as:

n < 100

6.  The Overview of VU#162289 states: "Some C compilers optimize away 
pointer arithmetic overflow tests that depend on undefined behavior 
without providing a diagnostic (a warning)." 

The point is that we are not objecting to the optimization; we are 
objecting to the lack of any diagnostic.


Because both examples could benefit from being rewritten (this is what 
the compiler is doing) I don't see the harm in issuing a diagnostic in 
both cases.
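
A sketch of the two forms side by side (illustrative; the rewritten
check assumes a flat address space, as in the 32-bit targets under
discussion, where uintptr_t arithmetic wraps modulo a power of two):

#include <stdint.h>

/* Depends on undefined behavior: may be folded away entirely. */
int wrap_check_ub(char *buf, unsigned int len) {
    return buf + len < buf;
}

/* Unsigned integer overflow is defined to wrap, so this check
   survives optimization on such implementations. */
int wrap_check_ok(char *buf, unsigned int len) {
    return (uintptr_t)buf + len < (uintptr_t)buf;
}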


rCs


Robert,

You have failed to answer my original question, and I think have failed
to understand the point of the example.

The example shows that what you are claiming is a vulnerability in
162289 is in fact a security feature.

  

  that removes checks for pointer arithmetic wrapping.



Just to be 100% clear, the optimisation I gave you an example of, and
which you think is "pretty cool", is precisely the same optimisation you
are complaining about in 162289, specifically, a pointer value may be
canceled from both sides of a comparison.

This is slightly confused by the fact that in the 162289 example, two
optimisations take place [gcc gurus please correct if I am wrong]:

(a) a pointer value is canceled from both sides of a comparison,
changing (buf+len < buf) to (len < 0).  This is what changes the observable
behaviour of the code, and is what the debate is about.

(b) in the example given in 162289, (len < 0) can then be evaluated at
compile time, removing the test entirely.  This does not change the
runtime behaviour of the code and is completely irrelevant.

To make it even clearer, we can disable optimisation (b) while leaving
optimisation (a), by making 'len' volatile:

int foo (char * buf)
{
volatile int len = 1 << 30;
if (buf+len < buf)
return 1;
else
return 0;
}

gcc then generates:

foo:
        subl    $16, %esp
        movl    $1073741824, 12(%esp)
        movl    12(%esp), %eax
        addl    $16, %esp
        shrl    $31, %eax
        ret

This has not completely removed the test, but when executed, it will
still always return 0, giving precisely the same run-time behaviour as
without the 'volatile'.

To re-iterate:

(a) Why does Cert advise against an optimisation that, under
the right circumstances, can remove examples of a historically
significant class of security holes?

(b) The example you claim is a length check in 162289 is not a length
check and does not support the conclusions you draw from it.

Ralph.


  

 The optimization
doesn't actually eliminate the wrapping behavior; this still occurs.
It does, however, eliminate certain kinds of checks (that depend upon
undefined behavior).

Thanks,
rCs






Re: Security vulnerability or security feature? VU#162289

2008-04-24 Thread Robert C. Seacord

Ralph,

Comments below.

Length-checks are directly related to security, because they protect
against buffer-overruns which are often directly exploited by attackers.

It is much harder to see how reliance on wrap-around could contribute to
the security of an application.  
The original impetus for this came from a check in a sprint() function 
from Plan 9.  Because of the API, there was no way to test if the len 
was out of bounds, but the developers wanted to make sure they weren't 
wrapping the stack on some architectures that have their stacks in high 
memory.


It seems like a reasonable test to make, the code used to work, and they 
got burned on the silent change in behavior.

But it is hard to see what reason you have for picking on this
particular feature of this particular compiler.
  
This may not have been the poster child issue to go after, but we were 
responding to the report.


What features of which compilers do you think we should be reporting on?

Yes, you have dramatically improved the wording from previous
versions.  However, you still say:

  avoid using compiler implementations that perform the offending
   optimization
  

I agree this is worded badly, we'll correct this.

I must admit, given the problems that have been identified with the
advisory (how many versions has it been through?), 
we've only revised it once so far, but we'll change it as many times as 
we need to.

I would be far
happier if you subjected it to independent third-party expert review,
  
we are an independent third party.  who else would we ask? 

and withdrew the advisory until that is completed in a satisfactory
manner (rather than repeatedly incrementally tweaks).
  
withdrawing vulnerability notes (this is not an advisory that is emailed 
out) is not as good a solution as you might think, as this usually draws 
more attention to the vulnerability note than just about anything else 
you can do.


rCs






Re: US-CERT Vulnerability Note VU#162289

2008-04-11 Thread Robert C. Seacord

Gerald,

Comments below.

 My
understanding is that it shouldn't, because the real issue here is
pointer arithmetic and the resulting type should always be a pointer.



I'm not sure what you mean by that last statement.
  
my understanding of the C99 standard is that adding an integer and a 
pointer always results in a pointer.  i'm not a compiler expert, but i 
would think that the rules that apply to adding a pointer and an integer 
of any signedness would be the same, and are different from the rules 
that would apply to adding signed or unsigned integers. 

VU#162289 only applies to pointer arithmetic.  I need to go back and 
check, but I don't believe the signedness of the integer involved is a 
factor.
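
A short sketch of the type rule in question (illustrative only;
the names are invented):

void types(char *buf, int slen, unsigned int ulen) {
    /* C99 6.5.6: pointer + integer yields the pointer's type,
       whatever the signedness of the integer operand. */
    char *p1 = buf + slen;   /* type char *, not int */
    char *p2 = buf + ulen;   /* type char *, not unsigned int */
    (void)p1; (void)p2;
}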

I think we've already established that those tests that would
print "BUG" aren't actually finding bugs in the compiler.
Sorry about the string literal "BUG!".  This was just an artifact from 
the original program.  I should have changed it to "optimization detected".

 It
is not correct to assume that adding a value to a char pointer
will wrap around in the same way as if you added a value to an
unsigned number. 

i think this depends on what you mean by "value".

You also cannot assume that adding a value to
a signed number will cause it to wrap. (GCC had optimized away
checks for the latter already. There are now reports that many
other compilers may optimize away both tests.)
  

ditto.

Are we in agreement on this? The fact that your example prints
"BUG!" seems to imply that it is invalid to optimize away these
tests, which it isn't.
  
i agree that the optimization is allowed by C99.  i think this is a 
quality of implementation issue,  and that it would be preferable for 
gcc to emphasize security over performance, as might be expected.

I was under the impression that the only issue here is that
there is a change in behavior. That's a very fine line to walk
when many other compilers do this. In fact, you can run into
the same change of behavior when switching from unoptimized
debug versions to optimized release versions. Recommending not
using a recent GCC as a possible remedy is dangerous (some
would probably say irresponsible).
this was only one of several solutions listed, and not the first one 
listed.  You will see that the Compliant Solution provided in the 
referenced secure coding rule:
ARR38-C. Do not add or subtract an integer to a pointer if the resulting 
value does not refer to an element within the array 
https://www.securecoding.cert.org/confluence/display/seccode/ARR38-C.+Do+not+add+or+subtract+an+integer+to+a+pointer+if+the+resulting+value+does+not+refer+to+an+element+within+the+array

recommends modifying the code.
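
For instance, a bounds check in the spirit of ARR38-C (a sketch,
not the rule's own example; the names are invented) compares
lengths rather than wrapped pointers:

#include <stddef.h>

enum { BUFSIZE = 100 };

int length_ok(const char *buf, size_t len) {
    (void)buf;
    /* Defined for every len: no pointer sum is ever formed
       outside the object. */
    if (len > BUFSIZE)
        return 0;            /* reject out-of-range length */
    /* buf + len now stays within, or one past the end of, buf */
    return 1;
}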

 What you really mean is,
"Use an older GCC or some other compiler that is known not to
take advantage of this optimization."
  
i think we mean what we say, which is "Avoid newer versions of gcc" and 
"avoiding the use of gcc versions 4.2 and later".  i don't see any 
verbiage that says use a different compiler.

P.S. Has anyone checked how far back in the line of Microsoft
compilers you have to go to before they don't do this same
optimization?
our tests show that the 2005 version of the compiler does not perform 
this optimization.  i have not yet tested a newer version of the compiler.


rCs




Re: US-CERT Vulnerability Note VU#162289

2008-04-11 Thread Robert C. Seacord

Gerald,

There was a report (forwarded by Mark Mitchell) of Microsoft
Visual C++ 2005 performing that optimization (the resultant
object code was shown). Have you verified that this report
was false? 


both chad and i have tested this with various options on Visual C++ 2005 
and we have not found any combination of options that will cause this 
optimization to occur.


rCs

--
Robert C. Seacord
Senior Vulnerability Analyst
CERT/CC 


Work: 412-268-7608
FAX: 412-268-6989



Re: US-CERT Vulnerability Note VU#162289

2008-04-11 Thread Robert C. Seacord

Ian,

I know I'm biased, but I think use a different compiler is clearly
implied by the text of the advisory.  If the advisory mentioned that
other compilers also implement the same optimization, then that
implication would not be there.
  
yes, i agree we should make this change, and warn against assuming this 
optimization is not performed on other compilers.


if i understand you correctly (and based on our own tests) none of the 
compilation flags we've discussed address this issue, so we should also 
remove this as a solution.


thanks,
rCs


--
Robert C. Seacord
Senior Vulnerability Analyst
CERT/CC 


Work: 412-268-7608
FAX: 412-268-6989



Re: US-CERT Vulnerability Note VU#162289

2008-04-10 Thread Robert C. Seacord

Gerald,

Here is another version of the program (same compiler version/flags). 


#include <stdio.h>
#include <stdint.h>   /* for uintptr_t */

void test_signed(char *buf) {
   signed int len;

   len = 1 << 30;

   printf("buf = %p; buf+len = %p; buf+len < buf = %d %d",
   buf, buf+len, buf+len < buf, (uintptr_t)buf+len < (uintptr_t)buf);
   if((buf+len < buf) != ((uintptr_t)buf+len < (uintptr_t)buf))
   printf(" BUG!");
   printf("\n");
}


void test_unsigned(char *buf) {
   unsigned int len;

   len = 1 << 30;

   printf("buf = %p; buf+len = %p; buf+len < buf = %d %d",
   buf, buf+len, buf+len < buf, (uintptr_t)buf+len < (uintptr_t)buf);
   if((buf+len < buf) != ((uintptr_t)buf+len < (uintptr_t)buf))
   printf(" BUG!");
   printf("\n");
}

int main(void) {
   test_signed(0);
   test_signed((char*)0x70000000);
   test_signed((char*)0xf0000000);

   test_unsigned(0);
   test_unsigned((char*)0x70000000);
   test_unsigned((char*)0xf0000000);
   return 0;
}

output:

buf = 00000000; buf+len = 40000000; buf+len < buf = 0 0
buf = 70000000; buf+len = B0000000; buf+len < buf = 0 0
buf = F0000000; buf+len = 30000000; buf+len < buf = 1 1
buf = 00000000; buf+len = 40000000; buf+len < buf = 0 0
buf = 70000000; buf+len = B0000000; buf+len < buf = 0 0
buf = F0000000; buf+len = 30000000; buf+len < buf = 1 1

The unsigned test was one we performed on the gcc versions.  I added the 
signed test, but it didn't make a difference on Visual Studio.  My 
understanding is that it shouldn't, because the real issue here is 
pointer arithmetic and the resulting type should always be a pointer.


rCs


Robert C. Seacord wrote:
  

void f(char *buf)  {
  unsigned int len = len = 0xFF000000;

  if (buf+len < buf) puts("true");

}



You need to be more precise. That is not the same example
that you quoted for GCC.

In fact, if you vary the criteria too much, you will find
situations where GCC already behaved that way. The test in
the following example is optimized out by old versions of
GCC (certainly my version 3.4.5 compiler does it, with no
warnings even when using -Wall):

 int f(char *buf, int i)
 {
 i = 1 << 30;

 if ((int)buf + i < (int)buf)
 return 0;

 return 1;
 }

That's quite a bit less changed than your example, which
brings unsigned-ness into the picture. [This is exactly
the problem--signed overflow and pointer overflow aren't
defined, unlike unsigned overflow.]

Given that current Microsoft compilers reportedly exhibit
this behavior, it sounds like the advisory is going to at
least need some significant rewriting. :-)

-Jerry
  




Re: US-CERT Vulnerability Note VU#162289

2008-04-08 Thread Robert C. Seacord

Mark,

I will update the CERT C Secure Coding rule with a list of compilers, 
once we complete a fact check.  Chad is responsible for updating the vul 
note, so I'll need to discuss this with him.


Specifically with regards to MSVC 2005, I thought Chad had already 
checked this and found that it did not exhibit this behavior.  I just 
tested the following program.


#include <stdio.h>

void f(char *buf)  {
 unsigned int len = len = 0xFF000000;

 if (buf+len < buf) puts("true");

}

int main(void)
{
   char buffer[100];
   f(buffer);
   return 0;
}

and compiled it with Microsoft Visual Studio 2005 Version 8.0.50727.42 
with the following flags:


/Od /D WIN32 /D _DEBUG /D _CONSOLE /D _UNICODE /D UNICODE /Gm 
/EHsc /RTC1 /MDd /FoDebug\\ /FdDebug\vc80.pdb /W4 /nologo /c /Wp64 
/ZI /TC /errorReport:prompt


And in the disassembly view I see:

 if (buf+len < buf) puts("true");
004113CB  mov eax,dword ptr [buf]
004113CE  add eax,dword ptr [len]
004113D1  cmp eax,dword ptr [buf]
004113D4  jae f+4Dh (4113EDh)
004113D6  mov esi,esp
004113D8  push    offset string "true" (41563Ch)
004113DD  call    dword ptr [__imp__puts (4182B8h)]
004113E3  add esp,4
004113E6  cmp esi,esp
004113E8  call    @ILT+305(__RTC_CheckEsp) (411136h)

And the program prints out "true".

Any explanation why I am getting different behavior? 


If I change the flags to

/O2 /D WIN32 /D _DEBUG /D _CONSOLE /D _UNICODE /D UNICODE /Gm 
/EHsc /MDd /FoDebug\\ /FdDebug\vc80.pdb /W4 /nologo /c /Wp64 /TC 
/errorReport:prompt


with /O2 optimization, "true" is still printed, although it is harder to 
examine the disassembly.


rCs



Mark Mitchell wrote:

Mark Mitchell wrote:


I've been told that Intel's ICC compiler also does this optimization:


Apparently, IAR's Atmel AVR compiler does this optimization as well. 
That CPU has 16-bit addresses, so the tester changed the test case to 
use 1 << 14 instead of 1 << 30.


I've also been told that Microsoft Visual C++ 2005 compiler does this 
optimization.


Chad, Robert, are you going to update the CERT notice to reflect all 
of this additional information about other compilers that also do this 
optimization?


Here is the code generated:

; Listing generated by Microsoft (R) Optimizing Compiler Version 
14.00.50727.762

...
; Function compile flags: /Ogtpy

...
; 2:   len = 1 << 30;
; 3:   if (buf + len < buf)
; 4: return 1;
; 5:
; 6:
; 7:   return 0;

        xor     eax, eax

; 8: }





Re: US-CERT Vulnerability Note VU#162289

2008-04-08 Thread Robert C. Seacord

Dave,

I made at least one change you suggested:

#include <stdio.h>

void f(char *buf)  {
 unsigned int len = 0xFF000000u;

 if (buf+len < buf) puts("true");

}

int main(void) {
   char buffer[100];
   printf("buffer addr = %p.\n", buffer);
   f(buffer);
   return 0;
}

And compiled with flags:

/O2 /GL /D WIN32 /D NDEBUG /D _CONSOLE /D _UNICODE /D UNICODE 
/FD /EHsc /MD /FoRelease\\ /FdRelease\vc80.pdb /W4 /nologo /c /Wp64 
/Zi /TC /errorReport:prompt


The output of this program is:

buffer addr = 0012FF18.
true

The reason the len is so high is to force an overflow when added to 
0x0012FF18.


BTW, I also tried compiling this as a C++ program under visual studio 
and got the same result.


rCs



Robert C. Seacord wrote:

  

Specifically with regards to MSVC 2005, I thought Chad had already
checked this and found that it did not exhibit this behavior.  I just
tested the following program. 


#include <stdio.h>

void f(char *buf)  {
  unsigned int len = len = 0xFF000000;



  I'm sure you didn't mean to do that, but I doubt it affects the results.

  BTW, that's a very different number from (1 << 30).  How about trying again
with a 'u' suffix?  Or with something that doesn't have the sign bit set?

cheers,
  DaveK
  




Re: US-CERT Vulnerability Note VU#162289

2008-04-07 Thread Robert C. Seacord


Mark,

Comments below.
The GCC SC was aware of this CERT posting before it was public. Our 
feeling is that this is not a GCC bug, although it is something that 
we would like GCC to warn about. I talked to Ian Taylor and he agreed 
to work on the warning.
I agree with you that the behavior that gcc exhibits in this case is 
permitted by the ISO/IEC 9899:1999 C specification 
http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1124.pdf 
(§6.5.6p8).  I believe the vulnerability is that gcc may *silently* 
discard the overflow checks and that this is a recent change in behavior. 

Once a new version or patch is available that will warn users that this 
optimization is taking place, I will recommend that we change the work 
around from "Avoid newer versions of gcc" to "Avoid affected versions of 
gcc" and/or recommend that users download the patch / revision.


You are also right that the popularity of gcc is one of the reasons we 
decided to publish on this.  If you identify other compilers that a) are 
relatively popular, b) have changed their behavior recently, and c) 
silently optimize out overflow checks we will consider publishing 
vulnerability notes for those compilers as well.


rCs

--
Robert C. Seacord
Senior Vulnerability Analyst
CERT/CC 


Work: 412-268-7608
FAX: 412-268-6989



Re: US-CERT Vulnerability Note VU#162289

2008-04-07 Thread Robert C. Seacord

Andrew,

We'll also add:

-Wstrict-overflow=5

As a work around.  


You are right, I don't regularly read the GCC mailing lists as GCC is not our 
only concern.  This problem came to our attention because it affected one of 
your users.  We did consult with Mark before publishing.

rCs



On Mon, Apr 7, 2008 at 10:28 AM, Robert C. Seacord [EMAIL PROTECTED] wrote:
  

 I believe the vulnerability is that gcc may *silently* discard the overflow
checks and that this is a recent change in behavior.



No it is not recent, unless you consider 1998 recent :).  I don't know
how many times we have said this, but we have not changed the behavior of GCC with
respect to signed integer overflow being undefined.  The loop
optimizers have depended on this for a long time; we just added an extra pass which
depends on it more.  I guess you did not read the GCC mailing list
before posting this Vulnerability because we already discussed this
many many times before around the time GCC 4.2.0 came out.

Also try -Wstrict-overflow=5 in GCC 4.2.3 and in GCC 4.3.0; we already
warn about most if not all cases.

Thanks,
Andrew Pinski
  



--
Robert C. Seacord
Senior Vulnerability Analyst
CERT/CC 


Work: 412-268-7608
FAX: 412-268-6989



Re: US-CERT Vulnerability Note VU#162289

2008-04-07 Thread Robert C. Seacord

Joe,

Response below.

On Mon, Apr 07, 2008 at 01:28:21PM -0400, Robert C. Seacord wrote:
  
You are also right that the popularity of gcc is one of the reasons we 
decided to publish on this.  If you identify other compilers that a) are 
relatively popular, b) have changed their behavior recently, and c) 
silently optimize out overflow checks we will consider publishing 
vulnerability notes for those compilers as well.



What is the justification for requirement b)?  We identified two distinct
proprietary compilers that also do this optimization, but it isn't a
recent change in behavior.
  
my thinking is that if this behavior has been in place for many years, 
for example, users will have had the opportunity to discover the changed 
behavior.  our goal here is to disseminate this information more quickly.


on a related topic, we are trying to produce secure coding standards 
that deal with these issues in general.


we've begun the addition of several rules around the topic of pointer 
arithmetic following the release of this vul note. 

The most relevant is titled "ARR38-C. Do not add or subtract an integer 
to a pointer if the resulting value does not refer to an element within 
the array" and can be found here:


https://www.securecoding.cert.org/confluence/x/SgHm

We are hopeful that these rules and recommendations will help developers 
address these issues in the general sense.


rCs

--
Robert C. Seacord
Senior Vulnerability Analyst
CERT/CC 


Work: 412-268-7608
FAX: 412-268-6989



Re: US-CERT Vulnerability Note VU#162289

2008-04-07 Thread Robert C. Seacord

Mark,

comments below.

Robert C. Seacord wrote:

You are also right that the popularity of gcc is one of the reasons 
we decided to publish on this.  If you identify other compilers that 
a) are relatively popular, b) have changed their behavior recently, 
and c) silently optimize out overflow checks we will consider 
publishing vulnerability notes for those compilers as well.


I have sent CERT information about two other popular optimizing 
compilers which do this optimization.  Those compilers may have done 
it for longer than GCC, or not; I'm not sure.  But, users of those 
compilers are just as vulnerable.


The advisory suggests that people not use GCC.  
no, it does not.  it suggests they may not want to use the latest 
versions.  this is one possible work around.  we never say use another 
compiler.
If you don't mention that other compilers also do this, you may just 
prompt people to switch from GCC to some other compiler that behaves 
in the same way.


The tone of the note also suggests that GCC is uniquely defective in 
some way.  The title of the note mentions GCC, and the overview 
suggests that GCC is doing something wrong:


Some versions of gcc may silently discard certain checks for 
overflow. Applications compiled with these versions of gcc may be 
vulnerable to buffer overflows.


Why not change the overview to something like:

Some compilers (including, at least, GCC, PathScale, and xlc) 
optimize away incorrectly coded checks for overflow.  Applications 
containing these incorrectly coded checks may be vulnerable if 
compiled with these compilers.
ok, i'll review again for tone.  generally we don't try to make these 
notes overly broad; they are only meant to draw attention to a specific 
issue.


rCs

--
Robert C. Seacord
Senior Vulnerability Analyst
CERT/CC 


Work: 412-268-7608
FAX: 412-268-6989