I have to post this blog entry in response:
http://labs.mudynamics.com/2008/07/14/zen-and-the-art-of-fixing-p1-bugs
Love the "security testing IS functional testing" line, BTW.
K.
---
http://www.pcapr.net
On Thu, Mar 19, 2009 at 4:28 PM, Benjamin Tomhave
list-s...@secureconsulting.net wrote:
Why are we differentiating between software bugs and security bugs? It
seems to me that all bugs are software bugs, and how quickly they're
tackled is a matter of prioritizing the work based on severity, impact,
and ease of resolution. While it is problematic that security testing
has historically been excluded, our goal should not be to establish
yet another security-as-bolt-on state, but rather to leapfrog to the
desired end state where QA testing includes security testing as well
as functional testing. In fact, one could even argue that security
testing IS functional testing, but anyway...
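To make that concrete: here is a minimal sketch, in Python, of a
security check living in the ordinary QA suite. The validate_username
function and its rules are hypothetical, purely for illustration:

    import unittest

    def validate_username(name):
        # Hypothetical validator: the same rules serve functional
        # correctness and security at once.
        return name.isalnum() and 1 <= len(name) <= 32

    class UsernameTests(unittest.TestCase):
        def test_accepts_normal_input(self):
            # Classic functional case.
            self.assertTrue(validate_username("alice"))

        def test_rejects_oversized_input(self):
            # Security case, same harness, same pass/fail reporting.
            self.assertFalse(validate_username("A" * 10000))

        def test_rejects_metacharacters(self):
            # Injection-style input is just another failing input class.
            self.assertFalse(validate_username("bob'; DROP TABLE users;--"))

    if __name__ == "__main__":
        unittest.main()

A failing security case lands in the same bug tracker as any other
failing test.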
If you're going to innovate, you might as well jump the curve*.
-ben
* see Guy Kawasaki's "Art of Innovation"
http://blog.guykawasaki.com/2007/06/art_of_innovati.html
Gary McGraw wrote:
Aloha Jim,
I agree that security bugs should not necessarily take precedence
over other bugs. Most of the initiatives we observed cycled ALL
security bugs into the standard bug tracking system (most of which
rank bugs by some kind of severity rating). Many initiatives put more
weight on security bugs...note the term "weight," not "drop everything
and work only on security." See the CMVM practice activities for more.
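As a minimal sketch of what that weighting can look like inside a
single tracker queue; the field names and the 1.5x multiplier below
are illustrative, not something BSIMM prescribes:

    # One queue for ALL defects; security bugs get extra weight,
    # not a separate drop-everything process.
    SECURITY_WEIGHT = 1.5  # illustrative value only

    def priority(bug):
        # severity and impact on a 1-5 scale (assumed schema)
        score = bug["severity"] * bug["impact"]
        if "security" in bug["tags"]:
            score *= SECURITY_WEIGHT
        return score

    bugs = [
        {"id": 101, "severity": 5, "impact": 4, "tags": {"crash"}},
        {"id": 102, "severity": 4, "impact": 4, "tags": {"security"}},
    ]
    for bug in sorted(bugs, key=priority, reverse=True):
        print(bug["id"], priority(bug))  # 102 ranks first (24.0 vs 20),
                                         # but 101 stays in the same queue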
The BSIMM helps to measure and then evolve a software security
initiative. The top N security bugs activity is one of an arsenal of
tools built and used by the SSG to strategically guide the rest of
their software security initiative. Making this a top N bugs of any
kind list might make sense for some organizations, but it is not
something we would be likely to observe by studying the SSG and the
software security initiative. Perhaps we suffer from the "looking for
the keys under the streetlight" problem.
gem
On 3/19/09 2:31 PM, Jim Manico j...@manico.net wrote:
The top N lists we observed among the 9 initiatives were BUG lists
only. Since defects split roughly evenly between implementation bugs
and design flaws, that means that in general at least half of the
defects were not being identified on the most wanted list using that
BSIMM set of activities.
This sounds very problematic to me. Many standard software bugs are
far more critical to the enterprise than security bugs. It seems
foolhardy to do risk assessment on security bugs only; I think we need
to bring the worlds of software development and security analysis
closer together. Divided we (continue to) fail.
And Gary, this is not a critique of just your comment, but of
WebAppSec at large.
- Jim
----- Original Message -----
From: Gary McGraw g...@cigital.com
To: Steven M. Christey co...@linus.mitre.org
Cc: Sammy Migues smig...@cigital.com; Michael Cohen mco...@cigital.com;
Dustin Sullivan dustin.sulli...@informit.com; Secure Code Mailing List
SC-L@securecoding.org
Sent: Thursday, March 19, 2009 2:50 AM
Subject: Re: [SC-L] BSIMM: Confessions of a Software Security
Alchemist (informIT)
Hi Steve,
Sorry for falling off the thread last night. Waiting for the posts
to clear was not a great idea.
The top N lists we observed among the 9 initiatives were BUG lists
only. Since defects split roughly evenly between implementation bugs
and design flaws, that means that in general at least half of the
defects were not being identified on the most wanted list using that
BSIMM set of activities. You are correct to point out that the
Architecture Analysis practice has other activities meant to ferret
out those sorts of flaws.
I asked my guys to work on a flaws collection a while ago, but I
have not seen anything yet. Canuck?
There is an important difference between your CVE data, which is
based on externally observed bugs (imposed on vendors mostly by
security types), and internal bug data determined using static
analysis or internal testing. I would be very interested to know
whether Microsoft and the CVE consider the same bug #1 in internal
versus external rating systems. The difference is between "reported
for" and "discovered internally during SDL activity."
gem
http://www.cigital.com/~gem
On 3/18/09 6:14 PM, Steven M. Christey co...@linus.mitre.org
wrote:
On Wed, 18 Mar 2009, Gary McGraw wrote:
Many of the top N lists we encountered were developed through the
consistent use of static analysis tools.
Interesting. Does this mean that their top N lists are less likely
to include design flaws? (though they would be covered under
various other BSIMM activities).
After an organization has looked at millions of lines of code
(sometimes continuously), a ***real*** top N list of bugs emerges for
that organization. Eradicating number one is an obvious priority.
Training can help. New number one...lather, rinse, repeat.
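The mechanics behind such a list are simple once scanner output is in
hand; a sketch with made-up data, assuming each finding has been
exported with a CWE ID:

    from collections import Counter

    # One CWE ID per static analysis finding, accumulated across scans.
    # The sample data below is invented for illustration.
    findings = ["CWE-79", "CWE-89", "CWE-79", "CWE-120", "CWE-79", "CWE-89"]

    def top_n(findings, n=10):
        # Frequency count, highest first: the organization's top N.
        return Counter(findings).most_common(n)

    for cwe, count in top_n(findings, n=3):
        print(cwe, count)  # eradicate number one, re-scan, repeat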
I believe this is reflected in public CVE data. Take a look at the
bugs that are being reported for, say, Microsoft or major Linux