hi kevin (and sc-l),

Sorry for the delay in responding to this. I was skiing yesterday with my son Eli and just flew across the country for the SANS summit this morning (leaving behind 6 inches of new snow in VA). Anyway, better late than never.
I'll interleave responses below.

On Thu, 28 Jan 2010 10:34:30 -0500, Gary McGraw wrote:
>> "Cargo Cult Computer Security: Why we need more description and less
>> prescription." http://www.informit.com/articles/article.aspx?p=1562220

On 2/2/10 12:30 PM, "Wall, Kevin" <kevin.w...@qwest.com> wrote:
> In my 11 years of experience working on this SSG, it is very rare that
> application development teams are looking for a _descriptive_ approach.
> Almost always, they are looking for a _prescriptive_ one. They want
> specific solutions to specific problems, not some general formula to an
> approach that will make them more secure.

Absolutely. I think as an SSG lead in a particular company environment you must have a prescriptive approach, but the approach you develop will be better if informed by data from a descriptive model like BSIMM. (For the record, I see SAMM as a prescriptive model that tells you, often in great detail, what your initiative should be doing without knowing one whit about how your organization ticks.)

If you read the article carefully, there are two paragraphs that together should make this clear. Here's the first:

"Prescriptive models purport to tell you what you should do. Promulgators of such models say things more like, 'the model is chocked full of value judgements [sic] about what organizations SHOULD be doing.' That's just dandy, as long as any prescriptive model only became prescriptive over time based on sufficient observation and testing."

And here's the second:

"Also worthy of mention in this section is the 'one size fits all' problem that many prescriptive models suffer from. The fact is, nobody knows your organizational culture like you do. A descriptive comparison allows you to gather descriptive data and adapt good ideas from others while taking your culture into account."

BSIMM is meant to be a tool for the people running an SSG (and, for that matter, for those strategizing about a company's software security initiative).
The article is really more about the differences between BSIMM and SAMM than anything else. It's not about the difference between BSIMM and ESAPI; BSIMM and things like ESAPI fit together.

> Both are useful and ideally should go together like hand and glove.

Exactly right.

> I suspect that this apparent dichotomy in our perception of the
> usefulness of the prescriptive vs. descriptive approaches is explained
> in part by the different audiences with whom we associate.

Agreed. See above. BSIMM is a tool for executives to help build, measure, and maintain a software security initiative.

> If our SSG were to hand them something like BSIMM, they would come away
> telling their management that we didn't help them at all.

Please do NOT even think about handing the BSIMM to developers as a solution! The BSIMM is a yardstick for an initiative, and it's meant for a guy like you. The notion is to measure your own initiative and, most importantly of all, compare your initiative to your peers'.

> This brings me to my fourth, and likely most controversial point. Despite
> the interesting historical story about Feynman, I question whether BSIMM
> is really "scientific" as the BSIMM community claims. I would contend
> that we are only fooling ourselves if we claim otherwise.

I think this is a valid criticism. The only thing that makes BSIMM more scientific than other methodologies like the Touchpoints, SDL, CLASP, or SAMM is that the BSIMM uses real data and real measurement. However, the measurement technique is certainly not foolproof. (Incidentally, I state that view pretty clearly in the article... computer science and other fields with "science" in their name are usually not sciences.)

> While I am certainly not privy to the exact method used to arrive at the
> BSIMM data (I have read through the "BSIMM Begin" survey, but have not
> been involved in a full BSIMM assessment), I would contend that the
> process is not repeatable to the necessary degree required by science.
This criticism holds some water, but you are shooting from the hip, and it is pretty clear that you have not read the BSIMM itself. That, along with the first article we wrote about the BSIMM, explains our methods pretty clearly. Please read those two things and let's continue this line of questioning.

> I challenge [the BSIMM team] to put forth additional information
> explaining their data collection process and in particular, describing
> how it avoids unintentional bias. (E.g., Are assessment participants
> chosen at random? By whom? How do you know you have a representative
> sample of a company? Etc.)

This is pretty clearly explained in the BSIMM itself.

> In my opinion, comparison of observations from two companies is not
> worth the paper it is printed on UNLESS we can extrapolate from this
> data and make accurate predictions based on past findings.

Given the reaction to actual results from the 30 companies who have participated so far in the study, I'll respectfully call you out as incorrect on that one. It turns out that most people who run large SSGs and initiatives are hungry for comparison with peers (and indeed want to meet and learn from those peers). Experience in the field is bearing out the utility of the BSIMM, and the fact that the data we are gathering is supporting the model in important ways is no small potatoes either.

> After observing this field for 30+ years (ouch!), I have concluded that
> we [computer science] have not matured into a science because as a
> discipline we *do NOT really want to!*

Some of us do. Others don't. But I think a sea change is underway, even in our little subfield of software security.

> ... efforts such as BSIMM should be welcomed by all. But it is also
> important for those who prefer _descriptive_ approaches like BSIMM to
> acknowledge the importance of _prescriptive_ approaches.

I concur.

Thanks for your carefully considered posting, Kevin. Hope this response is satisfying.
gem

company www.cigital.com
podcast www.cigital.com/silverbullet
blog www.cigital.com/justiceleague
book www.swsec.com

_______________________________________________
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.