There are two closely related questions that keep arising, for which I
can still find no satisfying answer (for me, "satisfying" means
"supported by concrete evidence"):

[1] Will using a secure SDLC methodology (or a set of secure development
"best practices") actually produce software that will, when deployed "in
the wild", resist or tolerate attacks and attempted executions of
inserted/embedded malicious code better than software whose developers
did not use a secure SDLC methodology?

[2] Will software that adheres to a set of "security principles", when
deployed "in the wild", actually resist or tolerate attacks and
attempted execution of inserted/embedded malicious code better than
software whose developers did not adhere to security principles?

Right now, all I can find are abundant (possibly TOO abundant) vaguely
supported - and frequently unsupported - assertions by SwA
practitioners, authors of books on software security, and promoters of
secure SDLC methodologies. What I can't find is concrete
cause-and-effect evidence to back up those assertions.

Possibly, the problem starts with our industry's inability to
unequivocally demonstrate that "secure software" even exists - not in
the conceptual sense (I'm overfed with concepts of what software must
be in order to be considered secure) - but in real-world
implementations.

Where are the case studies that demonstrate that BECAUSE such-and-such a
secure SDLC methodology was used, or BECAUSE such-and-such a set of
secure principles was adhered to, the resulting software, when
attacked, was able to resist (or tolerate) the attack without being
compromised?

I've found the case studies that tell me how following a given secure
SDLC methodology made regulators happy. I've found case studies saying
there were fewer vulnerabilities in software that was produced under a
secure development regime. But I have yet to see any studies that
unambiguously demonstrate that BECAUSE certain security regulations were
satisfied or BECAUSE the software had fewer vulnerabilities, attacks on
it were unable to succeed.

Is the problem down to the still-embryonic state of software security
metrics? I think it's fair to say that there's not yet anything
approaching wide agreement even about what characteristics of software
can and should be measured to provide irrefutable evidence that software
exhibits a fixed, or even a relative, level of security. For example,
given that there's probably at least one metric that can be used to
quantify the degree to which a particular software design adheres to a
required set of secure design principles, what does that metric tell us
about how much more secure the software is than it would have been had
its design not adhered to those principles?
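
To make the point concrete, here's a minimal sketch (Python) of the
kind of metric I have in mind: a weighted checklist score over a set of
secure design principles. Everything in it is hypothetical - the
principle names, the weights, the scoring rule - so treat it as an
illustration of what such a metric measures, nothing more:

    # Hypothetical adherence metric: weighted share of secure design
    # principles a design satisfies. Names and weights are illustrative,
    # not drawn from any published standard.
    PRINCIPLES = {
        "least_privilege":         3.0,
        "complete_mediation":      3.0,
        "defense_in_depth":        2.0,
        "fail_securely":           2.0,
        "minimize_attack_surface": 2.0,
        "separation_of_duties":    1.0,
    }

    def adherence_score(satisfied):
        """0.0-1.0: weighted share of principles a design review found met.

        This scores how closely the design matches the checklist. It
        says nothing about how the deployed software fares under attack.
        """
        total = sum(PRINCIPLES.values())
        met = sum(w for name, w in PRINCIPLES.items() if name in satisfied)
        return met / total

    # Example: a review found four of the six principles satisfied.
    findings = {"least_privilege", "complete_mediation",
                "fail_securely", "minimize_attack_surface"}
    print("Adherence: %.2f" % adherence_score(findings))   # 0.77

A score like that is easy to compute and easy to audit against, but it
is purely an input-side measure: it quantifies conformance to the
checklist, not resistance to attack - which is exactly the gap I'm
describing.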

If I can't even find good metrics for quantifying how secure a given
program is, how can I hope to find a good metric for determining ex post
facto (let alone predicting) whether that program, because it was
developed using a given secure methodology or set of "best security
practices", is in fact more secure - and to what extent - than the same
program developed using a traditional, non-security-minded methodology
or set of practices?

I'm thinking that some university needs to do a spin on "n-version
programming" wherein the same statement of user needs is given to three
different development teams - #1 using a security-enhanced methodology
and adhering to security principles, #2 using a traditional methodology
and adhering to security principles, and #3 doing nothing about
security. The resulting three program versions would then be subjected
to pen tests and other black-hat tests, and measurements would be taken
vis-a-vis how each one fared in terms of attack-resistance or
attack-tolerance. Not a perfect metric by any means, but at least it
would give some idea of the relative good-better-bestness of the three
approaches to developing the software.
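
The experiment would also need an agreed way to tally the outcomes.
Here's a minimal sketch of that bookkeeping (Python; the attack
battery, the outcome categories, and the "survival rate" measure are
all made up for illustration):

    # Hypothetical scoring for the three-version experiment: run the
    # same battery of attacks against each version and tabulate the
    # outcomes. All data below is invented.
    from collections import Counter

    # Per version, one outcome per attack: "resisted", "tolerated"
    # (attack partially succeeded, no compromise), or "compromised".
    results = {
        "v1_secure_method_plus_principles": ["resisted", "resisted",
                                             "tolerated", "resisted",
                                             "resisted"],
        "v2_traditional_plus_principles":   ["resisted", "tolerated",
                                             "tolerated", "compromised",
                                             "resisted"],
        "v3_no_security":                   ["compromised", "compromised",
                                             "tolerated", "compromised",
                                             "resisted"],
    }

    def survival_rate(outcomes):
        """Share of attacks that did not end in compromise."""
        counts = Counter(outcomes)
        survived = counts["resisted"] + counts["tolerated"]
        return counts, survived / float(len(outcomes))

    for version, outcomes in sorted(results.items()):
        counts, rate = survival_rate(outcomes)
        print("%s: %s -> survived %.0f%% of attacks"
              % (version, dict(counts), rate * 100))

Even that crude survival rate begs the very questions the experiment is
meant to settle - which attacks, how many, weighted how? - but it would
at least put the three approaches on the same scale.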

In the absence of such metrics, we're all left to take it on faith that
because SwA practitioners and authors we respect make certain
unsubstantiated assertions about the "goodness" of their secure
methodologies and the necessity of security principles, following those
methodologies and adhering to those principles will produce a benefit
that is verifiable at least qualitatively, i.e., that the software will
be less vulnerable to compromise when deployed "out in the wild" than
software produced "the old-fashioned way".


--
Karen Mercedes Goertzel, CISSP
Booz Allen Hamilton
703.902.6981
[EMAIL PROTECTED] 

_______________________________________________
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.