[SC-L] Software security definition(s)

2008-03-13 Thread Arian J. Evans
I hate to start a random definition thread, but Ben asked me a good
question and I'm curious whether anyone else sees this matter the
same way I do. Ben asked why I refer to software security as
something identified by its emergent behaviors:

>  > Software security is an emergent behavior that changes over time
>  > with software use, evolution, extension, and the changing threat
>  > landscape. It's not an art you get people inspired in.
>  >
>  You keep using that phrase - "emergent behavior" - and I'm not sure what
>  you mean.

So one of the biggest challenges for me was communicating
to any audience, and most importantly to business owners and
developers, what "secure software/applications" means.

Around 2000/2001 I was still fixated on artifacts in code
and in QA, and secure software == strongly typed variables
with draconian input validation and character-set handling
(canonicalization and encoding types).
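
A rough sketch of what that looked like, in Python (the account-id
format and all names here are mine, purely for illustration):

    import re
    import unicodedata

    # Draconian validation in that 2000/2001 spirit: canonicalize
    # first, then enforce a strict whitelist and a strong type.
    ACCOUNT_ID = re.compile(r"^[0-9]{1,10}$")  # made-up format

    def validate_account_id(raw: str) -> int:
        # Normalize to one canonical Unicode form (NFKC) so the
        # whitelist is checked against a single representation of
        # the input rather than every possible encoding of it.
        canon = unicodedata.normalize("NFKC", raw)
        if not ACCOUNT_ID.match(canon):
            raise ValueError("rejected: not a plain account id")
        return int(canon)  # strongly typed from here on down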

Yet I continued to have problems with the word "security"
when talking to business owners and developers about software.

This is because you say "security" and business owners
see sandbags blocking their figurative river of profits. Corporate
and gov developers see sandbags stopping them from going
home and having dinner with the wife or playing WoW.
Startup developers just laugh.

I started using phrases like "predictable and dependable software"
instead of security, giving examples like "Rob's Report" -- it has
all these user requirements it must meet to pass on to UAT, and if
it fails, blah blah. SQL injection is a failure of degree, not of
kind: the same kind of failure as a type-mismatch error that stops
the report from running, but with a huge difference in degree of
impact.
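
A minimal sketch of that degree-versus-kind point, in Python (the
table and names are invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE reports (id INTEGER, owner TEXT)")
    conn.execute("INSERT INTO reports VALUES (1, 'rob')")

    def fetch_reports_concat(owner):
        # Concatenation: a literal name like "O'Brien" breaks the
        # SQL syntax (the type-mismatch-style failure), while the
        # input "x' OR '1'='1" returns every row -- the same kind
        # of failure, catastrophically worse in degree.
        sql = "SELECT * FROM reports WHERE owner = '" + owner + "'"
        return conn.execute(sql).fetchall()

    def fetch_reports_param(owner):
        # Parameterized query: the input stays data and never
        # becomes SQL, so both failure modes disappear at once.
        return conn.execute(
            "SELECT * FROM reports WHERE owner = ?", (owner,)
        ).fetchall()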

Finally it dawned on me that folks latch on to this secure-software
stuff as features and requirements, and anyone using waterfall gets
drowned in insecure software due to forever-pushed-back security
"features".

My experience also was that never, ever, is a Production app
deployment identical to dev regions, let alone QA stages, UAT, etc.

From a security posture: prod might be better *or* worse than the
other environments.

Yet even worse -- sometimes I'd test app A and app B for a company,
and they would both fare well when tested independently.

I'd come back a year later and the company would have bolted
them together through, say, some API or web service, and now
apps A and B, glued together, were really weak. Usually this
was due to interfaces handling I/O that they weren't intended to.
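
A toy sketch of that glue failure (everything here is hypothetical,
just to show the shape of it):

    # App B's renderer was written for B's own UI, which only ever
    # sent vetted strings, so it embeds the value without escaping.
    def app_b_render_profile(display_name):
        return "<h1>Welcome, " + display_name + "</h1>"

    # App A's new bridge forwards raw user input straight into an
    # interface that was never meant to receive it. Each app passed
    # testing alone; only the combination is weak.
    def app_a_bridge(raw_user_input):
        return app_b_render_profile(raw_user_input)

    print(app_a_bridge("<script>alert('owned')</script>"))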

Long and short of it -- it struck me that security is a measure
of behaviors. It is one quality of an application, like speed/
performance, but this quality is measured by the observed
behaviors, regardless of what the source code, binary, or
blueprint tells you...

Note -- I am not saying any of these things are not valuable.
There are things I'd still far rather find in source than black box,
and things binary tracing is brilliant at. I'm simply saying that
at the end of the day, the "proof in the pudding" is at run-time.

It's the same way we take the final measure of the quality of
commercial jets: I don't care about tensile strength exceeding
tolerance standards if the wings are falling off at run-time.

If someone compromises 1000 customer accounts, steals
their prescription data, and tells the Zoloft folks who is
buying Prozac so they can direct-market (real-world example):
you have a defective application.

Those behaviors are always emergent -- meaning they can
only ultimately be measured at runtime in a given environment
with a given infra and extension (like plugging app B into app
A through some wonky API).

Sometimes it's the *caching* layer that allows you to insert
control characters that lead to compromising the rest of the
application, soup to nuts.

You won't find any hint of the caching layer in source, in binary,
in blueprint, in conversation with the devs, in dev or QA regions,
in staging and UAT, or by running your desktop or network VA
webapp scanner unauthenticated on the entry portal.

You might find it in documentation, or during threat modeling.

You will find it when you start measuring the emergent
behavior of a piece of otherwise well-vetted software
now sitting behind a weak caching layer: you start
observing wonky caching/response issues, realize these
behaviors are weak, and discover you can attack them.
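
To make that concrete, here's a hypothetical sketch of the classic
CR/LF response-splitting shape (header names and payload are mine);
a weak cache in front of the app can store the forged second
response under the original URL for every later visitor:

    # The app reflects a request value into a response header with
    # no control-character filtering.
    def build_redirect(lang):
        return ("HTTP/1.1 302 Found\r\n"
                "Location: /home?lang=" + lang + "\r\n\r\n")

    # CR/LF in the input splits one response into two.
    payload = ("en\r\nContent-Length: 0\r\n\r\n"
               "HTTP/1.1 200 OK\r\nContent-Length: 18\r\n\r\n"
               "<html>owned</html>")

    def build_redirect_safe(lang):
        # The fix you only discover at run-time: strip the control
        # characters so the header can never be split, whatever is
        # sitting in front of the app.
        clean = lang.replace("\r", "").replace("\n", "")
        return build_redirect(clean)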

What is "secure" software?

It is one quality of an application that can be measured
by the emergent behaviors of the software while trying to
meet and enforce its use-case in a given run-time environment.

This now fits back to the whole ideology discussion
that is sorely needed and overdue.

-- 
Arian Evans
software security stuff

Re: [SC-L] Software security definition(s)

2008-03-14 Thread Mike Lyman
Arian J. Evans wrote:
> What is "secure" software?
> It is one quality of an application that can be measured
> by the emergent behaviors of the software while trying to
> meet and enforce its use-case in a given run-time environment.

Fairly new to the list, so if I cover things discussed before or
breach some list standards here, feel free to jump all over me.

"What is secure software" is a good discussion to help us set our
sights on where we need to go. I want to keep it grounded in the
reality of today, though, just a bit.

I think one of the problems we have in the security industry is that
"secure" itself is a bad term. Somebody, somewhere can find a way to
attack any computer as long as it exists. I've often told folks I'm
beginning to work with that you could power off a computer, encase it
in a block of cement, and dump it in the ocean to try to secure the
data in it, and Robert Ballard could probably locate it and retrieve
it for anybody willing to pay -- and meanwhile it hasn't been very
useful to you. Even short of that drastic a step, if users can use
it, somebody can attack it.

Features themselves are double-edged swords; "del *.*" or "sudo rm *"
can be useful commands or very dangerous ones. Even with draconian
input validation, users could mess up the integrity of the data just
by fat-fingering input or selecting the wrong item in a pick list, or
a disk controller going bad could cause garbage. Somebody reading
over a user's shoulder, or listening to them at lunch time, can
compromise the confidentiality of the data. (Ever want to know what
is going on at Microsoft? Just go to the opening day of any major
science fiction movie at any theater in the Redmond area.) Flooded
network pipes or cut cables can create DoS attacks. A user walking
away from his desk without locking the computer opens up
non-repudiation issues. "Secure" can be successfully attacked in too
many ways and proven insecure.

I try to focus more on "secure enough" to do the job it needs to do
in the environment it will operate in. That adds a lot of complexity
that is difficult to deal with, since it makes simple checklists less
useful, but it can also simplify things. I've had experiences where
we removed security features because they were unnecessary for the
application and its environment. Had a design team engineer Fort
Knox, something that could have protected data for years, when that
data was going live on a public website in less than 24 hours. They
were rather surprised to have security remove things that were way
too costly for the nature of what they were doing.

Just started as the security reviewer/lead on a new project
yesterday. Went into my standard introduction about how this is an
ever-changing world, how what passes as good enough today may be wide
open tomorrow, and how we just have to live with that fact. We don't
have the time or budget to fully inject security into their
development life cycle at this time or dive deep into their code, but
any improvement is still improvement. What we do now will make them
better on the next version or the next project. (Have seen that
happen in a big way with some of the teams we work with.) We may have
a larger budget next time, or get more mileage out of the same budget
because of what they learn now. As is all too typical, our customers
get us engaged after the project is already in progress, so we can't
inject security considerations from the beginning and help drive the
design of the application or the specifications. We do what we can
while the project is in progress. It'll be better than it would have
been without our efforts.

When we are done, will it be secure? No, we couldn't ultimately
achieve that anyway, but "will it be secure enough for its intended
use and environment?" is the better question. It should be, but even
then I won't give a concrete answer. Based on what we know today it
probably will be, but somewhere somebody may well be crafting that
next attack that blows us out of the water.
-- 

Mike Lyman
[EMAIL PROTECTED]

___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.