Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-12 Thread der Mouse
>> The programmer is neither the application architect nor the system
>> engineer.
> In some cases he is.  Either way, it doesn't matter.  I'm not asking
> the programmer to re-design the application, I'm asking them to just
> program the design 'correctly' rather than 'with bugs'.

Except that sometimes the bugs are in the design rather than the code.
Module A has a spec saying that checking a certain aspect of the input
arguments is the caller's responsibility; module B, calling module A,
is written to a spec that makes it A's responsibility to check those
values.

Neither programmer is at fault; each module was written correctly to
spec.  The real problem is that the specs are incompatible - whatever
part of the design and specification process allowed the two specs for
module A to get out of sync with one another is at fault.  (This
shouldn't happen, no, but anyone who thinks that it doesn't is
dreaming.)  Sometimes even the specs are identical, but are written
badly, leaving enough unsaid for such mismatches to occur - the art and
science of writing complete interface specs, that's another subject I
could rant at some length about.
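
To make the failure mode concrete, here is a minimal C sketch of such a
mismatch (the module names and sizes are invented purely for illustration):

#include <string.h>

#define RECORD_MAX 64

/* Module A's spec: "caller guarantees len <= RECORD_MAX", so A -
 * correctly, per its own spec - performs no check. */
void copy_record(char *dst, const char *src, size_t len)
{
    memcpy(dst, src, len);
}

/* Module B was written to a spec saying copy_record() rejects oversized
 * records, so B - also correct per its spec - passes len through
 * unchecked.  Neither programmer is at fault; the two specs are. */
void handle_message(const char *payload, size_t payload_len)
{
    char record[RECORD_MAX];
    copy_record(record, payload, payload_len);  /* overflow when payload_len > RECORD_MAX */
}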

> I would question you if you suggested to me that you always assume to
> _NOT_ include 'security' and only _DO_ include security if someone
> asks.

Security is not a single thing that is included or omitted.

Another common source of security problems is that a module (call it A)
is implemented in a way that is secure against the threat model then in
effect (often this threat model is unspecified, and maybe even A's
coder was careful and went and asked and was told "no, we don't care
about that").  This module is then picked up and re-used (hey, re-use
is good, right?) in an environment where the threat model is
drastically different - instant problem.  Security was included, yet
security failed, and the fault does not lie with the original coder.
(It lies with whoever reused the module in an environment it was not
designed for.)
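
A made-up but representative sketch, with the path and function names
invented for illustration: a helper written when the stated threat model
was "names come only from our own, administrator-edited config file",
later reused on names taken straight from a client request.

#include <stdio.h>

/* Original context: `name` comes from a trusted, administrator-edited
 * config file.  A's coder asked about hostile names and was told that was
 * out of scope, so nothing here rejects ".." components. */
FILE *open_data_file(const char *name)
{
    char path[300];
    snprintf(path, sizeof path, "/var/app/data/%s", name);
    return fopen(path, "r");
}

/* Later reuse: the same routine is handed a name from an untrusted client
 * request, and "../../../etc/passwd" walks straight out of the data
 * directory.  Same code, different threat model, instant problem. */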

>> It's also much more likely that the foreman (aka programming
>> manager) told the builder (programmer) to take shortcuts to meet
>> time and budget -
> Maybe, but the programmer should not allow 'security' to be one of
> these short-cuts.

The programmer quite likely doesn't have that choice.  Refusing to do
what your manager tells you is often grounds for summary firing, with
the work being reassigned to someone who will follow orders (and
probably will be even more overloaded).

It's also not always clear whether a given thing constitutes a security
risk or not.  A certain validation check that's omitted could lead to
nothing worse than, say, a one-cycle delay in recognizing a given
signal in the initial design, but reused in another way that nobody
knew even existed at first writing, it could cause a crash (and
associated DoS) or worse.

/~\ The ASCII                              der Mouse
\ / Ribbon Campaign
 X  Against HTML                  [EMAIL PROTECTED]
/ \ Email!           7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B




Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-12 Thread ljknews
At 4:21 PM -0400 4/11/05, Dave Paris wrote:
> Joel Kamentz wrote:
>> Re: bridges and stuff.
>>
>> I'm tempted to argue (though not with certainty) that it seems that the
>> bridge analogy is flawed in another way -- that of the environment.
>> While many programming languages have similarities and many things
>> apply to all programming, there are many things which do not translate
>> (or at least not readily).  Isn't this like trying to engineer a bridge
>> with a brand new substance, or when the gravitational constant changes?
>> And even the physical disciplines collide with the unexpected --
>> corrosion, resonance, metal fatigue, etc.  To their credit, they appear
>> far better at dispersing and applying the knowledge from past failures
>> than the software world.
>
> Corrosion, resonance, metal fatigue all have counterparts in the
> software world.  glibc flaws, kernel flaws, compiler flaws.  Each of
> these is an outside influence on the application - just as environmental
> stressors are on a physical structure.

Corrosion and metal fatigue actually get worse as time goes on.
Software flaws correspond more to resonance, where there is a
defect in design or implementation.
-- 
Larry Kilgallen




Re: [SC-L] Theoretical question about vulnerabilities

2005-04-12 Thread Crispin Cowan
David Crocker wrote:
> 3. Cross-site scripting. This is a particular form of HTML injection and
> would be caught by the proof process in a similar way to SQL injection,
> provided that the specification included a notion of the generated HTML
> being well-formed. If that was missing from the specification, then HTML
> injection would not be caught.

XSS occurs where client A can feed input to Server B such that client C
will accept and trust the input.  The correct specification is that
Server B should do a perfect job of allowing clients to upload only
content that is not damaging to other clients.  I submit that this is
infeasible without perfect knowledge of the vulnerabilities of all the
possible clients.  This seems to be begging the definition of "prove
correct" pretty hard.
You can do a pretty good job of preventing XSS by stripping user posts
of all interesting features and permitting only basic HTML. But this
still does not completely eliminate XSS, as you cannot a priori know
about all the possible buffer overflows, etc., of every client that will
come to visit, and basic HTML still allows for some freaky stuff, e.g.
very long labels.
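
To illustrate the permit-only-basic-HTML approach, here is a minimal
sketch of an allowlist filter (the tag list, buffer sizing, and example
are invented for illustration, not taken from any real product):

#include <stdio.h>
#include <string.h>

/* Tags passed through verbatim; everything else gets escaped. */
static const char *allowed[] = { "<b>", "</b>", "<i>", "</i>", "<p>", NULL };

/* Filters `in` into `out`; `out` must hold at least 6 * strlen(in) + 1
 * bytes (worst case: every character becomes "&quot;"). */
static void filter_basic_html(const char *in, char *out)
{
    while (*in) {
        if (*in == '<') {
            int matched = 0;
            for (int i = 0; allowed[i]; i++) {
                size_t n = strlen(allowed[i]);
                if (strncmp(in, allowed[i], n) == 0) {
                    memcpy(out, allowed[i], n);
                    out += n;
                    in += n;
                    matched = 1;
                    break;
                }
            }
            if (!matched) {
                out += sprintf(out, "&lt;");
                in++;
            }
        } else if (*in == '>') {
            out += sprintf(out, "&gt;");
            in++;
        } else if (*in == '&') {
            out += sprintf(out, "&amp;");
            in++;
        } else if (*in == '"') {
            out += sprintf(out, "&quot;");
            in++;
        } else {
            *out++ = *in++;
        }
    }
    *out = '\0';
}

int main(void)
{
    char out[256];
    filter_basic_html("<b>hi</b><script>alert(1)</script>", out);
    puts(out);   /* <b>hi</b>&lt;script&gt;alert(1)&lt;/script&gt; */
    return 0;
}

Even if this filter were bulletproof, it only narrows the problem: the
basic HTML it lets through can still trip over bugs in visiting clients,
which is exactly the residual risk described above.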
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com


Re: [SC-L] Theoretical question about vulnerabilities

2005-04-12 Thread Crispin Cowan
Nash wrote:
> ** It would be extremely interesting to know how many exploits could
> be expected after a reasonable period of execution time. It seems that
> as execution time went up we'd be less likely to have an exploit just
> show up. My intuition could be completely wrong, though.

I would think that time is pretty much irrelevant, because it depends
on the intelligence used to order the inputs you try. For instance,
time-to-exploit will be very long if you feed inputs to (say) Microsoft
IIS starting with one byte of input and going up in ASCII order.
Time-to-exploit gets much shorter if you use a fuzzer program: an
input generator that can be configured with the known semantic inputs of
the victim program, and that focuses specifically on trying to find
buffer overflows and printf format string errors by generating long
strings and using strings containing %n.
Even among fuzzers, time-to-exploit depends on how intelligent the
fuzzer is in terms of aiming at the victim program's data structures.
There are many specialized fuzzers aimed at various kinds of
applications, aimed at network stacks, aimed at IDS software, etc.
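
A minimal sketch of the sort of input generation described above (not any
particular tool; the delivery step is stubbed out, where a real fuzzer
would exec the victim or write to its socket and watch for a crash):

#include <stdio.h>
#include <string.h>

/* Stub delivery routine: stands in for actually feeding the victim. */
static void deliver(const char *input)
{
    printf("trying %zu-byte input starting with \"%.8s\"\n",
           strlen(input), input);
}

int main(void)
{
    static char buf[65536];

    /* Long strings: probe for missing length checks at increasing sizes. */
    for (size_t len = 64; len < sizeof(buf); len *= 4) {
        memset(buf, 'A', len);
        buf[len] = '\0';
        deliver(buf);
    }

    /* Format-string probes: harmless here, but they crash or corrupt a
     * victim that passes our input straight through to printf(). */
    static const char *fmt_probes[] = { "%n%n%n%n", "%s%s%s%s", "%x %x %x %n" };
    for (size_t i = 0; i < sizeof fmt_probes / sizeof fmt_probes[0]; i++)
        deliver(fmt_probes[i]);

    return 0;
}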
Crispin
--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com



Re: [SC-L] Theoretical question about vulnerabilities

2005-04-12 Thread karger

Pascal Meunier [EMAIL PROTECTED] writes

> Do you think it is possible to enumerate all the ways all vulnerabilities
> can be created?  Is the set of all possible exploitable programming
> mistakes bounded?

I believe that one can make a Turing machine halting argument to show
that this is impossible.  If you include denial of service attacks
(and infinite loops are certainly a denial of service), then the
halting problem applies immediately and trivially.  But even if you
exclude denial of service, I think you could construct a proof based
on a Turing machine that halts if and only if there is an exploitable
programming mistake, or something like that.

I'm not really very good at such proofs, so I'll leave that as an
exercise for the reader (convenient excuse for not doing the hard
bits!).
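
For what it's worth, here is a bare-bones sketch of the usual reduction,
with a hypothetical simulate() routine standing in for an interpreter of
the program under test (the point is the reachability argument, not that
this links and runs):

#include <string.h>

/* Hypothetical: interprets program P on input x, returning only if P(x)
 * halts.  (Writing this is easy; deciding whether it returns is not.) */
extern void simulate(const char *program, const char *input);

/* For any P and x, construct this wrapper.  The strcpy() overflow is
 * reachable if and only if P(x) halts, so a detector that finds every
 * exploitable mistake, and flags no unreachable ones, would decide the
 * halting problem. */
void wrapper(const char *program, const char *input, const char *attacker_data)
{
    char buf[16];
    simulate(program, input);      /* never returns if P(x) loops forever */
    strcpy(buf, attacker_data);    /* classic overflow, reachable iff P(x) halts */
}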

It is the practical impossibility of finding all the existing
vulnerabilities that led to the Anderson study of 1972 replacing the
"penetrate and patch" strategy for security with the "security kernel"
approach of developing code small and simple enough that you could
make a convincing argument that it is secure.  That in turn led to the
development of high assurance techniques for building secure systems
that remain today the only way that has been shown to produce code
with demonstrably better security than most of what's out there.

(I say "most" deliberately.  I'm sure that various people reading this
will come up with examples of isolated systems that have good security
but didn't use high assurance. No disputes there, so you don't need to
fill up SC-L with a list of them.  The point is that high assurance is
a systematic engineering approach that works and, when followed, has
excellent security results.  The fact that almost no one uses it says
much more about the lack of maturity of our field than about the
technique itself.  It took a VERY long time for bridge builders to
develop enough discipline that most bridges stayed up.  The same is
true for software security, unfortunately.  It also says a lot about
whether people are really willing to pay for security.)

Although old, Gasser's book probably has the best description of what
I'm talking about.  These two classic documents should be read by
ANYONE trying to do secure coding, and fortunately, they are both
online!  Thanks to NIST and the University of Nebraska at Omaha for
putting them up.  (For completeness, the NIST website was created from
documents scanned by the University of California at Davis.)

citations:

1.  Anderson, J.P., Computer Security Technology Planning Study,
ESD-TR-73-51, Vols. I and II, October 1972, James P. Anderson and
Co., Fort Washington, PA, HQ Electronic Systems Division: Hanscom
AFB, MA. URL: http://csrc.nist.gov/publications/history/ande72.pdf

2.  Gasser, M., Building Secure Systems. 1988, New York: Van Nostrand
Reinhold.  URL: http://nucia.ist.unomaha.edu/library/gasser.php


- Paul



Re: [SC-L] Re: Application Insecurity --- Who is at Fault?

2005-04-12 Thread der Mouse
>>> I would question you if you suggested to me that you always assume
>>> to _NOT_ include 'security' and only _DO_ include security if
>>> someone asks.
>> Security is not a single thing that is included or omitted.
> Again, in my experience that is not true.  Programs that are labelled
> 'Secure' vs something that isn't.

*Labelling as* secure _is_ (or at least can be) something that is
boolean, included or not.  The actual security behind it, if any, is
what I was talking about.

> In this case, there is a single thing - Security - that has been
> included in one and not the other [in theory].

Rather, I would say, there is a cluster of things that have been boxed
up and labeled 'security', and included or not.  What that box includes
may not be the same between the two cases, even, never mind whether
there are any security aspects that aren't in the box, or non-security
aspects that are.

> Also, anyone requesting software from a development company may say:
> Oh, is it 'Secure'?  Again, the implication is that it is a single
> thing included or omitted.

Yes, that is the implication.  It is wrong.

The correct response to "is it secure?" is "against what threat?", not
"yes" or "no".  I would argue that anyone who thinks otherwise should
not be coding or specifying for anything that has a significant cost
for a security failure.  (Which is not to say that they aren't!)

/~\ The ASCII                              der Mouse
\ / Ribbon Campaign
 X  Against HTML                  [EMAIL PROTECTED]
/ \ Email!           7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B




Re: [SC-L] Theoretical question about vulnerabilities

2005-04-12 Thread der Mouse
>> Or until you find a bug in your automated prover.  Or, worse,
>> discover that a vulnerability exists despite your proof, meaning
>> that you either missed a loophole in your spec or your prover has a
>> bug, and you don't have the slightest idea which.
> On that basis, can I presume that you believe all programming should
> be done in machine code, in case the compiler/assembler/linker you
> might be tempted to use has a bug?

You can presume anything you like.  But in this case you'd be wrong.

I was/am not arguing that such tools should not be used (for this or
any other reason).  My point is merely that calling what they do
"proof" is misleading to the point where I'd call it outright wrong.
You have roughly the same level of assurance that code passed by such a
checker is correct that you do that machine/assembly code output by a
traditional compiler is correct: good enough for most purposes, but by
no stretch of the imagination is it even as close to proof as most
mathematics proofs are - and, like them, it ultimately rests on
"smart people think it's OK".

>> Ultimately, this amounts to a VHLL, except that
>> [...nomenclature...].  And, as with any language, whoever writes
>> this VHLL can write bugs.
> Like I said, you can still fail to include important security
> properties in the specification.  However, [...].

Basically, the same arguments usually made for any higher-level
language versus a corresponding lower-level language: machine versus
assembly, assembly versus C, C versus Lisp, etc.

And I daresay that it provides at least vaguely the same advantages and
disadvantages as for most of the higher/lower level comparisons, too.
But it is hardly the panacea that "proving the program correct" makes
it sound like.  As someone (who? I forget) is said to have said,
"Beware, I have only proven this program correct, not tested it."

/~\ The ASCII                              der Mouse
\ / Ribbon Campaign
 X  Against HTML                  [EMAIL PROTECTED]
/ \ Email!           7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B