Re: Is finding security holes a good idea?

2004-06-17 Thread Birger Tödtmann
Am Do, den 10.06.2004 schrieb Eric Rescorla um 20:37:
 Cryptography readers who are also interested in systems security may be
 interested in reading my paper from the Workshop on Economics
 and Information Security '04:
 
 Is finding security holes a good idea?
[...]

The economic reasoning in the paper overlooks the damage that arises
from automated, large-scale attacks.

In figure 2, the graph depicting the Black Hat Discovery Process
suggests we should expect only a minor impact from Private Exploitation,
because the offending Black Hat group is small and exploits manually.
However, one could also imagine Code Red, Slammer and the like.  Quite
apart from whether a fix is ready, when vulnerabilities of this kind are
not known to the public *at all* (no problem description, no workaround
such as removing file XYZ for a while), worms can hit the network far
more severely than they already do when the vulnerability, and even
fixes, are public knowledge.  I would expect the Intrusion Rate curve to
be shaped radically differently at this point.  This also affects the
discussion of social welfare lost / gained through disclosure quite a
lot.

I don't see how applying Browne's vulnerability-cycle concept to the
Black Hat Discovery case, as is done in the paper, can reflect these
threat scenarios correctly.
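To make the point concrete, here is a toy model (entirely invented
numbers, not from the paper): a handful of Black Hats exploiting by hand
grows roughly linearly, while a worm follows the logistic curve familiar
from Code Red and Slammer, so the intrusion rate during the private
window looks nothing like the manual case.

```python
# Toy model (all parameters invented): cumulative intrusions from a small
# manual black-hat group vs. a worm released in the same private window.

import math

def manual_intrusions(t, rate=5.0):
    # A few black hats exploiting by hand: roughly linear growth.
    return rate * t

def worm_intrusions(t, total_hosts=100_000, growth=1.5, t0=6.0):
    # Logistic spread, the classic shape of Code Red / Slammer outbreaks.
    return total_hosts / (1.0 + math.exp(-growth * (t - t0)))

for day in range(0, 15, 2):
    print(f"day {day:2d}: manual ~{manual_intrusions(day):8.0f}   "
          f"worm ~{worm_intrusions(day):8.0f}")
```

With these (made-up) parameters the worm saturates the vulnerable
population within two weeks while the manual attackers have compromised
only a few dozen hosts, which is why the Intrusion Rate curve changes
shape so radically.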


Regards,
-- 
Birger Tödtmann [EMAIL PROTECTED]
Computer Networks Working Group, Institute for Experimental Mathematics
University Duisburg-Essen, Germany

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Is finding security holes a good idea?

2004-06-17 Thread Eric Rescorla
Birger Toedtmann [EMAIL PROTECTED] writes:

 Am Do, den 10.06.2004 schrieb Eric Rescorla um 20:37:
 Cryptography readers who are also interested in systems security may be
 interested in reading my paper from the Workshop on Economics
 and Information Security '04:
 
 Is finding security holes a good idea?
 [...]

 The economic reasoning in the paper overlooks the damage that arises
 from automated, large-scale attacks.

 In figure 2, the graph depicting the Black Hat Discovery Process
 suggests we should expect only a minor impact from Private Exploitation,
 because the offending Black Hat group is small and exploits manually.
 However, one could also imagine Code Red, Slammer and the like.  Quite
 apart from whether a fix is ready, when vulnerabilities of this kind are
 not known to the public *at all* (no problem description, no workaround
 such as removing file XYZ for a while), worms can hit the network far
 more severely than they already do when the vulnerability, and even
 fixes, are public knowledge.  I would expect the Intrusion Rate curve to
 be shaped radically differently at this point.  This also affects the
 discussion of social welfare lost / gained through disclosure quite a
 lot.

 I don't see how applying Browne's vulnerability-cycle concept to the
 Black Hat Discovery case, as is done in the paper, can reflect these
 threat scenarios correctly.

It's true that the Browne paper doesn't apply directly, but I don't
actually agree that rapidly spreading malware alters the reasoning in
the paper much.  None of the analysis in the paper depends on any
particular C_BHD/C_WHD ratio.  Rather, the intent is to provide
boundaries for what one must believe about that ratio in order to
think that finding bugs is a good idea.

That said, I don't find the argument you present above all that
convincing.  It's true that a zero-day worm would be bad, but given the
shape of the patching curve [0], a day-5 worm would be very nearly as
bad (and remember that it's the C_BHD/C_WHD ratio we care about).
Indeed, note that all of the major worms so far have been based on
known vulnerabilities.
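A back-of-the-envelope sketch of this point (the 30-day patch half-life
is an invented illustration, not a number from [0]): with slow patch
uptake, a worm released five days after disclosure finds almost as many
unpatched hosts as one released on day zero.

```python
# Sketch (parameters invented): with a slow patching curve, a day-5 worm
# hits nearly as many unpatched hosts as a zero-day worm.

def unpatched_fraction(day, half_life=30.0):
    # Exponential patch uptake with an assumed ~30-day half-life.
    return 0.5 ** (day / half_life)

for release_day in (0, 5):
    frac = unpatched_fraction(release_day)
    print(f"worm released day {release_day}: "
          f"{frac:.1%} of hosts still unpatched")
```

Under this assumption roughly 89% of hosts are still unpatched on day 5,
so the zero-day case buys the attacker very little extra.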

-Ekr

[0] E. Rescorla, Security Holes... Who Cares?, Proc. 12th USENIX
Security, 2003.



Re: A National ID: AAMVA's Unique ID

2004-06-17 Thread John Gilmore
 The solution then is obvious, don't have a big central database. Instead use
 a distributed database.

Our favorite civil servants, the Departments of Motor Vehicles, are about
to do exactly this to us.

They call it "Unique ID" and their credo is "one person, one license,
one record."  They swear that it isn't national ID, because national
ID is disfavored by the public.  But it's the same thing in
distributed-computing clothes.

The reason they say it isn't a national ID is because it's 50 state
IDs (plus US territories and Canadian provinces and Mexican states) --
but the new part is that they will all be linked by a continent-wide
network.  Any official who looks up your record from anywhere on the
continent will be able to pull up that record.  Anyplace you apply for
a state license or ID card, they will search the network, find your
old record (if you have one) and transfer it to that state.  So
there's no way to escape your past record, and no way to get two cards
(in the absence of successful fraud, either by citizens or DMV
employees).

This sure smells to me like national ID.

This, like the MATRIX program, is the brainchild of the federal
Department of inJustice.  But those wolves are in the sheepskins of
state DMV administrators, who are doing the grassroots politics and
the actual administration.  It is all coordinated in periodic meetings
by AAMVA, the American Association of Motor Vehicle Administrators
(http://aamva.org/).  Draft bills to join the Unique ID Compact, the
legally binding agreement among the states to do this, are already
being circulated in the state legislatures by the heads of state DMVs.
The idea is to sneak them past the public, and past the state
legislators, before there's any serious public debate on the topic.

They have lots of documents about exactly what they're up to.  See
http://aamva.org/IDSecurity/.  Unfortunately for us, the real
documents are only available to AAMVA members; the affected public is
not invited.

Robyn Wagner and I have tried to join AAMVA numerous times, as
freetotravel.org.  We think that we have something to say about the
imposition of Unique ID on an unsuspecting public.  They have rejected
our application every time -- does this remind you of the Hollywood
copy-prevention standards committees?  Here is their recent
rejection letter:

  Thank you for submitting an application for associate membership in AAMVA.
  Unfortunately, the application was denied again. The Board is not clear as
  to how FreeToTravel will further enhance AAMVA's mission and service to our
  membership. We will be crediting your American Express for the full amount
  charged.

  Please feel free to contact Linda Lewis at (703) 522-4200 if you would like
  to discuss this further.

  Dianne 
  Dianne E. Graham 
  Director, Member and Conference Services 
  AAMVA 
  4301 Wilson Boulevard, Suite 400 
  Arlington, VA 22203 
  T: (703) 522-4200 | F: (703) 908-5868 
  www.aamva.org http://www.aamva.org/  

At the same time, they let in a bunch of vendors of high security ID
cards as associate members.

AAMVA, the 'guardians' of our right to travel and of our identity
records, doesn't see how listening to citizens concerned with the
erosion of exactly those rights and records would enhance their
mission and service.  Their mission appears to be to ram their
secret policy down our throats.  Their service is to take our tax
money, use it to label all of us like cattle with ear-tags, and deny
us our constitutional right to travel unless we submit to being
tagged.

We protest.  Do you?

John Gilmore



Re: Is finding security holes a good idea?

2004-06-17 Thread Rick Wash
On Wed, 16 Jun 2004 17:35:18 -0400, Thor Lancelot Simon [EMAIL PROTECTED]
wrote:

> On Wed, Jun 16, 2004 at 02:12:18PM -0700, Eric Rescorla wrote:
>
>> Let's assume for the sake of argument that two people auditing the
>> same code section will find the same set of bugs.
>
> Actually, I think that in this regard the answer lies in a sort of
> critical mass of interest.  Ideas about where to look for what sort of
> bug (...) tend to propagate through the population of experts who might
> find bugs slowly at first, and then faster and faster in what's probably
> an exponential way.

That's interesting.  Now consider what happens if I search for and find a
bug and disclose it.  All blackhats can start using it right away, but
some people will patch.  This is described eloquently in Figure 1 of
Eric's paper.  However, if I don't disclose it, then by your argument
someone will find it very soon afterwards.  For the sake of argument, let's
assume that it's a blackhat.  Since I didn't disclose it, no one has
patched their systems yet, and the blackhat has a period of private
exploitation until the bug becomes publicly known, at which point it gets
disclosed and patched.  This scenario is illustrated by Figure 2 in the
paper, and since the rediscovery is very fast, the starting point in both
figures is the same.  Of course the blackhat could wait for the most
opportune time to use the bug, but by your assumption someone else will
find the bug soon and cause it to be patched, so waiting isn't worth it
for him.

The private exploitation time is the only difference between the two
figures.  However, it definitely costs me both time and money
to go searching for bugs.  If the blackhat is going to find it and use it
fairly soon anyway, why should I go through the effort of finding the bug
in the first place?  If we don't go searching, we as a community will
soon be letting the blackhats do our work for us.
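The comparison between the two figures can be sketched numerically (all
rates below are invented for illustration): the only term that differs
is the damage accumulated during the private-exploitation window.

```python
# Sketch (invented numbers): total intrusions with immediate disclosure
# (Figure 1) vs. a private-exploitation window first (Figure 2).

def total_intrusions(private_days, public_days=30,
                     private_rate=2.0, public_rate=50.0, patch_decay=0.9):
    # Private window: small black-hat group, nobody has patched yet.
    damage = private_rate * private_days
    # Public window: many attackers, but patching decays the rate daily.
    rate = public_rate
    for _ in range(public_days):
        damage += rate
        rate *= patch_decay
    return damage

disclose_now = total_intrusions(private_days=0)    # Figure 1
keep_quiet = total_intrusions(private_days=20)     # Figure 2, fast rediscovery
print(f"disclose immediately:       ~{disclose_now:.0f} intrusions")
print(f"private exploitation first: ~{keep_quiet:.0f} intrusions")
```

Since the public-window term is identical in both scenarios, the gap
between the two totals is exactly the private-exploitation damage, which
is the quantity the searching-for-bugs effort has to be weighed against.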

I think Eric's claim is that *proactive* finding and disclosure of bugs is
not worthwhile.  However, it seems to me he advocates *reactive* disclosure
for bugs that are already being exploited by blackhats.  Proactively
finding bugs doesn't increase security much because it doesn't make
software more secure overall, and he assumes that most of the damage from
bugs (even bugs discovered by blackhats) comes after they become public.

One issue I had with this paper is that he didn't take on the question of
which bugs were proactively found (by researchers looking for bugs) and
which bugs were reactively found (by observing the blackhat community).  I
know this is a very difficult thing to determine, but his paper brought
this question to light.

Of course, all of this assumes the bugs are from known classes.  Finding
new classes of vulnerabilities has many other worthwhile benefits.

> I think that this sort of thing is going to turn out to be _very_ hard
> to tease out evidence for or against using naive studies of bug
> commission, discovery, or rediscovery rates; but it is my intuition
> based on many years of making, finding, and fixing bugs, and of watching
> others eventually redo my work in the cases in which I'd managed to fail
> to let them know about it.  I would argue that in fact this pattern is
> not the exception; it is the rule.

I agree it's going to be very hard, but evidently, from this conversation,
people disagree about whether this pattern is the exception or the rule.
Some actual evidence (and not just anecdotal evidence) is warranted.

 Rick
