Re: MD5 considered harmful today

2008-12-30 Thread Eric Rescorla
At Tue, 30 Dec 2008 11:51:06 -0800 (PST),
"Hal Finney" wrote:
> Therefore the highest priority should be for the six bad CAs to change
> their procedures, at least start using random serial numbers and move
> rapidly to SHA1. As long as this happens before Eurocrypt or whenever
> the results end up being published, the danger will have been averted.
> This, I think, is the main message that should be communicated from this
> important result.

VeriSign says that they have already fixed RapidSSL:

https://blogs.verisign.com/ssl-blog/2008/12/on_md5_vulnerabilities_and_mit.php

   Q: How will VeriSign mitigate this problem?
   
   A: VeriSign has removed this vulnerability. As of shortly before this
   posting, the attack laid out this morning in Berlin cannot be
   successful against any RapidSSL certificate nor any other SSL
   Certificate that VeriSign sells under any brand.
   
   
   Q: Does that mean VeriSign has discontinued use of MD5?
   
   A: We have been in the process of phasing out the MD5 hashing
   algorithm for a long time now. MD5 is not in use in most VeriSign
   certificates for most applications, and until this morning our roadmap
   had us discontinuing the last use of MD5 in our customers'
   certificates before the end of January, 2009. Today's presentation
   showed how to combine MD5 collision attacks with some other clever
   bits of hacking to create a false certificate. We have discontinued
   using MD5 when we issue RapidSSL certificates, and we've confirmed
   that all other SSL Certificates we sell are not vulnerable to this
   attack. We'll continue on our path to discontinue MD5 in all end
   entity certificates by the end of January, 2009.

Incidentally, most of the CA names in Slide 19 are VeriSign
brands. In particular, RapidSSL, RSA, Thawte, and Verisign.co.jp are,
and I believe that FreeSSL is as well.

-Ekr

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: MD5 considered harmful today

2008-12-30 Thread "Hal Finney"
Re: http://www.win.tue.nl/hashclash/rogue-ca/

Key facts:

 - 6 CAs were found still using MD5 in 2008: RapidSSL, FreeSSL, TC
   TrustCenter AG, RSA Data Security, Thawte, verisign.co.jp. "Out of the
   30,000 certificates we collected, about 9,000 were signed using MD5,
   and 97% of those were issued by RapidSSL." RapidSSL was used for the
   attack.

 - The attack relies on cryptographic advances in the state of the art for
   finding MD5 collisions from inputs with different prefixes. These advances
   are not yet being published but will presumably appear in 2009.

 - The collision was found using Arjen Lenstra's PlayStation Lab and used
   200 PS3s with collectively 30 GB of memory. The attack is in two parts,
   a new preliminary "birthdaying" step which is highly parallelizable and
   required 18 hours on the PS3s, and a second stage which constructs the
   actual collision using 3 MD5 blocks and runs on a single quad core PC,
   taking 3 to 10 hours.

 - The attack depends on guessing precisely the issuing time and serial
   number of the "good" certificate, so that a colliding "rogue"
   certificate can be constructed in advance. The time was managed
   by noting that the cert issuing time was reliably 6 seconds after
   the request was sent. The serial number was managed because RapidSSL
   uses serially incrementing serial numbers. They guessed what serial
   number would be in use 3 days hence, and bought enough dummy certs
   just before the real one that hopefully the guessed serial number would
   be hit.

 - The attacks were mounted on the weekend, when cert issuance rates are
   lower. It took 4 weekends before all the timing and guessing worked right.
   The cert was issued November 3, 2008, and the total cert-purchase cost was
   $657.

 - The rogue cert, which has the basicConstraints CA field set to TRUE, was
   intentionally back-dated to 2004 so even if the private key were stolen,
   it could not be misused.
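The reason a collision matters at all is that the CA's signature covers only a
hash of the certificate body, not the body itself. A minimal sketch of that
structure (HMAC stands in for the CA's real RSA signature; all names and
certificate contents are illustrative, not from the attack):

```python
import hashlib
import hmac

CA_KEY = b"ca-signing-key"  # stand-in for the CA's private key

def ca_sign(tbs_cert: bytes) -> bytes:
    # The CA signs only the MD5 digest of the to-be-signed certificate
    # body, so any two bodies with the same MD5 digest share a signature.
    digest = hashlib.md5(tbs_cert).digest()
    return hmac.new(CA_KEY, digest, hashlib.sha256).digest()

def verify(tbs_cert: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(ca_sign(tbs_cert), sig)

good_cert = b"CN=www.example.com, CA=FALSE"   # hypothetical content
sig = ca_sign(good_cert)
assert verify(good_cert, sig)
# A chosen-prefix collision yields a rogue body with md5(rogue) equal to
# md5(good_cert); verify(rogue, sig) would then also return True.
```

This is why a chosen-prefix collision against MD5 translates directly into a
transplantable CA signature.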

My take on this is that because the method required advances in
cryptography and sophisticated hardware, it is unlikely that it could
be exploited by attackers before the publication of the method, or
the publication of equivalent improvements by other cryptographers. If
these CAs stop issuing MD5 certs before this time, we will be OK. Once
a CA stops issuing MD5 certs, it cannot be used for the attack. Its old
MD5 certs are safe and there is no danger of future successful attacks
along these lines.  As the paper notes, changing to using random serial
numbers may be an easier short-term fix.
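The random-serial fix is cheap to implement; a minimal sketch using Python's
`secrets` module (the 159-bit size is one reasonable choice, keeping the
serial positive and within the 20-octet limit X.509 places on serial numbers):

```python
import secrets

def random_serial() -> int:
    # 159 random bits: always positive, and fits within the 20-octet
    # DER INTEGER ceiling that RFC 5280 places on X.509 serial numbers.
    # An attacker can no longer predict the serial of a future cert,
    # which breaks the collision-construction step of the attack.
    return secrets.randbits(159)
```

Because the colliding certificate must be built before the real one is issued,
an unpredictable serial number defeats the attack even if MD5 remains in use.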

Therefore the highest priority should be for the six bad CAs to change
their procedures, at least start using random serial numbers and move
rapidly to SHA1. As long as this happens before Eurocrypt or whenever
the results end up being published, the danger will have been averted.
This, I think, is the main message that should be communicated from this
important result.

Hal Finney



Re: very high speed hardware RNG

2008-12-30 Thread Jon Callas


On Dec 30, 2008, at 2:11 PM, Jerry Leichter wrote:


On Dec 30, 2008, at 4:40 PM, Jon Callas wrote:
We don't have a formal definition of what we mean by random. My  
definition is that it needs to be unguessable: if I have a random  
number, the work factor for you to guess it is more or less its  
randomness. It's a Shannonesque way of looking at things, but not  
precisely information-theoretic.
I don't think this quite captures the situation.  It's easy to give  
a formal definition of randomness; it's just not clear that such a  
thing can ever be realized.


And that is, pretty much, the point. A formal definition that is no  
guidance to people trying to build things is mathematics as art. Not  
that art is a bad thing -- I adore mathematics as art, and my time as  
a mathematician was all in the art department. It's just not useful to  
the part of me that's an engineer. Pretty has worth, just different  
worth than useful.





Here's one definition:  A random bitstream generator is an  
"isolated" source of an infinite stream of bits, numbered starting  
at zero, with the property that a Turing machine, given as input  
anything about the universe except the internal state of the  
"isolated" source, and bits 0 to n generated by the source, has no  
better than a 50% chance of correctly guessing bit n+1.  The  
difficulty is entirely in that quoted "isolated".  It's not so much  
that we can't define it as that given any definition that captures  
the intended meaning, there are no known systems that we can  
definitely say are "isolated" in that sense.  (Well, there's kind of  
an exception:  Quantum mechanics tells us that the outcome of  
certain experiments is "random", and Bell's Theorem gives us some  
kind of notion of "isolation" by saying there are no hidden  
variables - but this is very technically complex and doesn't really  
say anything nearly so simple.)


A while back, I wrote to this list about some work toward a stronger  
notion of "computable in principle", based on results in quantum  
mechanics that limit the amount of computation - in the very basic  
sense of bit flips - that can be done in a given volume of space- 
time.  The argument is that a computation that needs more than this  
many bit flips can't reasonably be defined as possible "in  
principle" just because we can describe what such a computation  
would look like, if the universe permitted it!  One might produce a  
notion of "strongly computationally random" based on such a theory.   
Curiously, as I remarked in that message, somewhere between 128 and  
256 bits of key, a brute force search transitions from "impractical  
for the foreseeable future" to "not computable in principle".  So at  
least for brute force attacks - we're actually at the limits  
already.  Perhaps it might actually be possible to construct such a  
"random against any computation that's possible in principle" source.


A deterministic, but chaotic system that is sufficiently opaque  
gets pretty close to random. Let's just suppose that the model they  
give of photons bouncing in their laser is Newtonian. If there's  
enough going on in there, we can't model it effectively and it can  
be considered random because we can't know its outputs.
I don't like the notion of "opaqueness" in the context.  That just  
means we can't see any order that might be in there.  There's a  
classic experiment - I think Scientific American had pictures of  
this maybe 10 years back - in which you take a pair of concentric  
cylinders, fill the gap with a viscous fluid in which you draw a  
line with dye parallel to the cylinders' common axis.  Now slowly  
turn the inner cylinder, dragging the dye along.  This is a highly  
chaotic process, and after a short time, you see a completely random- 
looking dispersion of dye through the liquid.  Present this to  
someone and any likely test will say this is quite random.  But ...  
if you slowly turn the inner cylinder backwards - "slowly", for both  
directions of turn, depending on details of the system - the  
original line of dye miraculously reappears.


That's why it's not enough to have chaos, not enough to have  
opaqueness.  The last thing you want to say is "this system is so  
complicated that I can't model it, so my opponent can't model it  
either, so it's random".  To the contrary, you *want* a model that  
tells you something about *why* this system is hard to predict!


Exactly. You've described a chaotic but easily reversible system, and  
that makes it unsuitable to be random by my "unguessability" metric.  
There exists an algorithm that's faster than guessing, and so  
therefore it isn't random.





However, on top of that, there's a problem that hardware people  
(especially physicists) just don't get about useful randomness,  
especially cryptographic random variables. Dylan said that to live  
outside the law, you must be honest. A cryptographic random  
variable has to look a certain way, it has to be honest.

Re: very high speed hardware RNG

2008-12-30 Thread Jerry Leichter

On Dec 30, 2008, at 4:40 PM, Jon Callas wrote:
We don't have a formal definition of what we mean by random. My  
definition is that it needs to be unguessable: if I have a random  
number, the work factor for you to guess it is more or less its  
randomness. It's a Shannonesque way of looking at things, but not  
precisely information-theoretic.
I don't think this quite captures the situation.  It's easy to give a  
formal definition of randomness; it's just not clear that such a thing  
can ever be realized.


Here's one definition:  A random bitstream generator is an "isolated"  
source of an infinite stream of bits, numbered starting at zero, with  
the property that a Turing machine, given as input anything about the  
universe except the internal state of the "isolated" source, and bits  
0 to n generated by the source, has no better than a 50% chance of  
correctly guessing bit n+1.  The difficulty is entirely in that quoted  
"isolated".  It's not so much that we can't define it as that given  
any definition that captures the intended meaning, there are no known  
systems that we can definitely say are "isolated" in that sense.   
(Well, there's kind of an exception:  Quantum mechanics tells us that  
the outcome of certain experiments is "random", and Bell's Theorem  
gives us some kind of notion of "isolation" by saying there are no  
hidden variables - but this is very technically complex and doesn't  
really say anything nearly so simple.)
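This definition is essentially the classical next-bit test. A toy version,
with a simple frequency predictor standing in for the Turing machine (all
names and thresholds are illustrative):

```python
def next_bit_advantage(bits, predict):
    # Fraction of positions where predict(bits[:i]) guesses bits[i].
    # A source that is "random" in the sense above pins any predictor
    # near 1/2; structure shows up as advantage above that.
    hits = sum(1 for i in range(1, len(bits)) if predict(bits[:i]) == bits[i])
    return hits / (len(bits) - 1)

def majority_predictor(prefix):
    # Guess whichever bit has been seen most often so far.
    return 1 if sum(prefix) * 2 >= len(prefix) else 0

biased = [1, 1, 0, 1, 1, 1, 0, 1] * 100   # 75% ones
alternating = [0, 1] * 400                # balanced but patterned
assert next_bit_advantage(biased, majority_predictor) > 0.6
```

A real test would quantify over all efficient predictors, which is exactly the
part that makes "isolated" so hard to realize.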


A while back, I wrote to this list about some work toward a stronger  
notion of "computable in principle", based on results in quantum  
mechanics that limit the amount of computation - in the very basic  
sense of bit flips - that can be done in a given volume of space- 
time.  The argument is that a computation that needs more than this  
many bit flips can't reasonably be defined as possible "in principle"  
just because we can describe what such a computation would look like,  
if the universe permitted it!  One might produce a notion of "strongly  
computationally random" based on such a theory.  Curiously, as I  
remarked in that message, somewhere between 128 and 256 bits of key, a  
brute force search transitions from "impractical for the foreseeable  
future" to "not computable in principle".  So at least for brute force  
attacks - we're actually at the limits already.  Perhaps it might  
actually be possible to construct such a "random against any  
computation that's possible in principle" source.


A deterministic, but chaotic system that is sufficiently opaque gets  
pretty close to random. Let's just suppose that the model they give  
of photons bouncing in their laser is Newtonian. If there's enough  
going on in there, we can't model it effectively and it can be  
considered random because we can't know its outputs.
I don't like the notion of "opaqueness" in the context.  That just  
means we can't see any order that might be in there.  There's a  
classic experiment - I think Scientific American had pictures of this  
maybe 10 years back - in which you take a pair of concentric  
cylinders, fill the gap with a viscous fluid in which you draw a line  
with dye parallel to the cylinders' common axis.  Now slowly turn the  
inner cylinder, dragging the dye along.  This is a highly chaotic  
process, and after a short time, you see a completely random-looking  
dispersion of dye through the liquid.  Present this to someone and any  
likely test will say this is quite random.  But ... if you slowly turn  
the inner cylinder backwards - "slowly", for both directions of turn,  
depending on details of the system - the original line of dye  
miraculously reappears.


That's why it's not enough to have chaos, not enough to have  
opaqueness.  The last thing you want to say is "this system is so  
complicated that I can't model it, so my opponent can't model it  
either, so it's random".  To the contrary, you *want* a model that  
tells you something about *why* this system is hard to predict!


However, on top of that, there's a problem that hardware people  
(especially physicists) just don't get about useful randomness,  
especially cryptographic random variables. Dylan said that to live  
outside the law, you must be honest. A cryptographic random variable  
has to look a certain way, it has to be honest. It's got to be  
squeaky clean in many ways. A true random variable does not. A true  
random variable can decide that it'll be evenly distributed today,  
normal tomorrow, or perhaps Poisson -- the way we decide what  
restaurant to go to. No, no, not Italian; I had Italian for lunch.


That's why we cryptographers always run things through a lot of  
software. It's also why we want to see our hardware randomness, so  
we can correct for the freedom of the physical process. Imagine a  
die that is marked with a 1, four 4s, and a 5. This die is crap to  
play craps with, but we can still feed an RNG with it. We just need  
to know that it's not what it seems.

Re: very high speed hardware RNG

2008-12-30 Thread Jon Callas


The thing that bothers me about this description is the too-easy  
jump between "chaotic" and "random".  They're different concepts,  
and chaotic doesn't imply random in a cryptographic sense:  It may  
be possible to induce bias or even some degree of predictability in  
a chaotic system by manipulating its environment.  I believe there  
are also chaotic systems that are hard to predict in the forward  
direction, but easy to run backwards, at least sometimes.


That's not to say this system isn't good - it probably is - but just  
saying its chaotic shouldn't be enough.




You are saying pretty much what I've been saying about this (and some  
other things).


We don't have a formal definition of what we mean by random. My  
definition is that it needs to be unguessable: if I have a random  
number, the work factor for you to guess it is more or less its  
randomness. It's a Shannonesque way of looking at things, but not  
precisely information-theoretic.


A deterministic, but chaotic system that is sufficiently opaque gets  
pretty close to random. Let's just suppose that the model they give of  
photons bouncing in their laser is Newtonian. If there's enough going  
on in there, we can't model it effectively and it can be considered  
random because we can't know its outputs.


However, on top of that, there's a problem that hardware people  
(especially physicists) just don't get about useful randomness,  
especially cryptographic random variables. Dylan said that to live  
outside the law, you must be honest. A cryptographic random variable  
has to look a certain way, it has to be honest. It's got to be squeaky  
clean in many ways. A true random variable does not. A true random  
variable can decide that it'll be evenly distributed today, normal  
tomorrow, or perhaps Poisson -- the way we decide what restaurant to  
go to. No, no, not Italian; I had Italian for lunch.


That's why we cryptographers always run things through a lot of  
software. It's also why we want to see our hardware randomness, so we  
can correct for the freedom of the physical process. Imagine a die  
that is marked with a 1, four 4s, and a 5. This die is crap to play  
craps with, but we can still feed an RNG with it. We just need to know  
that it's not what it seems.
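The software correction Jon describes can be as simple as a von Neumann
extractor, which turns independent but biased bits into unbiased ones (the
80/20 bias below is illustrative):

```python
import random

def von_neumann(bits):
    # Examine non-overlapping pairs: 01 -> 0, 10 -> 1, 00/11 discarded.
    # If input bits are independent, the output is exactly unbiased,
    # however lopsided the source -- at a cost in yield.
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# A source as skewed as the 1/4/4/4/4/5 die still yields fair bits.
biased = [1 if random.random() < 0.8 else 0 for _ in range(10000)]
fair = von_neumann(biased)
```

The crucial caveat is the same one Jon raises: this only works if you see the
raw output and know its failure modes, which is why hardware that helpfully
whitens before you can look is worse than hardware that doesn't.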


So yeah -- it's a glib confusion between chaotic and random, but  
chaotic enough might be good enough. And the assumption that hardware  
can just be used is bad. Hardware that helpfully whitens is worse.


Jon



Re: Security by asking the drunk whether he's drunk

2008-12-30 Thread Sidney Markowitz
Sidney Markowitz wrote, On 31/12/08 10:08 AM:
> or that CA root certs that use MD5 for their hash are
> still in use and have now been cracked?

I should remember -- morning coffee first, then post.

The CA root certs themselves have not been cracked. What has been
attacked are the digital signatures created by CAs that still use MD5
to sign the certs they issue: the known weakness in MD5 allows one to
create two certs with the same MD5 hash, one legitimate, to be signed
by the CA, and another for rogue use that can be given the same
signature.



Steve Bellovin on the MD5 Collision attacks, more on Wired

2008-12-30 Thread David G. Koontz
http://www.cs.columbia.edu/~smb/blog//2008-12/2008-12-30.html

Steve mentions the social pressures involved in disclosing the vulnerability:

Verisign, in particular, appears to have been caught short. One of the CAs
they operate still uses MD5. They said:

The RapidSSL certificates are currently using the MD5 hash function
  today. And the reason for that is because when you're dealing with
  widespread technology and [public key infrastructure] technology, you have
  phase-in and phase-out processes that can take significant periods of
  time to implement.

 ...
[4 years?]

Legal pressure? Sotirov and company are not "hackers"; they're respected
researchers. But the legal climate is such that they feared an injunction.
Nor are such fears ill-founded; others have had such trouble. Verisign isn't
happy: "We're a little frustrated at Verisign that we seem to be the only
people not briefed on this". But given that the researchers couldn't know
how Verisign would react, in today's climate they felt they had to be cautious.

This is a dangerous trend. If good guys are afraid to find flaws in fielded
systems, that effort will be left to the bad guys. Remember that for
academics, publication is the only way they're really "paid". We need a
legal structure in place to protect security researchers. To paraphrase an
old saying, security flaws don't crack systems, bad guys do.

 --

The researchers provided information under NDA to browser manufacturers and
Microsoft contacted Verisign providing no real details
(http://blog.wired.com/27bstroke6/2008/12/berlin.html , the Wired article.):

Callan confirms VeriSign was contacted by Microsoft, but he says the NDA
prevented the software-maker from providing any meaningful details on the
threat. "We're a little frustrated at Verisign that we seem to be the only
people not briefed on this," he says.

The researchers expect that their forged CA certificate will be revoked by
Verisign following their talk, rendering it powerless. As a precaution, they
set the expiration date on the certificate to August 2004, ensuring that any
website validated through the bogus certificate would generate a warning
message in a user's browser.

 ---

The 2007 paper http://www.win.tue.nl/hashclash/EC07v2.0.pdf

Chosen-prefix Collisions for MD5 and Colliding X.509 Certificates for Different
Identities, Marc Stevens , Arjen Lenstra , and Benne de Weger

(also from the Wired article)

 --

Nate Lawson's comments
http://rdist.root.org/2008/12/30/forged-ca-cert-talk-at-25c3/
To paraphrase Gibson, “Crypto security is available already, it just isn’t
equally distributed.”




Researchers Use PlayStation Cluster to Forge a Web Skeleton Key

2008-12-30 Thread David G. Koontz
http://blog.wired.com/27bstroke6/2008/12/berlin.html

More coverage on the MD5 collisions.



Researchers Show How to Forge Site Certificates |

2008-12-30 Thread David G. Koontz
http://www.freedom-to-tinker.com/blog/felten/researchers-show-how-forge-site-certificates

 By Ed Felten - Posted on December 30th, 2008 at 11:18 am

Today at the Chaos Computing Congress, a group of researchers (Alex Sotirov,
Marc Stevens, Jake Appelbaum, Arjen Lenstra, Benne de Weger, and David
Molnar) announced that they have found a way to forge website certificates
that will be accepted as valid by most browsers. This means that they can
successfully impersonate any website, even for secure connections.


 ---

Through the use of MD5 collisions.  The slides from the presentation are
available here:

http://events.ccc.de/congress/2008/Fahrplan/events/3023.en.html

The presentation entitled "MD5 considered harmful today, Creating a rogue CA
Certificate"

The collisions were found with a cluster of 200 PlayStation 3's. (slide
number 3, see slide number 25 for a picture of the cluster, a collision
taking one to two days)

They apparently did a live demo using forged certificates in a man in the
middle attack using a wireless network during the demonstration with access
by the audience. (slide number 5)

 CAs still using MD5 in 2008:  (slide number 19)
  - RapidSSL
  - FreeSSL
  - TrustCenter
  - RSA Data Security
  - Thawte
  - verisign.co.jp




Re: very high speed hardware RNG

2008-12-30 Thread Jack Lloyd
On Tue, Dec 30, 2008 at 11:45:27AM -0500, Steven M. Bellovin wrote:

> Of course, every time a manufacturer has tried it, assorted people
> (including many on this list) complain that it's been sabotaged by the
> NSA or by alien space bats or some such.

Well, maybe it has. Or maybe it was just not competently implemented,
or perhaps it has a failure mode that was not accounted for. The
design might be perfect but the physical implementation that happens
to be in your computer has a manufacturing flaw such that if the CPU
core voltage is slightly low and the ambient temperature is above 95F,
the raw output becomes biased from a uniform distribution in a subtle
way - how do you detect something like that? I personally would not
trust the direct output of any physical hardware source for anything,
precisely because you cannot measure it, or test for failures, in any
meaningful way. That does not mean it is not a useful thing to have.
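A basic statistical check of the raw output illustrates both the value and
the limits of such testing; a sketch in the spirit of the FIPS 140 monobit
test (the z threshold is illustrative):

```python
def monobit_ok(bits, z=3.0):
    # The count of ones in a sample from a fair source should sit within
    # z standard deviations of n/2 (stddev of the count is sqrt(n)/2).
    # This catches gross, persistent bias.  The subtle conditional
    # failure described above (low core voltage plus high ambient
    # temperature) would sail straight through a test like this.
    n = len(bits)
    ones = sum(bits)
    return abs(ones - n / 2) <= z * (n ** 0.5) / 2
```

Which is exactly the point: health tests bound only the failures you thought
to test for, so the raw source should feed a conditioner, not applications.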

> It's not obvious to me that you're right.  In particular, we need to
> consider how such an instruction would interact with a virtual machine
> hypervisor.  Is it a bug or a feature that the hypervisor can't
> intercept the request?  Remember that reproducibility is often a virtue.

We already have this problem with rdtsc and equivalent cycle counter
reading instructions. ISTR that some architectures allow the kernel to
forbid access to the cycle counter - if so, similar techniques could
be used for a rdrandom instruction. For those that don't, the
non-reproducibility ship has already sailed (think of rdtsc as a
rdrandom that has a very bad probability distribution).

Reproducibility is sometimes a virtue, but sometimes not. I recall
discussions last year, I believe on this list, about how to design a
PRNG that was able to safely deal with VM state rollbacks. A
user-accessible RNG instruction would easily alleviate this problem.

> The JVM could just as easily open /dev/urandom today.

Except when it doesn't exist - in which case most Java software seems
to default to things like seeding a PRNG with the timestamp, because
the other alternatives that are feasible in Java, like interleaving
counter reads among multiple threads, are slow, difficult to implement
correctly, and even more difficult to test.

-Jack



MD5 considered harmful today

2008-12-30 Thread Jacob Appelbaum
Hello,

I wanted to chime in more during the previous x509 discussions but I was
delayed by some research.

I thought that I'd like to chime in that this new research about
attacking x509 is now released. We gave a talk about it at the 25c3
about an hour or two ago.

MD5 considered harmful today: Creating a rogue CA certificate

Nearly everything is here:
http://www.win.tue.nl/hashclash/rogue-ca/

Best,
Jacob





Re: very high speed hardware RNG

2008-12-30 Thread Steven M. Bellovin
On Sun, 28 Dec 2008 23:49:06 -0500
Jack Lloyd  wrote:

> On Sun, Dec 28, 2008 at 08:12:09PM -0500, Perry E. Metzger wrote:
> > 
> > Semiconductor laser based RNG with rates in the gigabits per second.
> > 
> > http://www.physorg.com/news148660964.html
> > 
> > My take: neat, but not as important as simply including a decent
> > hardware RNG (even a slow one) in all PC chipsets would be.

Of course, every time a manufacturer has tried it, assorted people
(including many on this list) complain that it's been sabotaged by the
NSA or by alien space bats or some such.
 
> I've been thinking that much better than a chipset addition (which is
> only accessible by the OS kernel in most environments) would be a
> simple ring-3 (or equivalent) accessible instruction that writes 32 or
> 64 bits of randomness from a per-core hardware RNG, something like
> 
> ; write 32 bits of entropy from the hardware RNG to eax register
> rdrandom %eax
> 
> Which would allow user applications to access a good hardware RNG
> directly, in addition to allowing the OS to read bits to seed the
> system PRNG (/dev/random, CryptoGenRandom, or similar)

It's not obvious to me that you're right.  In particular, we need to
consider how such an instruction would interact with a virtual machine
hypervisor.  Is it a bug or a feature that the hypervisor can't
intercept the request?  Remember that reproducibility is often a virtue.
> 
> I think the JVM in particular could benefit from such an extension, as
> the abstractions it puts into place otherwise prevent most of the
> methods one might use to gather high-quality entropy for a PRNG seed.
> 
The JVM could just as easily open /dev/urandom today.

--Steve Bellovin, http://www.cs.columbia.edu/~smb



Fw: [saag] Further MD5 breaks: Creating a rogue CA certificate

2008-12-30 Thread Steven M. Bellovin


Begin forwarded message:

Date: Tue, 30 Dec 2008 11:05:28 -0500
From: Russ Housley 
To: ietf-p...@imc.org, ietf-sm...@imc.org, s...@ietf.org, c...@irtf.org
Subject: [saag] Further MD5 breaks: Creating a rogue CA certificate


http://www.win.tue.nl/hashclash/rogue-ca/

MD5 considered harmful today
Creating a rogue CA certificate

December 30, 2008

Alexander Sotirov, Marc Stevens,
Jacob Appelbaum, Arjen Lenstra, David Molnar, Dag Arne Osvik, Benne de
Weger

We have identified a vulnerability in the Internet Public Key 
Infrastructure (PKI) used to issue digital certificates for secure 
websites. As a proof of concept we executed a practical attack 
scenario and successfully created a rogue Certification Authority 
(CA) certificate trusted by all common web browsers. This certificate 
allows us to impersonate any website on the Internet, including 
banking and e-commerce sites secured using the HTTPS protocol.

Our attack takes advantage of a weakness in the MD5 cryptographic 
hash function that allows the construction of different messages with 
the same MD5 hash. This is known as an MD5 "collision". Previous work 
on MD5 collisions between 2004 and 2007 showed that the use of this 
hash function in digital signatures can lead to theoretical attack 
scenarios. Our current work proves that at least one attack scenario 
can be exploited in practice, thus exposing the security 
infrastructure of the web to realistic threats.

___
saag mailing list
s...@ietf.org
https://www.ietf.org/mailman/listinfo/saag




--Steve Bellovin, http://www.cs.columbia.edu/~smb



Short announcement: MD5 considered harmful today - Creating a rogue CA certificate

2008-12-30 Thread Weger, B.M.M. de
Hi all,

Today, 30 December 2008, at the 25th Annual Chaos Communication Congress in 
Berlin,
we announced that we are currently in possession of a rogue Certification
Authority certificate. This certificate will be accepted as valid and trusted 
by 
all common browsers, because it appears to be signed by one of the commercial 
root 
CAs that browsers trust by default. We were able to do so by constructing a 
collision for the MD5 hash function, obtaining a valid CA signature in a 
website 
certificate legitimately purchased from the commercial CA, and copying this 
signature into a CA certificate constructed by us such that the signature 
remains 
valid. 

For more information about this project, see 
http://www.win.tue.nl/hashclash/rogue-ca/.

The team consists of: 

Alexander Sotirov (independent security researcher, New York, USA), 
Marc Stevens (CWI, Amsterdam, NL), 
Jacob Appelbaum (Noisebridge, The Tor Project, San Francisco, USA), 
Arjen Lenstra (EPFL, Lausanne, CH), 
David Molnar(UCB, Berkeley, USA), 
Dag Arne Osvik (EPFL, Lausanne, CH), 
Benne de Weger (TU/e, Eindhoven, NL).

For press and general inquiries, please email md5-collisi...@phreedom.org.



FBI "code"-cracking contest

2008-12-30 Thread Steven M. Bellovin
http://www.networkworld.com/community/node/36704


--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Security by asking the drunk whether he's drunk

2008-12-30 Thread Ben Laurie
On Tue, Dec 30, 2008 at 4:25 AM, Peter Gutmann
 wrote:
> Ben Laurie  writes:
>
>>what happens when the cert rolls? If the key also changes (which would seem
>>to me to be good practice), then the site looks suspect for a while.
>
> I'm not aware of any absolute figures for this but there's a lot of anecdotal
> evidence that many cert renewals just re-certify the same key year in, year
> out (there was even a lawsuit over the definition of the term "renewal" in
> certificates a few years ago).  So you could in theory handle this by making a
> statement about the key rather than the whole cert it's in.  OTOH this then
> requires the crawler to dig down into the data structure (SSH, X.509,
> whatever) to pick out the bits corresponding to the key.

Not really a serious difficulty.

> Other alternatives
> are to use a key-rollover mechanism that signs the new key with old one
> (something that I've proposed for SSH, since their key-continuity model kinda
> breaks at that point), and all the other crypto rube-goldbergisms you can
> dream up.

Yeah, that's pretty much the answer I came up with - another option
would be to use both the old and new certs for a while.

But signing the new with the old seems easiest to implement - the
signature can go in an X509v3 extension, which means CAs can sign it
without understanding it, and only Google has to be able to verify it,
so all that needs to change is CSR generating s/w...
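A toy version of that rollover link (small-prime textbook RSA standing in for
the site's real key, and plain byte strings standing in for the X.509
plumbing): the old key signs a digest of the new public key, and anyone who
cached the old key can verify the extension without the CA interpreting it.

```python
import hashlib

# Toy RSA pair standing in for the site's OLD key (textbook numbers).
OLD_N, OLD_E, OLD_D = 3233, 17, 2753

def digest_int(data: bytes) -> int:
    """SHA-256 digest reduced into the toy modulus."""
    return int(hashlib.sha256(data).hexdigest(), 16) % OLD_N

def rollover_extension(new_pubkey: bytes) -> int:
    # Signed with the OLD private key; this value would ride in an X.509v3
    # extension that the CA signs over without needing to understand it.
    return pow(digest_int(new_pubkey), OLD_D, OLD_N)

def verify_rollover(new_pubkey: bytes, ext: int) -> bool:
    # A verifier (e.g. the crawler) that remembers the OLD public key
    # checks that the new key was endorsed by the old one.
    return pow(ext, OLD_E, OLD_N) == digest_int(new_pubkey)

new_pub = b"hypothetical DER of the new public key"
ext = rollover_extension(new_pub)
assert verify_rollover(new_pub, ext)
```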



Re: Security by asking the drunk whether he's drunk

2008-12-30 Thread Peter Gutmann
Ben Laurie  writes:

>what happens when the cert rolls? If the key also changes (which would seem
>to me to be good practice), then the site looks suspect for a while.

I'm not aware of any absolute figures for this but there's a lot of anecdotal
evidence that many cert renewals just re-certify the same key year in, year
out (there was even a lawsuit over the definition of the term "renewal" in
certificates a few years ago).  So you could in theory handle this by making a
statement about the key rather than the whole cert it's in.  OTOH this then
requires the crawler to dig down into the data structure (SSH, X.509,
whatever) to pick out the bits corresponding to the key.  Other alternatives
are to use a key-rollover mechanism that signs the new key with old one
(something that I've proposed for SSH, since their key-continuity model kinda
breaks at that point), and all the other crypto rube-goldbergisms you can
dream up.

In any case though at the moment we have basically no assurance at all of
key/cert information so even a less-than-perfect mechanism like trusting
Google and having problems during cert rollover is way, way better than what
we've got now.  In any case if Google decides to go bad then redirecting
everyone's searches to www.drivebymalware.ru is a bigger worry than whether
they're sending out inaccurate Perspectives responses.

Peter.



Re: Security by asking the drunk whether he's drunk

2008-12-30 Thread Ben Laurie
On Mon, Dec 29, 2008 at 10:10 AM, Peter Gutmann
 wrote:
> David Molnar  writes:
>
>>Service from a group at CMU that uses semi-trusted "notary" servers to
>>periodically probe a web site to see which public key it uses. The notaries
>>provide the list of keys used to you, so you can attempt to detect things
>>like a site that has a different key for you than previously shown to all of
>>the notaries. The idea is that to fool the system, the adversary has to
>>compromise all links between the target site and the notaries all the time.
>
> I think this is missing the real contribution of Perspectives, which (like
> almost any security paper) has to include a certain quota of crypto rube-
> goldbergism in order to satisfy conference reviewers.  The real value isn't the
> multi-path verification and crypto signing facilities and whatnot but simply
> the fact that you now have something to deal with leap-of-faith
> authentication, whether it's for self-generated SSH or SSL keys or for rent-a-
> CA certificates.  Currently none of these provide any real assurance since a
> phisher can create one on the fly as and when required.  What Perspectives
> does is guarantee (or at least provide some level of confidence) that a given
> key has been in use for a set amount of time rather than being a here-this-
> morning, gone-in-the-afternoon affair like most phishing sites are.  In other
> words a phisher would have to maintain their site for a week, a month, a year,
> of continuous operation, not just set it up an hour after the phishing email
> goes out and take it down again a few hours later.
>
> For this function just a single source is sufficient, thus my suggestion of
> Google incorporating it into their existing web crawling.  You can add the
> crypto rube goldberg extras as required, but a basic "this site has been in
> operation at the same location with the same key for the past eight months" is
> a powerful bar to standard phishing approaches, it's exactly what you get in
> the bricks-and-mortar world, "Serving the industry since 1962" goes a lot
> further than "Serving the industry since just before lunchtime".

Two issues occur to me. Firstly, you have to trust Google (and your
path to Google).

Secondly, and this seems to me to be a generic issue with Perspectives
and SSL - what happens when the cert rolls? If the key also changes
(which would seem to me to be good practice), then the site looks
suspect for a while.



Re: Security by asking the drunk whether he's drunk

2008-12-30 Thread Peter Gutmann
David Molnar  writes:

>Service from a group at CMU that uses semi-trusted "notary" servers to
>periodically probe a web site to see which public key it uses. The notaries
>provide the list of keys used to you, so you can attempt to detect things
>like a site that has a different key for you than previously shown to all of
>the notaries. The idea is that to fool the system, the adversary has to
>compromise all links between the target site and the notaries all the time.

I think this is missing the real contribution of Perspectives, which (like
almost any security paper) has to include a certain quota of crypto rube-
goldbergism in order to satisfy conference reviewers.  The real value isn't the
multi-path verification and crypto signing facilities and whatnot but simply
the fact that you now have something to deal with leap-of-faith
authentication, whether it's for self-generated SSH or SSL keys or for rent-a-
CA certificates.  Currently none of these provide any real assurance since a
phisher can create one on the fly as and when required.  What Perspectives
does is guarantee (or at least provide some level of confidence) that a given
key has been in use for a set amount of time rather than being a here-this-
morning, gone-in-the-afternoon affair like most phishing sites are.  In other
words a phisher would have to maintain their site for a week, a month, a year,
of continuous operation, not just set it up an hour after the phishing email
goes out and take it down again a few hours later.
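The single-source continuity check described here needs almost no machinery:
the observer just records when it first saw each (host, key) pair. A minimal
sketch with made-up hostnames and fingerprints:

```python
from datetime import date

# First-seen log a crawler might keep: (host, key fingerprint) -> date.
# Hostnames and fingerprints are hypothetical, for illustration only.
first_seen = {
    ("bank.example", "aa:bb:cc"): date(2008, 4, 1),
    ("phish.example", "dd:ee:ff"): date(2008, 12, 29),
}

def continuity_days(host: str, fingerprint: str, today: date) -> int:
    """Days this exact key has been observed in use at this host."""
    start = first_seen.get((host, fingerprint))
    return (today - start).days if start else 0

today = date(2008, 12, 30)
print(continuity_days("bank.example", "aa:bb:cc", today))   # months of history
print(continuity_days("phish.example", "dd:ee:ff", today))  # set up yesterday
```

A client could then render exactly the "serving the industry since 1962"
signal: any key below some age threshold gets flagged as suspect.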

For this function just a single source is sufficient, thus my suggestion of
Google incorporating it into their existing web crawling.  You can add the
crypto rube goldberg extras as required, but a basic "this site has been in
operation at the same location with the same key for the past eight months" is
a powerful bar to standard phishing approaches, it's exactly what you get in
the bricks-and-mortar world, "Serving the industry since 1962" goes a lot
further than "Serving the industry since just before lunchtime".

Peter.



Re: very high speed hardware RNG

2008-12-30 Thread Jack Lloyd
On Sun, Dec 28, 2008 at 08:12:09PM -0500, Perry E. Metzger wrote:
> 
> Semiconductor laser based RNG with rates in the gigabits per second.
> 
> http://www.physorg.com/news148660964.html
> 
> My take: neat, but not as important as simply including a decent
> hardware RNG (even a slow one) in all PC chipsets would be.

I've been thinking that much better than a chipset addition (which is
only accessible by the OS kernel in most environments) would be a
simple ring-3 (or equivalent) accessible instruction that writes 32 or
64 bits of randomness from a per-core hardware RNG, something like

; write 32 bits of entropy from the hardware RNG to eax register
rdrandom %eax

This would allow user applications to access a good hardware RNG
directly, in addition to allowing the OS to read bits to seed the
system PRNG (/dev/random, CryptGenRandom, or similar).
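Until such an instruction exists, the pattern it enables can be approximated
in software: read raw entropy from whatever the platform exposes (os.urandom
standing in for the hypothetical rdrandom) and fold it into a hash-based
generator. A rough sketch, not a vetted DRBG:

```python
import hashlib
import os

class HashDRBG:
    """Tiny hash-based generator, reseedable from any raw entropy source.
    Illustrative only -- real applications should use the OS CSPRNG."""

    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()
        self.counter = 0

    def reseed(self, entropy: bytes) -> None:
        # Mix fresh hardware entropy into the existing state.
        self.state = hashlib.sha256(self.state + entropy).digest()

    def generate(self, n: int) -> bytes:
        # Counter-mode output: hash(state || counter) blocks, truncated.
        out = b""
        while len(out) < n:
            self.counter += 1
            out += hashlib.sha256(self.state + self.counter.to_bytes(8, "big")).digest()
        return out[:n]

rng = HashDRBG(os.urandom(32))  # seed as if read from the per-core hardware RNG
print(rng.generate(16).hex())
```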

I think the JVM in particular could benefit from such an extension, as
the abstractions it puts into place otherwise prevent most of the
methods one might use to gather high-quality entropy for a PRNG seed.

-Jack



Re: very high speed hardware RNG

2008-12-30 Thread Jerry Leichter

On Dec 28, 2008, at 8:12 PM, Perry E. Metzger wrote:

> Semiconductor laser based RNG with rates in the gigabits per second.
>
> http://www.physorg.com/news148660964.html
>
> My take: neat, but not as important as simply including a decent
> hardware RNG (even a slow one) in all PC chipsets would be.

True.

The thing that bothers me about this description is the too-easy jump
between "chaotic" and "random".  They're different concepts, and chaotic
doesn't imply random in a cryptographic sense: it may be possible to
induce bias or even some degree of predictability in a chaotic system
by manipulating its environment.  I believe there are also chaotic
systems that are hard to predict in the forward direction, but easy to
run backwards, at least sometimes.

That's not to say this system isn't good - it probably is - but just
saying it's chaotic shouldn't be enough.
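The distinction is easy to demonstrate: the logistic map is a textbook
chaotic system, yet anyone who recovers its internal state reproduces its
output exactly, so chaos alone gives no cryptographic unpredictability. A
toy illustration (nothing to do with the laser device itself):

```python
def logistic_bits(x: float, n: int, r: float = 3.99) -> list:
    """Extract n 'random-looking' bits from the chaotic logistic map."""
    bits = []
    for _ in range(n):
        x = r * x * (1.0 - x)            # sensitive to initial conditions...
        bits.append(1 if x > 0.5 else 0)
    return bits

stream = logistic_bits(0.123456, 32)
# ...but fully deterministic: leak the state and the "randomness" is gone.
assert logistic_bits(0.123456, 32) == stream
print(stream)
```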


BTW, a link from this article - at least when I looked at it - went to
http://www.physorg.com/news147698804.html : "Quantum Computing:
Entanglement May Not Be Necessary".  There are still tons of surprises
in the quantum computing arena.

-- Jerry



