On 22/08/13 09:09 AM, Mikko Rantalainen wrote:
On Friday, 16 August 2013 12:01:51 UTC+3, Gervase Markham  wrote:
On 15/08/13 11:22, Mikko Rantalainen wrote:

No. The site's public key does not need to be changed to request a
new certificate.

Technically, no. But there are other occasions on which it does - key
length upgrade, cert private key loss or compromise (or possible
compromise). In the current system, none of these things have to bother
the user.

Perhaps I'm not an average user, but I would like to be informed about a 
changed key in all those cases.

Arguing for the other side: it is a good idea, and it has been shown to work. However, it is complicated. For one example, if you are browsing to some big site that fronts its traffic with an SSL farm, you might be facing hundreds of certificates, each on a different box.

If you understand what a certificate is (certifying a statement of identity to you) this makes sense, as the statement is always the same. But dealing at this level - understanding what a cert is, as opposed to software and math and X.509 and 'trust bits' and other technophilia - has so far not been easy for the software people.

(2-year certs, if a time limit increases security? Why not issue a new
signature every day and be done with broken revocation lists?)

1. That's what OCSP is. The equivalent of a new signature every few minutes.

Yeah, and the browser support for this is approximately zero. Certificates 
served by the server itself would work with all user agents today. Sure, some 
servers make updating the certificate a bit too hard to do automatically or 
every day, but fixing the servers is a lot easier than fixing all the user 
agents.

Right. Trapped. In this case, OCSP is a quadruple deadlock between servers, browsers, committees, and CAs. Getting one of these to move is beyond normal humans. Getting them to dance together is the work of many gods.
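As a back-of-the-envelope aid to the lifetime argument, here is a minimal Python sketch (the helper and the sample dates are invented for illustration) that measures a validity window given notBefore/notAfter strings in the textual form tools like `openssl x509 -noout -dates` print:

```python
from datetime import datetime

# Validity dates in the textual form OpenSSL prints,
# e.g. "Aug 16 12:01:51 2013 GMT".
ASN1_DATE_FMT = "%b %d %H:%M:%S %Y %Z"

def lifetime_days(not_before: str, not_after: str) -> int:
    """Return the certificate's validity period in whole days."""
    start = datetime.strptime(not_before, ASN1_DATE_FMT)
    end = datetime.strptime(not_after, ASN1_DATE_FMT)
    return (end - start).days

# A daily-reissued cert versus a 20-year root:
print(lifetime_days("Aug 16 00:00:00 2013 GMT", "Aug 17 00:00:00 2013 GMT"))  # 1
print(lifetime_days("Jan 1 00:00:00 2010 GMT", "Jan 1 00:00:00 2030 GMT"))    # 7305
```

The same one-day window a daily reissue gives you is, in effect, what OCSP tries to bolt on after the fact.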


2. Limited cert lifetimes mean that if an algorithm starts to look dodgy
(e.g. as MD5 did) we can move the industry to new algorithms without
having to worry about 20-year end-entity certs. This is why we have been
pushing in the CAB Forum for shorter max cert lifetimes. It's the CAs
who want longer lifetimes!

As long as the CA key X is signed with algorithm Y and its lifetime is N years, 
there's no additional security in signing chained keys for shorter lifetimes. 
For example, if a CA has a 2048-bit RSA key with a self-signature using SHA-1 
and a lifetime of 20 years, it really does not matter if chained server keys 
have better algorithms and longer key lengths. If we really believed that 
shorter lifetimes were required for the keys, we would be replacing those CA 
keys already.
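The weakest-link point above can be made mechanical: a chain's effective strength is bounded by its weakest signature algorithm, wherever in the chain it sits. A toy Python illustration (the chain data and the strength ranking are invented for the example; real comparisons also involve key sizes and algorithm status):

```python
# Hypothetical ordering of hash strength, for illustration only.
HASH_RANK = {"md5": 0, "sha1": 1, "sha256": 2}

def weakest_link(chain):
    """chain: list of (subject, signature_hash) tuples, leaf to root.
    Returns the entry whose signature hash ranks lowest."""
    return min(chain, key=lambda entry: HASH_RANK[entry[1]])

# A strong leaf does not help if the root is self-signed with SHA-1:
chain = [("server.example", "sha256"),
         ("Intermediate CA", "sha256"),
         ("Root CA", "sha1")]
print(weakest_link(chain))  # ('Root CA', 'sha1')
```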


Well, it's a little complicated. The root key gets its 'signature' not from RSA but from its position within the root list that the vendor supplies. (The SHA-1 self-signature is only there because the signed cert is the packet of choice; there isn't really a popular format for distributing an unsigned key in the X.509 world.)

Also, it's worth bearing in mind that the number of bits is a distractor. All the weakness comes from elsewhere, so fiddling around with the bits is just so much numerology that amuses NIST and numerate managers and others. It does little for overall security.


As far as I know, the problem with SSL/TLS certificates is inadequate 
verification done by the CAs or leaking the whole private key, not the key 
strength.

Nope. That is what people will tell you, but they are telling you that for other reasons (specifically, big CAs will say that of small CAs).

In security terms, verification is only useful if you've got something you can do with the identity so verified. In practice there is little or nothing you can do with an identity, and meanwhile the goods are gone. It's because of this underlying failure of the feedback loop that CAs are engaged in a race to the bottom.

Leaking the whole (CA's root) private key is an issue, but only because the browser revocation process is in its infancy. At some stage they'll be able to update the root list dynamically, in which case the damage will be firewalled.

If you want to see what matters, look at what the crooks do. Phishing has been an ever-present problem, with varying estimates of damages. Spear phishing has evolved into a credible attack on corporates for espionage (damages aren't typically as easy to calculate). For the most part, these attacks work because the browsers do not work effectively at the UI/security juncture. (And as a counter-example, S/MIME is a running joke.)




iang
_______________________________________________
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security
