Ian Grigg wrote:
My key argument is a bit different: what happens if a user encounters a spoofed web site wanting him to install a new (rogue) CA root cert? If the user clicks yes, the rogue CA has won (and isn't likely to be de-trusted by this user again).
Oh, I see, yes I missed that. If that happens, it is game over.
Therefore it would be nice to give the user some information about the CA that asks to be trusted, as this is the moment at which this kind of attack can be prevented.
OK.
That's why I propose a trusted third-party web site containing just the information about the different CAs, also listing the known rogue ones and the ones that are newer than the user's browser version.
But, unless it is automated, it is hard for the user to do much about it. The user is likely to just click through.
Presumably, one wants a meta-CA, but that just creates yet another point of trauma. It's already a bit of a challenge to get the user to keep some sort of eye on the false cert attack. How do you propose getting this working for the meta-CA?
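To make the TTP proposal above concrete, here is a minimal sketch of what the automated check could look like on the browser side. All names, list contents, and the fingerprint scheme are assumptions for illustration, not part of any existing TTP service:

```python
import hashlib

def fp(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

# Lists the TTP site might publish (the entries here are made-up
# placeholders, not real certificates).
KNOWN_ROGUE = {fp(b"rogue-root-cert")}
KNOWN_GOOD = {fp(b"well-known-root-cert")}

def advise(cert_der: bytes) -> str:
    """Advice to show the user before installing a new root cert."""
    f = fp(cert_der)
    if f in KNOWN_ROGUE:
        return "REJECT: listed as a known rogue CA"
    if f in KNOWN_GOOD:
        return "OK: listed by the TTP as a known CA"
    return "WARN: unknown CA - do not install without further checks"

print(advise(b"rogue-root-cert"))        # REJECT: listed as a known rogue CA
print(advise(b"unheard-of-root-cert"))   # WARN: unknown CA - do not install ...
```

Note that this only moves the trust question: the browser now has to fetch and authenticate the TTP's lists, which is exactly the meta-CA problem raised above.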
Right. One of the "signs" that a web site is spoofed _could be_ its SSL certificate, although today almost no users look at the SSL icon.
But if a rogue CA manages to get its root cert into the trusted cert store, this SSL lock wouldn't be trustworthy any more.
Yup. Although the other defences that have been proposed (branding, etc) will still work. What they effectively do is take us back to when info on the SSL connection was important, and drag the user (perhaps kicking and screaming) back into the security protocol. In this case, the browser can give the user some sense of how well the browser knows the cert - never before? many times?
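The "how well does the browser know this cert" idea could be sketched as a simple trust-on-first-use counter. This is a hypothetical illustration (the class and its storage are assumptions, not any browser's actual implementation):

```python
import hashlib

class CertFamiliarity:
    """Tracks how often a (host, certificate) pair has been seen."""

    def __init__(self):
        self._seen = {}  # (host, fingerprint) -> visit count

    @staticmethod
    def _fp(cert_der: bytes) -> str:
        return hashlib.sha256(cert_der).hexdigest()

    def record_visit(self, host: str, cert_der: bytes) -> None:
        key = (host, self._fp(cert_der))
        self._seen[key] = self._seen.get(key, 0) + 1

    def describe(self, host: str, cert_der: bytes) -> str:
        count = self._seen.get((host, self._fp(cert_der)), 0)
        if count == 0:
            return "never seen before"
        return f"seen {count} time(s) before"

store = CertFamiliarity()
store.record_visit("bank.example", b"cert-A")
store.record_visit("bank.example", b"cert-A")
print(store.describe("bank.example", b"cert-A"))  # seen 2 time(s) before
print(store.describe("bank.example", b"cert-B"))  # never seen before
```

A familiar host suddenly presenting a never-before-seen cert is exactly the signal that would survive even a rogue root in the trusted store.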
See above, CA1 != CA2 isn't part of the model.
Right, as long as CA1 and CA2 aren't rogue (= not trustworthy at all) :)
Yes. That's outside the model. The model isn't realistic. It's broken. Luckily, nobody's bothered to attack it (cf. that Microsoft browser bug).
Now, we could quite happily go on ignoring breaks in the security model and hope the attacker never comes along and bothers to attack it. But now there is a serious phishing attack (one estimate I saw said 5% of users could be fooled, although I intuitively think that's too high), so in the process of addressing that, the whole browser security model should get a rethink.
(The literature I found doesn't really say much about this; mostly it's said "do not trust any CA unless you're really sure...";
What literature have you found that seriously addresses the security model? I'm surprised you've found anything addressing the CA, as this is something that is out of the hands of the user.
I wasn't able to find anything serious, and relied on Eric Rescorla's SSL book and a bunch of papers. For the most part, I've collected my information at http://iang.org/ssl/ and http://iang.org/ssl/pki_considered_harmful.html#links
that's what I'm trying to enhance with the TTP web site; and that's what Microsoft wants to solve with their "root cert update").
CA-signed certs aren't under attack; secure browsing is. So it seems likely that a root cert attack is more of a theoretical issue or a design issue - which might suit your purposes admirably.
Regardless of that, I'd suggest that you would have to address how the CA-signed cert attacks work alongside your investigation into how the root cert gets attacked. As in, the two should be treated hand in hand, as addressing one risk without covering the other will leave one exposed, academically.
iang
_______________________________________________
mozilla-crypto mailing list
[EMAIL PROTECTED]
http://mail.mozilla.org/listinfo/mozilla-crypto
