> Hi Ian! (This was originally a private, but I'm copying the list cuz
> of your "anyone who's better with SSL..." comment.)
Sure!
>> There have been studies, and they've brought
>> out a lot of stuff, but that stuff is rarely
>> adopted. The browser world isn't set up to
>> adopt ideas from outside, for differing reasons
>> (for example, techies don't read academic papers
>> coz they're boring).
>
> I'd much rather read an academic paper about something that will help
> the users of my software get what they need out of it than program
> something that won't. I realize that it's impossible to get things
> right on the first try, but the security interface in
> Netscape/Mozilla/Firefox has been essentially unchanged since Netscape
> 0.94.
Indeed! The starting point that I would suggest is
the WebTrust paper (no URL this time as I'm on
webmail). Also there are some short papers on
UI security aspects posted by Ping a while back
which I'll have to dig for. There is also the
Smith & Ye paper which I haven't read for a while
but can be considered a forerunner into WebTrust
and was also coded up in Mozilla. Also Marc
Stiegler's intro to petnames and especially
Zooko's Triangle is key to understanding the
human - key interface. I lack the URLs right
now.
>> > It's inefficient partly due to the difficulty of getting a certificate
>> > to work in the first place. And for end-entities, it's inefficient
>> > partly due to the policies of (say) webmail providers who want to tack
>> > something onto the end of any message that the customer sends -- as an
>> > example, look at Hotmail and its breakage of digital signatures.
>>
>> Right. However, your causality is back to
>> front; all those say that it is an inefficient
>> market, the question I was answering was why
>> the market is inefficient. Nothing changes
>> until those core *business* factors are addressed,
>> which is why Frank's project is by definition the
>> most important project in the browser world today
>> (because it affects the business factors).
>
> Actually, I think it's nearing time that CAs were taken out of the
> hands of businesses entirely... at least until business can create a
> coherent idea of what they need to accomplish, and design a CA
> infrastructure that meets those needs. Commercial operations would
> run their own CAs, or outsource to others to run their CA... and each
> outsourcing would have a specific contractual obligation setting blame
> for fraud at the cost of the fraud. (Insurance companies would then
> put pressure on the outsourced CAs to reduce the risk of fraud.)
Which reminds me, another aspect that puts a
dampener on CAs is Sarbanes Oxley, which basically
says that the corporates cannot outsource the risk
to CAs. Now, in this particular case I suspect
that all it achieves is some mandated regs saying
what we already knew, so it makes it easier to
place the CAs in the process / outsourcing game
rather than the "trust us" game.
In terms of your comments - I have to say "so what?"
The goal of Mozilla should be security for users,
not industrial subsidies to CAs. Unfortunately,
the structure of the PKI was built to support the
CAs and it is somewhat heretical to say things like
"CAs need to survive on their merits." My own view
is that theirs is a relatively good future for them,
but only after they have gone through the tough
culling of the domain control shakeout (which is
in essence a move through a "user-demanded service"
phase).
> But there's a conceptual issue here: a Certifying Authority can only
> certify identities within its own domain, using information that it
> has from within its own domain. Verisign et al are issuing
> certificates based on 'legal identity' [what everyone else calls 'real
> identity', but I balk at that notion as well], but the problem is that
> they are /not/ the keepers of the domain of legal identities. Which
> means that they're taking on a pretention that they really shouldn't
> be allowed to have.
Well, they can call it what they like. If anyone
relies on it, I imagine Verisign will say that's
a problem for the relying party, and I doubt that
they will stand up in court and say it is a "legal
identity." (The same could be said for "real
identity" being an equally made up term of no
real foundation.)
> I hate to say it, but since government is what makes the legal
> identity concept flow, it needs to be the various governments that
> actually run the 'legal identity' CAs. (Or award contracts to run
> such.) And these governments would not be the US federal level...
> instead, the US state level. (I don't know how things are arranged in
> other countries, so I can't comment.)
It may surprise you but I personally don't think
there is a concept of legal identity that is strong
enough in the US Federal level to support what you
are saying. In Napoleonic countries ("civil law"
and the Napoleonic draft identity concept) there is
a strong concept of identity, which is one important
reason why some countries like Estonia have had
some success at PKIs. But in the anglo world, it
isn't that firm. In common law, it was always the
case that you could adopt any name you liked as long
as you weren't committing fraud. That might be a
fast fading anachronism, but it remains the basis in
law afaik, and is reflected in the basic concept of
a "legal person" a.k.a. a company costing $100 off
the shelf.
>> > Second: CAs need to no longer have such a high barrier to entry.
>> > (Even OpenCA is too difficult for most system administrators to set up
>> > -- and MS's CA is too difficult to administer.)
>>
>> Yes. Unwinding the "big expensive CA" model is
>> taking a long time. Unfortunately, what people
>> don't realise is that model is one of the very things
>> that killed the big CAs, they needed the little
>> guys to create the food chain, it wasn't possible
>> to construct a shark and ban all the other
>> competitors ... because the competitors *are* the
>> food chain. (That's what comes of letting sales
>> people design your PKI, they deliberately created
>> the big model thinking if they could get there
>> first it would all work out.)
>
> Sales people, versus... economists?
LOL... no, sales people versus business people
and especially marketing & strategy people (sales
is not marketing!) (MBAs for want of a better word).
As it happens, I once got a funny response when
I claimed that if domain control or similar certs
were used, the market would grow to 2-10 times its
size; the response was that, without any
econometrics behind it, that was an unfounded
statement! (Econometrics is what macro economists
do to measure support for monetary theories, which
suggests that certain companies *were* using
economists, of the wrong kind perhaps, to do
their marketing. Any marketing person will know
what is meant by the lack of a discriminated
product holding back the market, and why the
prediction of massive certs growth was obvious.
Actually, any economist would know about
discrimination too!)
>> Sure, all these things are nice. Actually, the
>> killer app is sort of sitting in there as we speak,
>> it is thunderbird. But, what has to happen is the
>> CA-first principle needs to be turned into CA-last,
>> and email accounts should create their self-signed
>> certs on the fly and distro them in some fashion,
>> again on the fly. But for that to happen, we have
>> to understand that people email other people without
>> really caring that they don't have some other party
>> telling them who it is.
>
> Indeed. (Email is based on reputation, more than anything -- either
> the person is known RL, in which case you're merely creating a
> transition of domains of trust... or the person is unknown, in which
> case you're creating a reputation based on what they have to say and
> what they present. For example, '[EMAIL PROTECTED]' has established
> a reputation with me as a critical thinker about cryptography and
> security systems. As has '[EMAIL PROTECTED]'. And various other
> identities, based on email address more than whatever 'name' the
> person has voluntarily associated with that address.)
Be careful ... that '[EMAIL PROTECTED]' got
wiped out by a failed machine last week; this
is another one :)
But yes. Email works that way. The only
exception of any note is the government/defence
category and the evolving Sarbanes Oxley "must
control everything" corporate world. But even
then, email doesn't work the PKI way enough to
warrant the current design of Thunderbird's
all-or-nothing cert use. And Mozilla's mission
is the average user, not the corporate user,
who essentially has a budget and therefore can
pay for any designs and tweaks and CAs and what
have you.
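The "certs on the fly" idea above can be sketched in a few lines. This is a minimal illustration only, assuming the third-party pyca/cryptography library; the account name is made up, and nothing here reflects what Thunderbird actually does:

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# CA-last: the mail client mints a key and a self-signed cert itself,
# with no CA in the loop. Issuer == subject is what "self-signed" means.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                     "alice@example.org")])  # made-up account
now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                     # self-signed
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)
```

Distributing it "on the fly" could then be as simple as attaching the cert to outgoing mail and letting the recipient cache it, SSH-style.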
>> > Once that occurs, websites will begin using cryptographic
>> > authentication instead of username and password pairs, because it will
>> > be very cheap for them to do so (meaning, the cost of setting up a
>> > certificate-based authentication mechanism will be offset by the fact
>> > that users will be able to log in automatically). Especially if they
>> > run their own low-barrier-to-entry CA, or outsource it to someone
>> > else.
>>
>> Right, in principle. Unfortunately, the way the
>> user certs are placed makes them only useful for
>> all-or-nothing scenarios, and that makes deployment
>> difficult. There would need to be changes to the
>> SSL user cert protocol to make that fly (only the
>> server can request a cert, in which case it is a
>> demand, what is needed is for the client to suggest
>> certs on the fly). (Those who know the SSL protocol
>> should jump in here and correct me if I'm wrong...)
>
> Correction: the way the user certs are /currently/ placed. But
> there's not precisely much choice about the concept -- else you have
> the SSH management problem.
Those of us who deal daily with SSH often wonder
what this problem is.... "Oh, yeah, that!" But
it works, doesn't it? Whereas the client-side
cert thing isn't useable outside corporate forced
environments.
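For those wondering, the "SSH management problem" is trust-on-first-use: accept the key on first contact, then scream if it ever changes. A toy sketch of that model (the names and in-memory storage are illustrative, not OpenSSH's actual known_hosts format):

```python
import base64
import hashlib

# host -> key fingerprint seen on first contact (toy known_hosts)
known_hosts = {}

def fingerprint(pubkey_bytes: bytes) -> str:
    # OpenSSH-style SHA256 fingerprint of the raw public key blob.
    digest = hashlib.sha256(pubkey_bytes).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

def check_host(host: str, pubkey_bytes: bytes) -> str:
    fp = fingerprint(pubkey_bytes)
    if host not in known_hosts:
        known_hosts[host] = fp          # first use: remember the key
        return "new"
    if known_hosts[host] == fp:
        return "ok"                     # same key as last time
    return "CHANGED"                    # possible man-in-the-middle

check_host("example.com", b"key-one")   # -> "new"
check_host("example.com", b"key-one")   # -> "ok"
check_host("example.com", b"key-two")   # -> "CHANGED"
```

No third party vouches for anything; continuity of the key is the whole security claim, and in practice it works.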
> As a side note: I find it absolutely deplorable that usernames and
> passwords are still with us. We've had cryptographic strong
> authentication for a long while -- why don't we use it more often?
> (aside from the aforementioned 'barriers to entry' to running a CA,
> that is.)
There are many complex reasons for that, and
I leave it aside as an interesting problem for
now :)
> (And I do know the SSL protocol. The server may or may not
> authenticate itself. If the server doesn't authenticate itself, the
> client may close the connection. Iff the server authenticates itself,
> it may ask for a certificate, and provide a list of CAs that it trusts
> in order to make it possible to figure out which end-entity
> certificate is needed. If it does not get a valid certificate, it may
> close the connection.)
Sounds about what I thought. As an application
programmer, I'd want the client to suggest the
cert, and the API to provide the cert details to
the server application at a later time. So it
might not get used, but it's there if needed.
That's one way to get over the barrier of
client-cert deployment; there are other ways.
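For concreteness, here is how the current server-driven arrangement looks with Python's ssl module; CERT_OPTIONAL is about as close as the deployed protocol gets to the "suggest, don't demand" behaviour wanted above (the commented-out file names are hypothetical):

```python
import ssl

# Server side: ssl.CERT_OPTIONAL makes the server send a CertificateRequest
# during the handshake yet still accept clients that present nothing;
# ssl.CERT_REQUIRED is the all-or-nothing demand discussed above.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_OPTIONAL
# The CA list sent inside the CertificateRequest comes from the trust
# store loaded here (file name is hypothetical):
#   server_ctx.load_verify_locations("trusted_cas.pem")

# Client side: under the deployed protocol the client can only answer a
# CertificateRequest; it has no way to volunteer a cert unasked.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
#   client_ctx.load_cert_chain("client.crt", "client.key")

# After the handshake, the server application can inspect whatever was
# presented, or wasn't:
#   conn.getpeercert()   # empty/None if the client offered no cert
```

The application-level "cert details available later, used only if needed" flow would sit on top of exactly that getpeercert() hook.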
>> > You would think that with the failure of X.500 (why they used it as
>> > the basis for LDAP I'll never know), they would recognize that BER and
>> > DER have serious flaws, that ASN.1 isn't all it's cracked up to be,
>> > and that X.509 is a horrible idea to base certificates on. (Flaws in
>> > MS's ASN.1 implementation were found, giving SYSTEM access on WinXP
>> > and below and Network Service access on WinServ2k3. I'd bet there are
>> > other ASN.1/BER/DER parsers that also have or had problems.)
>>
>> The core cert layout is well known to be dross. I'm
>> not entirely convinced that OpenPGP is any better, it
>> has for example 5 different ways of expressing an
>> integer, and you "just have to know" so many things.
>
> So, get a working group together and do the IETF geek semi-annual
> party thing... define a new format, maybe XML-based. (Though the way
> I see it, X.509v3 could easily be converted to XML and back again --
> only the DER form could have the signature verified, but XML would
> make it a lot easier to see what's actually /there/.)
LOL... You think x.509 is bad, wait until you
see what the XML people did! No, the only way
this is going to happen is the same old way as
always: a very small team comes up with a killer
app and happens to do it right. It spreads from
there.
Committees are for standardising already working
product; they are not for designing new systems.
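To put the ASN.1/BER/DER complaint above in concrete terms, here is a sketch (my own illustration, not drawn from any real implementation) of DER tag/length decoding; every X.509 parser has to get this right, and the long-form length branch is where the flaws mentioned earlier tend to live:

```python
# Minimal DER tag/length parser. DER forbids the indefinite form (0x80),
# and the long-form branch is exactly where several real ASN.1 parsers
# have had over-read and overflow bugs.
def read_tlv_header(buf: bytes, i: int = 0):
    """Return (tag, length, offset-of-value) for the TLV starting at buf[i]."""
    tag = buf[i]
    first = buf[i + 1]
    if first < 0x80:                  # short form: length fits in one byte
        return tag, first, i + 2
    n = first & 0x7F                  # long form: next n bytes hold the length
    if n == 0 or i + 2 + n > len(buf):
        raise ValueError("malformed length")
    length = int.from_bytes(buf[i + 2:i + 2 + n], "big")
    return tag, length, i + 2 + n

# 0x30 = SEQUENCE. Short form, length 3:
read_tlv_header(bytes([0x30, 0x03, 1, 2, 3]))      # -> (0x30, 3, 2)
# Long form: 0x82 says "the length is in the next 2 bytes":
read_tlv_header(bytes([0x30, 0x82, 0x01, 0x00]))   # -> (0x30, 256, 4)
```

Skip the bounds check, or add the length to an offset in a fixed-width integer, and you have the class of bug that hit the MS ASN.1 implementation.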
>> The thing is there is a wealth of knowledge out
>> and about on how to improve the security in the
>> browser. It is the browser world's challenge to
>> adopt that knowledge. (I say that explicitly
>> including others like Microsoft who have just
>> as much of a challenge coping with security as
>> does Mozilla.)
>
> There's also a wealth of knowledge on how to improve the security of
> the user's online experience, which is not necessarily the same
> thing...
All part and parcel - if the user rejects the
security model, then it won't be secure.
>> So the more forward looking thought we can get
>> together the better. At some point we might have
>> some programmers who are ready to start branching
>> off the main distro and start working on the
>> security.
>
> I am, unfortunately, not conversant with the layout of the Firefox
> source code -- and there's no "Firefox internals" book like there is
> with the Linux kernel. Is there a breakout of the API anywhere?
> (i.e., a breakout of the hooks that occur when an https connection is
> made, the certificate is validated, and how to change various aspects
> of the window etc?)
You might like to post that question separately to
the list.
iang
_______________________________________________
mozilla-crypto mailing list
[email protected]
http://mail.mozilla.org/listinfo/mozilla-crypto