Re: For discussion: MECAI: Mutually Endorsing CA Infrastructure

2012-02-08 Thread Ondrej Mikle
On 02/07/2012 09:58 PM, Kai Engert wrote:
 On 07.02.2012 17:54, Ondrej Mikle wrote:
 The phone calls would ensure that each registered person will be aware
 of the certificate issuance.

 This is getting very close to EV validation (Sovereign Keys have the
 same issue).
 
 I'd say making phone calls is less effort than checking business
 documents, so I'm not convinced it's close to EV - because EV is out of
 reach for anyone running a private server.

Sure, provided that you call the right owner.

 How do you plan on handling CDN services (ones with many certs)?
 
 That's a reason why I propose vouchers to be IP specific.
 
 In my understanding, each IP will have only a single certificate,
 regardless from where in the world you connect to it.

It's not true in general. There are services hidden behind a load balancer
on a single IP. An example - 3m.com:

% dig +short 3m.com
192.28.34.26  #note: it's not fast-flux, TTL is 86400

% openssl s_client -tls1 -connect 192.28.34.26:443 -servername 3m.com \
  </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha256 \
  -issuer -subject
SHA256 Fingerprint=0A:F6:9B:2A:7C:C5:7C:7E:36:1F:49:02:EF:A4:8B:1E:4D:F6:44:43:CF:AF:8F:22:75:E8:BA:B8:61:49:A0:65
issuer= /C=US/O=Thawte, Inc./CN=Thawte SSL CA
subject= /C=US/ST=Minnesota/L=Saint Paul/O=3M Company/OU=3M Company -
P9/CN=*.3m.com

% openssl s_client -tls1 -connect 192.28.34.26:443 -servername 3m.com \
  </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha256 \
  -issuer -subject
SHA256 Fingerprint=40:21:0B:40:1F:1E:7D:61:D5:8B:C9:60:6C:07:1D:F0:1B:85:55:4D:5A:95:14:16:84:45:42:AD:82:44:97:CE
issuer= /C=US/O=Thawte, Inc./CN=Thawte SSL CA
subject= /C=US/ST=Minnesota/L=Saint Paul/O=3M Company/OU=3M Company -
P11/CN=*.3m.com

% openssl s_client -tls1 -connect 192.28.34.26:443 -servername 3m.com \
  </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha256 \
  -issuer -subject
SHA256 Fingerprint=7A:0F:F7:50:9E:8A:67:57:5A:6E:08:16:0C:A4:C2:11:D6:DD:A0:79:78:FC:49:23:8F:9A:30:B6:F6:0E:05:98
issuer= /C=US/O=Thawte, Inc./CN=Thawte SSL CA
subject= /C=US/ST=Minnesota/L=Saint Paul/O=3M Company/OU=3M Company -
P10/CN=*.3m.com
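The same probe can be scripted; this is an illustrative Python sketch (my own, not from the original mail) that repeats the connection and collects the distinct leaf-certificate fingerprints, so more than one entry in the result suggests a load balancer rotating certificates:

```python
import hashlib
import socket
import ssl

def fingerprint(der_bytes):
    """SHA-256 fingerprint in the colon-separated form openssl prints."""
    return ":".join(f"{b:02X}" for b in hashlib.sha256(der_bytes).digest())

def probe_fingerprints(host, ip=None, port=443, tries=5):
    """Connect `tries` times and return the set of leaf-cert fingerprints seen."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we only want the cert, not validation
    seen = set()
    for _ in range(tries):
        with socket.create_connection((ip or host, port), timeout=10) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                seen.add(fingerprint(tls.getpeercert(binary_form=True)))
    return seen
```

E.g. `probe_fingerprints("3m.com", ip="192.28.34.26")` should, if the rotation above still holds, return more than one fingerprint.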

Here's a survey on CDNs done around November 2011 that shows the CDN
services (IPs are not recorded, but a simple script could check which
ones are just behind a single IP):

http://constructibleuniverse.net/CDN/CDN_hosts.csv

It's a CSV with '|' as delimiter; fields: domain, DB id, issuer Org,
issuer CN, first seen, last seen.
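The "simple script" mentioned above might look roughly like this (a Python sketch of my own; `single_ip_hosts` would take the domain column from the CSV, and the `resolve` parameter exists so the lookup can be stubbed out):

```python
import socket

def resolve_ips(host, port=443):
    """Resolve a hostname to the set of distinct addresses it currently maps to."""
    try:
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return set()
    return {info[4][0] for info in infos}

def single_ip_hosts(hosts, resolve=resolve_ips):
    """Return the hosts that resolve to exactly one address right now."""
    return [h for h in hosts if len(resolve(h)) == 1]
```

Note this only captures the resolver's current view; a CDN host may still map to different IPs from other vantage points.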

 EV doesn't help if the attacker simply decides to get a plain DV cert
 and hopes that a sufficient amount of users won't notice the missing green.

Maybe I wasn't clear before: I'm not saying EV certs would help, just
that the administrative demands/costs of the verification process in
MECAI/SK are getting close to EV validation. (Technically, we could talk
about OV/IV validation instead of EV.)

 But if the domain is supposed to watch something anyway (e.g. cert
 transparency log), then the domain owner could simply watch public DNS
 for changes. They could ask a monitoring company to watch DNS and notify
 them by phone if it changes. Or they could setup a watchdog on their own
 on some hosted VPS elsewhere on the web. They could quickly detect the
 DNS manipulation, and maybe even prevent the cert from being issued.

Yes. The difference between an attacker close to the webserver and a
compromised DNS lies just in the available attack vectors:

- with DNS, the attack is easier if the MX points to a machine other
than the webserver, and attacking the registrar becomes an option,
whereas
- an attacker close to the webserver requires that the MX points to the
webserver and that an active attacker can mount a downgrade attack on
SMTPS (if SMTPS is used at all; or that the webserver itself is
compromised).

 Maybe we should require that CAs must always send out multiple emails
 whenever a certificate is being requested, even for EV. In addition to
 the usual approval message to host/web/sslmaster@domain, it could be
 mandatory that the CA sends notification emails to each of the email
 addresses found in the domain registry. If the domain owners are smart,
 they will use email addresses from secondary domains. Thus, even if the
 attacker can hijack the DNS of the domain in question, the attacker
 might be unable to do it for those secondary domains, too. If the domain
 owners receive notification about a fraudulent certificate request, they
 can quickly react. At least they will know which CA might soon issue a
 false certificate - and the domain owner can contact that CA and request
 cancellation or revocation.

I'd also add that the secondary domain should be registered with a
different registrar than the primary domain.

 Being able to modify WHOIS data and hijack a domain is a separate attack
 and might require solutions from a different angle.

It's a separate attack, but unfortunately very feasible (I am inclined
to say that it's generally easier than attacking a CA).

Ondrej
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org

Re: For discussion: MECAI: Mutually Endorsing CA Infrastructure

2012-02-08 Thread Rob Stradling

On 08/02/12 12:43, Ondrej Mikle wrote:

On 02/07/2012 09:58 PM, Kai Engert wrote:

snip

That's a reason why I propose vouchers to be IP specific.

In my understanding, each IP will have only a single certificate,
regardless from where in the world you connect to it.


It's not true in general. There are services hidden behind a load balancer
on a single IP. An example - 3m.com:


Also, a TLS Server might choose a different cert depending on the cipher 
suite list provided by the TLS client.


e.g.

$ openssl s_client -connect tls.secg.org:40023 -cipher RSA 2>/dev/null \
  | grep "Certificate chain" -A 3

Certificate chain
 0 s:/OU=SAMPLE ONLY/O=Certicom 
Corp./L=Toronto/SN=Ontario/CN=tls.secg.org RSA 1024 Server Certificate/C=CA
   i:/OU=SAMPLE ONLY/O=Certicom 
Corp./L=Toronto/SN=Ontario/CN=tls.secg.org RSA 1024 Certificate 
Authority/C=CA

---

vs

$ openssl s_client -connect tls.secg.org:40023 -cipher ECDSA 2>/dev/null \
  | grep "Certificate chain" -A 5

Certificate chain
 0 s:/OU=SAMPLE ONLY/O=Certicom 
Corp./L=Toronto/ST=Ontario/CN=tls.secg.org ECC secp256r1 Server 
Certificate/C=CA
   i:/OU=SAMPLE ONLY/O=Certicom 
Corp./L=Toronto/SN=Ontario/CN=tls.secg.org ECC secp256r1 Certificate 
Authority/C=CA
 1 s:/OU=SAMPLE ONLY/O=Certicom 
Corp./L=Toronto/SN=Ontario/CN=tls.secg.org ECC secp256r1 Certificate 
Authority/C=CA
   i:/OU=SAMPLE ONLY/O=Certicom 
Corp./L=Toronto/SN=Ontario/CN=tls.secg.org ECC secp256r1 Certificate 
Authority/C=CA

---

AFAIK, such configurations are not widespread today, but this would 
change if/when ECC certs start to be used more widely.


--
Rob Stradling
Senior Research  Development Scientist
COMODO - Creating Trust Online
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: For discussion: MECAI: Mutually Endorsing CA Infrastructure

2012-02-08 Thread Ondrej Mikle
On 02/07/2012 06:04 PM, Kai Engert wrote:
 The CA will remember the association {IP, certificate}. In future
 requests, as long as this requesting IP requests a voucher for the same
 certificate, the described bidirectional authentication and verification
 will be sufficient.

Just a technicality: {IP, list of certificates, FQDN} instead of {IP,
certificate} (see my previous post to dev.tech.crypto).

We should also investigate how various cloud services (Amazon, Akamai,
...) handle IP-to-server mapping. I don't know how often that changes.
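As a toy illustration (my own sketch, not part of any specification), the bookkeeping with the {IP, list of certificates, FQDN} tuple and the proposed 10-day expiry could be modeled as:

```python
import time

class VoucherBook:
    """Toy model of a vouching CA's bookkeeping: one record per IP,
    holding the FQDN and the set of certificate fingerprints seen for
    it, expiring after 10 days without a request using a valid cert."""

    TTL = 10 * 86400  # proposed expiry, in seconds

    def __init__(self):
        self._records = {}  # ip -> [fqdn, set_of_fingerprints, last_seen]

    def record_request(self, ip, fqdn, cert_fingerprint, now=None):
        now = time.time() if now is None else now
        rec = self._records.setdefault(ip, [fqdn, set(), now])
        rec[0] = fqdn
        rec[1].add(cert_fingerprint)
        rec[2] = now

    def lookup(self, ip, now=None):
        """Return (fqdn, fingerprints), or None once the record expired."""
        now = time.time() if now is None else now
        rec = self._records.get(ip)
        if rec is None or now - rec[2] > self.TTL:
            return None  # expired: revert to the empty bookkeeping state
        return rec[0], set(rec[1])
```

The expiry-as-deletion behavior is what makes the "revert to the empty bookkeeping state" possible after a false certificate is revoked.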

 What happens if an attacker uses a false certificate for a domain that
 is not yet using SSL? The worst that can happen is a temporary denial of
 service attack to start using SSL, because it makes it harder for the
 real domain owner to switch away from the false bookkeeping, which is
 being kept by the vouching authorities. However, as soon as the false
 certificate used by the attacker has been revoked, the vouching
 authorities will revert to the empty bookkeeping state. Also, because
 vouchers are per IP, the real domain owner can simply ensure that a
 different IP address will be used for the real server. Also, it should
 probably be defined that the bookkeeping done by vouching authorities
 (the pairs of {IP, certificate}) will expire 10 days after no more
 requests using a valid certificate were made for the given IP.

Sovereign Keys need to solve the same issue:
https://git.eff.org/?p=sovereign-keys.git;a=blob;f=issues/transitional-considerations.txt;h=fa3b1591820baf1f2f62740f1f0e8b7998c29174;hb=HEAD

What happens if a domain is sold and the former owner won't cooperate in
issuing a new voucher or won't hand over the private key for the former
server cert? How can a vouching CA know that the new owner is not an
attacker?

I'll post a note about this discussion to the sovereign-keys list; I
think they'd be interested as well.

Ondrej
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Google about to fix the CRL download mechanism in Chrome

2012-02-08 Thread Jean-Marc Desperrier

Hi,

Google just published the changes they are about to make to revocation
checking in Chrome:

http://www.imperialviolet.org/2012/02/05/crlsets.html

In my opinion, maybe somewhat opposite to the way they describe it,
fundamentally they are not *at* *all* changing the standard PKI method
of revocation checking.


They are instead just solving a number of flaws in the way CRL
revocation information is fetched by the browser, thereby implementing a
new CRL fetching method that *works*, replacing the current *broken* one.


To work properly, CRL fetching must be done in advance of accessing the
site. This never worked properly when each client had to individually,
locally determine the list of CRLs to download. Therefore, centrally
establishing the list of public CRLs to download and pushing the result
to browsers *is* the proper solution.


The other trouble with CRLs is that in practice the only available
solution is to download complete CRLs, which include all revocation
reasons, resulting in awful bandwidth requirements.


Whereas the optimal solution would be to download a delta CRL each day,
containing only the difference from the previous day and only the
revocation reasons you *really* care about (key compromise).
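The filtering step described above can be sketched in a few lines (toy data structures of my own, not real CRL parsing; reason codes per RFC 5280's CRLReason):

```python
# CRLReason codes from RFC 5280 that a client plausibly cares about
KEY_COMPROMISE = 1
CA_COMPROMISE = 2

def filtered_delta(yesterday, today,
                   reasons=frozenset({KEY_COMPROMISE, CA_COMPROMISE})):
    """yesterday/today map certificate serial -> revocation reason code.
    Return the serials newly revoked since yesterday whose reason is in
    `reasons` - i.e. the only entries a daily delta would need to ship."""
    return {serial for serial, reason in today.items()
            if serial not in yesterday and reason in reasons}
```

A real implementation would also have to handle entries *removed* from a CRL (certificates expiring out of it), which is one of the subtleties that makes standard delta CRLs hard to get right.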


By centrally converting the CRLs to a proprietary optimized format that
contains only that, they can achieve it without implementing in the
browser the complex delta, by-reason CRL splitting mechanism that
theoretically exists but that nobody ever got right (and nobody will,
since getting it right also depends on every CA getting it right, which
in practice they just *don't*).


The cross-signing (replacing the original signature on the CRL with a
new signature/integrity layer) this solution requires is certainly not a
problem; it just has to be done right, which is not difficult when you
already have a secure software update distribution channel.


In conclusion, I'm 100% in favor of Mozilla adopting this solution
instead of trying to invent new schemes, which are very hard to get
right: most people spend a lot of time on them only to realize at the
end that doing things differently usually just means making a slightly
differently weighted choice between all the possible parameters of a
security solution, one that ends up not really much better than the
original, even though you were initially convinced the original was
very broken.


I hope I have convinced you that Google's solution is not new at all,
which is great. If it's not actually new, it's much easier to be
convinced that it's a pure *enhancement* of, rather than a change to,
the current solution, so there's no significant drawback and no
initially non-obvious potential danger in adopting it.


PS: I probably won't be online much in the next week and a half; I just
had to post this before then :-)

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Google about to fix the CRL download mechanism in Chrome

2012-02-08 Thread Kai Engert

My criticism:

(a)
I don't like that the included CRLs will be only a subset of all CRLs.
What about all the revoked certificates that aren't included in the list?


With a dynamic mechanism like OCSP (and, in the future, OCSP stapling)
you don't have to make a selection.


(b)
I don't like that the CRLs are supposed to be filtered. How can you
ensure that no important revocation will be missed?


(c)
What about mobile browsers, what about people with expensive mobile data 
plans, or when roaming?


Won't the set of CRLs be too big for download?

Even if they use diffs, what about people who use their mobile browser
only once or twice a week and keep the data connection off during the
remainder of the time?


Will a full set of diffs for the past days still be available to
recreate the latest set of signed CRLs, or will browsers end up
re-downloading the full set of CRLs on each of those infrequent
occasions?


But we will have to see numbers in order to judge whether this point is 
valid criticism.



In my opinion we should rather get our homework done: finally get
on-demand downloading of CRLs working (which depends on more helping
hands to fix the bugs in libPKIX), get OCSP stapling deployed, and find
a way to require strict revocation checking. The latter will involve
creating a user interface that allows users to override a temporary OCSP
outage, e.g. when using a captive portal in order to get the payment done.


Kai

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Google about to fix the CRL download mechanism in Chrome

2012-02-08 Thread Eddy Nigg

On 02/08/2012 09:58 PM, From Jean-Marc Desperrier:
Whereas the optimal solution would be to download each day a delta 
CRL, with only the difference with the previous day, and containing 
only the revocation reasons you *really* care about (key compromise).




A certificate can be either valid, expired, or revoked. A revoked
certificate is not valid, no matter the reason (which does not even have
to be present in the CRL).


--
Regards

Signer:  Eddy Nigg, StartCom Ltd.
XMPP:start...@startcom.org
Blog:http://blog.startcom.org/
Twitter: http://twitter.com/eddy_nigg

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Google about to fix the CRL download mechanism in Chrome

2012-02-08 Thread Eddy Nigg

On 02/09/2012 12:18 AM, From Nelson B Bolyard:
Will they really include the CRLs from all of mozilla's trusted CAs? 
Won't the union of all those CRLs be huge, even if they strip off 
certain reason codes? 


BTW, this proposal wouldn't be a problem if it covered, let's say, the
top 500 sites and left the rest to the CAs. That is probably also where
the highest gains would be.


--
Regards

Signer:  Eddy Nigg, StartCom Ltd.
XMPP:start...@startcom.org
Blog:http://blog.startcom.org/
Twitter: http://twitter.com/eddy_nigg

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Google about to fix the CRL download mechanism in Chrome

2012-02-08 Thread Brian Smith
Eddy Nigg wrote:
 On 02/09/2012 12:18 AM, From Nelson B Bolyard:
 BTW, this proposal wouldn't be a problem if it covered, let's say,
 the top 500 sites and left the rest to the CAs. That is probably also
 where the highest gains would be.

Effectively, we would be making the most popular servers on the internet 
faster, and giving them a significant competitive advantage over less popular 
servers. I am not sure this is compatible with Mozilla's positions on net 
neutrality and related issues.

AFAICT, improving the situation for the top 500 sites (only) would be the 
argument for *mandatory* OCSP stapling and against implementing Google's 
mechanism. The 500 biggest sites on the internet all have plenty of resources 
to figure out how to deploy OCSP stapling. The issue with OCSP stapling is the 
long tail of websites that don't have dedicated teams of sysadmins to very 
carefully change the firewall rules to allow outbound connections from some 
servers (where previously they did not need to), and/or deploy DNS resolvers 
on their servers (where, previously, they might not have needed any), and/or 
upgrade and configure their web server to support OCSP stapling (which is a 
bleeding-edge feature and/or not available, depending on the server product).

A better solution (than favoring the Alexa 500) may be to auto-load CRLs for 
the sub-CAs that handle EV roots (assuming that CAs that do EV have, or could 
create, sub-CAs for EV roots for which there would be very few revocations, 
which may require standardizing some of the business-level decision making 
regarding when/why certificates can be revoked), or similar things. This would 
at least reduce the cost for the long tail of websites to a low* fixed yearly 
fee. I am not sure this would be completely realistic or sufficient, though.

I am also concerned about the filtering based on reason codes. Is it 
realistic to expect every site that has a key compromise to publicly state 
that fact? Isn't it pretty likely that, after a server's EE certificate has 
been revoked, people will tend to be less diligent about protecting the 
private key and/or asking for the cert to be revoked with a new reason code?

However, I don't think we should reject Google's improvement here because it 
isn't perfect. OCSP fetching is frankly a stupid idea, and AFAICT, we're all 
doing it mostly because everybody else is doing it and we don't want to look 
less secure. In the end, for anything serious, we have been relying (and 
continue to rely) on browser updates to *really* protect users from 
certificate-related problems. And, often we're making almost arbitrary 
decisions as to which breaches of which websites are worth issuing a browser 
update for. Google is just improving on that. Props to Adam, Ben, Wan-Teh, 
Ryan, and other people involved.

Cheers,
Brian

* Yes, I consider the price of even EV certificates to be almost 
inconsequential, compared to the overall (opportunity) cost of a person needed 
to securely set up and maintain even the most basic HTTPS server.
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Google about to fix the CRL download mechanism in Chrome

2012-02-08 Thread ianG

On 9/02/12 06:58 AM, Jean-Marc Desperrier wrote:


In conclusion I'm 100% in favor of Mozilla adopting this solution,


+1

I haven't looked closely but I'm confident they will do the right thing 
in this area.


iang
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Google about to fix the CRL download mechanism in Chrome

2012-02-08 Thread Robert Relyea

On 02/08/2012 04:20 PM, Brian Smith wrote:

However, I don't think we should reject Google's improvement here because it 
isn't perfect. OCSP fetching is frankly a stupid idea, and AFAICT, we're all 
doing it mostly because everybody else is doing it and we don't want to look 
less secure. In the end, for anything serious, we have been relying (and 
continue to rely) on browser updates to *really* protect users from 
certificate-related problems. And, often we're making almost arbitrary 
decisions as to which breaches of which websites are worth issuing a browser 
update for. Google is just improving on that. Props to Adam, Ben, Wan-Teh, 
Ryan, and other people involved.
We do OCSP fetching because CRL fetching on the internet as a whole 
didn't scale when it was tried. OCSP may not be perfect, but we do it 
because it's the best thing we have today.


OCSP stapling would certainly improve things, which is why it was
created, what was it, oh, at least 5 years ago. Part of what we are
fighting is the general inertia of the web. It took close to 15 years to
get OCSP generally turned on!


bob
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto

Re: Google about to fix the CRL download mechanism in Chrome

2012-02-08 Thread ianG

On 9/02/12 09:18 AM, Nelson B Bolyard wrote:

On 2012/02/08 12:57 PDT, Kai Engert wrote:


My criticism:

[snip]

Won't the set of CRLs be too big for download?

[snip]

This is my question as well.
Will they really include the CRLs from all of mozilla's trusted CAs?
Won't the union of all those CRLs be huge, even if they strip off certain
reason codes?


The reason I have confidence in them to make this work, or back off in 
the event, is that for all Google's many flaws, they are rather good at 
Internet engineering.  This might be considered to be their core competence.


iang
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Google about to fix the CRL download mechanism in Chrome

2012-02-08 Thread Eddy Nigg

On 02/09/2012 02:20 AM, From Brian Smith:
Effectively, we would be making the most popular servers on the 
internet faster, and giving them a significant competitive advantage 
over less popular servers. I am not sure this is compatible with 
Mozilla's positions on net neutrality and related issues.


Yes, certainly it isn't - this is about Google, not Mozilla. And I don't
expect Mozilla to stop checking the status of a certificate either (or
at least attempting to check...).



AFAICT, improving the situation for the top 500 sites (only) would be the 
argument for *mandatory* OCSP stapling and against implementing Google's 
mechanism.


Agreed. (I would like to add that we should consider the top 500
*secured* sites when speaking about those, i.e. sites where the
essential traffic is generated over SSL.)



  The 500 biggest sites on the internet all have plenty of resources to figure 
out how to deploy OCSP stapling.


Absolutely.


 The issue with OCSP stapling is the long tail of websites that don't have 
dedicated teams of sysadmins to very carefully change the firewall rules to 
allow outbound connections from some servers.


I believe stapling will be successful when web servers do it by default.
This is entirely possible and wouldn't require lots of knowledge from
the admins. The majority will never turn it on if it's only optional.



A better (than favor the Alexa 500) solution may be to do auto-load CRLs for 
the sub-CA that handles EV roots


That's a very good idea (and for the reasons you stated).


However, I don't think we should reject Google's improvement here because it 
isn't perfect. OCSP fetching is frankly a stupid idea, and AFAICT, we're all 
doing it mostly because everybody else is doing it and we don't want to look 
less secure.


Well, in fact Mozilla-based browsers were among the first to
successfully support OCSP. Most, if not all, other browsers either
didn't even exist at that time or didn't support OCSP (and CRL checking
was not turned on by default either).


--
Regards

Signer:  Eddy Nigg, StartCom Ltd.
XMPP:start...@startcom.org
Blog:http://blog.startcom.org/
Twitter: http://twitter.com/eddy_nigg

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto