Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-19 Thread Ryan Sleevi
On Fri, June 19, 2015 11:10 am, Brian Smith wrote:
  The current set of roots is already too big for small devices to
  reasonably
  manage, and that problem will get worse as more roots are added. Thus,
  small devices have to take a subset of Mozilla's/Microsoft's/Apple's
  roots.

Without wanting to fracture the discussion, I think it's reasonable to say
"citation needed."

It's also neither fair nor reasonable to ask Mozilla to design a policy
for situations that Mozilla has no activity in, has no visibility in, and
has no influence in.

I've admittedly presumed that when you say "smaller devices" you're talking
about IoT, since the claim is demonstrably false for the 'mobile' space and
demonstrably false for the 'small device' space (e.g. Chromecast,
Amazon FireTV, Roku), which are ostensibly small devices.

It also doesn't apply to the 'smaller still' line of devices - for
example, Google's Brillo framework has smaller overhead than that of
Chromecast/FireTV, and yet still manages to find a reasonable trust store
without compromising.

It's also worth noting that these devices (which Mozilla does not develop
code for, AFAIK) can further optimize their handling by treating roots as
trust anchors (Subject + public key), the same way that NSS trust records
are expressed, rather than storing the full certificate. NSS's libpkix was
certainly designed for this, although I believe mozilla::pkix requires a
full cert?

Still, it means the cost is not the 560K of the full root store, but far less.
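
For illustration, a minimal sketch of the (subject, SPKI) reduction being
described, using Python's 'cryptography' package purely as an example (a
constrained device would do the same in C against its own parser):

    # Reduce a full root certificate to a (subject, SubjectPublicKeyInfo)
    # trust anchor, the same shape as an NSS trust record. Illustrative only.
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    def to_trust_anchor(pem_bytes):
        cert = x509.load_pem_x509_certificate(pem_bytes)
        subject_der = cert.subject.public_bytes()
        spki_der = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        return subject_der, spki_der

    # Path building then matches a candidate chain's top issuer name against
    # subject_der and verifies its signature with spki_der, with no need to
    # store the full root certificate.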

Of course, it's completely unreasonable to talk about the constraints of
IoT security on the internet when many of the devices being produced lack
a basic capability for updates or security fixes. If you want to posit
that these devices 'divide the internet' (as you suggest), then first
and foremost you must acknowledge the potential harm and self-inflicted
wounds these devices are causing, before suggesting that it's
Mozilla's responsibility.

  In particular, Mozilla should put into place a mechanism to ensure that it
  doesn't end up with ~400 government roots, instead of just hoping that 200
  governments don't apply.

Apologies if you feel I'm mischaracterizing your response, but the only
(technical, policy) argument that you seem to be positing is that it's hard
for IoT devices to access the Internet if there are a lot of roots.

How is that the responsibility of Mozilla (which isn't in that space), and what
other reasons are there for prohibiting ~400 roots, beyond that? How or
why is this not a problem that the IoT device manufacturer can/should
solve?


Now, that said, I'm extremely sympathetic and deeply appreciative of the
fact that the Mozilla Root Inclusion Policy has historically been operated
in the community-at-large's interest, and not strictly Mozilla's interests.
This has allowed a variety of Linux distributions, for example, to reuse
the Mozilla store as a canonical root of trust for their users. So if
there is an argument that "Mozilla's policy X would make it hard for
downstream user Y to consume and reuse the trust store," then I think
that's entirely reasonable. But I don't think it's fair to suggest that
Mozilla solely bears responsibility for changing to accommodate downstream
user Y, nor is it clear that there even is a downstream user Y in this
hypothetical discussion of "smaller devices" against which to actually weigh
the technical or policy tradeoffs.

For example, your proposed "Don't allow non-global CAs into the root
program, because small devices" fails to account for why CAs enter the
non-global market to begin with. If you review their policies, you will see
it's because many of these CAs wish to target specific classes of users
for which they can employ a more rigorous form of request validation than
is required by the Baseline Requirements, often to enable new classes of
services that cannot be directly served by the commercial CA market
according to the security/threat model of their constituents.

Look at CNNIC, which primarily serves a domestic Chinese userbase, and
their policies and practices related to validating the identity of the
requester in a way that's specialized and tailored to the identity
structure employed in China. Or look at many of the CAs participating in
ETSI programs that allow PTC-BR and Qualified certificates (such as those
in the PKIoverheid program). These CAs employ verification and validation
techniques specific to their regional security needs.

Now, if the only option for recognizing these certificates was that these
CAs would have to be globally accessible / serve a global market (again, a
definitional issue that is surprisingly hard to pin down if you work
through it), then the natural outcome of this is policies that go from:
"We serve [population of users X] and employ [more rigorous method Y]"
to
"We serve all users globally. For [population of users X], we employ [more
rigorous method Y]. For [everyone else], we employ [as little as
possible]."

The "as little as possible" isn't because 

Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-19 Thread Gervase Markham
On 17/06/15 22:50, Brian Smith wrote:
 By small scope, I'm referring to CAs who limit their scope to a certain
 geographical region, language, or type of institution.

I'm not sure how that neuters my objection. CAs who do more than DV will
need to have local infrastructure in place for identity validation. Are
you saying that a CA who can't do that worldwide from the beginning is
unsuitable for inclusion?

 For example, thinking about it more, I think it is bad to include
 government-only CAs at all, because including government-only CAs means
 that there would eventually be 196 government-only CAs

Not necessarily at all; not all governments appear to be interested in
running CAs for public use. The slope is not that slippery.

Gerv



Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-18 Thread Dimitris Zacharopoulos

On 18/6/2015 12:50 πμ, Brian Smith wrote:

I did, in my original message. HARICA's constraint includes *.org, which is
much broader in scope than they intend to issue certificates for. dNSName
constraints can't describe HARICA's scope.

Cheers,
Brian


Hi Brian,

It is very common for projects, research institutions with an 
international scope, and other initiatives to request DNS names that 
contain .org or even .eu.



Best regards,
Dimitris Zacharopoulos.


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-17 Thread Brian Smith
Gervase Markham g...@mozilla.org wrote:

 On 06/06/15 02:12, Brian Smith wrote:
  Richard Barnes rbar...@mozilla.com wrote:
 
  Small CAs are a bad risk/reward trade-off.
 
  Why do CAs with small scope even get added to Mozilla's root program in
 the
  first place? Why not just say your scope is too limited to be worthwhile
  for us to include?

 There's the difficulty. All large CAs start off as (one or more :-)
 small CAs. If we admit no small CAs, we freeze the market with its
 current players.


By small scope, I'm referring to CAs who limit their scope to a certain
geographical region, language, or type of institution.

For example, thinking about it more, I think it is bad to include
government-only CAs at all, because including government-only CAs means
that there would eventually be 196 government-only CAs and if each one has
just 1 ECDSA root and 1 RSA root, then that's 392 roots to deal with. If we
assume that every government will eventually want as many roots as Symantec
has, then there will be thousands of government-only roots. It's not
reasonable.

StartCom and Let's Encrypt are converse examples, because even though they
had issued certificates to zero websites when they started, their intent is
to issue certificates to every website.

For example, if Amazon had applied with a CP/CPS that limited the scope to
only their customers, then I would consider them to have too small a scope
to be included.

 Mozilla already tried that with the HARICA CA. But, the result was
 somewhat
  nonsensical because there is no way to explain the intended scope of
 HARICA
  precisely enough in terms of name constraints.

 Can you expand on that a little?


I did, in my original message. HARICA's constraint includes *.org, which is
much broader in scope than they intend to issue certificates for. dNSName
constraints can't describe HARICA's scope.

Cheers,
Brian


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-11 Thread Gervase Markham
On 06/06/15 02:12, Brian Smith wrote:
 Richard Barnes rbar...@mozilla.com wrote:
 
 Small CAs are a bad risk/reward trade-off.
 
 Why do CAs with small scope even get added to Mozilla's root program in the
 first place? Why not just say your scope is too limited to be worthwhile
 for us to include?

There's the difficulty. All large CAs start off as (one or more :-)
small CAs. If we admit no small CAs, we freeze the market with its
current players.

A great case for this, of course, is Let's Encrypt, who are currently as
tiny as it's possible to be, and yet I don't think you'd say they are a
bad risk/reward trade-off. That leads me to think that whether a CA is a
bad trade-off depends on factors other than its size.

 Mozilla already tried that with the HARICA CA. But, the result was somewhat
 nonsensical because there is no way to explain the intended scope of HARICA
 precisely enough in terms of name constraints.

Can you expand on that a little?

Gerv



Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-10 Thread Hubert Kario
On Tuesday 09 June 2015 11:57:40 Rick Andrews wrote:
 On Tuesday, June 9, 2015 at 3:05:30 AM UTC-7, Hubert Kario wrote:
  True, OTOH, if a third party says that there was a misissuance, that means
  there was one.
 
 I disagree. Only the domain owner knows for sure what is a misissuance, and
 what isn't. It seems likely that I might turn over all known certs for my
 domain to the third party, but they might find another one, and I might say
 oh, yeah, I forgot about that one. So a third party can only report to
 the domain owner, but cannot know if the cert is legitimate.

the implied situation was that the tool is run by the domain owner/admin

-- 
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic



Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-10 Thread Matt Palmer
On Tue, Jun 09, 2015 at 12:00:23PM -0700, Rick Andrews wrote:
 On Tuesday, June 9, 2015 at 7:45:05 AM UTC-7, Kurt Roeckx wrote:
  On 2015-06-09 15:26, Peter Kurrasch wrote:
   3) How frequently might such tools run? Or to put it differently, how 
   much time do I probably have between when I issue a gmail cert and when 
   someone figures it out (and of course how much longer before my 
   illegitimate cert is no longer valid)? I need only 24 hours to do all the 
   damage I want, but in a pinch I'll make do with 8.
  
  CT allows to store precertificate.  That is, the CA says it intents to 
  issue a certificate.  Should we mandate the use of precertificates and a 
  minimum time between the precertificate and the real certificate?
 
 Absolutely not. If a CA is unable to get the required minimum number of
 SCTs, it will likely not issue the cert (sure, it may retry, but it's
 possible that retries fail too).  Logging must be seen as intent, but not
 a guarantee of issuance.

A minimum time doesn't imply a maximum (or, rather, a finite maximum).  From
your perspective, I'd object on another basis: any non-trivial delay in
issuance degrades the user experience of those acquiring certificates.  As a
consumer of CA services, I wouldn't want to have to wait (say) 3 days to get
my DV cert, just because of CT requirements.

- Matt

-- 
Advocating Object-Oriented Programming is like advocating Pants-Oriented
Clothing.
-- Jacob Gabrielson



Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-10 Thread Matt Palmer
On Tue, Jun 09, 2015 at 08:26:55AM -0500, Peter Kurrasch wrote:
 1) How to exclude domains from the search? For example I want to find
 gmail certs but exclude something like eggmail which could be a false
 positive.

Constrain your search to domains which have a name part which is exactly
'gmail'.
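
A minimal sketch of that kind of match (not any particular monitor's API;
the function below is hypothetical):

    # Flag a dNSName only when one of its labels is exactly "gmail", so
    # gmail.innocentdomain.com matches but eggmail.innocentdomain.com does not.
    def has_exact_label(dns_name, label="gmail"):
        return label in dns_name.lower().rstrip(".").split(".")

    assert has_exact_label("gmail.innocentdomain.com")
    assert not has_exact_label("eggmail.innocentdomain.com")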

 2) How to address wild cards? For example can I bypass these detection
 tools by issuing a cert for *[dot]innocentdomain[dot]com instead of
 gmail[dot]innocentdomain[dot]com?

Fun times.

 3) How frequently might such tools run? Or to put it differently, how much
 time do I probably have between when I issue a gmail cert and when someone
 figures it out (and of course how much longer before my illegitimate cert
 is no longer valid)?  I need only 24 hours to do all the damage I want,
 but in a pinch I'll make do with 8.

How frequently does the consumer of such services *want* the tool to run?

 4) What's the expected model for a third-party monitor? Who might the
 organizations be and how might the monitoring be funded?

People give the monitor money.  The monitor protects their interests.

 In a way the SSL Labs service is a perfect example of the limitations of a
 monitoring service.  Their SSL Pulse found an awful lot of servers with
 a failing grade.

Luckily, CT doesn't aim to fix every Goober-with-Apache; its main benefit
is in providing sunshine on CA operations, and making it easier to provide
complete evidence of malfeasance or incompetence.

- Matt

-- 
My favourite was some time ago, and involved a female customer thanking Mr.
Daemon for his effort trying to deliver her mail, and offering him a good
time if he ever visited Sydney.
-- Matt McLeod



Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-10 Thread Rob Stradling

On 10/06/15 01:54, Matt Palmer wrote:

On Tue, Jun 09, 2015 at 10:44:58AM +0100, Rob Stradling wrote:

On 09/06/15 04:05, Clint Wilson wrote:

To further support your claims here, Chris, there are already tools coming out 
which actively monitor domains in CT logs and can be set up with notifications 
of misissuance:
https://www.digicert.com/certificate-monitoring/
https://groups.google.com/forum/#!topic/mozilla.dev.security.policy/EPv_u9V06n0

This type of tool for CT is only going to improve with time.


So I'm wondering if the TRANS WG should think about standardizing a JSON API
for searching CT logs and for setting up notifications of (mis-)issuance.
The server side of this API could be implemented by services such as
https://crt.sh or even directly by the logs themselves.


For logs themselves, as a requirement for *being* a log?  No.


Fully agree.


A log has a single well-defined purpose, and I don't think that adding 
independent
functionality to the purpose of the log itself is a winning strategy.


Indeed.  A log and a monitor are separate software components.

Sorry I was unclear.  I was imagining that a provider might want to 
provide both a log and a monitor for that log, running from the same 
server, using a shared database.  (IMHO, we shouldn't require this, but 
we shouldn't prohibit it either).



An API for querying the CT-relevant data for a collection of certificates...
*that* would probably be quite useful.


:-)


BTW, you probably won't be surprised to hear that I've been trying to think
of reasons to create a shell script called crt.sh.  ;-)


Nope, not particularly surprised.


:-)

--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online



Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-10 Thread Rick Andrews
I don't understand. The domain owner/admin is not a third party. 

-Rick

 On Jun 10, 2015, at 4:01 AM, Hubert Kario hka...@redhat.com wrote:
 
 On Tuesday 09 June 2015 11:57:40 Rick Andrews wrote:
 On Tuesday, June 9, 2015 at 3:05:30 AM UTC-7, Hubert Kario wrote:
 True, OTOH, if a third party says that there was a misissuance, that means
 there was one.
 
 I disagree. Only the domain owner knows for sure what is a misissuance, and
 what isn't. It seems likely that I might turn over all known certs for my
 domain to the third party, but they might find another one, and I might say
 oh, yeah, I forgot about that one. So a third party can only report to
 the domain owner, but cannot know if the cert is legitimate.
 
 the implied situation was that the tool is run by the domain owner/admin
 
 -- 
 Regards,
 Hubert Kario
 Quality Engineer, QE BaseOS Security team
 Web: www.cz.redhat.com
 Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-10 Thread Hubert Kario
On Wednesday 10 June 2015 07:28:06 Rick Andrews wrote:
 I don't understand. The domain owner/admin is not a third party. 

the third party in question was an entity running the CT service

and since they can produce a certificate signed by a trusted CA as proof of 
misissuance, the data itself is also trusted, independent of your trust in the 
third party.

 
 -Rick
 
 
  On Jun 10, 2015, at 4:01 AM, Hubert Kario hka...@redhat.com wrote:
  
  
  On Tuesday 09 June 2015 11:57:40 Rick Andrews wrote:
  
  On Tuesday, June 9, 2015 at 3:05:30 AM UTC-7, Hubert Kario wrote:
  True, OTOH, if a third party says that there was a misissuance, that
  means
  there was one.
  
  
  I disagree. Only the domain owner knows for sure what is a misissuance,
  and what isn't. It seems likely that I might turn over all known certs
  for my domain to the third party, but they might find another one, and I
  might say oh, yeah, I forgot about that one. So a third party can only
  report to the domain owner, but cannot know if the cert is legitimate.
  
  
  the implied situation was that the tool is run by the domain owner/admin
  
  -- 
  Regards,
  Hubert Kario
  Quality Engineer, QE BaseOS Security team
  Web: www.cz.redhat.com
  Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic
 

-- 
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic



Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-09 Thread Rob Stradling

On 09/06/15 04:05, Clint Wilson wrote:

To further support your claims here, Chris, there are already tools coming out 
which actively monitor domains in CT logs and can be set up with notifications 
of misissuance:
https://www.digicert.com/certificate-monitoring/
https://groups.google.com/forum/#!topic/mozilla.dev.security.policy/EPv_u9V06n0

This type of tool for CT is only going to improve with time.


If you act as a CT monitor yourself, you can be sure that the logs 
aren't misbehaving.  But if you rely on a third party to monitor the 
logs for you, you have to trust that third party.


Therefore, ISTM that some domain owners might want to be able to use the 
services of multiple independent monitors simultaneously.


So I'm wondering if the TRANS WG should think about standardizing a JSON 
API for searching CT logs and for setting up notifications of 
(mis-)issuance.  The server side of this API could be implemented by 
services such as https://crt.sh or even directly by the logs themselves.
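
Purely as a strawman (the host, path, and field names below are invented 
for illustration; nothing like this has been specified by TRANS), a 
subscription request to such an API might look like:

    # Hypothetical monitor-subscription call; 'monitor.example' and the
    # /v1/subscriptions path are placeholders, not a real service.
    import requests

    resp = requests.post(
        "https://monitor.example/v1/subscriptions",
        json={
            "domain": "example.org",
            "include_subdomains": True,
            "notify": "mailto:security@example.org",
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())  # e.g. a subscription id that can later be cancelled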


Thoughts?


On Monday, June 8, 2015 at 5:23:14 PM UTC-6, Chris Palmer wrote:


For the sake of argument let's suppose I generate a cert for 
googlecares[dot]com and it shows up in the CT logs. What happens next?


A shell script notices this and pages the team responsible for
managing the company's online identities. Then, company
representatives have a phone call with the issuing CA. History shows
that various things may come from that, depending on the
circumstances.

snip

You seem to be assuming that web site operators can't write shell
scripts, and don't care about their public names and public keys, and

snip

BTW, you probably won't be surprised to hear that I've been trying to 
think of reasons to create a shell script called crt.sh.  ;-)


--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online



Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-09 Thread Kurt Roeckx

On 2015-06-09 15:26, Peter Kurrasch wrote:

3) How frequently might such tools run? Or to put it differently, how much time 
do I probably have between when I issue a gmail cert and when someone figures 
it out (and of course how much longer before my illegitimate cert is no longer 
valid)? I need only 24 hours to do all the damage I want, but in a pinch I'll 
make do with 8.


CT allows storing a precertificate.  That is, the CA says it intends to 
issue a certificate.  Should we mandate the use of precertificates and a 
minimum time between the precertificate and the real certificate?



Kurt



Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-09 Thread Rick Andrews
On Tuesday, June 9, 2015 at 3:05:30 AM UTC-7, Hubert Kario wrote:
 True, OTOH, if a third party says that there was a misissuance, that means 
 there was one.

I disagree. Only the domain owner knows for sure what is a misissuance, and 
what isn't. It seems likely that I might turn over all known certs for my 
domain to the third party, but they might find another one, and I might say 
"oh, yeah, I forgot about that one." So a third party can only report to the 
domain owner, but cannot know if the cert is legitimate.


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-09 Thread Rick Andrews
On Tuesday, June 9, 2015 at 7:45:05 AM UTC-7, Kurt Roeckx wrote:
 On 2015-06-09 15:26, Peter Kurrasch wrote:
  3) How frequently might such tools run? Or to put it differently, how much 
  time do I probably have between when I issue a gmail cert and when someone 
  figures it out (and of course how much longer before my illegitimate cert 
  is no longer valid)? I need only 24 hours to do all the damage I want, but 
  in a pinch I'll make do with 8.
 
 CT allows to store precertificate.  That is, the CA says it intents to 
 issue a certificate.  Should we mandate the use of precertificates and a 
 minimum time between the precertificate and the real certificate?
 
 
 Kurt

Absolutely not. If a CA is unable to get the required minimum number of SCTs, 
it will likely not issue the cert (sure, it may retry, but it's possible that 
retries fail too). Logging must be seen as intent, but not a guarantee of 
issuance.


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-09 Thread Kurt Roeckx
On Tue, Jun 09, 2015 at 12:00:23PM -0700, Rick Andrews wrote:
 On Tuesday, June 9, 2015 at 7:45:05 AM UTC-7, Kurt Roeckx wrote:
  On 2015-06-09 15:26, Peter Kurrasch wrote:
   3) How frequently might such tools run? Or to put it differently, how 
   much time do I probably have between when I issue a gmail cert and when 
   someone figures it out (and of course how much longer before my 
   illegitimate cert is no longer valid)? I need only 24 hours to do all the 
   damage I want, but in a pinch I'll make do with 8.
  
  CT allows to store precertificate.  That is, the CA says it intents to 
  issue a certificate.  Should we mandate the use of precertificates and a 
  minimum time between the precertificate and the real certificate?
  
  
  Kurt
 
 Absolutely not. If a CA is unable to get the required minimum number of SCTs, 
 it will likely not issue the cert (sure, it may retry, but it's possible that 
 retries fail too). Logging must be seen as intent, but not a guarantee of 
 issuance.

I'm not sure I understand what you're saying.

First, I don't understand your point about a minimum number of
SCTs.  I wasn't talking about a number of SCTs at all.  The signed
certificate timestamp (SCT) has, as the name implies, a timestamp.
I'm saying that we could require a minimum time between the
timestamp of the precertificate and that of the certificate.
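
As a sketch of the check such a rule would imply (the 24-hour figure below
is only a placeholder, not a proposed value):

    # Require a minimum delay between the precertificate's SCT timestamp
    # and the final certificate's notBefore. Threshold is illustrative only.
    from datetime import timedelta

    MIN_DELAY = timedelta(hours=24)  # placeholder value

    def may_issue(precert_sct_time, cert_not_before):
        # both arguments are timezone-aware datetimes
        return cert_not_before - precert_sct_time >= MIN_DELAY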

I also didn't say anything about guaranteeing issuance.  The
whole point is that we could detect that a wrong certificate is
about to be issued and avoid the issuance.


Kurt



Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-09 Thread Rick Andrews
On Tuesday, June 9, 2015 at 12:23:57 PM UTC-7, Kurt Roeckx wrote:
 On Tue, Jun 09, 2015 at 12:00:23PM -0700, Rick Andrews wrote:
  On Tuesday, June 9, 2015 at 7:45:05 AM UTC-7, Kurt Roeckx wrote:
   On 2015-06-09 15:26, Peter Kurrasch wrote:
3) How frequently might such tools run? Or to put it differently, how 
much time do I probably have between when I issue a gmail cert and when 
someone figures it out (and of course how much longer before my 
illegitimate cert is no longer valid)? I need only 24 hours to do all 
the damage I want, but in a pinch I'll make do with 8.
   
   CT allows to store precertificate.  That is, the CA says it intents to 
   issue a certificate.  Should we mandate the use of precertificates and a 
   minimum time between the precertificate and the real certificate?
   
   
   Kurt
  
  Absolutely not. If a CA is unable to get the required minimum number of 
  SCTs, it will likely not issue the cert (sure, it may retry, but it's 
  possible that retries fail too). Logging must be seen as intent, but not a 
  guarantee of issuance.
 
 I'm not sure I understand what you're saying.
 
 First, I don't understand your thing about a minimum number of
 STCs.  I wasn't talking about a minimum amount of SCTs between
 them.  The signed certificate timestamp (SCT) has, as the name
 implies, a timestamp.  I'm saying that we could require a minimum
 time between the timestamp from the precertificate and the
 certificate.
 
 I also didn't say anything about guaranteeing issuance.   The
 whole point is that we could detect that a wrong one could be
 issued and that we can avoid the issuance.
 
 
 Kurt

I should have been clearer. I'm talking about Google's current requirement 
for CAs to adhere to RFC 6962 for all EV certs. Google's policy specifies 
minimum numbers of SCTs based on the validity period of the cert. So CAs 
currently have to gather 3 SCTs for a 27-month EV cert.

If CAs issue an EV cert with fewer than the minimum number of SCTs mandated by 
Google, Chrome will not display the EV treatment for that cert. So CAs are 
incentivized to add the required minimum number of SCTs to each EV cert, and 
not issue any EV cert with fewer than the minimum number of SCTs if the 
customer expects to see the EV treatment for their site.
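
For reference, a sketch of that tiering as commonly described at the time;
the exact thresholds are taken from Google's published CT plan for EV and
should be checked against the current policy text rather than relied on here:

    # Minimum embedded SCTs by certificate lifetime (approximate tiers from
    # Google's EV CT plan; verify against the current policy).
    def min_embedded_scts(lifetime_months):
        if lifetime_months < 15:
            return 2
        if lifetime_months <= 27:
            return 3
        if lifetime_months <= 39:
            return 4
        return 5

    assert min_embedded_scts(27) == 3  # matches the 27-month example above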


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-08 Thread Chris Palmer
On Fri, Jun 5, 2015 at 8:04 AM, Peter Kurrasch fhw...@gmail.com wrote:

 Certificate Transparency gets us what we want, I think. CT works
 globally, and is safer, and significantly changes the trust equation:
 ‎
 * Reduces to marginal/effectively destroys the attack value of mis-issuance

 Please clarify this statement because, as written, this is plainly not true. 
 The only way to reduce the value is if someone detects the mis-issuance and 
 then takes action to resolve it.

Yes, I am assuming that — it's the foundational and necessary
assumption of any audit system.

The Googles, Facebooks, PayPals, ... of the world care very much about
mis-issuance for their domains. Activists and security experts and
bloggers and reporters are always looking for fun stuff, and are
generally capable of writing shell scripts.
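
As a rough sketch of the kind of script being described (the search URL and
JSON fields are assumptions; crt.sh, mentioned elsewhere in this thread,
exposes something broadly similar):

    # Poll a CT search service for certificates naming watched domains and
    # alert on any serial number we didn't request ourselves.
    import requests

    WATCHED = {"example.com", "www.example.com"}  # hypothetical domains
    KNOWN_SERIALS = {"00:ab:cd"}                  # serials we actually requested

    def alert(entry):
        print("possible mis-issuance:", entry)    # page the identity team here

    def check(domain):
        url = "https://ct-search.example/?q=%%.%s&output=json" % domain
        for entry in requests.get(url, timeout=60).json():
            if entry.get("serial_number") not in KNOWN_SERIALS:
                alert(entry)

    for d in WATCHED:
        check(d)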

 From what I've seen so far, both are major gaps in CT as a security feature.

What have you seen so far that leads you to believe that? Are there
mis-issuances in the existing CT logs that nobody has called attention
to...?


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-08 Thread Peter Kurrasch
My point is that you cannot say CT "effectively destroys the attack value of 
mis-issuance" and then, as justification, say that you are assuming someone will 
notice. This is the gap I'm talking about: the space between when a 
mis-issuance takes place and when someone notices.

For the sake of argument let's suppose I generate a cert for 
googlecares[dot]com and it shows up in the CT logs. What happens next? What 
if I do googlecares[dot]org instead? Even if someone notices, what action 
will be taken? As a bad guy I'm going to do whatever I can get away with, so it 
seems I don't have to worry about CT because, as it turns out, I can get away 
with quite a lot.


  Original Message  
From: Chris Palmer
Sent: Monday, June 8, 2015 1:38 PM

On Fri, Jun 5, 2015 at 8:04 AM, Peter Kurrasch fhw...@gmail.com wrote:‎
 Certificate Transparency gets us what we want, I think. CT works
 globally, and is safer, and significantly changes the trust equation:
 ‎
 * Reduces to marginal/effectively destroys the attack value of mis-issuance

 Please clarify this statement because, as written, this is plainly not true. 
 The only way to reduce the value is if someone detects the mis-issuance and 
 then takes action to resolve it.

Yes, I am assuming that — it's the foundational and necessary
assumption of any audit system.

The Googles, Facebooks, PayPals, ... of the world care very much about
mis-issuance for their domains. Activists and security experts and
bloggers and reporters are always looking for fun stuff, and are
generally capable of writing shell scripts.

 From what I've seen so far, both are major gaps in CT as a security feature.

What have you seen so far that leads you to believe that? Are there
mis-issuances in the existing CT logs that nobody has called attention
to...?


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-08 Thread Clint Wilson
To further support your claims here, Chris, there are already tools coming out 
which actively monitor domains in CT logs and can be set up with notifications 
of misissuance: 
https://www.digicert.com/certificate-monitoring/
https://groups.google.com/forum/#!topic/mozilla.dev.security.policy/EPv_u9V06n0

This type of tool for CT is only going to improve with time.

On Monday, June 8, 2015 at 5:23:14 PM UTC-6, Chris Palmer wrote:

  For the sake of argument let's suppose I generate a cert for 
  googlecares[dot]com and it shows up in the CT logs. What happens next?
 
 A shell script notices this and pages the team responsible for
 managing the company's online identities. Then, company
 representatives have a phone call with the issuing CA. History shows
 that various things may come from that, depending on the
 circumstances.
 
 https://en.wikipedia.org/wiki/DigiNotar#Issuance_of_fraudulent_certificates
 
  What if I do googlecares[dot]org instead?
 
 Probably something similar, because the company probably bought that
 name and certs for it too.
 
  Even if someone notices, what action will be taken? As a bad guy I'm going 
  to do whatever I can get away with, so it seems I don't have to worry about 
  CT because, as it turns out, I can get away with quite a lot.
 
 You seem to be assuming that web site operators can't write shell
 scripts, and don't care about their public names and public keys, and
 that mis-issuance has been consequence-free when detected in the past.
 Those are strange assumptions, and history shows they don't always
 hold.
 
 I'm done arguing this point, no doubt to everyone's relief. :)


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-05 Thread Peter Kurrasch
You have a lot of ideas in here, Richard!

Asking the question "What is the increased risk we face by introducing new CAs 
and new roots into the trust store?" is a good idea. How we go about answering 
that gets tricky. It might be helpful to articulate some threat models facing 
CAs, both governmental and commercial, big and small.  A few that come to mind:

1)  Externally-driven attack on a CA's internal network and systems.

2)  Internal mis-management of the CA's network and systems (e.g. firewall 
settings, software configs, credentials, etc.).

3)  Physical breach of the CA's offices and/or data centers.

4)  Bad actors within the CA who have malicious intent (or maybe just don't 
care).

5)  Someone within the CA who makes a simple (or not-so-simple) mistake.

6)  Legal statutes or authoritarian dictates which obligate a CA to perform bad 
acts.

7)  As Moudrick mentioned, a CA might seek legislation that provides some sort 
of legal cover for bad acts they are committing.

8)  Coercion between CAs to issue certs that one might not be able to issue 
itself for some reason.

I imagine others exist so I hope people will feel free to add or otherwise 
comment on them.

But in terms of risk, it's a question of what tools and mechanisms are 
available to us that help prevent, detect, and mitigate these various situations. 
Some CAs will raise more concerns in one area than a different CA will, so it 
might be necessary to consider these on a per-organization basis instead of 
coming up with different categories. But maybe it's helpful to have the 
categories anyway, if for no other reason than to guide these discussions?

In terms of the tools available, the ones I can think of are: audits (of 
course), other public declarations/disclosures, ‎technical constraints in the 
certs, certificate transparency, key pinning (maybe?), domain validation 
(maybe)?

To your specific question of defining scope, I think it's a combination of the 
following:

- relative size of the CA (number of roots, number of active certs, expected 
size of the customer base, ???)
- affiliations with government bodies
- affiliations with domain name registrars 
- ‎affiliations with Internet infrastructure operators (which needs refinement 
in terms of what that means)

There are probably others so again I hope people will add to that list.

But I agree that our ability to identify or quantify or characterize the above 
will help us decide which threats are of greatest concern. From there we should 
be able to identify constraints we want to impose that provide the right 
balance.

I'll leave it here for now. I look forward to continuing this discussion. 


  Original Message  
From: Richard Barnes
Sent: Thursday, June 4, 2015 1:44 PM‎

I'd like to try to up-level some of the discussions we're having about name
constraints, to see if we can find some higher-level consensus.

The thing that was driving my earlier proposal with regard to name
constraints was a feeling of imbalance. With every CA we add to our
program we add risk for every site on the web. That cost is supposed to be
offset by the benefit that a CA provides in terms of adding security for
its subscriber sites. But when a CA exists to serve only a small slice of
the web, this equation seems unbalanced. Small CAs are a bad risk/reward
trade-off.

One way to balance this equation better is to scope the risk to the scope
of the CA. If a CA is only serving a small slice of the web, then they
should only be able to harm a small slice of the web. A CA should only be
able to harm the entire web if it's providing benefit to a significant part
of it.

I wonder if we can agree on this general point -- That it would be
beneficial to the PKI if we could create a mechanism by which CAs could
disclose the scope of their operations, so that relying parties could
recognize when the CA makes a mistake or a compromise that goes outside
that scope, and prevent harm being done.

I think of this as CA scope transparency. Not constraining what the CAs
do, but asking them to be transparent about what they do. That way if they
do something they said they don't do, we can recognize it and reject it
proactively.
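
One way to picture the mechanism (the record format and matching rule below
are invented purely for illustration, not a proposal):

    # A hypothetical disclosed-scope record, pushed to clients in the same
    # spirit as CRLset/OneCRL data, and the relying-party check against it.
    DISCLOSED_SCOPE = {
        "ca": "TinyCA Root",                            # made-up CA
        "permitted_dns_suffixes": [".tinyland.example"],
    }

    def within_disclosed_scope(leaf_dns_names, scope):
        def ok(name):
            return any(name == s.lstrip(".") or name.endswith(s)
                       for s in scope["permitted_dns_suffixes"])
        return all(ok(n) for n in leaf_dns_names)

    # A cert for shop.tinyland.example passes; one for an unrelated name
    # would be recognized as outside what the CA said it does.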

Obviously, this mechanism would have to be flexible. To the degree that
it's based on information being provided to running browser instances, that
information has to propagate quickly. Based on our experience with things
like CRLsets and OneCRL, I think that's possible. With our migration to
SalesForce for CA interactions, it's easy to imagine a mostly automated
path from CA-provided information to information pushed out to browsers.
Update your CA's SalesForce records with some new scope, and it gets picked
up.

The hard problem is how to define scope. We've taken a couple of stabs
at using name constraints as one definition of scope. I still have some
hope that if we can get the adaptability right, this might be viable, but
clearly it's not going to match well for all cases. I honestly don't have
much in the way of other ideas. Suggestions welcome.

Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-05 Thread Brian Smith
Richard Barnes rbar...@mozilla.com wrote:

 Small CAs are a bad risk/reward trade-off.


Why do CAs with small scope even get added to Mozilla's root program in the
first place? Why not just say your scope is too limited to be worthwhile
for us to include?


 One way to balance this equation better is to scope the risk to the scope
 of the CA.  If a CA is only serving a small slice of the web, then they
 should only be able to harm a small slice of the web.  A CA should only be
 able to harm the entire web if it's providing benefit to a significant part
 of it.

 I wonder if we can agree on this general point -- That it would be
 beneficial to the PKI if we could create a mechanism by which CAs could
 disclose the scope of their operations, so that relying parties could
 recognize when the CA makes a mistake or a compromise that goes outside
 that scope, and prevent harm being done.


Mozilla already tried that with the HARICA CA. But, the result was somewhat
nonsensical because there is no way to explain the intended scope of HARICA
precisely enough in terms of name constraints.


 I think of this as CA scope transparency.  Not constraining what the CAs
 do, but asking them to be transparent about what they do.  That way if they
 do something they said they don't do, we can recognize it and reject it
 proactively.


 In general, it sounds sensible. But, just like when we try to figure out
ways to restrict government CAs, it seems like when we look at the details, we
see that the value of the name constraints seems fairly limited. For
example, in the HARICA case, their name constraint still includes *.org
which means they can issue certificates for *.mozilla.org which means they
are a risk to the security of the Firefox browser (just like any other CA
that can issue for *.mozilla.org) except when the risk is limited by key
pinning.
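
To make the *.org point concrete, a sketch of RFC 5280-style dNSName
constraint matching (simplified; real matching also handles excluded
subtrees and other name types):

    # A name satisfies a permitted dNSName constraint if zero or more labels
    # can be added to its left to reach the constraint, so a permitted "org"
    # entry covers mozilla.org even though the CA only intends a narrow
    # slice of .org.
    def satisfies_dns_constraint(name, constraint):
        name = name.lower().rstrip(".")
        constraint = constraint.lower().lstrip(".").rstrip(".")
        return name == constraint or name.endswith("." + constraint)

    assert satisfies_dns_constraint("mozilla.org", "org")
    assert not satisfies_dns_constraint("example.com", "org")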

It would be illustrative to see the list of CAs that volunteer to be
constrained such that they cannot issue certificates for any domains in
*.com. It seems like there are not many such CAs. Without having some way
to protect *.com, what's the point?

Cheers,
Brian


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-05 Thread Eric Mill
On Thu, Jun 4, 2015 at 9:18 PM, Chris Palmer pal...@google.com wrote:

 Certificate Transparency gets us what we want, I think. CT works
 globally, and is safer, and significantly changes the trust equation:

 * Reduces to marginal/effectively destroys the attack value of mis-issuance
 * Makes it possible for issuers to say, if under pressure to
 mis-issue, that to do so would destroy the entire issuer (under some
 legal regimes, I'm told this makes a difference)
 * Allows us to quantify the actual extent of mis-issuance by gathering
 reports of mis-issuance over time, and to name names

 CT enables us to verify that TinyCA, who promised only to issue
 certificates for Tinyland, is adhering to their promise.


I'm a fan of CT, but this seems like an oversimplification. CT makes all of
the above _possible_, but it fundamentally depends on
meta-infrastructure that makes use of the information that CT provides.

I'd argue that the implied CT meta-infrastructure is primarily human, not
technical. Yeah, you can set up alerts for your domains, but this isn't
exactly contracts on the blockchain or whatever -- to actually do anything
about mis-issuance, humans will need to learn what happened, figure out
whether it was supposed to happen, decide how severe the situation
is, decide what to do about it, and then do it.

You'll also have to define what mis-issuance even means for a given CA,
and that doesn't even look possible right now, except for subject-area
experts like the kind that are on this list. If you wanted to set up any
robots-only rules for detecting mis-issuance, that implies the kind of
schema for a CA's intended surface area that Richard is talking about.

And CT is after all about enforcement after-the-fact, and *deterring*
attacks, not directly preventing attacks.

Name constraints are technically enforceable matters that can directly
prevent attacks ahead of time, without relying on the speculative value of
deterrence. I think CT will be a pretty effective deterrent in all kinds of
ways, but it's not quantifiable (at least not for some years yet) and it's
not going to deter every kind of attack/er.



 It's more
 effective than name constraints, and it's more effective than chasing
 the Golden Roots dragon.


Saying CT is more effective than name constraints is overbroad, and
demonstrably not true for some situations (like directly preventing attacks
from succeeding at a technical level).

I'm personally highly optimistic about CT and think it's going to make the
world a more secure place. However, even in its most successful state, CT
depends on expert people keeping an eye on a system that guards the general
public. That's a huge help, but it's not a substitute for direct technical
enforcement.

Ryan's arguments against geographically bound name constraints as a
contributor to balkanization are pretty compelling. And the underlying
subtext I hear from him and others here, about profligate name constraints
generally making the marketplace more brittle and more burdensome, seem
reasonable.

Dismissing the idea of there being any viable system of name constraints
seems less well-supported, and describing CT as a superior solution for the
goals of name constraints seems not correct.



  It may be that there's not a technical framework that can accurately and
  flexibly capture the type of risk reduction I'm talking about here.  But
 it
  seems worth trying to figure out what the requirements are here, and
 seeing
  if it's possible to meet them or not.

 It's pretty clear to me that there is simply not a technical framework
 to *directly* capture a CA's intended scope.


That's not clear to me at all.

I've personally only seen one attempt so far, which is Richard's survey
https://docs.google.com/document/d/1nHcqeuWlgM9a1jZ6MjoOyJX7OL2p3GzAR9AJeNaxTV4/edit,
using zmap and the Google CT log, and resulting spreadsheet
https://docs.google.com/spreadsheets/d/12qtjvzpCoVk7z77LQXmjvXF7qANGpVggdbSW7BC3WRg/edit#gid=848145789
of
current issuance for each root certificate. That analysis is blunt, and
doesn't really separate TLDs from ccTLDs from gTLDs.

For many CAs, it may be that their commercial interests are too broad, and
the gTLD space too active, for there to be many safe assumptions you could
bake into constraints that would not make the CA space more brittle or
burdensome.

It certainly seems very plausible, though, that there are some CAs in there
-- or some CAs yet to come -- who could quite safely be constrained, and
which would meaningfully reduce the attack surface. Even a few would make a
large difference, by removing some datacenters from the world stage whose
compromise would allow arbitrary issuance.


 Furthermore, even the
 not-very-effective means we have incur the (IMO severe) problems of
 Balkanization


A valid concern when the constraints are geographic in nature, which Ryan
has argued persuasively. Not always the case, and potentially an
outweigh-able concern even when it 

Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-04 Thread Matt Palmer
Hi Richard,

On Thu, Jun 04, 2015 at 02:44:00PM -0400, Richard Barnes wrote:
 The thing that was driving my earlier proposal with regard to name
 constraints was a feeling of imbalance.  With every CA we add to our
 program we add risk for every site on the web.  That cost is supposed to be
 offset by the benefit that a CA provides in terms of adding security for
 its subscriber sites.  But when a CA exists to serve only a small slice of
 the web, this equation seems unbalanced.  Small CAs are a bad risk/reward
 trade-off.
 
 One way to balance this equation better is to scope the risk to the scope
 of the CA.  If a CA is only serving a small slice of the web, then they
 should only be able to harm a small slice of the web.  A CA should only be
 able to harm the entire web if it's providing benefit to a significant part
 of it.

Thank you for expressing the issue, as you see it, so clearly.  It has
helped me to solidify something that I have been uneasy about in previous
discussions, but haven't been able to express clearly until now.

You say that some CAs only serv[e] a small slice of the web, and thus
they should only be able to harm a small slice of the web.  This assumes a
set of CA stakeholders that I believe is incorrect.

The set of people whom a CA serves is not the sites that they issue
certificates to.  The stakeholders of a CA are the people who may be called
upon to rely upon the assertions that the CA has made in issuing a
certificate.

Since there is no meaningful way, currently, in which a certificate can only
be presented to a prior-defined subset of Internet users[1], that means that
*every* CA in existence, regardless of its target market (for revenue) or
any constraints on the set of names for which it can issue certificates,
serves every Internet user.

By this line of thinking, there can be no CA which is only serving a small
slice of the web.  Any proposal which contains an axiom along those lines
cannot come to an optimal conclusion, except by accident.

 I wonder if we can agree on this general point -- That it would be
 beneficial to the PKI if we could create a mechanism by which CAs could
 disclose the scope of their operations, so that relying parties could
 recognize when the CA makes a mistake or a compromise that goes outside
 that scope, and prevent harm being done.

I would certainly agree to that general point.

 I think of this as CA scope transparency.  Not constraining what the CAs
 do, but asking them to be transparent about what they do.  That way if they
 do something they said they don't do, we can recognize it and reject it
 proactively.

The purpose of a CPS is to provide transparency around what a CA does (or at
least says they do...).  Being transparent about what a CA *actually* does,
on a fine-grained scale, is the purpose of Certificate Transparency.

 The hard problem is how to define scope.  We've taken a couple of stabs
 at using name constraints as one definition of scope.  I still have some
 hope that if we can get the adaptability right, this might be viable, but
 clearly it's not going to match well for all cases.  I honestly don't have
 much in the way of other ideas.  Suggestions welcome.

For the simple case of name prefixes, a mechanism which is in some ways the
reciprocal of CAA records could work -- not that we have a reciprocal for
DNS, unfortunately, but that mental model works.

 It may be that there's not a technical framework that can accurately and
 flexibly capture the type of risk reduction I'm talking about here.  But it
 seems worth trying to figure out what the requirements are here, and seeing
 if it's possible to meet them or not.

My immediate thoughts are that there is, certainly, a way in which a schema
for declared constraints could be developed.  Jumping straight over that,
though, there are a number of practical deployment issues that come to mind:

* I can imagine most CAs getting some significant pucker factor over the
  idea of providing any sort of fine-grained details on which subset(s) of
  the possible customer base they're serving.  This suggests that a
  more coarse-grained approach to declaring scope would be needed, however
  the coarser the grain, the more gaps there are for an attacker to slip
  into.

* Any automatic modification of scope will fall foul of the DNSSEC
  problem, which I define as the fact that what you're (mostly) protecting
  against is compromise of the DNS host, but if you're in there you can just
  change the DNSSEC records to match.  The same thing applies here -- if
  you manage to get the system to issue you a cert, that same system will
  probably automatically alter the scope to permit what you're trying to do.

* If scope modification is purely manual (and good luck making the Turing
  test for that stick, not to mention the idea that automated is usually
  less error-prone than manual anyway) then CAs will want to again declare
  scope as broadly as they can get away with, giving