Re: Policy 2.7 Proposal: Clarify Section 5.1 ECDSA Curve-Hash Requirements

2019-05-22 Thread Brian Smith via dev-security-policy
Ryan Sleevi  wrote:

>
>
>> It would be easier to understand if this is true if the proposed text
>> cited the RFCs, like RFC 4055, that actually impose the requirements that
>> result in the given encodings.
>>
>
> Could you clarify, do you just mean adding references to each of the
> example encodings (such as the above example, for the SPKI encoding)?
>

Exactly. That way, it is clear that the given encodings are not imposing a
new requirement, and it would be clear which standard is being used to
determine the correct encoding.

I realize that determining the encoding from each of these cited specs
would require understanding more specifications, including in particular
how ASN.1 DER requires DEFAULT values to be encoded. I would advise against
calling out all of these details individually, lest people get confused by
inevitable omissions.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Policy 2.7 Proposal: Clarify Section 5.1 ECDSA Curve-Hash Requirements

2019-05-09 Thread Brian Smith via dev-security-policy
On Fri, Apr 26, 2019 at 11:39 AM Wayne Thayer  wrote:

> On Wed, Apr 24, 2019 at 10:02 AM Ryan Sleevi  wrote:
>
>> Thank you David and Ryan! This appears to me to be a reasonable
>> improvement to our policy.
>>
>
> Brian: could I ask you to review the proposed change?
>
>
>> This does not, however, address the last part of what Brian proposes -
>> which is examining if, how many, and which CAs would fail to meet these
>> encoding requirements today, either in their roots, subordinates, or leaf
>> certificates.
>>
>>
> While I agree that this would be useful information, for the purpose of
> moving ahead with this policy change would it instead be reasonable to set
> an effective date and require certificates issued (notBefore) after that
> date to comply, putting the burden on CAs to verify their implementations
> rather than relying on someone else to do that work?
>

My understanding here is that the proposed text is not imposing a new
requirement, but more explicitly stating a requirement that is already
imposed by the BRs. AFAICT the BRs require syntactically valid X.509
certificates, RFC 5280 defines what's syntactically valid, RFC 5280 defers
to other documents about what is allowed for each algorithm identifier, and
this is an attempt to collect all those requirements into one spot for
convenience.

It would be easier to understand if this is true if the proposed text cited
the RFCs, like RFC 4055, that actually impose the requirements that result
in the given encodings.
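
To illustrate with the most commonly confused case (my example, not text
from the proposal): RFC 4055 requires the parameters field of
sha256WithRSAEncryption to be an explicit NULL, so there is exactly one
valid DER encoding of that AlgorithmIdentifier, and a checker can compare
byte-for-byte. A minimal sketch in Python:

# The one allowed DER encoding of the sha256WithRSAEncryption
# AlgorithmIdentifier, per RFC 4055 (explicit NULL parameters):
SHA256_WITH_RSA = bytes.fromhex(
    "300d"                    # SEQUENCE, 13 content bytes
    "06092a864886f70d01010b"  # OID 1.2.840.113549.1.1.11
    "0500"                    # NULL parameters (required, not omitted)
)

def signature_algorithm_ok(alg_der: bytes) -> bool:
    # Byte-exact comparison; no ASN.1 parsing or DEFAULT-value
    # reasoning is needed once the encoding is pinned down.
    return alg_der == SHA256_WITH_RSA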


>
> While this includes RSA-PSS, it's worth noting that mozilla::pkix does not
>> support these certificates, and also worth noting that the current encoding
>> scheme is substantially more verbose than desirable.
>>
>
I agree the encoding is unfortunate. But, also, there's no real prospect of
a shorter encoding being standardized and implemented in a realistic time
frame.

Cheers,
Brian
--
https://briansmith.org/


Re: Policy 2.7 Proposal: Clarify Section 5.1 ECDSA Curve-Hash Requirements

2019-04-22 Thread Brian Smith via dev-security-policy
Wayne Thayer  wrote:

> Brian Smith  wrote:
>
>> Ryan Sleevi wrote:
>>
>>> Given that CAs have struggled with the relevant encodings, both for the
>>> signatureAlgorithm and the subjectPublicKeyInfo field, I’m curious if
>>> you’d
>>> be open to instead enumerating the allowed (canonical) encodings for
>>> both.
>>> This would address open Mozilla Problematic Practices as well - namely,
>>> the
>>> encoding of NULL parameters with respect to certain signature algorithms.
>>>
>>
>>
> I would be happy with that approach if it makes our requirements clearer -
> I'm just not convinced that doing so will eliminate the confusion I
> attempted to describe.
>

There are three (that I can think of) sources of confusion:

1. Is there any requirement that the signature algorithm that is used to
sign a certificate be correlated in any way to the algorithm of the public
key of the signed certificate? AFAICT, the answer is "no."

2. What combinations of public key algorithm (RSA vs. ECDSA vs EdDSA),
Curve (N/A vs. P-256 vs P-384 vs Ed25519), and digest algorithm (SHA-256,
SHA-384, SHA-512) are allowed? This is quite difficult to get *precisely*
right in natural language, but easy to get right with a list of encodings.

3. Given a particular combination of algorithm, curve, and digest
algorithm, which encodings of that information are acceptable? For example,
when is a NULL parameter required and when is it optional? Again, this is
hard to get right in natural language, and again, listing the encodings
makes this trivial to get exactly right.
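
To make that concrete, here is a sketch (mine, not proposed policy text) of
what such a list could look like for the ECDSA combinations. Per RFC 5758
the parameters field is absent for these OIDs, so each combination has
exactly one DER encoding:

# Allowed signatureAlgorithm encodings, byte-exact. The curve pairing
# (P-256 with SHA-256, etc.) is ultimately enforced against the
# signer's public key; it is noted here only as the intended pairing.
ALLOWED_SIG_ALGS = {
    bytes.fromhex("300a06082a8648ce3d040302"): "ecdsa-with-SHA256 (P-256)",
    bytes.fromhex("300a06082a8648ce3d040303"): "ecdsa-with-SHA384 (P-384)",
}

def classify(alg_der):
    # Questions 2 and 3 collapse into a single membership test;
    # returns None for any encoding not on the list.
    return ALLOWED_SIG_ALGS.get(alg_der)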

> Agreed - is someone willing to take on this task?

I could transform what I did with webpki into some text.

However, first I think it would be useful if somebody could check that the
encodings that webpki expects actually match what certificates in
Certificate Transparency are doing. For example, does every CA already
encode a NULL parameter when one is required by RFC 4055 (which is included
by reference from RFC 5280)? Are there any algorithm combinations in use
that aren't in webpki's list? This is something I don't have time to
thoroughly check.
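
For anyone who does have time, extracting the field to compare is small; a
sketch (untested against CT at scale, and ignoring the DER/BER corner cases
a production tool would need to reject):

def _read_tlv(der, offset):
    # Minimal DER TLV reader: returns (value_start, value_end) for the
    # element at `offset`, where value_end is the end of the whole TLV.
    length = der[offset + 1]
    offset += 2
    if length & 0x80:  # long form: low 7 bits count the length bytes
        n = length & 0x7F
        length = int.from_bytes(der[offset:offset + n], "big")
        offset += n
    return offset, offset + length

def outer_signature_algorithm(cert_der):
    # Certificate ::= SEQUENCE { tbsCertificate, signatureAlgorithm, ... }
    start, _ = _read_tlv(cert_der, 0)          # enter the outer SEQUENCE
    _, tbs_end = _read_tlv(cert_der, start)    # skip tbsCertificate
    _, alg_end = _read_tlv(cert_der, tbs_end)  # signatureAlgorithm
    return cert_der[tbs_end:alg_end]           # full TLV, for exact match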

Thanks,
Brian


Re: Policy 2.7 Proposal: Require EKUs in End-Entity Certificates

2019-04-17 Thread Brian Smith via dev-security-policy
Wayne Thayer via dev-security-policy 
wrote:

> My conclusion from this discussion is that we should not add an explicit
> requirement for EKUs in end-entity certificates. I've closed the issue.
>

What will happen to all the certificates without an EKU that currently
exist, which don't conform to the program requirements?

For what it's worth, I don't object to a requirement for having an explicit
EKU in certificates covered by the program. Like I said, I think every
certificate that is issued should be issued with a clear understanding of
what applications it will be used for, and having an EKU extension does
achieve that.

The thing I am attempting to avoid is the implication that a missing EKU
implies a certificate is not subject to the program's requirements.

Cheers,
Brian


Re: Policy 2.7 Proposal: Clarify Section 5.1 ECDSA Curve-Hash Requirements

2019-04-04 Thread Brian Smith via dev-security-policy
Ryan Sleevi wrote:

> Given that CAs have struggled with the relevant encodings, both for the
> signatureAlgorithm and the subjectPublicKeyInfo field, I’m curious if you’d
> be open to instead enumerating the allowed (canonical) encodings for both.
> This would address open Mozilla Problematic Practices as well - namely, the
> encoding of NULL parameters with respect to certain signature algorithms.
>

I agree with Ryan. It would be much better to list more precisely what
algorithm combinations are allowed, and how exactly they should be encoded.
From my experience in implementing webpki [1], knowing the exact allowed
encodings makes it much easier to write software that deals with
certificates and also makes it easier to validate that certificates conform
to the requirements.

These kinds of details are things that CAs need to delegate to their
technical staff for enforcement, and IMO it would make more sense to ask a
programmer in this space to draft the requirements, and then have other
programmers verify the requirements are accurate. In particular, it is
hugely inefficient for non-programmers to attempt to draft these
technical requirements and then ask programmers and others to check them,
because it's unreasonable to expect people who are not programmers to be
able to see which details are important and which aren't.

You can find all the encodings of the algorithm identifiers at [2].

[1] https://github.com/briansmith/webpki
[2] https://github.com/briansmith/webpki/tree/master/src/data

Cheers,
Brian
-- 
https://briansmith.org/


Re: Policy 2.7 Proposal: Require EKUs in End-Entity Certificates

2019-04-03 Thread Brian Smith via dev-security-policy
Wayne Thayer  wrote:

> On Mon, Apr 1, 2019 at 5:36 PM Brian Smith via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> Here when you say "require EKUs," you mean that you are proposing that
>> software that uses Mozilla's trust store must be modified to reject
>> end-entity certificates that do not contain the EKU extension, if the
>> certificate chains up to the roots in Mozilla's program, right?
>
>
> That would be a logical goal, but I was only contemplating a policy
> requirement.
>

OK, let's say the policy were to change to require an EKU in every
end-entity certificate. Then, would the policy also require that existing
unexpired certificates that lack an EKU be revoked? Would the issuance of a
new certificate without an EKU be considered a policy violation that would
put the CA at risk of removal?

The thing I want to avoid is saying "It is OK for the CA to issue an
end-entity certificate without an EKU and if there is no EKU we will
consider it out of scope of the program." In particular, I don't want to
put software that (correctly) implements the "no EKU extension implies all
usages are acceptable" rule at risk.


>
> If so, how
>> would one implement the "chain[s] up to roots in our program" check?
>> What's
>> the algorithm? Is that actually well-defined?
>>
>>
> My starting proposal would be to reject all EE certs issued after a
> certain future date that don't include EKU(s), or that assert anyEKU. If
> your point is that it's not so simple and that this will break things, I
> suspect that you are correct.
>

The part that seems difficult to implement is the differentiation of a
certificate that chains up to a root in Mozilla's program from one that
doesn't. I don't think there is a good way to determine, given the
information that the certificate verifier has, whether a certificate chains
up to a root in Mozilla's program, so to be safe software has to apply the
same rules regardless of whether the certificate appears to chain up to a
root in Mozilla's program or not.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Policy 2.7 Proposal: Require EKUs in End-Entity Certificates

2019-04-01 Thread Brian Smith via dev-security-policy
Wayne Thayer via dev-security-policy 
wrote:

> This leads to confusion such as [1] in
> which certificates that are not intended for TLS or S/MIME fall within the
> scope of our policies.
>

I disagree that there is any confusion. The policy is clear, as noted in
https://bugzilla.mozilla.org/show_bug.cgi?id=1523221#c3.

> Simply requiring EKUs in S/MIME certificates won't solve the problem unless
> we are willing to exempt certificates without an EKU from our policies, and
> doing that would create a rather obvious loophole for issuing S/MIME
> certificates that don't adhere to our policies.
>

I agree that a requirement to add an EKU to certificates does not solve the
problem, because the problem is that software (Mozilla's and others')
interprets the lack of an EKU extension as meaning "there is no restriction
on the EKU," which is the correct interpretation.
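
For illustration, the RFC 5280 interpretation is the straightforward one to
implement; a sketch using the pyca/cryptography package (my choice of
library, not something the thread prescribes):

from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

def allows_server_auth(cert):
    # No EKU extension means no restriction on key purpose, so
    # serverAuth is permitted; otherwise the extension must list it.
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage)
    except x509.ExtensionNotFound:
        return True
    return ExtendedKeyUsageOID.SERVER_AUTH in eku.value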


> The proposed solution is to require EKUs in all certificates that chain up
> to roots in our program, starting on some future effective date (e.g. April
> 1, 2020).


Here when you say "require EKUs," you mean that you are proposing that
software that uses Mozilla's trust store must be modified to reject
end-entity certificates that do not contain the EKU extension, if the
certificate chains up to the roots in Mozilla's program, right? If so, how
would one implement the "chain[s] up to roots in our program" check? What's
the algorithm? Is that actually well-defined?


> Alternately, we could easily argue that section 1.1 of our existing policy
> already makes it clear that CAs must include EKUs other than
> id-kp-serverAuth and id-kp-emailProtection in certificates that they wish
> to remain out of scope for our policies.
>

I agree the requirements are already clear. The problem is not the clarity
of the requirements. Anybody can define a new EKU, because EKUs are listed
in the certificate by OIDs, anybody can make up an OID, and a standard
isn't required for a new OID. Further, not agreeing on a specific EKU OID
for a particular kind of usage is poor practice, and we should discourage
that poor practice.

Cheers,
Brian
-- 
https://briansmith.org/


Re: SHA256 for OCSP response issuer hashing

2016-12-20 Thread Brian Smith
Roland Shoemaker  wrote:
> Let's Encrypt is currently considering moving away from using SHA1 as
> the issuer subject/public key hashing function in OCSP responses and
> using SHA256 instead. Given a little investigation this seems like a
> safe move to make but we wanted to check with the community to see if
> anyone was aware of legacy (or contemporary) software issues that may
> cause us any trouble.

I'm not sure I understand you correctly, but see:
https://bugzilla.mozilla.org/show_bug.cgi?id=966856
https://hg.mozilla.org/mozilla-central/annotate/578899c0b819/security/pkix/lib/pkixocsp.cpp#l717
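
For what it's worth, on the client side the hash algorithm is just a
parameter of the CertID; a sketch of building a SHA-256-based request with
pyca/cryptography (my choice of library, not something the thread
discusses):

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

def sha256_ocsp_request(cert, issuer):
    # RFC 6960's CertID permits any hash; here the issuer name and key
    # hashes are computed with SHA-256 instead of the traditional SHA-1.
    builder = ocsp.OCSPRequestBuilder().add_certificate(
        cert, issuer, hashes.SHA256()
    )
    return builder.build().public_bytes(serialization.Encoding.DER)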

Cheers,
Brian
-- 
https://briansmith.org/


Re: Policy 2.4 Proposal: Use language of capability throughout

2016-12-16 Thread Brian Smith
Gervase Markham <g...@mozilla.org> wrote:
> On 10/12/16 21:25, Brian Smith wrote:
>> Again, it doesn't make sense to say that the forms of names matter for
>> name constraints, but don't matter for end-entity certificates. If an
>> end-entity certificate doesn't contain any names of the forms dNSName,
>> iPAddress, SRVName, rfc822Name, then it shouldn't be in scope.
>
> Why would it have id-kp-serverAuth or id-kp-emailProtection and not have
> any names of those forms?

I'm more thinking of certificates that don't have an EKU extension but
do have names of those forms. Such certificates should be in scope.

Otherwise, a CA can easily issue a certificate that is trusted in
every browser except Firefox and is therefore out of scope for Mozilla's
CA program. A CA might do this, for example, if Mozilla were being
more difficult than other root stores and/or the customer in question
doesn't care if the site works in Firefox or not.

>> Also, the way that the text is worded the above means that an
>> intermediate certificate that contains anyExtendedKeyUsage in its EKU
>> would be considered out of scope of Mozilla's policy. However, you
>> need to have such certificates be in scope so that you can forbid them
>> from using anyExtendedKeyUsage.
>
> Well, there are two responses to that.
>
> Firstly, no. The certs in scope are: "Intermediate certificates ...
> which are not technically constrained such that they are unable to issue
> working server or email certificates." If an intermediate certificate
> has an EKU with anyEKU, it is able to issue working server or email
> certificates. So it's in scope. (This utilises my use of "working"
> rather than "trusted", as noted in my previous email.)

What does "working" mean? If I were a CA I would interpret "working"
to mean "works in Firefox" which would then allow me to issue
certificates that violate Mozilla's CA policies by issuing them from
an intermediate that has (only) anyExtendedKeyUsage, so that they work
in every browser except Firefox and are out of scope of your policy.

> But secondly, I'm not banning the use of anyEKU, because Firefox doesn't
> trust cert chains that rely on it, so there's no need to ban it. Is there?

Again, the reason for banning anyEKU is to prevent, through policy,
CAs from using/issuing intermediate certificates that work in every
browser except Firefox, for whatever reason (most likely, to work
around a CA policy disagreement).

Cheers,
Brian
-- 
https://briansmith.org/


Re: Taiwan GRCA Root Renewal Request

2016-12-15 Thread Brian Smith
Kathleen Wilson  wrote:
> How about the following?

That sounds right to me.

It is important to fix the DoS issue with the path building when there
are many choices for the same subject. SKI/AKI matching only fixes the
DoS issue for benign cases, not malicious cases. Therefore some way of
limiting the resource usage without relying on AKI/SKI matching is
needed.
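
To sketch why same-subject choices blow up (illustrative only;
certs_by_subject and trust_anchors are hypothetical inputs, and real code
must also bound depth and detect loops):

def candidate_paths(cert, certs_by_subject, trust_anchors):
    # Naive path building: every CA cert whose subject equals this
    # cert's issuer is a candidate, so k same-subject CA certs at each
    # of d levels yields on the order of k**d candidate paths.
    if cert in trust_anchors:
        return [[cert]]
    paths = []
    for issuer_cert in certs_by_subject.get(cert.issuer, []):
        for rest in candidate_paths(issuer_cert, certs_by_subject,
                                    trust_anchors):
            paths.append([cert] + rest)
    return paths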

I'm not sure how to incorporate the possibility of the issue being
fixed into your text.

Cheers,
Brian


Re: Taiwan GRCA Root Renewal Request

2016-12-14 Thread Brian Smith
On Tue, Dec 13, 2016 at 12:36 PM, Kathleen Wilson  wrote:
> Question: Do I need to update 
> https://wiki.mozilla.org/CA:How_to_apply#Root_certificates_with_the_same_subject_and_different_keys
>  ?

That description seems to have been written to describe the behavior
of the old, non-libpkix, NSS verification code. NSS's libpkix probably
works differently than that. Also, that description is not accurate
and is somewhat misleading for mozilla::pkix.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Policy 2.4 Proposal: Require all OCSP responses to have a nextUpdate field

2016-12-10 Thread Brian Smith
Gervase Markham <g...@mozilla.org> wrote:
> On 08/12/16 12:46, Brian Smith wrote:
>> Are you intending to override the BR laxness for maximum OCSP lifetime
>> for intermedaites, or just match the BR requirements?
>
> The wider context of this section includes an "For end-entity
> certificates:". So the wording as proposed matches the BRs in terms of
> duration.

OK. This means that the policy isn't really sufficient for use with
the OCSP multi-stapling extension. Multi-stapling only works well when
the OCSP responses for the intermediate CA certificates are treated
like what is proposed for end-entity certificates w.r.t. nextUpdate.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Policy 2.4 Proposal: Use language of capability throughout

2016-12-10 Thread Brian Smith
On Thu, Dec 8, 2016 at 10:46 AM, Gervase Markham  wrote:
> We want to change the policy to make it clear that whether a cert is
> covered by our policy or not is dependent on whether it is technically
> capable of issuing server certs, not whether it is intended by the CA
> for issuing server certs.

I'll quote part of the proposed change here:

> 2. Intermediate certificates which have at least one valid, unrevoked chain up
> to such a CA certificate and which are not technically constrained such
> that they are unable to issue server or email certificates. Such technical
> constraints could consist of either:
> * an Extended Key Usage (EKU) extension which does not contain either of the
>   id-kp-serverAuth and id-kp-emailProtection EKUs; or:
> * name constraints which do not allow SANs of any of the following types:
>   dNSName, iPAddress, SRVName, rfc822Name
>
> 3. End-entity certificates which have at least one valid, unrevoked chain up 
> to
> such a CA certificate through intermediate certificates which are all in
> scope, such end-entity certificates having either:
> * an Extended Key Usage (EKU) extension which contains one or more of the
>   id-kp-serverAuth and id-kp-emailProtection EKUs; or:
> * no EKU extension.

Again, it doesn't make sense to say that the forms of names matter for
name constraints, but don't matter for end-entity certificates. If an
end-entity certificate doesn't contain any names of the forms dNSName,
iPAddress, SRVName, rfc822Name, then it shouldn't be in scope.

Also, the way that the text is worded the above means that an
intermediate certificate that contains anyExtendedKeyUsage in its EKU
would be considered out of scope of Mozilla's policy. However, you
need to have such certificates be in scope so that you can forbid them
from using anyExtendedKeyUsage.

Cheers,
Brian
--
https://briansmith.org/


Re: Policy 2.4 Proposal: Use language of capability throughout

2016-12-10 Thread Brian Smith
Gervase Markham <g...@mozilla.org> wrote:
> On 08/12/16 13:06, Brian Smith wrote:
>> In particular, I suggest replacing "unable to issue server or email
>> certificates" with "unable to issue *trusted* server or email
>> certificates" or similar.
>
> I think I would prefer not to make that tie, because the obvious
> question is "trusted in which version of Firefox"? I would prefer to
> modify Firefox and the policy to match, but have the ability to skew
> those two updates as necessary, rather than tie the policy to what
> Firefox does directly.

"Unable to issue" means "unable to sign with the private key" which
can only happen if they don't have the private key. But they do have
the private key so they're always able to issue certificates with any
contents they want. Thus "unable to issue" is a not a useful criteria
since no CA meets it and so you need a different criteria.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Taiwan GRCA Root Renewal Request

2016-12-08 Thread Brian Smith
Gervase Markham  wrote:
> Just to help me be clear: the request is for the inclusion of a root
> with the same DN as a previous root, which will still be included after
> the addition? Or the problem with duplicate DNs occurs further down the
> hierarchy?

Some people claimed some software may be unable to cope with two
different CA certificates with the same subject DNs. Nobody claimed
that Firefox is unable to cope with two CA certificates having the
same subject DN. It should work fine in Firefox because Firefox will
attempt every CA cert it finds with the same DN.

One caveat: If there are "too many" CA certificates with the same
subject DN, Firefox will spend a very long time searching through
them. This is a bug in Firefox that's already on file.

> Does Firefox build cert chains using DNs, or using Key Identifiers as
> Wen-Cheng says it should? I assume it's the former, but want to check.

Firefox doesn't even parse the key identifiers. Using the key
identifiers is only helpful when a CA does the thing that this
particular CA does, using the same subject DN for multiple CA
certificates, to prevent the "too many" problem mentioned above.

I'm unconvinced that it is worthwhile to add the Key Identifier stuff
just to accommodate this one public CA plus any private CAs that do
similarly. I think it's better to ask this CA to instead do things the
way all the other public CAs do (AFAIK). In other words, this is kind
of where the Web PKI diverges from PKIX.

However, the CA changing its practices could be done on a
going-forward basis; the existing instances shouldn't be problematic
and so I don't think they should be excluded on the basis of what they
already did.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Policy 2.4 Proposal: Use language of capability throughout

2016-12-08 Thread Brian Smith
Gervase Markham  wrote:
> We want to change the policy to make it clear that whether a cert is
> covered by our policy or not is dependent on whether it is technically
> capable of issuing server certs, not whether it is intended by the CA
> for issuing server certs.

NIT: The issue isn't whether it is technically capable of *issuing* a
cert, but whether the certificates it issues are trusted by Firefox (or,
more abstractly, TLS certificates trusted by a TLS client or email
certificates trusted by an S/MIME-capable email client).

In particular, I suggest replacing "unable to issue server or email
certificates" with "unable to issue *trusted* server or email
certificates" or similar.

Cheers,
Brian

> Until we change Firefox to require id-kp-serverAuth, the policy will
> define "capable" as "id-kp-serverAuth or no EKU".

This would allow anyExtendedKeyUsage in a way that isn't what you
intend, AFAICT. I suggest "without an EKU constraint that excludes
id-kp-serverAuth." This suggested new wording doesn't explicitly allow
anyExtendedKeyUsage to be used, nor does it exclude a cert with
anyExtendedKeyUsage from being in scope, but otherwise accomplishes
the same thing.

More generally, I suggest you use the wording in my latest message in
the id-kp-serverAuth thread. In particular, id-kp-serverAuth doesn't
apply to email certificates; id-kp-emailProtection does. Thus, you
need to consider which EKU, which trust bit, and which types of names
are relevant separately for email and TLS.

Cheers,
Brian


Re: Policy 2.4 Proposal: Require all OCSP responses to have a nextUpdate field

2016-12-08 Thread Brian Smith
Gervase Markham  wrote:
> Add a requirement that every OCSP response must have a nextUpdate field.
> This is required to ensure that OCSP stapling works reliably with all
> (at least most) server and client products.
>
> Proposal: update the second bullet in point 3 of the Maintenance section
> so that the last sentence reads:
>
> OCSP responses from this service must have a defined value in the
> nextUpdate field, and it must be no more than ten days after the
> thisUpdate field.

The baseline requirements have different requirements for end-entity
and intermediate certificates. They require the nextUpdate field to be
no more than 10 days after the thisUpdate field for end-entity
certificates, but they don't have the same requirement for intermediates.

Are you intending to override the BR laxness for maximum OCSP lifetime
for intermediates, or just match the BR requirements?

If you are intending to be stricter than the BRs require, then your
change sounds good, but maybe call out specifically that this is
stricter for intermediates than what the BRs require.

Otherwise, if you're intending to match the BRs then I would remove
the ", and it must be no more than ten days after the thisUpdate
field."

Cheers,
Brian
-- 
https://briansmith.org/


Re: Can we require id-kp-serverAuth now?

2016-12-08 Thread Brian Smith
Gervase Markham <g...@mozilla.org> wrote:
> On 05/12/16 12:43, Brian Smith wrote:
>> However, I do think that if a CA certificate is name constrained to not
>> allow any dNSName or iPAddress names, and/or its EKU doesn't contain
>> id-kp-serverAuth, then it shouldn't be in scope of the proposal. Either
>> condition is sufficient.
>
> Can we get a whitelist of types we care about? Note that we care about
> email as well as server certs.
>
> dNSName
> iPAddress (this covers both v4 and v6?)
> rfc822Name
> SRVName

Here are the choices (from RFC 5280):

GeneralName ::= CHOICE {
otherName   [0] OtherName,
rfc822Name  [1] IA5String,
dNSName [2] IA5String,
x400Address [3] ORAddress,
directoryName   [4] Name,
ediPartyName[5] EDIPartyName,
uniformResourceIdentifier   [6] IA5String,
iPAddress   [7] OCTET STRING,
registeredID[8] OBJECT IDENTIFIER }

Note that RFC 4985 defines srvName as an otherName { id-on 7}. See
also 
http://www.iana.org/assignments/smi-numbers/smi-numbers.xhtml#smi-numbers-1.3.6.1.5.5.7.8

If the issuing CA is trusted for email (as determined by root email
trust bit, and there are no EKU constraints that exclude
id-kp-emailProtection), then a certificate with an rfc822Name would be
in scope, unless all such names were excluded by the name constraints.

If the issuing CA is trusted for TLS (as determined by root SSL trust
bit, and there are no EKUs constraints that exclude id-kp-serverAuth),
then a certificate with dNSName and iPAddress or srvName (a subtype of
otherName), would be in scope, unless all such names were excluded by
the name constraints.

Not sure about whether you would want to include the URL type.
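
A sketch of the name-type part of that scope test (using pyca/cryptography,
my choice of library; the trust-bit and EKU-constraint checks described
above are omitted):

from cryptography import x509

# SRVName is carried as an otherName with id-on 7 (RFC 4985):
ID_ON_DNS_SRV = x509.ObjectIdentifier("1.3.6.1.5.5.7.8.7")

def has_in_scope_names(cert):
    try:
        san = cert.extensions.get_extension_for_class(
            x509.SubjectAlternativeName
        ).value
    except x509.ExtensionNotFound:
        return False
    tls_names = (san.get_values_for_type(x509.DNSName)
                 or san.get_values_for_type(x509.IPAddress)
                 or any(n.type_id == ID_ON_DNS_SRV
                        for n in san.get_values_for_type(x509.OtherName)))
    email_names = san.get_values_for_type(x509.RFC822Name)
    return bool(tls_names or email_names)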

Cheers,
Brian
-- 
https://briansmith.org/


Re: Can we require id-kp-serverAuth now?

2016-12-08 Thread Brian Smith
Gervase Markham <g...@mozilla.org> wrote:
> On 07/12/16 12:44, Brian Smith wrote:
>> Notice in the BRs that the KeyUsage extension (not to be confused with the
>> ExtendedKeyUsage extension we're talking about here) is optional. Why is it
>> OK to be optional? Because the default implementation allows all usages,
>> including in particular the usages that browsers need.
>
> The fact that some defaults are suitable doesn't mean that all defaults
> are suitable. You are assuming what you seek to prove.

In your proposal, an end-entity certificate is allowed to have any
EKUs in addition to id-kp-serverAuth, right? So, all EKUs are indeed
acceptable and so the default is acceptable.

> Changing the BRs in this way would (arguably, as the scope of the BRs is
> a matter of ongoing debate, something we hope this line of work will
> eventually clarify) bring a whole load of certs which are not currently
> issued under the BRs and which aren't supposed to be under the BRs,
> under the BRs.

If a certificate is in scope of the BRs then it must conform to the
requirements. In particular, it isn't the case that any certificate
that conforms to the requirements is in scope. Therefore, loosening
the requirements doesn't change the scope.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Can we require id-kp-serverAuth now?

2016-12-07 Thread Brian Smith
Rob Stradling  wrote:

> Mozilla's CA Certificate Inclusion Policy already requires that "issuance
> of certificates to be used for SSL-enabled servers must also conform to"
> the BRs, and most other browser providers already require this too.
>
> For Subscriber Certificates, the CABForum BRs already require that...
>   "F. extKeyUsage (required)
>Either the value id‐kp‐serverAuth [RFC5280] or id‐kp‐clientAuth
>[RFC5280] or both values MUST be present."
>
> Since the policy already requires #3, ISTM that the technical
> implementation should enforce #3 (unless there's a really good reason not
> to).
>

The policy (including the BRs) can and should be changed.

Notice in the BRs that the KeyUsage extension (not to be confused with the
ExtendedKeyUsage extension we're talking about here) is optional. Why is it
OK to be optional? Because the default implementation allows all usages,
including in particular the usages that browsers need.

Similarly, why are name constraints optional? Because the default is no
name constraints. Why are policy constraints optional? Because the default
is no constraints. In all these cases the defaults might be not as strict
as possible but they work out OK.


> For it to make any sense to not enforce #3 technically, there would need
> to be cross-industry agreement (and a corresponding update to the BRs) that
> end-entity serverAuth certs need not contain the EKU extension. Good luck
> with that!!
>

I would expect it to be easier than most other changes to the BRs, because
it doesn't require anybody to do any work.


> How much effort should we go to just to shave 21 bytes off the size of
> each end-entity serverAuth cert?
>

My proposal requires as close to zero effort as any proposal to change the
BRs.

In isolation, shaving off 21 bytes isn't a huge win. However, IIRC, based
on the last time we measured this, combined with other changes it adds up,
on average, to more than the size of the SCTs that we're adding to
certs, and/or less than the size of an additional OCSP response (without
embedded signing cert), and/or the cost of a minimal OCSP response signing
cert.
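
For reference, the 21-byte figure is exactly the DER size of a minimal
serverAuth-only EKU extension (my byte-level accounting, not a measurement
from the thread):

# Extension ::= SEQUENCE { extnID, extnValue } -- 21 bytes total
EKU_EXTENSION = bytes.fromhex(
    "3013"                  # SEQUENCE, 19 content bytes
    "0603551d25"            # extnID: OID 2.5.29.37 (extKeyUsage)
    "040c"                  # extnValue: OCTET STRING, 12 bytes
    "300a"                  #   SEQUENCE OF KeyPurposeId, one entry
    "06082b06010505070301"  #   id-kp-serverAuth (1.3.6.1.5.5.7.3.1)
)
assert len(EKU_EXTENSION) == 21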

Why don't browsers support OCSP multi-stapling (OCSP for intermediate CA
certs)? Part of the reason is that it would be inefficient because
certificates and OCSP responses are too big. Also some choose to avoid
using X.509 certificates completely due to size issues.

People are working on certificate compression to make certificates smaller.
See, for example, the messages in this thread:
https://www.ietf.org/mail-archive/web/tls/current/msg22065.html. Also see
Google's QUIC protocol, which implements compression. Unfortunately, not
every implementation can support GZIP-based compression, and it's a good
idea to minimize the size of the decompressed certs in any case. Also see
the work that BoringSSL/Chrome is doing to de-dupe certs in memory because
certs are taking up too much memory.

Also, like I said in my previous message, it seems like requiring the EKU
in the end-entity certificates doesn't actually solve the problem that it
was proposed to solve, so I'm not sure if there is any motivation for
requiring it now.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Can we require id-kp-serverAuth now?

2016-12-05 Thread Brian Smith
Gervase Markham <g...@mozilla.org> wrote:

> On 04/12/16 19:11, Brian Smith wrote:
> > If certificates without an EKU have dNSName or iPAddress subjectAltName
> > entries, then they should be considered in scope. Otherwise they don't
> need
> > to be considered in scope as long as Firefox doesn't use the Subject CN
> as
> > a dNSName. You've already started down the path of fixing the Subject CN
> > issue in https://bugzilla.mozilla.org/show_bug.cgi?id=1245280 and maybe
> > elsewhere.
>
> That would be an alternative way to do it. The problem is that if you
> try and do it this way, the issuing CA is always in scope for all its
> issuances, because whether the certs it issues have these features is
> inevitably a matter of CA policy, and could change at any time.
> Therefore, the issuing CA has to be run in a BR-compliant way all the time.
>

Let's consider the cases:

A root CA: It is in scope if it has the SSL trust bit.

An intermediate CA: It is in scope unless all the trusted certificates
issued for it have an EKU that excludes id-kp-serverAuth.

This is true regardless of whether you require an explicit id-kp-serverAuth
in the end-entity certificates, and/or if you base it on the subjectAltName
entries like I suggest, right? Because, at any time, the CA could issue an
end-entity certificate with id-kp-serverAuth in its EKU and/or a
certificate with a dNSName or iPAddress in its subjectAltNames.


> Doing it based on the technical capabilities of the issuing CA allows us
> to say "we don't care about any certs this CA issues", rather than "we
> might care about some of the certs this CA has issued; we now have to
> find and examine them all to see".
>

I do very much agree with this! But, what matters is the constraints placed
on the CA's certificate, not on the end-entity certificates it issues,
right?

Do you know what kind of impact it would have on the 60+ CAs in
> our root program to tell them that they have to reissue every
> intermediate in their publicly-trusted hierarchies to contain
> non-server-auth name constraints?
>

I wasn't suggesting anything to do with name constraints.

However, I do think that if a CA certificate is name constrained to not
allow any dNSName or iPAddress names, and/or its EKU doesn't contain
id-kp-serverAuth, then it shouldn't be in scope of the proposal. Either
condition is sufficient.



> > AFAICT almost all Mozilla software except for Firefox and Thunderbird,
> > would still trust the EKU-less certificates for id-kp-serverAuth. Thus
> > requiring an explicit id-kp-serverAuth in Firefox wouldn't even have the
> > intended ramifications for all of Mozilla's products.
>
> Are you talking about Firefox for iOS? Or something else?
>

Besides Firefox for iOS, most other Mozilla software, such as software
written in Go or Python or anything except {Firefox, Thunderbird} for {Mac,
Windows, Linux}. IIUC, Firefox for Android sometimes contacts https://
servers using the native Android software stack, which won't require an
explicit EKU either.


> > Also, pretty much all non-Mozilla software is using RFC 5280 semantics
> > already. So, such a change wouldn't do anything to help non-Mozilla
> > software nor even all of Mozilla's products.
>
> Well, our policy doesn't take explicit account of non-Mozilla software :-)
>

That's true, but at the same time we want to have standardized behavior
right? Mozilla is or will be asking people to go beyond existing standards
by:

1. Honoring Microsoft's semantics for EKU in intermediates.
2. Dropping support for interpreting the subject CN as a domain name or IP
address.
3. Maybe requiring an explicit EKU in the end-entity certificate.

My point is that #1 and #2 are already sufficient to solve this problem.
Let's make it easy for people to agree with Mozilla, by not requiring more
than is necessary, #3.


> I want the scope of what our software trusts to match the scope of what
> our policy controls, and I want that united scope to both incorporate
> all or the vast majority of certificates people are actually using for
> server-auth, and to have clear and enforceable boundaries which are
> administratively convenient for CAs and us. I think requiring
> id-kp-serverAuth does that.
>

I think my proposal does the same, but in a way that's easier for other
software to adopt.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Can we require id-kp-serverAuth now?

2016-12-04 Thread Brian Smith
Gervase Markham <g...@mozilla.org> wrote:

> On 03/12/16 03:42, Brian Smith wrote:
> > The solution to this problem is to get rid of the idea of "intent" from the
> > CA policy (including the baseline requirements, or in spite of the BRs if
> > the BRs cannot be changed), so that all that matters is the RFC 5280
> > "trusted for" semantics.
>
> We intend to do that for the Mozilla policy. However, that then widens
> the scope of the policy massively. It makes it cover, if I remember the
> example correctly, millions of EKU-less certs on smart cards used for a
> government ID program in Europe. These are not intended for server use,
> but they are trusted for it, and so a misissuance here such that
> something appears in a CN or SAN that is domain-name-like would mean
> that cert could be used for a server.
>

If certificates without an EKU have dNSName or iPAddress subjectAltName
entries, then they should be considered in scope. Otherwise they don't need
to be considered in scope as long as Firefox doesn't use the Subject CN as
a dNSName. You've already started down the path of fixing the Subject CN
issue in https://bugzilla.mozilla.org/show_bug.cgi?id=1245280 and maybe
elsewhere.


> This is why a change to "trusted for" rather than "intended for" needs
> to be accompanied by a change to explicitly require id-kp-serverAuth, in
> order to keep the scope correct, and stop the Mozilla policy extending
> to cover certificates it's not supposed to cover and which CAs don't
> want it to cover.
>

See above. I bet that you can make the subjectAltName restrictions tighter
so that "trusted for" already works without requiring an explicit EKU.


> Requiring that every issuance under the publicly-trusted roots which is
> using no EKU and which is not intended for server auth change to use an
> EKU which explicitly does not include id-kp-serverAuth would have
> unknown ramifications of unknown size.


This is a textbook instance of FUD.


> Changing both the Mozilla policy
> and Firefox to require id-kp-serverAuth is reasonably confidently known
> to have only minor ramifications, and we know what they will be.
>

AFAICT almost all Mozilla software except for Firefox and Thunderbird,
would still trust the EKU-less certificates for id-kp-serverAuth. Thus
requiring an explicit id-kp-serverAuth in Firefox wouldn't even have the
intended ramifications for all of Mozilla's products.

Also, pretty much all non-Mozilla software is using RFC 5280 semantics
already. So, such a change wouldn't do anything to help non-Mozilla
software nor even all of Mozilla's products.


> >> The advantage of doing this is that it makes it much easier to scope our
> >> root program to avoid capturing certs it's not meant to capture.
> >
> > This is not true. Since no EKU extension implies id-kp-serverAuth, certs
> > without an EKU extension or with an EKU extension containing
> > id-kp-serverAuth or anyExtendedKeyUsage (even though Firefox doesn't
> > support that) should be within the scope of the program.
>
> But given the current situation, doing that extends the scope of the
> program beyond where we want it to be, and where CAs want it to be.
>

Limiting the scope to things that contain dNSName and iPAddress
subjectAltNames addresses this, right? And, more importantly, it addresses
it in a way that makes sense considering that other software doesn't
require an explicit EKU, because it doesn't rely on Firefox-specific
semantics.

To be clear, I'm not against the idea of Firefox making a technical change
that others have to catch up with, like name constraints. However, that
should be a last resort. There seems to be a better alternative in this
case.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Can we require id-kp-serverAuth now?

2016-12-02 Thread Brian Smith
On Tue, Nov 8, 2016 at 11:58 PM, Gervase Markham  wrote:

> At the moment, Firefox recognises an EE cert as a server cert if it has
> an EKU extension with id-kp-serverAuth, or if it has no EKU at all.
>

The EKU extension indicates the limits of the key usage. A certificate
without an EKU extension has no limits on its key usage. In particular,
when no EKU is present, id-kp-serverAuth is allowed, as far as the CA is
concerned. Many X.509 features are defined this way, where the default is
"no limit"--pretty much all of them. The advantage of omitting these
extensions is that the resulting certificates are smaller. Smaller
certificates are better. Therefore, Mozilla should encourage behaviors that
result in smaller certificates, including in particular omitting the EKU
extension and other extensions where the defaults "work."

The problem is that CAB Forum stuff is defined in terms of "intended for,"
which is different than "trusted for." So, for example, some CAs have
argued that they issue certificates that say they are trusted for
id-kp-serverAuth (because they have no EKU), but since they're not
"intended for" id-kp-serverAuth, the baseline requirements don't apply to
them.

The solution to this problem is to get rid of the idea of "intent" from the
CA policy (including the baseline requirements, or in spite of the BRs if
the BRs cannot be changed), so that all that matters is the RFC 5280
"trusted for" semantics.

So, it is now possible to change Firefox to mandate the presence of
> id-kp-serverAuth for EE server certs from Mozilla-trusted roots? Or is
> there some reason I've missed we can't do that?
>

I'd like to point out that I've given the above explanation to you multiple
times.


> The advantage of doing this is that it makes it much easier to scope our
> root program to avoid capturing certs it's not meant to capture.
>

This is not true. Since no EKU extension implies id-kp-serverAuth, certs
without an EKU extension or with an EKU extension containing
id-kp-serverAuth or anyExtendedKeyUsage (even though Firefox doesn't
support that) should be within the scope of the program. You simply need to
define the scope of the program in terms of the **technical** semantics.

Cheers,
Brian


Re: Technically Constrained Sub-CAs

2016-11-21 Thread Brian Smith
Ryan Sleevi <r...@sleevi.com> wrote:

> On Mon, Nov 21, 2016 at 11:01 AM, Brian Smith <br...@briansmith.org>
> wrote:
> > Absolutely we should be encouraging them to proliferate. Every site that
> is
> > doing anything moderately complex and/or that wants to use key pinning
> > should be using them.
>
> I do hope you can expand upon the former as to what you see.
> As to the latter, key pinning is viable without the use of TCSCs.


A lot of people disagree, perhaps because they read the text after
"WARNING:" in
https://noncombatant.org/2015/05/01/about-http-public-key-pinning/.

If nothing else, using your own intermediate can help avoid the problems
with Google Chrome's implementation. (FWIW, Firefox's implementation also
can be coerced into behaving as badly as Chrome's, in some situations,
IIRC.)


> > My hypothesis is that CAs would be willing to start selling such
> > certificates under reasonable terms if they weren't held responsible for
> > the things signed by such sub-CAs. It would be good to hear from CAs who
> > would be interested in that to see if that is true.
>
> That would require a change to the BRs, right? So far, no CAs have
> requested such a change, so why do you believe such CAs exist?
>

It would require changes to browsers' policies. Changing the BRs is one way
to do that, but it seems like CAB Forum is non-functional right now so it
might be better to simply route around the BRs.


> Why should a technical document be blocked on the policy document?
>

Nobody said anything about blocking 6962-bis. Removing that one section is
a smaller change than the change Google made to the document just last
week, as far as the practical considerations are concerned.

Regardless, the argument for removing it is exactly your own arguments for
why you don't want to do it in Chrome. Read your own emails to learn more
about my technical objections to it.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Technically Constrained Sub-CAs

2016-11-21 Thread Brian Smith
Gervase Markham <g...@mozilla.org> wrote:

> On 18/11/16 19:13, Brian Smith wrote:
> > Regardless, the main point of that message of mine was left out: You
> could
> > limit, in policy and in code, the acceptable lifetime of name-constrained
> > externally-operated sub-CAs
>
> Presumably the "externally-operated" part would need to be policy, or a
> code-detectable marker enforced by policy, because there's no way of
> detecting that otherwise?
>

In another message in this thread, I suggested one way to mark intermediate
certificates as meeting the criteria of a name-constrained
externally-operated sub-CA, using certificate policy OIDs. That proposed
mechanism also ensures externally-operated sub-CAs comply with Mozilla's
technical requirements (e.g. SHA-1 deprecation and future deprecations or
transitions).


>
> > and/or the end-entity certificates they issue
> > strictly, independently of whether it can be done for all certificates,
> and
> > doing so would be at least part of the solution to making
> name-constrained
> > externally-operated sub-CAs actually a viable alternative in the market.
>
> I'm not sure what you mean by "a viable alternative" - I thought the
> concern was to stop them proliferating,


Absolutely we should be encouraging them to proliferate. Every site that is
doing anything moderately complex and/or that wants to use key pinning
should be using them.


> if what's underneath them was
> opaque? And if it's not opaque,


If draft-ietf-trans-rfc6962-bis section 4.2 discourages Mozilla from making
externally-operated name-constrained certificates viable then please have
somebody from Mozilla write to the TRANS list asking for section 4.2 to be
removed from the draft.


> why are they not a viable alternative
> now, and why would restricting their capabilities make them _more_ viable?
>

Go out and try to find 3 different CAs that will sell you a
name-constrained sub-CA certificate where you maintain control of the
private key and with no strings attached (no requirement that you implement
the same technical controls as root CAs or be audited to the same level
as them). My understanding is that you won't be able to find any that will
do so, because if you go off and issue a google.com certificate then
Mozilla and others will hold the issuing root CA responsible for that.

My hypothesis is that CAs would be willing to start selling such
certificates under reasonable terms if they weren't held responsible for
the things signed by such sub-CAs. It would be good to hear from CAs who
would be interested in that to see if that is true.

To reiterate, I disagree that the name-constraint redaction is bad because
the certificates issued by the externally-operated name-constrained CAs
must be subject to all the terms of browsers' policies, including the BRs.
That kind of thinking is 100% counter to the reason Mozilla created the
exceptions for externally-operated name-constrained CAs in its policy in
the first place. (Similarly, the requirements on externally-operated
name-constrained CAs in the baseline requirements defeat the purpose of
treating them specially.) However, I do agree that the technical details
regarding (externally-operated) name-constrained CAs in Mozilla's policy
and in draft-ietf-trans-rfc6962-bis are insufficient, and that's why I
support (1) removing section 4.2 from draft-ietf-trans-rfc6962-bis-20, and
(2) improving Mozilla's policy and the BRs so that the technical details do
become sufficient. After that we can then see if it makes sense to revise
rfc6962-bis to add redaction based on the revised details of how root
stores treat name-constrained externally-operated sub-CAs.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Technically Constrained Sub-CAs

2016-11-18 Thread Brian Smith
Gervase Markham  wrote:

> RFC 6962bis (the new CT RFC) allows certs below technically-constrained
> sub-CAs (TCSCs) to be exempt from CT. This is to allow name privacy.
> TCSCs themselves are also currently exempt from disclosure to Mozilla in
> the Common CA Database.
>
> If this is the only privacy mechanism available for 6962bis,


First, here's the RFC 6962-bis draft:
https://tools.ietf.org/html/draft-ietf-trans-rfc6962-bis-20#section-4.2.

Please see my other messages in this thread, where I pointed out that
Mozilla's own definition of externally-operated name-constrained sub-CAs
should be improved because name constraints don't mitigate every serious
concern one might have regarding technically-constrained sub-CAs. I think
that's clearly true for what RFC 6962-bis is trying to do with name
constraints too.

I think there might be ways to fix the name-constrained sub-CA stuff for
RFC 6962-bis, but those kinds of improvements are unlikely to happen in RFC
6962-bis itself, it seems. They will have to happen in an update to RFC
6962-bis.

I also disagree with Google's position that it is OK to leave bad stuff in
the spec and then ignore it. The WGLC has passed, but that doesn't mean
that the spec can't be changed. Google's already proposed a hugely
significant change to the spec in the last few days (which I support),
which demonstrates this.

Accordingly, I think the exception mechanism for name-constrained sub-CAs
(section 4.2) should be removed from the spec. This is especially the case
if there are no browsers who want to implement it. If the draft contains
things that clients won't implement, then that's an issue that's relevant
for the IETF last call, as that's against the general IETF philosophy of
requiring running code.

Cheers,
Brian


Re: Technically Constrained Sub-CAs

2016-11-18 Thread Brian Smith
Gervase Markham <g...@mozilla.org> wrote:

> On 18/11/16 01:43, Brian Smith wrote:
> > The fundamental problem is that web browsers accept certificates with
> > validity periods that are years long. If you want to have the agility to
> > fix things with an N month turnaround, reject certificates that are valid
> > for more than N months.
>
> That's all very well to say. The CAB Forum is deadlocked over a proposal
> to reduce the max validity of everything to 2 years + 3 months; some
> people like it because it removes a disadvantage of EV (which already
> has this limit), other's don't like it because people like not having to
> change their cert and are willing to pay for longer. Mozilla is in
> support, but without agreement, we can hardly implement unilaterally -
> the breakage would be vast.
>

Regardless, the main point of that message of mine was left out: You could
limit, in policy and in code, the acceptable lifetime of name-constrained
externally-operated sub-CAs and/or the end-entity certificates they issue
strictly, independently of whether it can be done for all certificates, and
doing so would be at least part of the solution to making name-constrained
externally-operated sub-CAs actually a viable alternative in the market.

Cheers,
Brian
-- 
https://briansmith.org/


Re: Technically Constrained Sub-CAs

2016-11-17 Thread Brian Smith
Andrew Ayer  wrote:

> The N month turnaround is only a reality if operators of TCSCs start
> issuing certificates that comply with the new rules as soon as the new
> rules are announced.  How do you ensure that this happens?
>

Imagine that the TCSCs are also required to have a short validity period, N
months. Further, require that each TCSC indicate, using a certificate policy
(as already in the spec, or perhaps some simpler mechanism), the version of
the technical requirements on certificates that that TCSC is trusted for.
The end-entity certificates are also similarly marked. Each policy
implicitly maps to a period of time for which that policy applies. At any
given time, trusted CAs are only allowed to issue TCSCs with validity
periods that are within the period of time specified by all policies listed
in that TCSC.

Let's say that this was implemented already two years ago. At that time CAs
could issue SHA-1 certificates, and so a TCSC could be issued asserting the
policy that browsers understand to allow SHA-1 issuance. Root programs
require that all such TCSCs expire before January 1, 2016, because that's
when SHA-1 issuance became disallowed. Also, browsers have code in them
that makes it so that certificates without that policy OID included won't
be trusted for SHA-1.

Now, let's say I got a TCSC for example.com in March 2015, and I want to
issue SHA-1 certificates, so I ask for that allow-SHA1 policy OID to be
included in my TCSC. That means my certificate will expire in January 2016,
because that's the end date for the allow-SHA1 policy. And also, browsers
would be coded to not recognize that policy OID after January 2016 anyway.

Now, December 2015 rolls around and I get another TCSC for January
2016-January 2017. But, the allow-SHA1 policy isn't allowed for that
validity period, so my TCSC won't have that policy; instead it will have
the only-SHA2 policy.

Now, here are my choices:

* Do nothing. My intermediate will expire, and all my servers' certificates
will become untrusted.

* Issue new SHA-1 end-entity certificates from my new only-SHA2
intermediate. But, browsers would not trust these because even if the
end-entity cert contains the allow-SHA1 policy OID, my TCSC won't include
it.

* Issue new SHA-2 end-entity certificates from my new only-SHA2
intermediate.

The important aspects with this idea are:
1. Every TCSC has to be marked with the policies that they are to be
trusted for.
2. Root store policies assign a validity period to each policy.
3. Browsers must enforce the policies in code, and the code for enforcing a
policy must be deployed in production before the end (or maybe the
beginning) of the policy's validity period.
4. A TCSC's validity period must be within all the validity periods for
each policy they are marked with; that is, a TCSC's notAfter must never be
allowed to be after any deprecation deadline that would affect it.

Note that for the latest root store policies, we may not know the end date
of the validity period for the policy. This is where we have to choose an
amount of time, e.g. 12 months, and say we're never going to deprecate
anything with less than 12 months (unless there's some emergency or
whatever), and so we'll allow TCSCs issued today for the current policies
to be valid for up to 12 months.
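
To make this concrete, here is a minimal sketch of the client-side check
described in aspects 1-4 above. The policy OIDs, dates, and names are
hypothetical, purely for illustration:

    from datetime import datetime, timezone

    # Hypothetical mapping from policy OID to the window in which a TCSC
    # asserting that policy is allowed to be valid (aspect 2).
    POLICY_WINDOWS = {
        "2.23.140.99.1": (datetime(2014, 1, 1, tzinfo=timezone.utc),  # allow-SHA1
                          datetime(2016, 1, 1, tzinfo=timezone.utc)),
        "2.23.140.99.2": (datetime(2016, 1, 1, tzinfo=timezone.utc),  # only-SHA2
                          datetime(2017, 1, 1, tzinfo=timezone.utc)),
    }

    def tcsc_policies_ok(not_before, not_after, asserted_policy_oids):
        # Aspect 1: every TCSC must assert at least one policy.
        if not asserted_policy_oids:
            return False
        for oid in asserted_policy_oids:
            window = POLICY_WINDOWS.get(oid)
            if window is None:
                # Aspect 3: a policy the browser doesn't recognize (or no
                # longer recognizes) is not trusted.
                return False
            start, end = window
            # Aspect 4: the TCSC's validity period must fall entirely
            # within the validity period of every asserted policy.
            if not_before < start or not_after > end:
                return False
        return True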

Also note that the existing certificate policy infrastructure used for the
EV indicator could probably be used, so the code changes to certificate
validation libraries would likely be small.

Thoughts?

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Technically Constrained Sub-CAs

2016-11-17 Thread Brian Smith
Ryan Sleevi  wrote:

> On Thu, Nov 17, 2016 at 3:12 PM, Nick Lamb  wrote:
> > There's a recurring pattern in most of the examples. A technical
> counter-measure would be possible, therefore you suppose it's OK to
> screw-up and the counter-measure saves us. I believe this is the wrong
> attitude. These counter-measures are defence in depth. We need this defence
> because people will screw up, but that doesn't make screwing up OK.
>
> I think there's an even more telling pattern in Brian's examples -
> they're all looking in the past. That is, the technical mitigations
> only exist because of the ability of UAs to change to implement those
> mitigations, and the only reason those mitigations exist is because
> UAs could leverage the CA/B Forum to prevent issues.
>
> That is, imagine if this was 4 years ago, and TCSCs were the vogue,
> and as a result, most major sites had 5 year 1024-bit certificates.
> The browser wants the lock to signify something - that there's some
> reasonable assurance of confidentiality, integrity, and authenticity.
> Yet neither 5 year certs nor 1024-bit certificates met that bar.
>

The fundamental problem is that web browsers accept certificates with
validity periods that are years long. If you want to have the agility to
fix things with an N month turnaround, reject certificates that are valid
for more than N months.

In fact, since TCSCs that use name constraints as the technical constraint
basically do not exist, you could even apply stricter enforcement to them
than to other certificates. For example, externally-operated name
constrained intermediates could be limited to 12 months of validity even if
other certificates aren't so restricted. Just make sure you actually
enforce it in the browser.
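
Enforcing that in the browser is a tiny check; a minimal sketch, assuming a
hypothetical 12-month cap:

    from datetime import timedelta

    MAX_TCSC_LIFETIME = timedelta(days=365)  # hypothetical 12-month cap

    def tcsc_lifetime_ok(not_before, not_after):
        # Applies only to externally-operated, name-constrained
        # intermediates; other certificates keep their existing limits.
        return not_after - not_before <= MAX_TCSC_LIFETIME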

If you have a better plan for getting people to actually issue TCSCs of the
name constrained variety, let's hear it.

Cheers,
Brian.
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Technically Constrained Sub-CAs

2016-11-17 Thread Brian Smith
Nick Lamb  wrote:

> There's a recurring pattern in most of the examples. A technical
> counter-measure would be possible, therefore you suppose it's OK to
> screw-up and the counter-measure saves us.


Right.


> I believe this is the wrong attitude. These counter-measures are defence
> in depth. We need this defence because people will screw up, but that
> doesn't make screwing up OK.
>

With that attitude, CAs would never issue intermediate CAs with name
constraints as the technical constraint on reasonable terms (not costing a
fortune, not forcing you to let the issuing CA have the private key), and
key pinning would remain too dangerous for the vast majority of sites to
ever deploy. Giving up those things would be a huge cost. What's the actual
benefit to end users in giving them up?

(Note: Key pinning isn't the only advantage to being able to freely operate
your own intermediate CA.)

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Technically Constrained Sub-CAs

2016-11-17 Thread Brian Smith
On Mon, Nov 14, 2016 at 6:39 PM, Ryan Sleevi  wrote:

> As Andrew Ayer points out, currently, CAs are required to ensure TCSCs
> comply with the BRs. Non-compliance is misissuance. Does Mozilla share
> that view? And is Mozilla willing to surrender the ability to detect
> misissuance, in favor of something which clearly doesn't address the
> use cases for redaction identified in the CA/Browser Forum and in the
> IETF?
>

I don't agree that a third-party TCSC failing to conform to the BRs should
be considered misissuance in every case, when the technical constraint is
name constraints.

Let's run with an example where I am Example Corp, I own example.com, I
want to get a name-constrained CA certificate for example.com and
*.example.com.
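
For concreteness, this is roughly what such a technically-constrained
intermediate looks like. A sketch using Python's cryptography package; the
names, keys, and one-year lifetime are hypothetical:

    from datetime import datetime, timedelta, timezone
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    root_key = ec.generate_private_key(ec.SECP256R1())  # issuing CA's key
    sub_key = ec.generate_private_key(ec.SECP256R1())   # Example Corp's key

    now = datetime.now(timezone.utc)
    tcsc = (
        x509.CertificateBuilder()
        .subject_name(x509.Name(
            [x509.NameAttribute(NameOID.COMMON_NAME, "Example Corp CA")]))
        .issuer_name(x509.Name(
            [x509.NameAttribute(NameOID.COMMON_NAME, "Some Root CA")]))
        .public_key(sub_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=0),
                       critical=True)
        # The technical constraint: a permitted subtree of "example.com"
        # covers example.com and all of its subdomains, and nothing else.
        .add_extension(
            x509.NameConstraints(
                permitted_subtrees=[x509.DNSName("example.com")],
                excluded_subtrees=None,
            ),
            critical=True,
        )
        .sign(root_key, hashes.SHA256())
    )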

Let's say I screw up something and accidentally issue a certificate from my
sub-CA for google.com or addons.mozilla.org. Because of the name
constraints, this is a non-issue and shouldn't result in any sanctions on
the original root CA or Example Corp. (Note that this means that relying
parties need to implement name constraints, as Mozilla products do, and so
this should be listed as a prerequisite for using Mozilla's trust anchor
list in any non-Mozilla product.)

Let's say I issue a SHA-1-signed certificate for
credit-card-readers.example.com. Again, that's 100% OK, if unfortunate,
because after 2017-1-1 one shouldn't be using Mozilla's trust store in a
web browser or similar consumer product if they accept SHA-1-signed
certificates.

Let's say that the private key for https://www.example.com gets
compromised, but I didn't create any revocation structure so I can't revoke
the certificate for that key. That's really, really, unfortunate, and that
highlights a significant problem with the definition of name-constrained
TCSCs now. In particular, it should be required that the name-constrained
intermediate be marked using this mechanism
https://tools.ietf.org/html/rfc7633#section-4.2.2 in order to be considered
technically-constrained.

Let's say I issue a malformed certificate that is rejected from my
name-constrained intermediate. Again, IMO, we simply shouldn't care too
much. The important thing is that implementations don't implement
workarounds to accommodate such broken certificates.

Let's say I issue a SHA-2 certificate that is valid for 20 years from my
name-constrained certificate. Again, that is not good, but it won't matter
as long as clients are rejecting certificates that are valid for too long,
for whatever definition of "too long" is decided.

Why is it so important to be lenient like this for name-constrained TCSCs?
One big reason is that HPKP is dangerous to use now. Key pinning is really
important, so we should fix it by making it less dangerous. The clearest
way to make it safer is to pin only the public keys of multiple TCSCs,
where each public key is in an intermediate issued by multiple CAs. But,
basically no CAs are even offering TCSCs using name constraints as the
technical constraint, which means that websites can't do this, and so for
the most part can't safely use key pinning. Absolving CAs from having to
babysit their customers' use of their certificates will make it more
practical for them to make this possible.
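
As a sketch of why that pinning scheme is safer: an HPKP-style check only
needs one pinned key to appear somewhere in the served chain, so a site
pinning two TCSC public keys (each TCSC issued by multiple CAs) keeps
working as long as either key is present. Roughly (names illustrative):

    import hashlib

    def chain_matches_pins(chain_spki_der, pinned_spki_sha256):
        # At least one SubjectPublicKeyInfo in the served chain must hash
        # to one of the site's pinned SHA-256 values.
        chain_hashes = {hashlib.sha256(spki).digest()
                        for spki in chain_spki_der}
        return not chain_hashes.isdisjoint(pinned_spki_sha256)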

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Proposed limited exception to SHA-1 issuance

2016-02-25 Thread Brian Smith
Gervase Markham  wrote:

> On 23/02/16 18:57, Gervase Markham wrote:
> > Mozilla and other browsers have been approached by Worldpay, a large
> > payment processor, via Symantec, their CA. They have been transitioning
> > to SHA-2 but due to an oversight have failed to do so in time for a
> > portion of their infrastructure, and failed to renew some SHA-1 server
> > certificates before the issuance deadline of 31st December 2015.
>
> In relation to this issue, we just published a blog post:
>
> https://blog.mozilla.org/security/2016/02/24/payment-processors-still-using-weak-crypto/


This is all very disappointing. Effectively, Mozilla is punishing,
economically, all of WorldPay's and Symantec's competitors who spent real
money and/or turned down money in an effort to comply with Mozilla's
guidance on SHA-1. Meanwhile, no doubt Symantec receives a hefty fee in
return for issuing these certificates. Thus, Mozilla has effectively
reversed the economic incentives for CAs so that it is profitable to go
against Mozilla's initiatives to improve web security. And, in the course
of doing so, Mozilla has damaged its own credibility and reduced leverage
in enforcing its CA policies going forward.

Even worse, Firefox still hasn't been changed to block SHA-1 certificates
that chain to publicly-trusted CAs with a notBefore date after 2016-01-01.
After I left Mozilla, I continued to work on mozilla::pkix in part to make
it easy for Mozilla to implement such blocking, specifically, so I know as
well as anybody that it is easy to do. If such blocking were implemented
then Firefox users wouldn't even be affected by the above-mentioned
certificates. This was (is) an opportunity for Firefox to lead other
browsers in at least a small part of certificate security. The existing bug
[1] for this was closed when the botched attempt to implement it was
checked in, but it wasn't re-opened when the botched patch was reverted.
I've reopened the bug. It would be great to see somebody working on it.

Even the bug about passively warning users about SHA-1 certificates in the
chain [2] is currently assigned to *nobody*. AFAICT, Google Chrome has been
doing this since 2014. Firefox needs to catch up, at least.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=942515
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1183718

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: [FORGED] Name issues in public certificates

2015-11-18 Thread Brian Smith
Peter Bowen  wrote:

> 2) For commonName attributes in subject DNs, clarify that they can only
> contain:
>
> - IPv4 address in dotted-decimal notation (specified as IPv4address
> from section 3.2.2 of RFC 3986)
> - IPv6 address in coloned-hexadecimal notation (specified as
> IPv6address from section 3.2.2 of RFC 3986)
> - Fully Qualified Domain Name or Wildcard Domain Name in the
> "preferred name syntax" (specified by Section 3.5 of RFC1034 and as
> modified by Section 2.1 of RFC1123)
> - Fully Qualified Domain Name or Wildcard Domain Name in containing
> u-labels (as specified in RFC 5890)


> 3) Forbid commonName attributes in subject DNs from containing a Fully
> Qualified Domain Name or Wildcard Domain Name that contains both one
> or more u-labels and one or more a-labels (as specified in RFC 5890).
>

I don't think these rules are necessary, because CAs are already required
to encode all this information in the SAN, and if there is a SAN with a
dNSName and/or iPAddress the browser is required to ignore the subject CNs.
That is, if the certificate has a SAN with a dNSName and/or iPAddress entry,
then it doesn't really matter how the CN is encoded as long as it isn't
misleading.
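
A sketch of that rule as a verifier would apply it (names illustrative):

    def reference_names(san_dns, san_ips, subject_cns):
        # If the SAN carries any dNSName or iPAddress entries, the
        # subject CNs are ignored entirely; otherwise the verifier may
        # fall back to the CNs.
        if san_dns or san_ips:
            return list(san_dns) + list(san_ips)
        return list(subject_cns)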


> If the Forum decides to allow an exception to RFC 5280 to permit IP
> address strings in dNSName general names, then require the same format
> as allowed for common names.
>

That should not be done. As I mentioned in my other reply in this thread,
Ryan Sleevi already described a workaround that seems to work very well:
Encode the IP addresses in the SubjectAltName as iPAddress entries, and
then also encode them as (normalized) ASCII dotted/colon-separated text in
the subject CN, using more than one subject CN if there is more than one IP
address.
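
A sketch of that workaround using Python's cryptography package (the
addresses are from documentation ranges, purely illustrative):

    import ipaddress
    from cryptography import x509
    from cryptography.x509.oid import NameOID

    ips = [ipaddress.ip_address("203.0.113.7"),
           ipaddress.ip_address("2001:db8::1")]

    # The SAN carries the entries that conforming verifiers actually use...
    san = x509.SubjectAlternativeName([x509.IPAddress(ip) for ip in ips])

    # ...and the subject repeats each address as normalized text, one CN
    # per address, for clients that still display or match on the CN.
    subject = x509.Name(
        [x509.NameAttribute(NameOID.COMMON_NAME, str(ip)) for ip in ips]
    )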

By the way, I believe that mozilla::pkix will reject all the invalid names
that you found, except it accepts "_" in dNSNames. If you found some names
that mozilla::pkix accepts that you think are invalid, I would love to hear
about that.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: [FORGED] Name issues in public certificates

2015-11-18 Thread Brian Smith
On Tue, Nov 17, 2015 at 4:40 PM, Richard Wang  wrote:

> So WoSign only left IP address issue that we added both IP address and DNS
> Name since some browser have warning for IP address only in SAN.
>

Put the IP addresses in the SAN as an iPAddress and then also put them in
the Subject CN, one CN per IP address. Then all browsers will accept the
certs and they will conform to the baseline requirements (IIUC).

Note that this is Ryan Sleevi's good idea.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal -- Remove Email Trust Bit

2015-10-15 Thread Brian Smith
On Tue, Oct 13, 2015 at 5:04 AM, Kathleen Wilson 
wrote:

> I believe that such a resource commitment would satisfy all of the
> arguments against the Email trust bit that Ryan so eloquently summarized.
> [3]
>
> Is this a fair assessment?
>
> Is there anything else that should be added to the "job description" above?


I think your summary of what needs to be done with respect to the email
trust bit is good.

In an earlier message, you mentioned the idea of splitting the S/MIME
policy into a separate document from the TLS policy. I think that such a
split would be good and I think it should happen early on in the process
for version 2.3 of the policy. In particular, such a split would enable us
to have simpler language in the TLS policy, especially with respect to the
Extended Key Usage (EKU) extension.

I also think it would be good to have CAs apply for the TLS trust bit
separately from the email trust bit. In particular, when it comes time for
the public review of a CA inclusion request or update, I think it would be
better to have a separate email threads for the public discussions of the
granting of the TLS trust bit and the granting of the S/MIME trust bit, for
the same CA.

Note that certificates for TLS and for S/MIME are much more different than
they may at first appear. In particular, it is very reasonable to have a
public log of issued certificates for TLS (Certificate Transparency) and
revocation via short-lived certificates and OCSP stapling should eventually
work. However, email certificates often contain personally identifiable
information (PII) and it isn't clear how to deal with that in CT. Also, the
privacy/security trade-off for revocation checking for S/MIME is much
different--much more difficult--than for TLS. So, I expect the technical
aspects of the TLS and S/MIME policies to be quite different going forward.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal -- Align with RFC 3647 now

2015-10-15 Thread Brian Smith
Ryan Sleevi  wrote:

> On Thu, October 15, 2015 12:30 pm, Kathleen Wilson wrote:
> >  It was previously suggested[1] that we align Mozilla's CA Certificate
> >  Policy to RFC 3647, so CAs can compare their CP/CPS side-by-side with
> >  Mozilla's policy, as well as the BRs and audit criteria (such as the
> >  forthcoming ETSI 319 411 series).
>
> Kathleen,
>
> I remain incredibly dubious and skeptical of the proposed value, and thus
> somewhat opposed. Though I've been a big proponent of adopting the 3647
> format for the CA/Browser Forum documents, I don't believe that root store
> requirements naturally fit into that form, nor should they.


I agree with Ryan. The organization of Mozilla's policy is good. The
technical requirements need to be improved. We should focus on improving
the technical requirements, not the organization.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Fwd: Policy Update Proposal: Remove Code Signing Trust Bit

2015-10-02 Thread Brian Smith
-- Forwarded message --
From: Brian Smith <br...@briansmith.org>
Date: Thu, Oct 1, 2015 at 7:15 AM
Subject: Re: Policy Update Proposal: Remove Code Signing Trust Bit
To: Gervase Markham <g...@mozilla.org>
Cc: "kirk_h...@trendmicro.com" <kirk_h...@trendmicro.com>


On Wed, Sep 30, 2015 at 11:05 PM, Gervase Markham <g...@mozilla.org> wrote:

> On 01/10/15 02:43, Brian Smith wrote:
> > Perhaps nobody's is, and the whole idea of using publicly-trusted CAs for
> > code signing and email certs is flawed and so nobody should do this.
>
> I think we should divide code-signing and email here. I can see how one
> might make an argument that using Mozilla's list for code-signing is not
> a good idea; a vendor trusting code-signing certs on their platform
> should choose which CAs they trust themselves.
>
> But if there is no widely-trusted set of email roots, what will that do
> for S/MIME interoperability?
>

First of all, there is a widely-trusted set of email roots: Microsoft's.
Secondly, there's no indication that having a widely-trusted set of email
roots *even makes sense*. Nobody has shown any credible evidence that it
even makes sense to use publicly-trusted CAs for S/MIME. History has shown
that almost nobody wants to use publicly-trusted CAs for S/MIME, or even
S/MIME at all.

Further, there's been actual evidence presented that Mozilla's S/MIME
software is not trustworthy due to lack of maintenance. And, really, what
does Mozilla even know about S/MIME? IIRC, most of the S/MIME stuff in
Mozilla products was made by Sun Microsystems. (Note: Oracle acquired Sun
Microsystems in January 2010. But, I don't remember any Oracle
contributions related to S/MIME. So, yes, I really mean Sun Microsystems
that hasn't even existed for almost 6 years.)

Cheers,
Brian
-- 
https://briansmith.org/




-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Fwd: Policy Update Proposal: Remove Code Signing Trust Bit

2015-10-02 Thread Brian Smith
On Fri, Oct 2, 2015 at 7:41 AM, Joshua Cranmer  <pidgeo...@gmail.com>
wrote:

> On 10/2/2015 11:36 AM, Brian Smith wrote:
>
>> First of all, there is a widely-trusted set of email roots: Microsoft's.
>> Secondly, there's no indication that having a widely-trusted set of email
>> roots *even makes sense*. Nobody has shown any credible evidence that it
>> even makes sense to use publicly-trusted CAs for S/MIME. History has shown
>> that almost nobody wants to use publicly-trusted CAs for S/MIME, or even
>> S/MIME at all.
>>
>
> There is demonstrably more use of S/MIME than PGP. So, by extension of
> your argument, almost nobody wants to use secure email, and there is
> therefore no point in supporting them.


I think it is fair to say the empirical evidence does support the claim
that the vast majority of people don't want to, or can't, use S/MIME or GPG
as they exist today. I do think that almost everybody does want secure
email, though, if we can find a way to give it to them that they can
actually use.


> I do realize that I'm using strong language, but this does feel to me to
> be part of a campaign to intentionally sabotage Thunderbird development
> simply because it's not Firefox


It is much simpler than that: I don't want the S/MIME-related stuff to keep
getting in the way of the SSL-related stuff in Mozilla's CA inclusion
policy. People argue that the S/MIME stuff must keep being in the way of
the SSL-related stuff because of Thunderbird and other NSS-related
projects. I just want to point out that the "dereliction of duty," as you
put it, in the maintenance of that software seems to make that argument
dubious, at best.

Cheers,
Brian
--
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal: Remove Code Signing Trust Bit

2015-09-30 Thread Brian Smith
On Wed, Sep 30, 2015 at 3:11 PM, kirk_h...@trendmicro.com <
kirk_h...@trendmicro.com> wrote:

> The Mozilla NSS root store is used by some well-known applications as
> discussed, but also by many unknown applications.  If the trust bits are
> removed, CAs who issue code signing or email certs may find multiple
> environments dependent on the NSS root store where the CA's products will
> no longer work - and we don't have a list of those environments today.
>

That's OK.


> Mozilla does a sensible public review of a CA's practices for code signing
> and email certs before turning on the trust bits - and if Mozilla's review
> isn't sufficient, whose is?


Perhaps nobody's is, and the whole idea of using publicly-trusted CAs for
code signing and email certs is flawed and so nobody should do this.


> Who can conduct this review better than Mozilla?  (Answer: no one, and no
> one else will bother to do the review.).


If nobody will do it then that means nobody thinks it is important enough
to invest in. Why should Mozilla bother doing it if nobody cares enough to
invest in it?


> Without Mozilla trust bits, the trustworthiness of these types of certs
> will likely go down.
>

Isn't that a good thing? If the issuing policies have been insufficiently
reviewed, then that means Mozilla's current endorsement of these CAs is
misleading people into trusting these certs more than they should be.
Dropping these trust bits would be a clear sign that trust in these certs
should be re-evaluated, which is a good thing.


> Finally, if the trust bits are turned off, I'm concerned that some
> applications that use code signing and email certs will just go static on
> their trusted roots


A vendor that does that is a bad vendor with bad judgement and you should
probably not trust any of their products.


> Trusted by default, but can lose the trust bits by bad actions.
>

I wish you would have led with this completely ridiculous suggestion
instead of the only-slightly-less ridiculous stuff that preceded it.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal -- Specify audit criteria according to trust bit

2015-09-22 Thread Brian Smith
Kathleen Wilson  wrote:

> Arguments for removing the Email trust bit:
> - Mozilla's policies regarding Email certificates are not currently
> sufficient.
> - What else?
>
>
* It isn't clear that S/MIME using certificates from publicly-trusted CAs
is a model of email security that is worth supporting. Alternatives with
different models exist, such as GPG and TextSecure. IMO, the TextSecure
model is more in line with what Mozilla is about than the S/MIME model.

* It is better to spend energy improving TLS-related work than
S/MIME-related stuff. The S/MIME stuff distracts too much from the TLS work.

* We can simplify the policy and tighten up the policy language more if the
policy only has to deal with TLS certificates.

* Mozilla's S/MIME processing isn't well supported. Large parts of it are
out of date and the people who maintain the certificate validation logic
aren't required to keep S/MIME stuff working. In particular, it is OK
according to current development policies for us to change Gecko's
certificate validation logic so that it works for SSL but doesn't
(completely) work for S/MIME. So, basically, Mozilla doesn't implement
software that can properly use S/MIME certificates, as far as we know.

Just to make sure people understand the last point: I think it is great
that people try to maintain Thunderbird. But, it was a huge burden on Gecko
developers to maintain Thunderbird on top of maintaining Firefox, and some
of us (including me, when I worked at Mozilla) lobbied for a policy change
that let us do our work without consideration for Thunderbird. Thus, when
we completely replaced the certificate verification logic in Gecko last
year, we didn't check how it affected Thunderbird's S/MIME processing.
Somebody from the Thunderbird maintenance team was supposed to do so, but I
doubt anybody actually did. So, it would be prudent to assume that
Thunderbird's S/MIME certificate validation is broken.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal -- Refer to BRs for Name ConstraintsRequirement

2015-09-22 Thread Brian Smith
On Tue, Sep 22, 2015 at 12:51 AM, Rob Stradling <rob.stradl...@comodo.com>
wrote:

> On 22/09/15 01:01, Brian Smith wrote:
> 
>
>> But, if the intermediate CA certificate is allowed to issue SSL
>> certificates, then including the EKU extension with id-kp-serverAuth is
>> just wasting space. Mozilla's software assumes that when the intermediate
>> CA certificate does not have an EKU, then the certificate is valid for all
>> uses. So, including an EKU with id-kp-serverAuth is redundant. And, the
>> wasting of space within certificates has material consequences that affect
>> performance and thus indirectly security.
>>
>
> Brian,
>
> Given that the BRs require id-kp-serverAuth in Technically Constrained
> intermediates, what would be the point of Mozilla dropping that same
> requirement?
>
> There seems little point providing options that, in reality, CAs are never
> permitted to choose.


It would be the first step towards changing the BRs in the analogous manner.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal -- Refer to BRs for Name Constraints Requirement

2015-09-21 Thread Brian Smith
On Mon, Sep 21, 2015 at 4:02 PM, Kathleen Wilson 
wrote:

> Section 7.1.5 of version 1.3 of the Baseline Requirements says:
> The proposal is to simplify item #9 of the Inclusion Policy,
>
> https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/inclusion/
> by referring to the BRs in the first bullet point, as follows:
> ~~
> We encourage CAs to technically constrain all subordinate CA certificates.
> For a certificate to be considered technically constrained, the certificate
> MUST include an Extended Key Usage (EKU) extension specifying all extended
> key usages that the subordinate CA is authorized to issue certificates for.


I think it is better to resolve whether email certificates and code signing
certificates are in or out of scope for Mozilla's policy first. I would
prefer that email and code signing certificates be considered out of
scope. In that case, the requirement that the intermediate certificate must
contain an EKU extension can clearly be removed.

The EKU-in-intermediate-certificate mechanism is most useful for the case
where the intermediate CA is NOT allowed to issue SSL certificates. In that
case, the EKU extension MUST be included, and the id-kp-serverAuth OID must
NOT be included in it.

But, if the intermediate CA certificate is allowed to issue SSL
certificates, then including the EKU extension with id-kp-serverAuth is
just wasting space. Mozilla's software assumes that when the intermediate
CA certificate does not have an EKU, then the certificate is valid for all
uses. So, including an EKU with id-kp-serverAuth is redundant. And, the
wasting of space within certificates has material consequences that affect
performance and thus indirectly security.
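
A sketch of that assumption as a verifier would apply it (id-kp-serverAuth
is OID 1.3.6.1.5.5.7.3.1):

    ID_KP_SERVER_AUTH = "1.3.6.1.5.5.7.3.1"

    def intermediate_usable_for_tls(eku_oids):
        # eku_oids is None when the intermediate has no EKU extension,
        # which is treated as "valid for all uses"; if the extension is
        # present, it must assert id-kp-serverAuth.
        return eku_oids is None or ID_KP_SERVER_AUTH in eku_oids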



> - If the certificate includes the id-kp-emailProtection extended key
> usage, then all end-entity certificates MUST only include e-mail addresses
> or mailboxes that the issuing CA has confirmed (via technical and/or
> business controls) that the subordinate CA is authorized to use.
> - If the certificate includes the id-kp-codeSigning extended key usage,
> then the certificate MUST contain a directoryName permittedSubtrees
> constraint where each permittedSubtree contains the organizationName,
> localityName (where relevant), stateOrProvinceName (where relevant) and
> countryName fields of an address that the issuing CA has confirmed belongs
> to the subordinate CA.
> ~~
>

These requirements can be removed pending the resolution of the
email/code-signing trust bit issues. If Mozilla is only going to manage a
root program for SSL certificates, then it shouldn't impose requirements on
certificates that are marked (by having an EKU extension that does not
assert id-kp-serverAuth) as not relevant to SSL.

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Pre-cert misissuance

2015-09-19 Thread Brian Smith
On Sat, Sep 19, 2015 at 7:20 AM, Gervase Markham  wrote:

> Symantec just fired people for mis-issuing a google.com 1-day pre-cert:
>
> http://www.symantec.com/connect/blogs/tough-day-leaders
>
>
> http://googleonlinesecurity.blogspot.co.uk/2015/09/improved-digital-certificate-security.html
>
> Google: "Our primary consideration in these situations is always the
> security and privacy of our users; we currently do not have reason to
> believe they were at risk."
>
> Gerv
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>

People have been fired for worse reasons.

Good job, Google!

Cheers,
Brian
-- 
https://briansmith.org/
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-17 Thread Brian Smith
Gervase Markham g...@mozilla.org wrote:

 On 06/06/15 02:12, Brian Smith wrote:
  Richard Barnes rbar...@mozilla.com wrote:
 
  Small CAs are a bad risk/reward trade-off.
 
  Why do CAs with small scope even get added to Mozilla's root program in the
  first place? Why not just say "your scope is too limited to be worthwhile
  for us to include"?

 There's the difficultly. All large CAs start off as (one or more :-)
 small CAs. If we admit no small CAs, we freeze the market with its
 current players.


By "small scope," I'm referring to CAs who limit their scope to a certain
geographical region, language, or type of institution.

For example, thinking about it more, I think it is bad to include
government-only CAs at all, because including government-only CAs means
that there would eventually be 196 government-only CAs and if each one has
just 1 ECDSA root and 1 RSA root, then that's 392 roots to deal with. If we
assume that every government will eventually want as many roots as Symantec
has, then there will be thousands of government-only roots. It's not
reasonable.

StartCom and Let's Encrypt are converse examples, because even though they
had issued certificates to zero websites when they started, their intent is
to issue certificates to every website.

For example, if Amazon had applied with a CP/CPS that limited the scope to
only their customers, then I would consider them to have too small a scope
to be included.

 Mozilla already tried that with the HARICA CA. But, the result was somewhat
 nonsensical because there is no way to explain the intended scope of HARICA
 precisely enough in terms of name constraints.

 Can you expand on that a little?


I did, in my original message. HARICA's constraint includes *.org, which is
much broader in scope than they intend to issue certificates for. dNSName
constraints can't describe HARICA's scope.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA scope transparency (was Re: Name-constraining government CAs, or not)

2015-06-05 Thread Brian Smith
Richard Barnes rbar...@mozilla.com wrote:

 Small CAs are a bad risk/reward trade-off.


Why do CAs with small scope even get added to Mozilla's root program in the
first place? Why not just say "your scope is too limited to be worthwhile
for us to include"?


 One way to balance this equation better is to scope the risk to the scope
 of the CA.  If a CA is only serving a small slice of the web, then they
 should only be able to harm a small slice of the web.  A CA should only be
 able to harm the entire web if it's providing benefit to a significant part
 of it.

 I wonder if we can agree on this general point -- That it would be
 beneficial to the PKI if we could create a mechanism by which CAs could
 disclose the scope of their operations, so that relying parties could
 recognize when the CA makes a mistake or a compromise that goes outside
 that scope, and prevent harm being done.


Mozilla already tried that with the HARICA CA. But, the result was somewhat
nonsensical because there is no way to explain the intended scope of HARICA
precisely enough in terms of name constraints.


 I think of this as CA scope transparency.  Not constraining what the CAs
 do, but asking them to be transparent about what they do.  That way if they
 do something they said they don't do, we can recognize it and reject it
 proactively.


In general, it sounds sensible. But, just like when we try to figure out
ways to restrict government CAs, it seems like when we look at the details, we
see that the value of the name constraints seems fairly limited. For
example, in the HARICA case, their name constraint still includes *.org
which means they can issue certificates for *.mozilla.org which means they
are a risk to the security of the Firefox browser (just like any other CA
that can issue for *.mozilla.org) except when the risk is limited by key
pinning.

It would be illustrative to see the list of CAs that volunteer to be
constrained such that they cannot issue certificates for any domains in
*.com. It seems like there are not many such CAs. Without having some way
to protect *.com, what's the point?

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Name-constraining government CAs, or not

2015-05-31 Thread Brian Smith
On Sun, May 31, 2015 at 12:43 PM, Ryan Sleevi 
ryan-mozdevsecpol...@sleevi.com wrote:

 However, that you later bring in the idea that governments may pass laws
 that make it illegal for browsers to take enforcement action is, arguably,
 without merit or evidence. If we accept that governments may pass laws to
 do X, then we can also logically assume two related statements:

 1) Governments may pass laws to compel a CA to issue a certificate to a
 party other than a domain applicant.
 2) Governments may pass laws to compel browser makers to include
 particular roots as trusted.

 The added tinge of uncertainty "In fact, it might already be illegal to do
 so in some circumstances" adds to the fear and doubt already sown here.


The practical effect of FUD is really the concern I am raising. The
question I was responding to is basically "How is the threat model
different for government CAs vs. non-government CAs?" Are browser makers
going to have additional fear, uncertainty, and doubt about taking
enforcement action that would negatively impact governments' essential
services? I think the publicly-available information on the DigiNotar and
ANSSI incidents at least hints that the answer is "yes." Do governments
have an unfair advantage as far as legal action regarding root removal is
concerned, to the point where browser makers should be especially concerned
about legal fights with them? I think it's kind of obvious that the answer
is "yes," though I agree my reasoning is based on speculation.

I agree with you that these concerns also apply in scenerios where
governments have some kind of relationship with a commercial CA.


 After all, why wouldn't we argue that the risk of being sued for tortious
 interference exists if a browser removes trust, ergo they can't enforce
 their CA policies in any meaningful way?


It seems like a safe bet that whoever has the most money is most likely to
win a legal battle. Governments generally have more money than anybody
else. It's better to avoid getting into such legal battles in the first
place.

  More generally, browsers should encourage CAs to agree to name
  constraints, regardless of the government status of the CA.

 Of this, I absolutely agree. But I think there's a fine line between
 encouraging and requiring, and how it's presented is key.

 Most importantly, I don't believe for a second that constraints justify a
 relaxation of security policy - they're an optional control for the CA to
 opt-in to, as a means of reducing their exposure.

 Name constraints can't replace compliance with the Mozilla Security
 Policy, nor should it, in part or in whole.


My hope, which may be too optimistic, is that the USG is content with
issuing itself certificates only for *.gov and *.mil, that they are willing
to be constrained to issuing certificates only to *.gov and *.mil, and that
we can easily help them get to the point where they are **effectively**
constrained to *.gov and *.mil. More generally, I hope that other
government CAs would also do likewise. Obviously there are a lot of cases
where the distinction between government CA and commercial CA is blurred,
but I don't think we should let those less clear situations stop us from
working with governments to improve the situation for situations where the
distinction is clear and meaningful.


 To be clear, my words are strong here because your argument is so
 appealing and so enchanting in a world of post-Snowdonia, in which "trust
 no one" is the phrase du jour, in which the NSA is the secret puppet
 master of all of NIST's activities, and in which vanity crypto sees a
 great resurgence. Your distinctions of government CAs, and their ability,
 while well intentioned, rest on arguments that are logically unsound and
 suspect, though highly appetizing for those who aren't following the
 matter closely.


I didn't and don't intend to make that kind of argument. My argument is
simpler: I expect less enforcement action against government CAs than
against commercial CAs by browser makers and I expect more incidents of
non-compliance from government CAs. In the situations where
governments--indeed any CAs--are willing to accept constraints, at least,
we should add the constraints, to reduce the negative consequences of a
government CA being non-compliant and a browser delaying or forgoing
enforcement action.

I freely admit that my thinking is based on speculative inferences from my
understanding of the publicly-available history of browser CA policy
enforcement. I agree with you that the browser makers shouldn't delay or
forgo enforcement in these situations. In a perfect world we wouldn't need
to consider mitigations for if/when they do.

My specific concern is that, while some proposals for the use of name
constraints aren't good, this discussion is moving toward us not trying
very hard to convince governments to accept very useful and practical name
constraints. Name constraints do have a place in this.

Cheers,
Brian

Re: Requirements for CNNIC re-application

2015-05-30 Thread Brian Smith
On Tue, May 26, 2015 at 5:50 AM, Gervase Markham g...@mozilla.org wrote:

 On 24/05/15 06:19, percyal...@gmail.com wrote:
  This is Percy from GreatFire.org. We have long advocated for the
  revoking of CNNIC.
 
 https://www.google.com/webhp?sourceid=chrome-instantion=1espv=2ie=UTF-8#q=site%3Agreatfire.org%20cnnic
 
   If CNNIC were to be re-included, CT MUST be implemented.

 At the moment, Mozilla does not have an official position of support for
 CT - we are watching with interest :-) Therefore, it's not really
 appropriate for Mozilla to mandate CT-related things as conditions of
 reinclusion for CNNIC.


We should be careful we don't turn that into "Mozilla doesn't implement
CT, so Mozilla has to allow CNNIC back in without requiring CT, even if it
would be clearly less safe to do so." A better interpretation would be
"Mozilla can't let CNNIC back in until it implements CT or similar,
because doing so would be clearly less safe."

By the way, what is Firefox's market share in China and other places that
commonly use CNNIC-issued certificates? My understanding is that it is
close to 0%. That's why it was relatively easy to remove them in the first
place. It also means that there's no need to rush to add them back, AFAICT.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Name-constraining government CAs, or not

2015-05-30 Thread Brian Smith
Gervase Markham g...@mozilla.org wrote:

 1) Is the security analysis relating to government CAs, as a class,
 different to that relating to commercial CAs? If so, how exactly?


It seems reasonable to assume that governments that have publicly-trusted
roots will provide essential government services from websites secured
using certificates that depend on those roots staying publicly-trusted.
Further, it is likely that, especially in the long run, they will do
things, including pass legislation, that would make it difficult for them
to offer these services using certificates issued by CAs other than
themselves, as being in full control will be seen as being a national
security issue. Further, governments may even pass laws that make it
illegal for browser makers to take any enforcement action that would reduce
or eliminate access to these government services. In fact, it might already
be illegal to do so in some circumstances.

The main stick that browsers have in enforcing their CA policies is the
threat of removal. However, such a threat seems completely empty when
removal means that essential government services become inaccessible and
when the removal would likely lead to, at best, a protracted legal battle
with the government--perhaps in a secret court. Instead, it is likely that
browser makers would find that they cannot enforce their CA policies in any
meaningful way against government CAs. Thus, government CAs' ability to
create and enforce real-world laws likely will make them above the law as
far as browsers' CA policies are concerned.

Accordingly, when a browser maker adds a government CA to their default
trust store, and especially when that government CA has jurisdiction over
them, the browser maker should assume that they will never be able to
enforce any aspect of their policy for that CA in a way that would affect
existing websites that use that CA. And, they will probably never be able
to remove that CA, even if that CA were to be found to mis-issue
certificates or even if that CA established a policy of openly
man-in-the-middling websites.

IIRC, in the past, we've seen CAs that lapse in compliance with Mozilla's
CA policies and that have claimed they cannot do the work to become
compliant again until new legislation has passed to authorize their budget.
These episodes are mild examples show that government legislative processes
already have a negative impact on government CAs' compliance with browsers'
CA policies.

2) If it is different, does name-constraining government CAs make
 things better, or not?


Name constraints would allow governments that insist on providing
government services using certificates that they've issued themselves to do
so in a way that is totally independent of any browser policies. When a
government agrees to the name constraints, as in the case of the US FPKI,
it seems like a no-brainer to add them.

More generally, browsers should encourage CAs to agree to name constraints,
regardless of the government status of the CA.

As far as what to do when a CA that is--or seems like--a government CA
wants to be able to issue certificates for everybody, I agree with Ryan
Sleevi and the other Googlers. In general, it seems like CT or similar
technology is needed to deal with the fact that browsers have (probably)
admitted, and will admit, untrustworthy CAs into their programs.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DRAFT of next CA Communication

2015-04-13 Thread Brian Smith
Kathleen Wilson kwil...@mozilla.com wrote:
 ACTION #4
 Workarounds were implemented to allow mozilla::pkix to handle the things
 listed here:
 https://wiki.mozilla.org/SecurityEngineering/mozpkix-testing#Things_for_CAs_to_Fix

Hi Kathleen,

Thanks for including this in the CA communication.

That list of workarounds is out of date. I think it would be useful to
re-triage the fixed and still-open bugs in the PSM component related
to certificate verification and look for ones that were fixed by
implementing a workaround for a certificate with malformed or
deprecated content.

For example, here are some other things that should be on the list:

* Bug 1152515: CAs should ensure that all times in all certificates
are encoded in a way that conforms to the stricter requirements in
RFC 5280. In particular, the timezone must always be specified as Z
(Zulu/GMT). (See the sketch after this list.)

* CAs should ensure, when signing OCSP responses with a delegated OCSP
response signing certificate, that the delegated OCSP response signing
certificate will not expire before the OCSP response expires.
Otherwise, when doing OCSP stapling, some servers will cache the OCSP
response past the point where the delegated response signing
certificate expires, and then Firefox will reject the connection.

* Bug 970760: CAs should ensure that all RSA end-entity certificates
that have a KeyUsage extension include keyEncipherment in the
KeyUsage extension if the subscriber intends for the certificate to be
used for RSA key exchange in TLS. In other words, include
keyEncipherment in RSA certificates--but not ECDSA
certificates--unless the subscriber asks for it not to be included.
This way, Firefox can start enforcing the correct KeyUsage in
certificates sooner.

* CAs must ensure they include the subjectAltName extension with
appropriate dNSName/iPAddress entries in all certificates. Hopefully
soon Firefox and Chrome will be able to stop falling back on the
subject CN when there are no dNSName/iPAddress SAN entries.

* CAs should stop using any string types other than PrintableString
and UTF8String in DirectoryString components of names. In particular,
RFC 5280 says TeletexString, BMPString, and UniversalString are
included for backward compatibility, and SHOULD NOT be used for
certificates for new subjects. Hopefully we will stop accepting
certificates that use those obsolete encodings soon.
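
As promised above, a sketch of the check behind the first item (bug
1152515). RFC 5280 requires UTCTime as YYMMDDHHMMSSZ and GeneralizedTime as
YYYYMMDDHHMMSSZ: seconds present, no fractional seconds, and the zone given
as Z:

    import re

    UTC_TIME = re.compile(rb"\A\d{12}Z\Z")          # DER tag 0x17
    GENERALIZED_TIME = re.compile(rb"\A\d{14}Z\Z")  # DER tag 0x18

    def time_encoding_ok(tag, value):
        # value is the raw content octets of the DER time element.
        if tag == 0x17:
            return UTC_TIME.match(value) is not None
        if tag == 0x18:
            return GENERALIZED_TIME.match(value) is not None
        return False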

There are other issues that should be on that list, but these are the
main ones off the top of my head.

Again, thanks for putting this together.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Requirements for CNNIC re-application

2015-04-10 Thread Brian Smith
Richard Barnes rbar...@mozilla.com wrote:
 My argument is that if we think that CNNIC is likely to cause such
 mis-issuance to occur because it runs the registry for those TLDs,
 then there should be additional controls in place so that control over
 those registries won't result in misissuance.

 Constraining what a registry can do for names over which it is authoritative
 is exactly what things like pinning and CT are for.  So maybe what you're
 actually saying is that there should be a requirement for CT as a check on
 CNNIC's ability to issue even for names for which they are authoritative?

Yes.

If a US-based CA were in a similar situation, would we consider name
constraining them to *.com, *.org, *.net, *.us? No, because that's not
much of a constraint. For people within China and others, a name
constraint of *.cn isn't much different than that. I think such a
constraint gives most of the people on this list a false sense of
resolution, because we *.cn websites aren't relevant to the our
security, so constraining CNNIC to *.cn is basically equivalent to
keeping them out of the program. But, there are many millions of
people for whom the security of *.cn websites does matter, and name
constraints don't help them.

Also, given how things seem to go in China, it seems reasonable to
expect some authorities in China to react to removing or limiting CNNIC
by blocking Let's Encrypt from operating correctly for *.cn and/or for
servers operating in China. Consequently, I'm doubting that building a
wall is ultimately what's best in the long term. The advantage of the
CT-based approach is that it avoids being such a wall.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Consequences of mis-issuance under CNNIC

2015-04-02 Thread Brian Smith
Florian Weimer f...@deneb.enyo.de wrote:
 Gervase Markham wrote:
 On 24/03/15 09:35, Florian Weimer wrote:
 Sadly, name constraints do not work because they do not constrain the
 Common Name field.  The IETF PKIX WG explicitly rejected an erratum
 which corrected this oversight.

 NSS used to be different (before the mozilla::pkix rewrite), but it's
 not PKIX-compliant.

 My understanding is that we continue to constrain the CN field using
 name constraints, even after adopting mozilla::pkix; do you know
 differently?

 I simply have not investigated, my comment was poorly phrased in this
 regard.

mozilla::pkix does enforce name constraints on domain names in the CN
attribute of the subject field.

https://mxr.mozilla.org/mozilla-central/source/security/pkix/test/gtest/pkixnames_tests.cpp#2186
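
The core of that enforcement is simple; a rough sketch of RFC 5280 dNSName
subtree matching, ignoring the extra handling mozilla::pkix does for
leading dots, wildcards, and absolute names:

    def dns_name_in_subtree(name, base):
        # A constraint of "example.com" matches example.com and any of
        # its subdomains, but not, say, badexample.com.
        name = name.lower().rstrip(".")
        base = base.lower().rstrip(".").lstrip(".")
        return name == base or name.endswith("." + base)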

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Require separation between Issuing CAs and Policy CAs

2015-03-25 Thread Brian Smith
Peter Bowen pzbo...@gmail.com wrote:
 One possible solution is to require that all certificates for CAs that
 issue Subscriber certificates (those without CA:TRUE) have zero path
 length constraint in the basic constraints extension. All CAs with
 certificates with a longer allowed path length or no length constraint
 would only be allowed to issue certificate types that a Root CA is
 allowed to issue.

Consider a wildcard certificate for *.example.com. Now, consider an
intermediate CA certificate name constrained to .example.com. I don't
see why it is bad for the same CA certificate to be used to issue
both. In fact, I think it would be problematic to require otherwise,
because it would add friction for websites to switch from wildcard
certificates to name-constrained intermediate certificates. That switch is
generally a good thing.

However, I do see how it could be valuable to separate non-constrained
intermediate CA certificates from the rest, because that would make
HPKP more effective. But that would require not only that a different
CA certificate be used, but also that different keys be used by the CA
certificates.
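
For reference, a rough sketch of the pathLenConstraint mechanics that
Peter's proposal builds on, ignoring RFC 5280's special case for
self-issued certificates:

    def path_len_ok(path_lens):
        # path_lens holds each CA certificate's pathLenConstraint (None
        # if absent), ordered from the root down to the CA that issued
        # the end-entity certificate.
        for i, plen in enumerate(path_lens):
            cas_below = len(path_lens) - i - 1
            if plen is not None and plen < cas_below:
                return False
        return True

Note that a CA certificate with a zero path length constraint can still
issue end-entity certificates, including wildcards; it just cannot issue
further intermediates.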

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Tightening up after the Lenovo and Comodo MITM certificates.

2015-02-24 Thread Brian Smith
Daniel Veditz dved...@mozilla.com wrote:
 I don't think we can restrict it to add-ons since external programs like
 Superfish (and the Lenovo removal tool, for that matter) write directly
 into the NSS profile database. It would be a bunch of work for precisely
 zero win.

mozilla::pkix makes it so that you can ignore the NSS profile
database, if you wish to do so.

 Could we make the real and only root accepted by Firefox be a Mozilla
 root, which cross-signs all the built-in NSS roots as well as any
 corporate roots submitted via this kind of program?

This is effectively what the built-in roots module already does,
except the Mozilla root CA certificate is implied instead of explicit.

 I thought pkix gave us those kinds of abilities.

mozilla::pkix offers a lot of flexibility in terms of how certificate
trust is determined.

 Or we could reject any added root that wasn't logged in CT, and then put
 a scanner on the logs looking for self-signed CA=true certs. Of course
 that puts the logs in the crosshairs for spam and DOS attacks.

Those spam and DoS attacks are why logs are specified (required?
recommended?) to not accept those certificates.

If Mozilla wanted to, it is totally possible to make an extension API
that allows an extension, when it is not disabled, to provide PSM with
a list of roots that should be accepted as trust anchors. And, it is
totally possible for PSM to aggregate those lists of
extension-provided trust anchors and use that list, in conjunction
with the read-only built-in roots module, to determine certificate
trust, while ignoring the read/write profile certificate database.

Whether or not that is a good idea is not for me to decide. But, it
would not be a huge amount of work to implement.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Trusted PEM distribution of Mozilla's CA bundle

2014-10-20 Thread Brian Smith
On Mon, Oct 20, 2014 at 8:33 AM, Ryan Sleevi 
ryan-mozdevsecpol...@sleevi.com wrote:

 On Mon, October 20, 2014 7:17 am, Anne van Kesteren wrote:
   On Mon, Oct 20, 2014 at 3:41 PM, Gervase Markham g...@mozilla.org
 wrote:
   Perhaps we just need to jump that gap and accept what is /de facto/
   true.
 
   Yeah, as with publicsuffix.org we should own this up.

 I would, in fact, argue strongly against this, despite recognizing the
 value that the open root program has.


I strongly agree with Ryan. Besides his very good points, I will add:

Not all of the relevant information about the roots is even available in
certdata.txt. For example, the name constraints on DCSSI are not encoded in
certdata.txt. For a long time there were hard-coded restrictions on some
Comodo and the Diginotar certificates, which weren't encoded in
certdata.txt. None of Google's CRLSet information is in certdata.txt, and
none of Mozilla's OneCRL information is in certdata.txt. None of the key
pinning information is in certdata.txt, either.

More generally, when Mozilla or Google address issues related to the root
program, they may do so with a code change to Firefox or Chrome that never
touches certdata.txt. And, they might do so in a hidden bug that people
trying to reuse certdata.txt may not see for many months. It's not
reasonable to give everybody who wants to use certdata.txt access to those
bugs, and it's not reasonable to constrain fixes to require they be done
through certdata.txt. AFAICT, none of the libraries or programs that try to
reuse the Mozilla root set even have enough flexibility to add all of these
additional data sources to their validation process.

For example, let's say some CA in Mozilla's root program mis-issues a
google.com certificate. Because google.com is pinned to certain CAs in
Firefox, Mozilla might not take any immediate action and may not make the
public aware of the issue for a long time (or ever).

Note that, as a consequence of this, even applications that use the NSS cert
verification APIs might not be as safe as they expect to be when trusting
the NSS root CA set.

Cheers,
Brian
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Short-lived certs

2014-09-17 Thread Brian Smith
On Wed, Sep 17, 2014 at 12:25 AM, Gervase Markham g...@mozilla.org wrote:
 On 16/09/14 23:13, Richard Barnes wrote:
 From a browser perspective, I don't care at all whether certificates
 excused from containing revocation URLs if they're sufficiently short
 lived.

 From a technical perspective, that is true. However, if we have an
 interest in making short-lived certs a usable option, we have to
 consider the ecosystem. CAs will have to do engineering work to issue
 (and reissue) such certs every 24 hours, and sites will have to do
 engineering work to request and deploy those certificates.

Changing a server to properly and safely support replacing its
certificate on the fly is a very error-prone and difficult thing to
do, compared to changing a server to properly and safely support OCSP
stapling. For example, when the server updates its certificate, it
needs to verify that the new certificate is the right one. Otherwise,
the updated certificate could contain a public key for which an
attacker owns the private key, and the server would be facilitating
its own compromise by switching to that new certificate.
In contrast, with OCSP stapling, an attacker can never replace your
server's public key, and so there is much less risk of catastrophe
with OCSP stapling.
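
As a sketch of the kind of check a renewal hook needs before swapping in
a newly fetched certificate (the function name and file-path arguments
are illustrative; OpenSSL's X509_check_private_key does the actual
comparison):

  #include <openssl/pem.h>
  #include <openssl/x509.h>
  #include <cstdio>

  bool SafeToDeploy(const char* certPath, const char* keyPath) {
    FILE* cf = fopen(certPath, "r");
    FILE* kf = fopen(keyPath, "r");
    X509* cert = cf ? PEM_read_X509(cf, nullptr, nullptr, nullptr) : nullptr;
    EVP_PKEY* key = kf ? PEM_read_PrivateKey(kf, nullptr, nullptr, nullptr)
                       : nullptr;
    if (cf) fclose(cf);
    if (kf) fclose(kf);
    // X509_check_private_key returns 1 only when the certificate's
    // public key corresponds to the private key the server holds, so
    // an attacker-substituted certificate fails this check.
    bool ok = cert && key && X509_check_private_key(cert, key) == 1;
    X509_free(cert);
    EVP_PKEY_free(key);
    return ok;
  }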

Because of the added risk and added complication of short-lived
certificates relative to OCSP stapling, and because OCSP stapling is
already well-specified and quite widely implemented (though not yet
commonly enabled), it would be better to prioritize shortening the
maximum acceptable OCSP response validity period (e.g. to 72 hours)
and to define and implement Must-Staple, over defining new standards
for short-lived certificates. Those two improvements would have an
immediate positive impact.
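
For what it's worth, enabling stapling is already a small configuration
change on common servers. In Apache httpd 2.3.3 or later it is roughly
the following (the cache size here is an arbitrary example, and
SSLStaplingCache must appear outside any VirtualHost section):

  SSLUseStapling On
  SSLStaplingCache "shmcb:logs/ssl_stapling(32768)"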

Note also that browsers already effectively support short-lived
certificates, even without any CAB Forum or browser policy work. And, to
be clear, I do support defining standards for short-lived certificates; I
just think that fixing OCSP stapling should be a higher priority.

Cheers,
Brian


Re: Removal of 1024 bit CA roots - interoperability

2014-08-04 Thread Brian Smith
On Mon, Aug 4, 2014 at 3:52 PM, Kathleen Wilson kwil...@mozilla.com wrote:
 It turns out that including the 2048-bit version of the cross-signed
 intermediate certificate does not help NSS at all. It would only help
 Firefox, and would cause confusion.

That isn't true, AFAICT.

 It works for Firefox, because mozilla::pkix keeps trying until it finds a
 certificate path that works.

NSS's libpkix also keeps trying until it finds a certificate path that
works. libpkix is used by Chromium and by Oracle's products (IIUC).

 Therefore, it looks like including the 2048-bit intermediate cert directly
 in NSS would cause different behavior depending on where the root store is
 being used. This would lead to confusion.

IMO, it isn't reasonable to make decisions like this based on the
behavior of the classic NSS path building. Really, the classic NSS
path building logic is obsolete, and anybody still using it is going
to have lots of compatibility problems due to this change and other
things, some of which are out of our control.

Cheers,
Brian


Re: Removal of 1024 bit CA roots - interoperability

2014-07-31 Thread Brian Smith
Hubert Kario hka...@redhat.com wrote:
 Brian Smith wrote:
 It depends on your definition of help. I assume the goal is to
 encourage websites to migrate from 1024-bit signatures to RSA-2048-bit
 or ECDSA-P-256 signatures. If so, then including the intermediates in
 NSS so that all NSS-based applications can use them will be
 counterproductive to the goal, because when the system administrator
 is testing his server using those other NSS-based tools, he will not
 notice that he is depending on 1024-bit certificates (cross-signed or
 root) because everything will work fine.

 The point is not to ship a 1024 bit cert, the point is to ship a 2048 bit 
 cert.

 So for sites that present a chain like this:

 2048 bit host cert -> 2048 bit old sub CA -> 1024 bit root CA

 we can find a certificate chain like this:

 2048 bit host cert -> 2048 bit new cross-signed sub CA -> 2048 bit root CA

 where the cross-signed sub CA is shipped by NSS

Sure. I have no objection to including cross-signing certificates
where both the subject public key and the issuer public key are 2048
bits (or more). I am objecting only to including any cross-signing
certificates of the 1024-bit-subject-signed-by-2048-bit-issuer
variety. It has been a long time since we had the initial
conversation, but IIRC both types of cross-signing certificates exist.

Cheers,
Brian


Re: Dynamic Path Resolution in AIA CA Issuers

2014-07-30 Thread Brian Smith
On Wed, Jul 30, 2014 at 12:17 PM, Kathleen Wilson kwil...@mozilla.com wrote:
 On 7/28/14, 11:00 AM, Brian Smith wrote:

 I suggest that, instead of including the cross-signing certificates in
 the NSS certificate database, the mozilla::pkix code should be changed
 to look up those certificates when attempting to find them through NSS
 fails. That way, Firefox and other products that use NSS will have a
 lot more flexibility in how they handle the compatibility logic.

 There's already a bug for fetching missing intermediates:
 https://bugzilla.mozilla.org/show_bug.cgi?id=399324

 I think it would help with removal of roots (the remaining 1024-bit roots,
 non-BR-complaint roots, SHA1 roots, retired roots, etc.), and IE has been
 supporting this capability for a long time.

First of all, there is no such thing as a SHA1 root. Unlike the public
key algorithm, the hash algorithm is NOT fixed per root. That means
any RSA-2048 root can already issue certificates signed using SHA256
instead of SHA1. AFAICT, there's no reason for a CA to insist on
adding new roots for SHA256 support.

Other desktop browsers do support AIA certificate fetching, but many
mobile browsers don't. For example, Chrome on Android does not support
AIA fetching (at least, at the time I tried it) but Chrome on desktop
does support it. So, if Firefox were to add support for AIA
certificate fetching, it would be encouraging website administrators
to create websites that don't work on all browsers.

The AIA fetching mechanism is not reliable, for the same reasons that
OCSP fetching is not reliable. So, if Firefox were to add support for
AIA certificate fetching, it would be encouraging administrators to
create websites that don't work reliably.

The AIA fetching process and OCSP fetching are both very slow--much
slower than the combination of all other SSL handshaking and
certificate verification. So, if Firefox were to add support for AIA
certificate fetching, it would be encouraging administrators to create
slow websites.

The AIA fetching mechanism and OCSP fetching require an HTTP
implementation in order to verify certificates, and both of those
mechanisms require (practically, if not theoretically) the fetching to
be done over unauthenticated and unencrypted channels. It is not a
good idea to add the additional attack surface of an entire HTTP stack
to the certificate verification process.

If we are willing to encourage administrators to create websites that
don't work with all browsers, then we should just preload the
commonly-missing intermediate certificates into Firefox and/or NSS.
This would avoid all the performance problems, reliability problems,
and additional attack surface, and still provide a huge compatibility
benefit. In fact, most misconfigured websites would then work better
(faster, more reliably) in Firefox than in other browsers.

One of the motivations for creating mozilla::pkix was to make it easy
for Firefox to preload these certificates without having to have them
preloaded into NSS, because Wan-Teh had objected to preloading them
into NSS when I proposed it a couple of years ago. So, I think the
best course of action would be for us to try the preloading approach
first, and then re-evaluate whether AIA fetching is necessary later,
after measuring the results of preloading.

Cheers,
Brian


Re: Proposal: Switch generic icon to negative feedback for non-https sites

2014-07-22 Thread Brian Smith
[+keeler, +cviecco]

On Tue, Jul 22, 2014 at 1:55 PM, Chris Palmer pal...@google.com wrote:
 On Tue, Jul 22, 2014 at 3:01 AM, Hubert Kario hka...@redhat.com wrote:

 I'm pretty sure Firefox merely remembers your decision to click
 through the warning, not that it pins the keys/certificates in the
 chain you clicked through on.

 No, I'm sure it remembers the certificate.

 1. Generate a self-signed cert; configure Apache to use it; restart Apache.
 2. Browse to the server with Firefox. Add Exception for the cert.
 3. Quit Firefox; restart Firefox; browse to server again. Everything is good.
 4. Generate a *new* self-signed cert; configure Apache to use it;
 restart Apache.
 5. Quit Firefox; restart Firefox; browse to server again.

 Results:

 A. On first page-load after step (5), no certificate warning. (I
 assume a cached page was being shown.)
 B. Reload the page; now I get a cert warning as expected. But,
 crucially, this is not a key pinning validation failure; just an unknown
 authority error. (Error code: sec_error_untrusted_issuer)

Firefox's cert override mechanism uses a different pinning mechanism
than the key pinning feature. Basically, Firefox saves a tuple
(domain, port, cert fingerprint, isDomainMismatch,
isValidityPeriodProblem, isUntrustedIssuer) into a database. When it
encounters an untrusted certificate, it computes that tuple and tries
to find a matching one in the database; if so, it allows the
connection.
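
A rough sketch of that lookup (the type and function names here are made
up for illustration; this is not the actual PSM code):

  #include <cstdint>
  #include <string>
  #include <vector>

  struct CertOverride {           // one saved exception
    std::string host;
    int32_t port;
    std::string certFingerprint;  // hash of the exact certificate
    bool domainMismatch;
    bool validityPeriodProblem;
    bool untrustedIssuer;
  };

  bool Matches(const CertOverride& a, const CertOverride& b) {
    return a.host == b.host && a.port == b.port &&
           a.certFingerprint == b.certFingerprint &&
           a.domainMismatch == b.domainMismatch &&
           a.validityPeriodProblem == b.validityPeriodProblem &&
           a.untrustedIssuer == b.untrustedIssuer;
  }

  // The connection is allowed only when an identical tuple was stored
  // when the user added the exception. A new self-signed certificate
  // changes certFingerprint, so the stored override no longer matches;
  // that explains the warning in step B above.
  bool IsOverridden(const std::vector<CertOverride>& db,
                    const CertOverride& candidate) {
    for (const CertOverride& entry : db) {
      if (Matches(entry, candidate)) {
        return true;
      }
    }
    return false;
  }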

 C. I do the clicks to Add Exception, but it fails: In the Add Security
 Exception dialog, the [ ] Permanently store this exception checkbox is
 grayed out, and the [ Confirm Security Exception ] button is also
 grayed out. I can only click [ Cancel ].

 I take it this is a Firefox UI bug...? Everything was working as I
 expected except (C). I think the button and the checkbox should be
 active and should work as normal.

It seems like a UI bug to me.

Cheers,
Brian


Re: Checking certificate requirements

2014-05-28 Thread Brian Smith
On Wed, May 28, 2014 at 4:42 PM, Ryan Sleevi 
ryan-mozdevsecpol...@sleevi.com wrote:

 Whether it's version 1 or 3 has no effect on path building. If the policy
 does require this, it's largely for cosmetic reasons rather than any strong
 technical reasons.

 That said, cutting a new v3 root may involve bringing the root signing key
 out of storage, holding a signing ceremony, etc. It may not be worth the
 cost. NSS could, if it wanted, create dummy certs (with invalid
 signatures) that looked just like the real thing, and things 'should' just
 work (mod, you know, the inevitable avalanche of bugs that crop up when I
 make statements like this).


mozilla::pkix will not trust a v1 certificate as an intermediate CA, but it
does accept v1 root certificates for backward compatibility with NSS and
for the reasons Ryan mentioned.

v1 TLS end-entity certificates do not comply with our policy because a v1
certificate cannot (according to the spec) contain a subjectAltName
extension and we require all TLS end-entity certificates to contain
subjectAltName. Similarly, v1 certificates cannot legally contain an OCSP
responder URI, which is also (practically) required.
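
A sketch of the v1 intermediate rule described in the first paragraph, in
mozilla::pkix style (illustrative only, not the actual implementation):

  #include "pkix/pkix.h"
  #include "pkix/pkixder.h"

  using namespace mozilla::pkix;

  // A v1 certificate is tolerated as a trust anchor for backward
  // compatibility, but rejected when used as an intermediate CA.
  Result CheckVersion(der::Version version, EndEntityOrCA endEntityOrCA,
                      TrustLevel trustLevel) {
    if (version == der::Version::v1 &&
        endEntityOrCA == EndEntityOrCA::MustBeCA &&
        trustLevel != TrustLevel::TrustAnchor) {
      return Result::ERROR_V1_CERT_USED_AS_CA;
    }
    return Result::Success;
  }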

Cheers,
Brian


Re: CA Communication - May 12, 2014

2014-05-14 Thread Brian Smith
On Wed, May 14, 2014 at 10:06 AM, Patrick Kobly patr...@kobly.com wrote:

 Perhaps I'm dense and missing something or perhaps this isn't the right
 place to be asking.  Why would this necessitate bringing the CA online when
 responses can be signed by an Authorized Responder (i.e. cert with EKU
 id-kp-OCSPSigning)?


Right. Bulk preproduction of direct-signed OCSP responses is another way of
handling it. Nobody wants CA certificates to be online more than otherwise
necessary just to support shorter validity periods for OCSP responses.


 FWIW, Rob's concerns regarding the change process are certainly reasonable.


We did not intentionally want to short-circuit any process. I implemented
the restriction to 10 days due to a misunderstanding of the baseline
requirements, and then we decided my misunderstanding is better than what
the BRs would say, so we considered leaving my misunderstanding in the code
while we concurrently worked to improve the BRs to match my
misunderstanding. Ultimately, we decided to revert to the less-reasonable
but more compatible behavior.

It is OK (good even) for us to add additional requirements that go beyond
the baseline and EV requirements, and not everything has to be approved
through the CAB Forum. We do it all the time (otherwise our CA program
documentation would consist solely of "See the Baseline Requirements and
EV Requirements"). Google is doing the same with their proposed CT
requirements for EV. In this case, though, it was just an accident.

Cheers,
Brian


Re: DRAFT: May CA Communication

2014-05-08 Thread Brian Smith
On Thu, May 8, 2014 at 6:40 AM, Gervase Markham g...@mozilla.org wrote:

 On 06/05/14 20:58, Brian Smith wrote:
  That isn't quite right either. It is OK for the intermediate certificate
 to
  omit the EKU extension entirely.

 Well, not if we fix
 https://bugzilla.mozilla.org/show_bug.cgi?id=968817
 which Brian agreed that we could do.


We can *try* doing it for *end-entity* certificates. However:

1. IIRC, in that bug we were using information from Google's CT database to
conclude that every end-entity cert we can find has an EKU with
id-kp-serverAuth. However, Google's CT database doesn't (IIUC) include any
certificates for custom trust anchors. So, it could be the case that for
compatibility with custom root CAs, we may not be able to enforce this
extra requirement on top of RFC 5280.

2. The discussion in bug 968817
(https://bugzilla.mozilla.org/show_bug.cgi?id=968817) was/is about
end-entity certificates. It isn't reasonable for us to enforce a
requirement that intermediate certificates have an id-kp-serverAuth EKU,
because that is non-standard (hopefully just not-yet standard).


 I think we should be aiming to require serverAuth in all intermediates
 and EE certs for SSL. I think that makes it much less likely that we
 will end up accepting as valid for SSL a cert someone has issued for
 another purpose entirely (e.g. smartcards).


It seems impractical to do that any time soon, since that requirement is
stricter than any standard requires and it is not hard to show
significant counter-examples where that would break the web, e.g.
https://mail.google.com. That is why I stated my suggestions for this the
way I did.

Kathleen is right that our policy doesn't require the use of technical
constraints if the certificate is audited as a public CA would be. However,
I think it is obvious that even publicly-disclosed and audited (sub-)CAs
benefit from implementing technical constraints. Consequently, it still
makes sense for us to recommend the use of technical constraints for
publicly-disclosed and audited external sub-CAs, and to require them for
other external sub-CAs.

Cheers,
Brian


Re: DRAFT: May CA Communication

2014-05-06 Thread Brian Smith
On Mon, May 5, 2014 at 4:45 PM, Kathleen Wilson kwil...@mozilla.com wrote:

 OK. Changed to the following.

 https://wiki.mozilla.org/SecurityEngineering/mozpkix-testing#Things_for_CAs_to_Fix
 --
 1. For all new intermediate certificate issuance, use the TLS Web Server
 Authentication (1.3.6.1.5.5.7.3.1) (serverAuth) EKU if that intermediate
 certificate will be signing SSL certificates. Mozilla will stop recognizing
 the Netscape Server Gated Crypto (2.16.840.1.113730.4.1) EKU.


That isn't quite right either. It is OK for the intermediate certificate to
omit the EKU extension entirely. But, if an intermediate does include an
EKU extension then it must include id-kp-serverAuth (1.3.6.1.5.5.7.3.1).
Additionally, no certificates should contain the Netscape Server Gated
Crypto (2.16.840.1.113730.4.1) EKU, which is already no longer recognized
for end-entity certificates and which will soon no longer be supported for
intermediate certificates.

New externally-operated subordinate CA certificates should/must include an
EKU extension that does NOT contain id-kp-serverAuth (1.3.6.1.5.5.7.3.1) or
anyExtendedKeyPurpose (2.5.29.37.0) if the subordinate CA is not authorized
to issue TLS server certificates. Conversely, new externally-operated
subordinate CA certificates should/must include an EKU extension with
id-kp-serverAuth (1.3.6.1.5.5.7.3.1) if they are allowed to issue TLS
certificates.
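
As a concrete illustration, the extension profile of a technically
constrained sub-CA that is authorized to issue TLS server certificates
(and nothing else) might look roughly like this in openssl.cnf syntax
(the section name is made up, and name constraints are a separate topic):

  [ tls_subca_extensions ]
  basicConstraints = critical, CA:TRUE, pathlen:0
  keyUsage = critical, keyCertSign, cRLSign
  extendedKeyUsage = serverAuth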

Remember that we added the new enforcement of EKU in intermediates in
mozilla::pkix in order to enhance the ability of CAs to technically
constrain externally-operated sub-CAs.

Cheers,
Brian


Re: Behavior changes - inhibitAnyPolicy extension

2014-05-06 Thread Brian Smith
On Tue, May 6, 2014 at 3:48 PM, Kathleen Wilson kwil...@mozilla.com wrote:

 It has been brought to my attention that the above statement is very
 difficult to understand.


snip


 Any preference?


Let's just fix bug 989051 so that we can remove this statement completely.
It makes more sense to fix our bugs than it does to wordsmith a suggestion
to CAs for how to work around our bugs. The other things we're asking CAs
to do are actual problematic practices that need to be addressed, and we're
better off letting them focus on those things than on working around our bugs.

Cheers,
Brian


Re: No CRL UI as of Firefox 24

2014-03-14 Thread Brian Smith
On Fri, Mar 14, 2014 at 4:05 AM, Gervase Markham g...@mozilla.org wrote:

 On 13/03/14 19:20, Rick Andrews wrote:
  Is it because Mozilla intends to build CRLs sets in the future?

 Yes.


I always thought that the CRL requirement was in there because long ago
we expected that we'd eventually start fetching CRLs at some point, and
then it was left in there due to inertia, mostly.

Keep in mind that S/MIME and TLS have different requirements. OCSP is a
significant privacy issue for both. We can resolve that for TLS by
switching to OCSP stapling. But, there's no good, practical stapling
solution for S/MIME (S/MIME can in theory do stapling, but nobody does). It
may be that we need to have separate requirements for S/MIME and TLS.

I think it will make sense to revise the CRL requirement in the future,
after we've figured out how we're going to build the revocation lists, and
after we've figured out Must-Staple, and after we've figured out the issue
in the preceding paragraph. If we don't end up using CRLs for building the
revocation list, then CRLs could very well be useless. Also, nobody has
proposed a plan for us to use CRLs to build the revocation list, though
Google does do that for its CRLSets.

Practically speaking, I'd argue strongly against including in our root
store and/or EV programs any CAs that don't support OCSP. I wouldn't
argue, at this time, against any CAs that don't do CRLs, but that could
change.

Cheers,
Brian


Re: DigiCert Request to Include Renewed Roots

2014-01-28 Thread Brian Smith
On Tue, Jan 28, 2014 at 4:25 PM, Kathleen Wilson kwil...@mozilla.com wrote:
 DigiCert has applied to include 5 new root certificates that will eventually
 replace the 3 DigiCert root certificates that were included in NSS via bug
 #364568. The request is to turn on all 3 trust bits and enable EV for all of
 the new root certs.

 1) DigiCert Assured ID Root G2 -- This SHA-256 root will eventually replace
 the SHA-1 “DigiCert Assured ID Root CA” certificate.

 2) DigiCert Assured ID Root G3 -- The ECC version of the Assured ID root.

 3) DigiCert Global Root G2 -- This SHA-256 root will eventually replace the
 SHA-1 “DigiCert Global Root CA” certificate.

 4) DigiCert Global Root G3 -- The ECC version of the Global root.

 5) DigiCert Trusted Root G4 -- This SHA-384 root will eventually replace the
 SHA-1 “DigiCert High Assurance EV Root CA” certificate.

I object, only on the grounds that there is no technical need to have
more than one root. I have a counter-proposal:

1. Add DigiCert Trusted Root G4 with all three trust bits set.
2. Ask DigiCert to issue versions of their intermediates that are
signed/issued by DigiCert Trusted Root G4.
3. Remove the existing DigiCert roots.
4. Preload all the intermediates signed by DigiCert Trusted Root G4
(with no trust bits, so they inherit trust from DigiCert Trusted Root
G4) into NSS.

Benefits of my counter-proposal:
1. Fewer roots for us to manage.
2. Sites that forget to include their intermediates in their TLS cert
chain are more likely to work in Firefox, without us having to do AIA
caIssuers, because of us preloading the intermediates.
3. Because of #1, there is potential for us to design a simpler root
certificate management UI.
4. We can do optimizations with the preloading of intermediates to
avoid building the whole chain every time. (That is, we can
precalculate the trust of the intermediates.)

This would set a good precedent for us to follow with all other CAs.
By working with all CAs to do something similar, we would end up with
one root per CA, and with a bunch of preloaded intermediates. Then we
can separate the view of intermediates from the view of roots in the
UI, and the UI will become much simpler.

Cheers,
Brian


Re: DigiCert Request to Include Renewed Roots

2014-01-28 Thread Brian Smith
On Tue, Jan 28, 2014 at 8:45 PM, David E. Ross nobody@nowhere.invalid wrote:
 On 1/28/2014 4:37 PM, Brian Smith wrote:
 Benefits of my counter-proposal:
 1. Fewer roots for us to manage.
 2. Sites that forget to include their intermediates in their TLS cert
 chain are more likely to work in Firefox, without us having to do AIA
 caIssuers, because of us preloading the intermediates.
 3. Because of #1, there is potential for us to design a simpler root
 certificate management UI.
 4. We can do optimizations with the preloading of intermediates to
 avoid building the whole chain every time. (That is, we can
 precalculate the trust of the intermediates.)

 I do not consider Benefit #2 to be a benefit.  This would mean that
 Mozilla is enabling poor security practices by allowing server
 administrators to be lazy and incompetent -- allowing them to tell users
 their browsing session is secure while the server is incompletely
 configured.

First, let me split my proposal into two parts:

Part 1:
I'm proposing that we add five certs that are equivalent to the five
certs that DigiCert wants to add, EXCEPT that only one of them would
be a trusted root, and the other four would be intermediates of that
root. So, as far as what I'm proposing here is concerned, there would
be no change as to what websites would be required or not required to
send in their SSL handshakes, if DigiCert continues to require an
intermediate between the end-entity cert and any of those five certs.

Part 2:
It is considered bad practice by some to issue certificates directly
from a root. But, since four of those certificates wouldn't be roots,
DigiCert could issue certificates directly off of them without
doing the thing that is perceived to be bad. If they did so, then,
because those intermediates would be preloaded into NSS, we would
be able to tolerate the failure of a website to send the intermediate
certificate.

I understand that it is not 100% great to do things that encourage
websites to skip the inclusion of intermediates in their certificate
chains, but we're currently on the losing side of this compatibility
issue since we also do not implement caIssuers. And, we've helped make
the problem bad by caching intermediates collected from surfing the
internet; the consequence of this is that when a website admin is
testing his broken configuration in Firefox, he/she often won't notice
the missing intermediate because Firefox has papered over the issue by
having cached the needed intermediate from the CA's website. I'd like
us to stop doing that, and it is likely that doing so will require us
to preload quite a few intermediates to maintain compatibility.

Cheers,
Brian


Re: Revoking Trust in one ANSSI Certificate

2013-12-11 Thread Brian Smith
On Wed, Dec 11, 2013 at 1:49 AM, Samuel L samuel.la...@sealweb.eu wrote:
 On 11/12/13 01:08, Kathleen Wilson wrote:
 Based on the list that Rob provided, there may be other domains that we
 might consider including.
 For example:
 *.ac-martinique.fr
 *.ac-creteil.fr
 *.ac-orleans-tours.fr
 *.education.fr
 *.ac-poitiers.fr

[snip]

 According to this page (from the French national education administration,
 which is one of the biggest, if not the biggest, administrative bodies in
 France), there are actually 30 academies (regional bodies of the ministry of
 education), whose domains are:

snip

Thanks for the very helpful information. I think we should first ask
ANSSI to help those academies migrate to a different CA. My
understanding is that the French government already has used
certificates from other CAs:

Entrust: https://www.amendes.gouv.fr/portail/index.jsp?lang=en
Certplus/Certinomis:
https://www.tresor.economie.gouv.fr/autorisations-prealables-des-investissements-etrangers-en-france

So, it seems reasonable to think we could work with ANSSI to
coordinate the migration of websites that aren't serving critical
government functions to the other CAs that the French government is
already using, in a reasonably fast timeframe. I'd like us to try that
first.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)


Re: Revoking Trust in one ANSSI Certificate

2013-12-10 Thread Brian Smith
On Tue, Dec 10, 2013 at 4:08 PM, Kathleen Wilson kwil...@mozilla.com wrote:
 Constrain the currently-included IGC/A root certificate to a certain set of
 domains. I think the restriction needs to be along the lines of *.gouv.fr.

I think it might help to explain the rationale for the choice of *.gouv.fr:

ANSSI is run by the French government and *.gouv.fr are government
websites. Thus, restricting ANSSI to issuing certificates under
*.gouv.fr limits the negative impact of any mis-issuance to French
government websites--i.e. they could only harm themselves if so
restricted.
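
For illustration only, the equivalent restriction expressed as an X.509
name constraints extension, in openssl.cnf syntax, would be roughly the
following (in practice the restriction would be hard-coded into the
browser rather than carried in a certificate; the leading-dot form
permits names under gouv.fr):

  nameConstraints = critical, permitted;DNS:.gouv.fr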

Also, enabling *.gouv.fr sites that use ANSSI-issued certificates to
continue working would minimize disruption to essential government
services, like tax collecting, etc. Removing ANSSI completely may be
too disruptive to these essential services.

My personal opinion is that it is unlikely that all other browsers
would follow us if we completely removed ANSSI, but I think it would
be reasonable to expect other browsers to add constraints to
*.gouv.fr.

In case it isn't obvious, I support this proposal.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)


Re: New problematic practice

2013-11-29 Thread Brian Smith
On Fri, Nov 29, 2013 at 1:20 AM, Gervase Markham g...@mozilla.org wrote:
 Comments?

I suggest you propose it as a change to the baseline requirements.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)