Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-26 Thread Aaron Gable via dev-security-policy
On Fri, Feb 26, 2021 at 5:18 PM Ryan Sleevi  wrote:

> I do believe it's problematic for the OCSP and CRL versions of the
> repository to be out of sync, but also agree this is an area that is useful
> to clarify. To that end, I filed
> https://github.com/cabforum/servercert/issues/252 to make sure we don't
> lose track of this for the BRs.
>

Thanks! I like that bug, and commented on it to provide a little more
clarity for how the question arose in my mind and what language we might
want to update. It sounds like maybe what we want is language to the effect
that, if a CA is publishing both OCSP and CRLs, then a certificate is not
considered Revoked until it shows up as Revoked in both revocation
mechanisms. (And it must be Revoked within 24 hours.)

We'll make sure our parallel CRL infrastructure re-issues a CRL shard
close-to-immediately after any certificate in that shard's scope is revoked,
just as we do for OCSP today.
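
For the curious, here is a rough sketch of what "close-to-immediately" could
look like mechanically. This is illustrative only; the names and the
one-minute coalescing window are assumptions for the example, not a
description of our actual implementation.

    // Illustrative: when a certificate is revoked, mark its (pre-assigned)
    // CRL shard dirty; a background loop re-signs every dirty shard on a
    // short interval so bursts of revocations coalesce into one re-issuance.
    package crlreissue

    import (
        "sync"
        "time"
    )

    type Reissuer struct {
        mu     sync.Mutex
        dirty  map[int]bool          // shards with revocations not yet re-signed
        signFn func(shard int) error // signs and publishes one shard's CRL
    }

    func NewReissuer(sign func(int) error) *Reissuer {
        return &Reissuer{dirty: map[int]bool{}, signFn: sign}
    }

    // OnRevocation is called as soon as a certificate in `shard` is revoked.
    func (r *Reissuer) OnRevocation(shard int) {
        r.mu.Lock()
        r.dirty[shard] = true
        r.mu.Unlock()
    }

    // Run re-signs dirty shards once a minute, keeping each shard's CRL far
    // inside any 24-hour publication requirement.
    func (r *Reissuer) Run() {
        for range time.Tick(time.Minute) {
            r.mu.Lock()
            shards := r.dirty
            r.dirty = map[int]bool{}
            r.mu.Unlock()
            for shard := range shards {
                _ = r.signFn(shard) // error handling elided
            }
        }
    }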

Thanks again,
Aaron


Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-26 Thread Aaron Gable via dev-security-policy
On Fri, Feb 26, 2021 at 12:05 PM Ryan Sleevi  wrote:

> You can still do parallel signing. I was trying to account for that
> explicitly with the notion of the “pre-reserved” set of URLs. However, that
> also makes an assumption I should have been more explicit about: whether
> the expectation is “you declare, then fill, CRLs”, or whether it’s
> acceptable to “fill, then declare, CRLs”. I was trying to cover the former,
> but I don’t think there is any innate prohibition on the latter, and it was
> what I was trying to call out in the previous mail.
>
> I do take your point about deterministically, because the process I’m
> describing is implicitly assuming you have a work queue (e.g. pub/sub, go
> channel, etc), in which certs to revoke go in, and one or more CRL signers
> consume the queue and produce CRLs. The order of that consumption would be
> non-deterministic, but it very much would be parallelizable, and you’d be
> in full control over what the work unit chunks were sized at.
>
> Right, neither of these are required if you can “produce, then declare”.
> From the client perspective, a consuming party cannot observe any
> meaningful difference from the “declare, then produce” or the “produce,
> then declare”, since in both cases, they have to wait for the CRL to be
> published on the server before they can consume. The fact that they know
> the URL, but the content is stale/not yet updated (I.e. the declare then
> produce scenario) doesn’t provide any advantages. Ostensibly, the “produce,
> then declare” gives greater advantage to the client/root program, because
> then they can say “All URLs must be correct at time of declaration” and use
> that to be able to quantify whether or not the CA met their timeline
> obligations for the mass revocation event.
>

I think we managed to talk slightly past each other, but we're well into
the weeds of implementation details so it probably doesn't matter much :)
The question in my mind was not "can there be multiple CRL signers
consuming revocations from the queue?"; but rather "assuming there are
multiple CRL signers consuming revocations from the queue, what
synchronization do they have to do to ensure that multiple signers don't
decide the old CRL is full and allocate new ones at the same time?". In the
world where every certificate is pre-allocated to a CRL shard, no such
synchronization is necessary at all.
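
To make "pre-allocated" concrete, here is a minimal sketch of the kind of
assignment function I have in mind (illustrative only; the shard count and
hashing choice are assumptions, not our actual design):

    // Illustrative: a certificate's CRL shard is a pure function of its
    // serial number, fixed at issuance time. Any revocation worker can
    // compute it independently, so no worker ever has to ask "is the
    // current CRL full?"
    package crlshard

    import (
        "crypto/sha256"
        "encoding/binary"
        "math/big"
    )

    const NumShards = 1000 // assumed shard count

    // ShardFor returns the shard index for a certificate's serial number.
    func ShardFor(serial *big.Int) int {
        sum := sha256.Sum256(serial.Bytes())
        return int(binary.BigEndian.Uint64(sum[:8]) % NumShards)
    }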

This conversation does raise a different question in my mind. The Baseline
Requirements do not have a provision that requires that a CRL be re-issued
within 24 hours of the revocation of any certificate which falls within its
scope. CRLs and OCSP responses for Intermediate CAs are clearly required to
receive updates within 24 hours of the revocation of a relevant certificate
(sections 4.9.7 and 4.9.10 respectively), but no such requirement appears
to exist for end-entity CRLs. The closest is the requirement that
subscriber certificates be revoked within 24 hours after certain conditions
are met, but the same structure exists for the conditions under which
Intermediate CAs must be revoked, suggesting that the BRs believe there is
a difference between revoking a certificate and *publishing* that
revocation via OCSP or CRLs. Is this distinction intended by the root
programs, and does anyone intend to change this status quo as more emphasis
is placed on end-entity CRLs?

Or more bluntly: in the presence of OCSP and CRLs being published side by
side, is it expected that the CA MUST re-issue a sharded end-entity CRL
within 24 hours of revoking a certificate in its scope, or may the CA wait
to re-issue the CRL until its next 7-day re-issuance time comes up as
normal?

Agreed - I do think having a well-tested, reliable path for programmatic
> update is an essential property to mandating the population. My hope and
> belief, however, is that this is fairly light-weight and doable.
>

 Thanks, I look forward to hearing more about what this will look like.

Aaron


Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-26 Thread Aaron Gable via dev-security-policy
Thanks for the reminder that CCADB automatically dereferences URLs for
archival purposes, and for the info about existing automation! I don't
personally have CCADB credentials, so all of my knowledge of it is based on
what I've learned from others at LE and from this list.

If we leave out the "new url for each re-issuance of a given CRL" portion
of the design (or offer both url-per-thisUpdate and
static-url-always-pointing-at-the-latest), then we could in fact include
CRLDP urls in the certificates using the rolling time-based shards model.
And frankly we may want to do that in the near future: maintaining both CRL
*and* OCSP infrastructure when the BRs require only one or the other is an
unnecessary expense, and turning down our OCSP infrastructure would
constitute a significant savings, both in tangible bills and in engineering
effort.

Thus, in my mind, the dynamic sharding idea you outlined has two major
downsides:
1) It requires us to maintain our parallel OCSP infrastructure
indefinitely, and
2) It is much less resilient in the face of a mass revocation event.

Fundamentally, we need our infrastructure to be able to handle the
revocation of 200M certificates in 24 hours without any difference from how
it handles the revocation of one certificate in the same period. Already
having certificates pre-allocated into CRL shards means that we can
deterministically sign many CRLs in parallel.
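
As a sketch of what that parallelism looks like (illustrative; the helper
names here are made up), a mass revocation event reduces to "group by
pre-assigned shard, then sign each shard concurrently":

    // Illustrative: because every revoked certificate already carries its
    // shard assignment, signing is embarrassingly parallel and needs no
    // coordination between workers.
    package crlsign

    import "sync"

    type Revocation struct {
        Shard  int
        Serial string
    }

    // SignFunc stands in for the real signing/publishing step for one shard.
    type SignFunc func(shard int, serials []string) error

    // SignAll groups revocations by shard and signs each shard's CRL
    // concurrently.
    func SignAll(revs []Revocation, sign SignFunc) {
        byShard := map[int][]string{}
        for _, r := range revs {
            byShard[r.Shard] = append(byShard[r.Shard], r.Serial)
        }
        var wg sync.WaitGroup
        for shard, serials := range byShard {
            wg.Add(1)
            go func(shard int, serials []string) {
                defer wg.Done()
                _ = sign(shard, serials) // error handling elided
            }(shard, serials)
        }
        wg.Wait()
    }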

Dynamically assigning certificates to CRLs as they are revoked requires
taking a lock to determine if a new CRL needs to be created or not, and
then atomically creating a new one. Or it requires a separate,
not-operation-as-normal process to allocate a bunch of new CRLs, assign
certs to them, and then sign those in parallel. Neither of these --
dramatically changing not just the quantity but the *quality* of the
database access, nor introducing additional processes -- is acceptable in
the face of a mass revocation event.

In any case, I think this conversation has served the majority of its
purpose. This discussion has led to several ideas that would allow us to
update our JSON document only when we create new shards (which will still
likely be every 6 to 24 hours), as opposed to on every re-issuance of a
shard. We'd still greatly prefer that CCADB be willing to
accept-and-dereference a URL to a JSON document, as it would allow our
systems to have fewer dependencies and fewer failure modes, but we understand
that our arguments may not be persuasive enough :)

If Mozilla et al. do go forward with this proposal as-is, I'd like to
specifically request that CCADB surface an API to update this field before
any root program requires that it be populated, and that it do so with
sufficient lead time for development against the API to occur.

Thanks again,
Aaron

On Fri, Feb 26, 2021 at 8:47 AM Ryan Sleevi  wrote:

>
>
> On Fri, Feb 26, 2021 at 5:49 AM Rob Stradling  wrote:
>
>> > We already have automation for CCADB. CAs can and do use it for
>> disclosure of intermediates.
>>
>> Any CA representatives that are surprised by this statement might want to
>> go and read the "CCADB Release Notes" (click the hyperlink when you
>> login to the CCADB).  That's the only place I've seen the CCADB API
>> "announced".
>>
>> > Since we're talking Let's Encrypt, the assumption here is that the CRL
>> URLs
>> > will not be present within the crlDistributionPoints of the
>> certificates,
>> > otherwise, this entire discussion is fairly moot, since those
>> > crlDistributionPoints can be obtained directly from Certificate
>> > Transparency.
>>
>> AIUI, Mozilla is moving towards requiring that the CCADB holds all CRL
>> URLs, even the ones that also appear in crlDistributionPoints extensions.
>> Therefore, I think that this entire discussion is not moot at all.
>>
>
> Rob,
>
> I think you misparsed, but that's understandable, because I worded it
> poorly. The discussion is mooted by whether or not the CA includes the
> cRLDP within the certificate itself - i.e. that the CA has to allocate the
> shard at issuance time and that it's fixed for the lifetime of the
> certificate. That's not a requirement - EEs don't need cRLDPs - and so
> there's no inherent need to do static assignment, nor does it sound like LE
> is looking to go that route, since it would be incompatible with the design
> they outlined. Because of this, the dynamic sharding discussed seems
> significantly _less_ complex, both for producers and for consumers of this
> data, than the static sharding-and-immutability scheme proposed.
>


Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-25 Thread Aaron Gable via dev-security-policy
Similarly, snipping and replying to portions of your message below:

On Thu, Feb 25, 2021 at 12:52 PM Ryan Sleevi  wrote:

> Am I understanding your proposal correctly that "any published JSON
> document be valid for a certain period of time" effectively means that each
> update of the JSON document also gets a distinct URL (i.e. same as the
> CRLs)?
>

No, the (poorly expressed) idea is this: suppose you fetch our
rapidly-changing document and get version X. Over the next five minutes,
you fetch every CRL URL in that document. But during that same five
minutes, we've published versions X+1 and X+2 of that JSON document at that
same URL. There should be a guarantee that, as long as you fetch the CRLs
in your document "fast enough" (for some to-be-determined value of "fast"),
all of those URLs will still be valid (i.e. not return a 404 or similar),
*even though* some of them are not referenced by the most recent version of
the JSON document.
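
Put differently, the consumer-side loop we are imagining looks roughly like
this (a sketch only; the "fast enough" window and all names here are
assumptions, not a spec):

    // Illustrative consumer: snapshot the JSON array once, then fetch every
    // CRL it references. The requested guarantee is that, if this loop
    // finishes within the agreed window, none of the snapshot's URLs will
    // 404 even if the JSON document has since been republished.
    package crlconsumer

    import (
        "encoding/json"
        "fmt"
        "io"
        "net/http"
    )

    func FetchSnapshot(jsonURL string) (map[string][]byte, error) {
        resp, err := http.Get(jsonURL)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()

        var crlURLs []string
        if err := json.NewDecoder(resp.Body).Decode(&crlURLs); err != nil {
            return nil, err
        }

        crls := make(map[string][]byte, len(crlURLs))
        for _, u := range crlURLs {
            r, err := http.Get(u)
            if err != nil {
                return nil, err
            }
            body, readErr := io.ReadAll(r.Body)
            r.Body.Close()
            if readErr != nil {
                return nil, readErr
            }
            if r.StatusCode != http.StatusOK {
                return nil, fmt.Errorf("CRL %s: unexpected status %d", u, r.StatusCode)
            }
            crls[u] = body
        }
        return crls, nil
    }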

This may seem like a problem that arises only in our rapidly-changing JSON
version of things. But I believe it should be a concern even in the system
as proposed by Kathleen: when a CA updates the JSON array contained in
CCADB, how long does a consumer of CCADB have to get a snapshot of the
contents of the previous set of URLs? To posit an extreme hypothetical, can
a CA hide misissuance of a CRL by immediately hosting their fixed CRL at a
new URL and updating their CCADB JSON list to include that new URL instead?
Not to put too fine a point on it, but I believe that this sort of
hypothetical is the underlying worry about having the JSON list live
outside CCADB where it can be changed on a whim, but I'm not sure that
having the list live inside CCADB without any requirements on the validity
of the URLs inside it provides significantly more auditability.

The issue I see with the "URL stored in CCADB" is that it's a reference,
> and the dereferencing operation (retrieving the URL) puts the onus on the
> consumer (e.g. root stores) and can fail, or result in different content
> for different parties, undetectably.
>

If I may, I believe that the problem is less that it is a reference (which
is true of every URL stored in CCADB), and more that it is a reference to
an unsigned object. URLs directly to CRLs don't have this issue, because
the CRL is signed. And storing the JSON array directly doesn't have this
issue, because it is implicitly signed by the credentials of the user who
signed in to CCADB to modify it. One possible solution here would be to
require that the JSON document be signed by the same CA certificate which
issued all of the CRLs contained in it. I don't think I like this solution,
but it is within the possibility space.


> If there is an API that allows you to modify the JSON contents directly
> (e.g. a CCADB API call you could make with an OAuth token), would that
> address your concern?
>

If Mozilla and the other stakeholders in CCADB decide to go with this
thread's proposal as-is, then I suspect that yes, we would develop
automation to talk to CCADB's API in exactly this way. This is undesired
from our perspective for a variety of reasons:
* I'm not aware of a well-maintained Go library for interacting with the
Salesforce API.
* I'm not aware of any other automation system with write-access to CCADB
(I may be very wrong!), and I imagine there would need to be some sort of
further design discussion with CCADB's maintainers about what it means to
give write credentials to an automated system, what sorts of protections
would be necessary around those credentials, how to scope those credentials
as narrowly as possible, and more.
* I'm not sure CCADB's maintainers want updates to it to be in the critical
path of ongoing issuance, as opposed to just in the critical path for
beginning issuance with a new issuer.

I think the question was with respect to the frequency of change of those
> documents.
>

Frankly, the least often we would be willing to create a new time-sharded
CRL is once every 24 hours (and that's still >60MB per CRL in the worst
case). That's going to require automation no matter what.
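
For reference, the back-of-envelope arithmetic behind that ">60MB" figure,
where every number is a rough assumption rather than a measurement:

    ~2,000,000 certificates per 24-hour shard (roughly our issuance rate)
    x ~35 bytes per DER-encoded CRL entry (serial, revocation time, overhead)
    = ~70 MB per shard if every certificate in its scope were revoked at once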


> There is one thing you mentioned that's also non-obvious to me, because I
> would expect you already have to deal with this exact issue with respect to
> OCSP, which is "overwriting files is a dangerous operation prone to many
> forms of failure". Could you expand more about what some of those
> top-concerns are? I ask, since, say, an OCSP Responder is frequently
> implemented as "Spool /ocsp/:issuerDN/:serialNumber", with the CA
> overwriting :serialNumber whenever they produce new responses. It sounds
> like you're saying that common design pattern may be problematic for y'all,
> and I'm curious to learn more.
>

Sure, happy to expand. For those following along at home, this last bit is
relatively off-topic compared to the other sections above, so skip if you
feel like it :)

OCSP consists of hundreds of millions of small entries. Thus our 

Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-25 Thread Aaron Gable via dev-security-policy
amic JSON documents.

I hope this helps and that I addressed your questions,
Aaron

On Thu, Feb 25, 2021 at 9:53 AM Ryan Sleevi  wrote:

>
>
> On Thu, Feb 25, 2021 at 12:33 PM Aaron Gable via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> Obviously this plan may have changed due to other off-list conversations,
>> but I would like to express a strong preference for the original plan. At
>> the scale at which Let's Encrypt issues, it is likely that our JSON array
>> will contain on the order of 1000 CRL URLs, and will add a new one (and
>> age
>> out an entirely-expired one) every 6 hours or so. I am not aware of any
>> existing automation which updates CCADB at that frequency.
>>
>> Further, from a resiliency perspective, we would prefer that the CRLs we
>> generate live at fully static paths. Rather than overwriting CRLs with new
>> versions when they are re-issued prior to their nextUpdate time, we would
>> leave the old (soon-to-be-expired) CRL in place, offer its replacement at
>> an adjacent path, and update the JSON to point at the replacement. This
>> process would have us updating the JSON array on the order of minutes, not
>> hours.
>
>
> This seems like a very inefficient design choice, and runs contrary to how
> CRLs are deployed by, well, literally anyone using CRLs as specified, since
> the URL is fixed within the issued certificate.
>
> Could you share more about the design of why? Both for the choice to use
> sharded CRLs (since that is the essence of the first concern), and the
> motivation to use fixed URLs.
>
>> We believe that the earlier "URL to a JSON array..." approach makes room for
>> significantly simpler automation on behalf of CAs without significant
>> loss of auditability. I believe it may be helpful for the CCADB field
>> description (or any upcoming portion of the MRSP which references it) to
>> include specific requirements around the cache lifetime of the JSON
>> document and the CRLs referenced within it.
>
>
> Indirectly, you’ve highlighted exactly why the approach you propose loses
> auditability. Using the URL-based approach puts the onus on the consumer to
> try and detect and record changes, introduces greater operational risks
> that evade detection (e.g. stale caches on the CAs side for the content of
> that URL), and encourages or enables designs that put greater burden on
> consumers.
>
> I don’t think this is suggested because of malice, but I do think it makes
> it significantly easier for malice to go undetected, for accurate historic
> information to be hidden or made too complex to maintain.
>
> This is already a known and, as of recent, studied problem with CRLs [1].
> Unquestionably, you are right for highlighting and emphasizing that this
> constrains and limits how CAs perform certain operations. You highlight it
> as a potential bug, but I’d personally been thinking about it as a
> potential feature. To figure out the disconnect, I’m hoping you could
> further expand on the “why” of the design factors for your proposed design.
>
> Additionally, it’d be useful to understand how you would suggest CCADB
> consumers maintain an accurate, CA attested log of changes. Understanding
> such changes is an essential part of root program maintenance, and it does
> seem reasonable to expect CAs to need to adjust to provide that, rather
> than give up on the goal.
>
> [1]
> https://arxiv.org/abs/2102.04288
>
>>


Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-25 Thread Aaron Gable via dev-security-policy
Hi Kathleen,

It was my impression from earlier discussions that
the plan was for the new CCADB field to contain a URL which points to a
document containing only a JSON array of partitioned CRL URLs, rather than
the new CCADB field containing such an array directly.

Obviously this plan may have changed due to other off-list conversations,
but I would like to express a strong preference for the original plan. At
the scale at which Let's Encrypt issues, it is likely that our JSON array
will contain on the order of 1000 CRL URLs, and will add a new one (and age
out an entirely-expired one) every 6 hours or so. I am not aware of any
existing automation which updates CCADB at that frequency.

Further, from a resiliency perspective, we would prefer that the CRLs we
generate live at fully static paths. Rather than overwriting CRLs with new
versions when they are re-issued prior to their nextUpdate time, we would
leave the old (soon-to-be-expired) CRL in place, offer its replacement at
an adjacent path, and update the JSON to point at the replacement. This
process would have us updating the JSON array on the order of minutes, not
hours.
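
To illustrate the layout we have in mind (host name and path scheme here are
made up for the example, not a committed design):

    https://x.example/crls/042/2021-02-25T06:00Z.crl   <- previous issuance, left in place
    https://x.example/crls/042/2021-02-26T06:00Z.crl   <- current issuance of shard 042

    [
      "https://x.example/crls/041/2021-02-26T06:00Z.crl",
      "https://x.example/crls/042/2021-02-26T06:00Z.crl"
    ]

The JSON array always references the newest issuance of each shard, while
older issuances simply remain in place until they expire.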

We believe that the earlier "URL to a JSON array..." approach makes room for
significantly simpler automation on behalf of CAs without significant
loss of auditability. I believe it may be helpful for the CCADB field
description (or any upcoming portion of the MRSP which references it) to
include specific requirements around the cache lifetime of the JSON
document and the CRLs referenced within it.

Thanks,
Aaron

On Wed, Feb 24, 2021 at 12:36 PM Kathleen Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> All,
>
> As previously discussed, there is a section on root and intermediate
> certificate pages in the CCADB called ‘Pertaining to Certificates Issued
> by this CA’, and it currently has one field called 'Full CRL Issued By
> This CA'.
>
> Proposal: Add field called 'JSON Array of Partitioned CRLs Issued By
> This CA'
>
> Description of this proposed field:
> When there is no full CRL for certificates issued by this CA, provide a
> JSON array whose elements are URLs of partitioned, DER-encoded CRLs that
> when combined are the equivalent of a full CRL. The JSON array may omit
> obsolete partitioned CRLs whose scopes only include expired certificates.
>
> Example:
>
> [
>    "http://cdn.example/crl-1.crl",
>    "http://cdn.example/crl-2.crl"
> ]
>
>
>
> Additionally, I propose adding a new section to
> https://www.ccadb.org/cas/fields called “Revocation Information”.
>
> The proposed draft for this new section is here:
>
> https://docs.google.com/document/d/1uVK0h4q5BSrFv6e86f2SwR5m2o9Kl1km74vG4HnkABw/edit?usp=sharing
>
>
> I will appreciate your input on this proposal.
>
> Thanks,
> Kathleen
>
>


Re: Policy 2.7.1: MRSP Issue #218: Clarify CRL requirements for End Entity Certificates

2021-01-25 Thread Aaron Gable via dev-security-policy
I think that an explicit carve-out for time-scoped CRLs is a very good idea.

In the case that this change to the MRSP is adopted, I suspect that LE
would scope CRLs by notAfter quite tightly, with perhaps one CRL per 24 or
even 6 hours of issuance. We would pick a small interval such that we could
guarantee that each CRL would still be a reasonable size even in the face
of a mass revocation event.
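
Concretely, "scoping by notAfter" just means the shard identifier is derived
from the certificate's expiration window, along these lines (a sketch under
assumed parameters, not a committed design):

    // Illustrative: each CRL shard covers one fixed-width window of notAfter
    // values. Once a window is entirely in the past, every certificate in
    // the shard has expired and the shard can be aged out of the JSON array.
    package timeshard

    import "time"

    const Window = 6 * time.Hour // assumed shard width

    // ShardID returns the start of the expiry window containing notAfter.
    func ShardID(notAfter time.Time) time.Time {
        return notAfter.UTC().Truncate(Window)
    }

    // Obsolete reports whether a shard can be dropped: true once everything
    // in its scope is already expired.
    func Obsolete(shard, now time.Time) bool {
        return now.After(shard.Add(Window))
    }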

Producing CRLs at that rate, it would be very valuable to be able to
gracefully age CRLs out once there is no possibility for a revocation
status update for any certificate in their scope.

Aaron

On Sun, Jan 24, 2021 at 10:22 AM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> All,
>
> Another suggestion came in for clarification that hasn't been raised on the
> list yet, so I'll try and explain the scenario here.
>
> Normally, a CA must publish and update its CRLs until the end of the CA
> certificate's expiration. However, I think that some CAs partition their
> CRLs based on issuance time, e.g., all certificates issued during January
> 2021. And all of those certificates would expire after the applicable
> validity period.  I think CAs don't continue to regenerate or reissue those
> types of partitioned CRLs which only contain certificates that have
> expired.  So maybe we need to add an express exception that allows CAs to
> omit those obsolete CRLs from the JSON file -- as long as the JSON file
> contains the equivalent of  a "full and complete" CRL.
>
> Thoughts?
>
> Thanks,
> Ben
>
>
> On Wed, Jan 13, 2021 at 8:57 AM Rob Stradling  wrote:
>
> > Hi Ben.
> >
> > > *A CA technically capable of issuing server certificates MUST ensure
> > that
> > > the CCADB field "Full CRL Issued By This CA" contains either the URL
> for
> > > the full and complete CRL or the URL for the JSON file containing all
> > URLs
> > > for CRLs that when combined are the equivalent of the full and complete
> > CRL*
> >
> > As a consumer of this data (crt.sh), I'd much prefer to see "Full CRL
> > Issued By This CA" and "the URL for the JSON file" as 2 separate fields
> in
> > the CCADB.  CAs would then be expected to fill in one field or the other,
> > but not both.  Is that possible?
> >
> > To ensure that these JSON files can be programmatically parsed, I suggest
> > specifying the requirement a bit more strictly.  Something like this:
> >   "...or the URL for a file that contains only a JSON Array, whose
> > elements are URLs of DER encoded CRLs that when combined are the
> equivalent
> > of a full and complete CRL"
> >
> > > I propose that this new CCADB field be populated in all situations
> where
> > the CA is enabled for server certificate issuance.
> >
> > Most Root Certificates are "enabled for server certificate issuance".
> > Obviously CAs shouldn't issue leaf certs directly from roots, but
> > nonetheless the technical capability does exist.  However, currently CAs
> > can't edit Root Certificate records in the CCADB, which makes populating
> > these new field(s) "in all situations" rather hard.
> >
> > Since OneCRL covers revocations of intermediate certs, may I suggest that
> > CAs should only be required to populate these new field(s) in
> intermediate
> > certificate records (and not in root certificate records)?
> >
> > Relatedly, "A CA technically capable of...that the CCADB field" seems
> > wrong.  CCADB "CA Owner" records don't/won't contain the new field(s).
> > Similar language elsewhere in the policy (section 5.3.2) says "All
> > certificates that are capable of being used to..." (rather than "All
> > CAs...").
> >
> > Technically-constrained intermediate certs don't have to be disclosed to
> > CCADB, but "in all situations where the CA is enabled for server
> > certificate issuance" clearly includes technically-constrained
> > intermediates.  How would a CA populate the "Full CRL Issued By This CA"
> > field for a technically-constrained intermediate cert that has
> > (legitimately) not been disclosed to CCADB?
> >
> > --
> > *From:* dev-security-policy <
> dev-security-policy-boun...@lists.mozilla.org>
> > on behalf of Ben Wilson via dev-security-policy <
> > dev-security-policy@lists.mozilla.org>
> > *Sent:* 08 January 2021 01:00
> > *To:* mozilla-dev-security-policy <
> > mozilla-dev-security-pol...@lists.mozilla.org>
> > *Subject:* Policy 2.7.1: MRSP Issue #218: Clarify CRL requirements for
> > End Entity Certificates
> >
> > CAUTION: This email originated from outside of the organization. Do not
> > click links or open attachments unless you recognize the sender and know
> > the content is safe.
> >
> >
> > This is the last issue that I have marked for discussion in relation to
> > version 2.7.1 of the Mozilla Root Store Policy

Re: Extending Android Device Compatibility for Let's Encrypt Certificates

2021-01-07 Thread Aaron Gable via dev-security-policy
In cases where we expect OpenSSL to be validating the chain, we expect that
ISRG Root X1 is also in the trust store (unlike older versions of Android,
where we know that it hasn't been added). As such, there will be two
certificates in the chain which are also in the local trust store: ISRG
Root X1 and the expired DST Root CA X3.

It is my understanding that OpenSSL 1.1.0+, with `trusted_first` as the
default chain-building behavior, will go through the following steps:
1) Receive the chain "EE <-- R3 <-- ISRG Root X1 (cross-signed by DST Root
CA X3)" from the server
2) Look to see if it can complete this chain using certificates from
`-CAfile`, `-CApath`, or `-trusted`
3) See that ISRG Root X1 is already trusted
4) Return this chain, which successfully verifies.
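
One way to sanity-check this behavior from the command line (file names are
placeholders, not real paths) is something like:

    openssl verify -trusted_first -CAfile trust-store.pem -untrusted chain.pem leaf.pem

where trust-store.pem contains ISRG Root X1 (plus the expired DST Root CA X3)
and chain.pem contains the intermediates served by the site; on OpenSSL
1.1.0+ this should succeed because the chain is completed at the trusted ISRG
Root X1 before the expired cross-sign is ever considered.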

The evidence that this works on OpenSSL 1.1.0+ comes from the very similar
situation this past May. In that case, many servers were serving the chain
"EE <-- Sectigo RSA Domain Validation Secure Server CA <-- USERTrust RSA
Certification Authority <-- AddTrust External CA Root". In that situation,
both the USERTrust RSA Certification Authority and the AddTrust External CA
Root were in various trust stores, and then the AddTrust External CA Root
expired. Clients which were using OpenSSL 1.1.0+ did not begin to fail at
that time, because they were still able to trust the USERTrust RSA
Certification Authority. Clients using OpenSSL 1.0.x were failing, because
they couldn't recognize that one of the intermediates in the chain was in
their own trust store.

If this understanding is incorrect or missing something, we'd love to be
informed.

Thanks again,
Aaron

On Thu, Jan 7, 2021 at 1:10 AM Kurt Roeckx via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 2021-01-07 01:48, Aaron Gable wrote:
> > As mentioned in the blog post, and as we'll elaborate on further in an
> > upcoming post, one of the drawbacks of this arrangement is that there
> > actually is a class of clients for which chaining to an expired root
> > doesn't work: versions of OpenSSL prior to 1.1. This is the same failure
> > mode as various clients ran into on May 30th of 2020, when the AddTrust
> > External CA root expired.
>
> I'm not sure why you mention OpenSSL prior to 1.1. There was a bug in
> 1.1.1h that no longer checked for expired roots, but it was fixed in
> 1.1.1i. OpenSSL has no plan to allow expired roots by default.
>
>
> Kurt


Re: Extending Android Device Compatibility for Let's Encrypt Certificates

2021-01-06 Thread Aaron Gable via dev-security-policy
As mentioned in the blog post, and as we'll elaborate on further in an
upcoming post, one of the drawbacks of this arrangement is that there
actually is a class of clients for which chaining to an expired root
doesn't work: versions of OpenSSL prior to 1.1. This is the same failure
mode as various clients ran into on May 30th of 2020, when the AddTrust
External CA root expired.

For the sake of public feedback, the following is the profile which we
intend to have the new cross-sign issued with:
* Subject and Subject Public Key Info: Identical to the self-signed ISRG
Root X1 (https://crt.sh/?id=9314791) of course
* Validity: Three years from the date of issuance
* Basic Constraints: CA:TRUE, and no pathlen set (same as self-signed ISRG
Root X1)
* Key Usage: Cert Sign and CRL Sign (same as self-signed ISRG Root X1)
* EKUs: none, as this cross-sign shares the same name and pubkey as an
existing root certificate (BRs 7.1.2.2)
* AIA issuer url: http://apps.identrust.com/roots/dstrootcax3.p7c (same as
R3)
* CRL Distribution URL: http://crl.identrust.com/DSTROOTCAX3CRL.crl (same
as R3)
* Certificate Policies: 2.23.140.1.2.1 and 1.3.6.1.4.1.44947.1.1.1 (same as
R3)

Thank you,
Aaron

On Tue, Jan 5, 2021 at 7:34 PM Man Ho (Certizen) via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I'm curious whether this approach of cross-signing from a root
> certificate which has already expired is exceptional for Let's Encrypt.
> I'm not aware of any discussion on what conditions this approach could
> be accepted by Mozilla and other root certificate programs. Or, is it
> just an usual practice of CA? If yes, this approach may provide some new
> solutions in the CA ecosystem.
>
> Firstly, for those new CAs who do not have their root certificates
> included in the root certificate programs, they may acquire an expired
> root certificate from an existing CA, which is probably more willing to
> sell the expired root certificate rather than an active root certificate.
>
> Secondly, for some CAs whose root certificates are going to expire, they
> may continue using the root certificates to issue intermediate CA
> certificates beyond their expiry. So, there will be no need for rollover
> of root certificates to new one.
>
> Are they good or bad things?
>
>
> On 22-Dec-20 7:42 AM, jo...--- via dev-security-policy wrote:
> > We (Let's Encrypt) just announced a new cross-sign from IdenTrust which
> is a bit unusual because it will extend beyond the expiration of the
> issuing root. More details can be found here:
> >
> > https://letsencrypt.org/2020/12/21/extending-android-compatibility.html
> >
> > Best,
> > Josh


Re: Policy 2.7.1: MRSP Issue #211: Align OCSP requirements in Mozilla's policy with the BRs

2020-12-17 Thread Aaron Gable via dev-security-policy
As an individual, I'd prefer that the Mozilla root program requirements
incorporate the entirety of BR§4.9.10 by reference, i.e. I prefer option
(2).

I prefer (2) over (1) because it makes it easier to "diff" the respective
documents. Given that MRSP§6 appears to be strictly looser than BR§4.9.10,
retaining both causes every reader to have to prove that fact to
themselves. Incorporating by reference means readers can read a single
section and know they have all the relevant information.

I prefer (2) over (3) for much the same reason -- incorporating language
directly (which as you say, might diverge over time) creates additional
burden on readers without providing meaningful benefit.

I prefer (2) over (4) because of the structure of BR§4.9.10. Namely, that
"subsections (1) through (4)" is not well-defined, as there are two sets of
items labeled (1) through (3). In addition, the current subsections (1)
through (4) are encapsulated in a "Effective 2020-09-30" block, which
suggests that the next revision of BR§4.9.10 would likely include a similar
encapsulation, which may confuse the issue of "which subsections are you
referring to?" even further.

One potential option (5) would be to go even further than (2), and remove
the OCSP paragraph from the MRSP§6 entirely. Given that MRSP§2.3 says "CA
operations relating to issuance of certificates capable of being used for
SSL-enabled servers MUST also conform to the latest version of the [BRs]",
it seems clear that BR§4.9.10 is already included in its entirety. You
could update MRSP§2.3 to say "...relating to issuance and revocation..." if
you wanted to be even more explicit.

Thanks,
Aaron

On Wed, Dec 16, 2020 at 3:46 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> This discussion is related to Issue #211 on GitHub.
>
> Effective September 30, 2020, as a result of the Browser Alignment Ballot,
> section 4.9.10 of the CA/Browser Forum’s Baseline Requirements (BR§4.9.10) says
> that a CA’s OCSP responses must meet certain requirements. The purpose of
> this email is to determine whether changes should be made to section 6 of
> the Mozilla Root Store Policy
> <https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#6-revocation>
> (MRSP§6) to bring it closer to the Forum’s requirements for OCSP responses.
> There are a few possible paths forward, including:
>
> *Option 1* - Leave “as is” because MRSP§6 doesn’t conflict with the
> Baseline Requirements.  MRSP§6 currently says,
>
> “For end-entity certificates, if the CA provides revocation information via
> an Online Certificate Status Protocol (OCSP) service:
>
> · it MUST update that service at least every four days; and
>
> · responses MUST have a defined value in the nextUpdate field, and it MUST
> be no more than ten days after the thisUpdate field; and
>
> · the value in the nextUpdate field MUST be before or equal to the notAfter
> date of all certificates included within the BasicOCSPResponse.certs field
> or, if the certs field is omitted, before or equal to the notAfter date of
> the CA certificate which issued the certificate that the BasicOCSPResponse
> is for.”
>
> *Option 2* - MRSP§6 could simply incorporate by reference all of BR§4.9.10,
> but then quite a few new OCSP requirements would also be adopted, some of
> which people might find useful.
>
>
>
> *Option 3* - Paste only language from BR§4.9.10’s subsections (1) through
> (4) (for Subscriber/End-Entity Certificates) into MRSP§6.  Those four
> subsections state:  “(1) OCSP responses MUST have a validity interval
> greater than or equal to eight hours; (2) OCSP responses MUST have a
> validity interval less than or equal to ten days; (3) For OCSP responses
> with validity intervals less than sixteen hours, then the CA SHALL update
> the information provided via an Online Certificate Status Protocol prior to
> one-half of the validity period before the nextUpdate; and (4) For OCSP
> responses with validity intervals greater than or equal to sixteen hours,
> then the CA SHALL update the information provided via an Online Certificate
> Status Protocol at least eight hours prior to the nextUpdate, and no later
> than four days after the thisUpdate.”  The drawback of this approach would
> come when trying to synchronize the language—it would not be in lockstep
> with relevant changes to the BRs.
>
>
>
> *Option 4* - Amend MRSP§6 to just incorporate by reference the above
> subsections, i.e., “subsections (1) through (4) of BR§4.9.10, which deal
> with the OCSP status responses for Subscriber Certificates, are hereby
> incorporated by reference”.  This approach has a similar drawback if
> additional subsections are added, and it doesn’t include other language in
> BR§4.9.10 that some might find useful.



Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name verification to 398 days

2020-12-02 Thread Aaron Gable via dev-security-policy
One potential approach would be to make it so that issuances after July 1,
2021 require a validation no more than 398 days old. The currently-proposed
wording ("verify that each dNSName or IPAddress is current and correct at
intervals of 398 days or less") lends itself to that interpretation, it
just needs to be applied to issuances rather than validations. I would
propose wording like:

5.  verify that all of the information that is included in SSL certificates
remains current and correct at intervals of 825 days or less;
5.1. for SSL certificates issued on or after July 1, 2021, verify that each
dNSName or IPAddress is current and correct at intervals of 398 days or
less;
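
To make the effect concrete: under this wording, a certificate issued exactly
on July 1, 2021 could rely on a domain validation performed no earlier than
May 29, 2020 (398 days prior), while one issued on June 30, 2021 could still
rely on the full 825-day window. A trivial sketch of the implied cutoff
(illustrative only):

    // Illustrative: the oldest validation usable for a certificate issued at
    // a given time, under the proposed 5/5.1 wording applied to issuance dates.
    package reuse

    import "time"

    var cutover = time.Date(2021, time.July, 1, 0, 0, 0, 0, time.UTC)

    func OldestUsableValidation(issuance time.Time) time.Time {
        if issuance.Before(cutover) {
            return issuance.AddDate(0, 0, -825) // pre-cutover: 825-day re-use
        }
        return issuance.AddDate(0, 0, -398) // on/after July 1, 2021: 398-day re-use
    }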

Aaron


On Wed, Dec 2, 2020 at 1:22 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> See my responses inline below.
>
> On Tue, Dec 1, 2020 at 1:34 PM Ryan Sleevi  wrote:
>
> >
> >
> > On Tue, Dec 1, 2020 at 2:22 PM Ben Wilson via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> >
> >> See responses inline below:
> >>
> >> On Tue, Dec 1, 2020 at 11:40 AM Doug Beattie <
> doug.beat...@globalsign.com
> >> >
> >> wrote:
> >>
> >> > Hi Ben,
> >> >
> >> > For now I won’t comment on the 398 day limit or the date which you
> >> propose
> >> > this to take effect (July 1, 2021), but on the ability of CAs to
> re-use
> >> > domain validations completed prior to 1 July for their full 825 re-use
> >> > period.  I'm assuming that the 398 day limit is only for those domain
> >> > validated on or after 1 July, 2021.  Maybe that is your intent, but
> the
> >> > wording is not clear (it's never been all that clear)
> >> >
> >>
> >> Yes. (I agree that the wording is currently unclear and can be improved,
> >> which I'll work on as discussion progresses.)  That is the intent - for
> >> certificates issued beginning next July--new validations would be valid
> >> for
> >> 398 days, but existing, reused validations would be sunsetted and could
> be
> >> used for up to 825 days (let's say, until Oct. 1, 2023, which I'd advise
> >> against, given the benefits of freshness provided by re-performing
> methods
> >> in BR 3.2.2.4 and BR 3.2.2.5).
> >>
> >
> > Why? I have yet to see a compelling explanation from a CA about why
> > "grandfathering" old validations is good, and as you note, it undermines
> > virtually every benefit that is had by the reduction until 2023.
> >
>
> I am open to the idea of cutting off the tail earlier, at let's say,
> October 1, 2022, or earlier (see below).  I can work on language that does
> that.
>
>
> >
> > Ben, could you explain the rationale why this is better than the simpler,
> > clearer, and immediately beneficial for Mozilla users of requiring new
> > validations be conducted on-or-after 1 July 2021? Any CA that had
> concerns
> > would have ample time to ensure there is a fresh domain validation before
> > then, if they were concerned.
> >
>
> I don't have anything yet in particular with regard to a
> pros-cons/benefits-analysis-supported rationale, except that I expect
> push-back from SSL/TLS certificate subscribers and from CAs on their
> behalf. You're right, CAs could take the time between now and July 1st to
> obtain 398-day validations, but my concern is with the potential push-back.
>
> Also, as I mentioned before, I am interested in proposing this as a ballot
> in the CA/Browser Forum and seeing where it goes. I realize that this issue
> might come with added baggage from the history surrounding the
> validity-period changes that were previously defeated in the Forum, but it
> would still be good to see it adopted there first. Nonetheless, this issue
> is more than ripe enough to be resolved here by Mozilla as well.
>
>
>
> >
> > Doug, could you explain more about why it's undesirable to do that?
> >
> >
> >> >
> >> > Could you consider changing it to read more like this (feel free to
> edit
> >> > as needed):
> >> >
> >> > CAs may re-use domain validation for subjectAltName verifications of
> >> > dNSNames and IPAddresses done prior to July 1, 2021 for up to 825 days
> >> > <in accordance with domain validation re-use in the BRs, section 4.2.1>.
> >> CAs
> >> > MUST limit domain re-use for subjectAltName verifications of dNSNames
> >> and
> >> > IPAddresses to 398 days for domains validated on or after July 1,
> 2021.
> >> >
> >>
> >> Thanks. I am open to all suggestions and improvements to the language.
> >> I'll
> >> see how this can be phrased appropriately to reduce the chance for
> >> misinterpretation.
> >>
> >
> > As noted above, I think adopting this wording would prevent much (any?)
> > benefit from being achieved until 2023. I can understand that 2023 is
> > better than "never", but I'm surprised to see an agreement so quickly to
> > that being desirable over 2021. I suspect there's ample context here,
> but I
> > think it would benefit from discussion.
> >
>
> The language needs to be worked on some more.  As I note above, we should
>