Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-25 Thread Ryan Sleevi via dev-security-policy
On Thu, Feb 25, 2021 at 8:21 PM Aaron Gable  wrote:

> If I may, I believe that the problem is less that it is a reference (which
> is true of every URL stored in CCADB), and more that it is a reference to
> an unsigned object.
>

While that's a small part of it, the core issue really is, as I said, the
fact that it's a reference. We've already had this issue with the other URL fields, and thus
there exists logic to dereference and archive those URLs within CCADB.
Issues like audit statements, CP, and CPSes are all things that are indeed
critical to understanding the posture of a CA over time, and so actually
having those materials in something stable and maintained (without a
dependence on the CA) is important.  It's the lesson from those
various past failure modes that had Google very supportive of the non-URL
based approach, putting the JSON directly in CCADB, rather than forcing yet
another "update-and-fetch" system.. You're absolutely correct
that the "configured by CA" element has the nice property of being assured
that the change came from the CA themselves, without requiring signing, but
I wouldn't want to reduce the concern to just that.

> * I'm not aware of any other automation system with write-access to CCADB
> (I may be very wrong!), and I imagine there would need to be some sort of
> further design discussion with CCADB's maintainers about what it means to
> give write credentials to an automated system, what sorts of protections
> would be necessary around those credentials, how to scope those credentials
> as narrowly as possible, and more.
>

We already have automation for CCADB. CAs can and do use it for disclosure
of intermediates.


> * I'm not sure CCADB's maintainers want updates to it to be in the
> critical path of ongoing issuance, as opposed to just in the critical path
> for beginning issuance with a new issuer.
>

Without wanting to sound dismissive, whether or not it's in a critical path
of updating is a consequence of the CA's own design choices. I understand
that there are designs that could put it there; I think the question is
whether it's reasonable for the CA to have done that in the first place,
which is why it's important to drill down into these concerns. I know you
merely qualified it as undesirable rather than as an actual blocker, and I
appreciate that, but I do think some of these concerns are perhaps less
grounded or persuasive than others :)

Taking a step back here, I think there's a fundamental error in your
proposed design, and I think that it, combined with the (existing)
automation, may mean much of this isn't actually the issue you anticipate.

Since we're talking Let's Encrypt, the assumption here is that the CRL URLs
will not be present within the crlDistributionPoints of the certificates;
otherwise, this entire discussion is fairly moot, since those
crlDistributionPoints can be obtained directly from Certificate
Transparency.

The purpose of this field is to help discover CRLs that are otherwise not
discoverable (e.g. from CT), but this also means that these CRLs do not
suffer from the same design limitations of PKI. Recall that there's nothing
intrinsic to a CRL that expresses its sharding algorithm (ignoring, for a
second, reasonCodes within the IDP extension). The only thing an external
(not-the-CA) party, whether the Subscriber or the RP, can observe is that
"the CRLDP for this certificate is different from the CRLDP for that
certificate". How the CA sharded is otherwise opaque, even if, given a
large enough corpus from CT, you can infer the algorithm from the pattern.
Further, when such shards are in use, you can determine whether a given
CRL in your possession (whose provenance may be unknown) covers a given
certificate by matching the CRLDP of the cert against the IDP of the CRL.
But we're talking about a scenario in which the certificate lacks a CRLDP,
and so there's no way to know unambiguously that a given CRL "covers" the
certificate. The only thing we have is the CRL having an IDP, because if
it didn't, it'd have to be a full CRL, and then you'd be back to only
having one URL to worry about.

Because of all of this, the consumers of this JSON are expected to combine
all of the CRLs present, union all the revoked serials, and be done with
it. However, it's that unioning that I think you've overlooked in working
out your math. In the "classic" PKI sense (i.e. CRLDP present), the CA has
to plan for revocation for the lifetime of the certificate: the CRLDP is
fixed when the certificate is created and immutable thereafter. Further,
changes in revocation frequency mean you need to produce new versions of
that specific CRL. In the scenario we're discussing, however, in which
these CRLs are unioned, you're entirely flexible at all points in time in
how you balance your CRLs. Further, in the 'ideal'
case (no revocations), you need only produce a single empty CRL. There's no
need to produce an 
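To illustrate the unioning described above, here is a minimal consumer-side
sketch in Go. It is purely illustrative: the JSON URL is hypothetical, the
array-of-URL-strings shape follows the example in Kathleen's proposal, and a
real consumer would also have to verify each CRL's signature and IDP against
the issuing CA before trusting it.

package main

import (
	"crypto/x509"
	"encoding/json"
	"fmt"
	"io"
	"math/big"
	"net/http"
)

// fetchCRLURLs retrieves the JSON document and decodes it as a bare JSON
// array of CRL URLs, per the proposed CCADB field format.
func fetchCRLURLs(jsonURL string) ([]string, error) {
	resp, err := http.Get(jsonURL)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	var urls []string
	if err := json.Unmarshal(body, &urls); err != nil {
		return nil, err
	}
	return urls, nil
}

// unionRevokedSerials downloads every partitioned CRL and unions the revoked
// serial numbers, which is all a consumer can do when certificates carry no
// CRLDP to match against an IDP.
func unionRevokedSerials(urls []string) (map[string]*big.Int, error) {
	revoked := make(map[string]*big.Int)
	for _, u := range urls {
		resp, err := http.Get(u)
		if err != nil {
			return nil, err
		}
		der, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return nil, err
		}
		crl, err := x509.ParseRevocationList(der) // DER-encoded CRL
		if err != nil {
			return nil, err
		}
		// NOTE: signature verification against the issuing CA certificate is
		// omitted here; a real consumer must perform it.
		for _, entry := range crl.RevokedCertificateEntries { // Go 1.21+
			revoked[entry.SerialNumber.String()] = entry.SerialNumber
		}
	}
	return revoked, nil
}

func main() {
	// Hypothetical location of the CA's JSON array of partitioned CRL URLs.
	urls, err := fetchCRLURLs("https://example.invalid/crl-shards.json")
	if err != nil {
		panic(err)
	}
	serials, err := unionRevokedSerials(urls)
	if err != nil {
		panic(err)
	}
	fmt.Printf("revoked serials across all shards: %d\n", len(serials))
}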

Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-25 Thread Aaron Gable via dev-security-policy
Similarly, snipping and replying to portions of your message below:

On Thu, Feb 25, 2021 at 12:52 PM Ryan Sleevi  wrote:

> Am I understanding your proposal correctly that "any published JSON
> document be valid for a certain period of time" effectively means that each
> update of the JSON document also gets a distinct URL (i.e. same as the
> CRLs)?
>

No, the (poorly expressed) idea is this: suppose you fetch our
rapidly-changing document and get version X. Over the next five minutes,
you fetch every CRL URL in that document. But during that same five
minutes, we've published versions X+1 and X+2 of that JSON document at that
same URL. There should be a guarantee that, as long as you fetch the CRLs
in your document "fast enough" (for some to-be-determined value of "fast"),
all of those URLs will still be valid (i.e. not return a 404 or similar),
*even though* some of them are not referenced by the most recent version of
the JSON document.
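To make that guarantee concrete, here is a minimal sketch, in Go and purely
illustrative, of the retention rule a CA-side cleanup job could apply; the
grace-period value and the bookkeeping are assumptions for the example, not
something we have committed to. A previously published CRL may only be
removed once it has been unreferenced by the current JSON document for
longer than the agreed fetch window.

package crlretention

import "time"

// fetchWindow is the to-be-determined "fast enough" interval that consumers
// are given to fetch every URL in a JSON document they have read. The value
// here is hypothetical.
const fetchWindow = 15 * time.Minute

// CanDelete reports whether a previously published CRL file may be removed.
// unreferencedSince is when the file stopped appearing in the current JSON
// document; the zero value means it is still referenced.
func CanDelete(unreferencedSince, now time.Time) bool {
	if unreferencedSince.IsZero() {
		return false // still referenced by the latest JSON document
	}
	// Keep the file until every consumer that saw an older JSON document has
	// had the full fetch window to retrieve it.
	return now.Sub(unreferencedSince) > fetchWindow
}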

This may seem like a problem that arises only in our rapidly-changing JSON
version of things. But I believe it should be a concern even in the system
as proposed by Kathleen: when a CA updates the JSON array contained in
CCADB, how long does a consumer of CCADB have to get a snapshot of the
contents of the previous set of URLs? To posit an extreme hypothetical, can
a CA hide misissuance of a CRL by immediately hosting their fixed CRL at a
new URL and updating their CCADB JSON list to include that new URL instead?
Not to put too fine a point on it, but I believe that this sort of
hypothetical is the underlying worry about having the JSON list live
outside CCADB where it can be changed on a whim, but I'm not sure that
having the list live inside CCADB without any requirements on the validity
of the URLs inside it provides significantly more auditability.

> The issue I see with the "URL stored in CCADB" is that it's a reference,
> and the dereferencing operation (retrieving the URL) puts the onus on the
> consumer (e.g. root stores) and can fail, or result in different content
> for different parties, undetectably.
>

If I may, I believe that the problem is less that it is a reference (which
is true of every URL stored in CCADB), and more that it is a reference to
an unsigned object. URLs directly to CRLs don't have this issue, because
the CRL is signed. And storing the JSON array directly doesn't have this
issue, because it is implicitly signed by the credentials of the user who
signed in to CCADB to modify it. One possible solution here would be to
require that the JSON document be signed by the same CA certificate which
issued all of the CRLs contained in it. I don't think I like this solution,
but it is within the possibility space.


> If there is an API that allows you to modify the JSON contents directly
> (e.g. a CCADB API call you could make with an OAuth token), would that
> address your concern?
>

If Mozilla and the other stakeholders in CCADB decide to go with this
thread's proposal as-is, then I suspect that yes, we would develop
automation to talk to CCADB's API in exactly this way. This is undesired
from our perspective for a variety of reasons:
* I'm not aware of a well-maintained Go library for interacting with the
Salesforce API.
* I'm not aware of any other automation system with write-access to CCADB
(I may be very wrong!), and I imagine there would need to be some sort of
further design discussion with CCADB's maintainers about what it means to
give write credentials to an automated system, what sorts of protections
would be necessary around those credentials, how to scope those credentials
as narrowly as possible, and more.
* I'm not sure CCADB's maintainers want updates to it to be in the critical
path of ongoing issuance, as opposed to just in the critical path for
beginning issuance with a new issuer.

> I think the question was with respect to the frequency of change of those
> documents.
>

Frankly, I think the least frequent creation of a new time-sharded CRL we
would be willing to do is once every 24 hours (that's still >60MB per CRL
in the worst case). That's going to require automation no matter what.


> There is one thing you mentioned that's also non-obvious to me, because I
> would expect you already have to deal with this exact issue with respect to
> OCSP, which is "overwriting files is a dangerous operation prone to many
> forms of failure". Could you expand more about what some of those
> top-concerns are? I ask, since, say, an OCSP Responder is frequently
> implemented as "Spool /ocsp/:issuerDN/:serialNumber", with the CA
> overwriting :serialNumber whenever they produce new responses. It sounds
> like you're saying that common design pattern may be problematic for y'all,
> and I'm curious to learn more.
>

Sure, happy to expand. For those following along at home, this last bit is
relatively off-topic compared to the other sections above, so skip if you
feel like it :)

OCSP consists of hundreds of millions of small entries. Thus our 

Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name verification to 398 days

2021-02-25 Thread Clint Wilson via dev-security-policy
I think it makes sense to separate out the date for domain validation 
expiration from the issuance of server certificates with previously validated 
domain names, but agree with Ben that the timeline doesn’t seem to need to be 
prolonged. What about something like this:

1. Domain name or IP address verifications performed on or after July 1, 2021 
may be reused for a maximum of 398 days.
2. Server certificates issued on or after September 1, 2021 must have completed 
domain name or IP address verification within the preceding 398 days.

This effectively stretches the “cliff” out across ~6 months (now through the 
end of August), which seems reasonable.

Cheers!
-Clint

> On Feb 25, 2021, at 11:40 AM, Ben Wilson via dev-security-policy 
>  wrote:
> 
> Yes - I think we could focus on the domain validations themselves and allow
> domain validations to be reused for 398 days (maybe even from December 6,
> 2019), and then combine that with certificate issuance, but I'm not sure I
> like pushing this out to Feb 1, 2022 or even Oct. 1, 2021.  Maybe someone
> else on the list has a suggested formula?
> 
> On Thu, Feb 25, 2021 at 12:29 PM Doug Beattie 
> wrote:
> 
>> Ben,
>> 
>> I'd prefer that we tie this to a date related to when the domain
>> validations are done, or perhaps 2 statements.  As it stands (and as others
>> have commented), on July 1 all customers will immediately need to validate
>> all domains that were done between 825 and 397 days ago, so a huge number
>> all at once for web site owners and for CAs.
>> 
>> I'd prefer that it says " Domain validations performed from July 1, 2021
>> may be reused for a maximum of 398 days ".  I understand that this
>> basically kick the can down the road for an extra year and that may not be
>> acceptable, so, maybe we specify 2 dates:
>> 
>> 1)  Domain validations performed on or after July 1, 2021 may be reused
>> for a maximum of 398 days.
>> 
>> 2)  for server certificates issued on or after Feb 1, 2022, each dNSName
>> or IPAddress in a SAN must have been validated within the prior 398 days
>> 
>> Is that a compromise you could consider?
>> 
>> Doug
>> 
>> 
>> -Original Message-
>> From: dev-security-policy 
>> On Behalf Of Ben Wilson via dev-security-policy
>> Sent: Thursday, February 25, 2021 2:08 PM
>> To: Mozilla 
>> Subject: Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name
>> verification to 398 days
>> 
>> All,
>> 
>> I continue to move this Issue #206 forward with a proposed change to
>> section 2.1 of the MRSP (along with an effort to modify section 3.2.2.4 or
>> section 4.2.1 of the CA/B Forum's Baseline Requirements).
>> 
>> Currently, I am still contemplating adding a subsection 5.1 to MRSP section
>> 2.1 that would read,
>> " 5.1. for server certificates issued on or after July 1, 2021, verify
>> each dNSName or IPAddress in a SAN or commonName at an interval of 398 days
>> or less;"
>> 
>> See draft language here
>> 
>> https://github.com/BenWilson-Mozilla/pkipolicy/commit/69bddfd96d1d311874c35c928abdfc13dc11aba3
>> 
>> 
>> Ben
>> 
>> On Wed, Dec 2, 2020 at 3:00 PM Ben Wilson  wrote:
>> 
>>> All,
>>> 
>>> I have started a similar, simultaneous discussion with the CA/Browser
>>> Forum, in order to gain traction.
>>> 
>>> 
>>> 
>>> https://lists.cabforum.org/pipermail/servercert-wg/2020-December/002382.html
>>> 
>>> Ben
>>> 
>>> On Wed, Dec 2, 2020 at 2:49 PM Jeremy Rowley
>>> 
>>> wrote:
>>> 
 Should this limit on reuse also apply to s/MIME? Right now, the 825
 day limit in Mozilla policy only applies to TLS certs with email
 verification of s/MIME being allowed for infinity time.  The first
 draft of the language looked like it may change this while the newer
 language puts back the TLS limitation. If it's not addressed in this
 update, adding clarification on domain verification reuse for SMIME
 would be a good improvement on the existing policy.
 
 -Original Message-
 From: dev-security-policy
 
 On Behalf Of Ben Wilson via dev-security-policy
 Sent: Wednesday, December 2, 2020 2:22 PM
 To: Ryan Sleevi 
 Cc: Doug Beattie ; Mozilla <
 mozilla-dev-security-pol...@lists.mozilla.org>
 Subject: Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain
 name verification to 398 days
 
 See my responses inline below.
 
 On Tue, Dec 1, 2020 at 1:34 PM Ryan Sleevi  wrote:
 
> 
> 
> On Tue, Dec 1, 2020 at 2:22 PM Ben Wilson via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> See responses inline below:
>> 
>> On Tue, Dec 1, 2020 at 11:40 AM Doug Beattie
>> >> 
>> wrote:
>> 
>>> Hi Ben,
>>> 
>>> For now I won’t comment on the 398 day limit or the date which
>>> you
>> propose
>>> this to take effect (July 1, 2021), but on the ability of CAs to
>>> re-use domain validations completed prior to 1 July for 

Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name verification to 398 days

2021-02-25 Thread Ryan Sleevi via dev-security-policy
On Thu, Feb 25, 2021 at 2:29 PM Doug Beattie via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I'd prefer that we tie this to a date related to when the domain
> validations are done, or perhaps 2 statements.  As it stands (and as others
> have commented), on July 1 all customers will immediately need to validate
> all domains that were done between 825 and 397 days ago, so a huge number
> all at once for web site owners and for CAs.
>

Isn't this only true if CAs use this discussion period to do nothing?

That is, couldn't CAs have started today (or even months ago) to revalidate
their customers more frequently, refreshing old validations, helping
transition customers to automated methods, etc.?

That is, is the scenario you described inherently bad, or just a
consequence of CA inaction? And is the goal to have zero impact, or, as
your proposal seems to acknowledge, do we agree that some impact is both
reasonable and acceptable, and that the only difference is the degree?


Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-25 Thread Ryan Sleevi via dev-security-policy
Hugely useful! Thanks for sharing - this is incredibly helpful.

I've snipped a good bit, just to keep the thread small, and have some
further questions inline.

On Thu, Feb 25, 2021 at 2:15 PM Aaron Gable  wrote:

> I believe that there is an argument to be made here that this plan
> increases the auditability of the CRLs, rather than decreases it. Root
> programs could require that any published JSON document be valid for a
> certain period of time, and that all CRLs within that document remain
> available for that period as well. Or even that historical versions of CRLs
> remain available until every certificate they cover has expired (which is
> what we intend to do anyway). Researchers can crawl our history of CRLs and
> examine revocation events in more detail than previously available.
>

So I think unpacking this a little: Am I understanding your proposal
correctly that "any published JSON document be valid for a certain period
of time" effectively means that each update of the JSON document also gets
a distinct URL (i.e. same as the CRLs)? I'm not sure if that's what you
meant, because it would still mean regularly updating CCADB whenever your
shard-set changes (which seems to be the concern), but at the same time, it
would seem that any validity requirement imposes on you a lower-bound for
how frequently you can change or introduce new shards, right?

The issue I see with the "URL stored in CCADB" is that it's a reference,
and the dereferencing operation (retrieving the URL) puts the onus on the
consumer (e.g. root stores) and can fail, or result in different content
for different parties, undetectably. If it was your proposal to change to
distinct URLs, that issue would still unfortunately exist.

If there is an API that allows you to modify the JSON contents directly
(e.g. a CCADB API call you could make with an OAuth token), would that
address your concern? It would allow CCADB to still canonically record the
change history and contents, facilitating that historic research. It would
also facilitate better compliance tracking - since we know policies like
"could require that any published JSON" don't really mean anything in
practice for a number of CAs, unless the requirements are also monitored
and enforced.


> Regardless, even without statically-pathed, timestamped CRLs, I believe
> that the merits of rolling time-based shards are sufficient to be a strong
> argument in favor of dynamic JSON documents.
>

Right, I don't think there's any fundamental opposition to that. I'm very
much in favor of time-sharded CRLs over hash-sharded CRLs, for exactly the
reasons you highlight. I think the question was with respect to the
frequency of change of those documents (i.e. how often you introduce new
shards, and how those shards are represented).

There is one thing you mentioned that's also non-obvious to me, because I
would expect you already have to deal with this exact issue with respect to
OCSP, which is "overwriting files is a dangerous operation prone to many
forms of failure". Could you expand more about what some of those
top-concerns are? I ask, since, say, an OCSP Responder is frequently
implemented as "Spool /ocsp/:issuerDN/:serialNumber", with the CA
overwriting :serialNumber whenever they produce new responses. It sounds
like you're saying that common design pattern may be problematic for y'all,
and I'm curious to learn more.


Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name verification to 398 days

2021-02-25 Thread Ben Wilson via dev-security-policy
Yes - I think we could focus on the domain validations themselves and allow
domain validations to be reused for 398 days (maybe even from December 6,
2019), and then combine that with certificate issuance, but I'm not sure I
like pushing this out to Feb 1, 2022 or even Oct. 1, 2021.  Maybe someone
else on the list has a suggested formula?

On Thu, Feb 25, 2021 at 12:29 PM Doug Beattie 
wrote:

> Ben,
>
> I'd prefer that we tie this to a date related to when the domain
> validations are done, or perhaps 2 statements.  As it stands (and as others
> have commented), on July 1 all customers will immediately need to validate
> all domains that were done between 825 and 397 days ago, so a huge number
> all at once for web site owners and for CAs.
>
> I'd prefer that it says " Domain validations performed from July 1, 2021
> may be reused for a maximum of 398 days ".  I understand that this
> basically kicks the can down the road for an extra year and that may not be
> acceptable, so, maybe we specify 2 dates:
>
> 1)  Domain validations performed on or after July 1, 2021 may be reused
> for a maximum of 398 days.
>
> 2)  for server certificates issued on or after Feb 1, 2022, each dNSName
> or IPAddress in a SAN must have been validated within the prior 398 days
>
> Is that a compromise you could consider?
>
> Doug
>
>
> -Original Message-
> From: dev-security-policy 
> On Behalf Of Ben Wilson via dev-security-policy
> Sent: Thursday, February 25, 2021 2:08 PM
> To: Mozilla 
> Subject: Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name
> verification to 398 days
>
> All,
>
> I continue to move this Issue #206 forward with a proposed change to
> section 2.1 of the MRSP (along with an effort to modify section 3.2.2.4 or
> section 4.2.1 of the CA/B Forum's Baseline Requirements).
>
> Currently, I am still contemplating adding a subsection 5.1 to MRSP section
> 2.1 that would read,
> " 5.1. for server certificates issued on or after July 1, 2021, verify
> each dNSName or IPAddress in a SAN or commonName at an interval of 398 days
> or less;"
>
> See draft language here
>
> https://github.com/BenWilson-Mozilla/pkipolicy/commit/69bddfd96d1d311874c35c928abdfc13dc11aba3
>
>
> Ben
>
> On Wed, Dec 2, 2020 at 3:00 PM Ben Wilson  wrote:
>
> > All,
> >
> > I have started a similar, simultaneous discussion with the CA/Browser
> > Forum, in order to gain traction.
> >
> > 
> >
> > https://lists.cabforum.org/pipermail/servercert-wg/2020-December/002382.html
> >
> > Ben
> >
> > On Wed, Dec 2, 2020 at 2:49 PM Jeremy Rowley
> > 
> > wrote:
> >
> >> Should this limit on reuse also apply to s/MIME? Right now, the 825
> >> day limit in Mozilla policy only applies to TLS certs with email
> >> verification of s/MIME being allowed for infinity time.  The first
> >> draft of the language looked like it may change this while the newer
> >> language puts back the TLS limitation. If it's not addressed in this
> >> update, adding clarification on domain verification reuse for SMIME
> >> would be a good improvement on the existing policy.
> >>
> >> -Original Message-
> >> From: dev-security-policy
> >> 
> >> On Behalf Of Ben Wilson via dev-security-policy
> >> Sent: Wednesday, December 2, 2020 2:22 PM
> >> To: Ryan Sleevi 
> >> Cc: Doug Beattie ; Mozilla <
> >> mozilla-dev-security-pol...@lists.mozilla.org>
> >> Subject: Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain
> >> name verification to 398 days
> >>
> >> See my responses inline below.
> >>
> >> On Tue, Dec 1, 2020 at 1:34 PM Ryan Sleevi  wrote:
> >>
> >> >
> >> >
> >> > On Tue, Dec 1, 2020 at 2:22 PM Ben Wilson via dev-security-policy <
> >> > dev-security-policy@lists.mozilla.org> wrote:
> >> >
> >> >> See responses inline below:
> >> >>
> >> >> On Tue, Dec 1, 2020 at 11:40 AM Doug Beattie
> >> >>  >> >> >
> >> >> wrote:
> >> >>
> >> >> > Hi Ben,
> >> >> >
> >> >> > For now I won’t comment on the 398 day limit or the date which
> >> >> > you
> >> >> propose
> >> >> > this to take effect (July 1, 2021), but on the ability of CAs to
> >> >> > re-use domain validations completed prior to 1 July for their
> >> >> > full
> >> >> > 825 re-use period.  I'm assuming that the 398 day limit is only
> >> >> > for those domain validated on or after 1 July, 2021.  Maybe that
> >> >> > is your intent, but the wording is not clear (it's never been
> >> >> > all that
> >> >> > clear)
> >> >> >
> >> >>
> >> >> Yes. (I agree that the wording is currently unclear and can be
> >> >> improved, which I'll work on as discussion progresses.)  That is
> >> >> the intent - for certificates issued beginning next July--new
> >> >> validations would be valid for
> >> >> 398 days, but existing, reused validations would be sunsetted and
> >> >> could be used for up to 825 days (let's say, until Oct. 1, 2023,
> >> >> which I'd advise against, given the benefits of freshness provided
> >> >> by re-performing methods in BR 3.2.2.4 and BR 3.2.2.5).
> >> 

Re: Policy 2.7.1: MRSP Issue #153: Cradle-to-Grave Contiguous Audits

2021-02-25 Thread Ben Wilson via dev-security-policy
I haven't seen any response to my question about whether there is still a
concern over the language "as evidenced by a Qualified Auditor's key
destruction report".
I did add "This cradle-to-grave audit requirement applies equally to
subordinate CAs as it does to root CAs" to address the scenarios that were
raised.
So I am going to assume that this issue is resolved and that we can move
this proposed change forward.
See
https://github.com/BenWilson-Mozilla/pkipolicy/commit/c8bdb949020634b1f8fa31bc060229c600fe6f9d

On Fri, Feb 12, 2021 at 11:16 AM Ben Wilson  wrote:

> All,
>
> The proposed change currently reads,
>
> "Full-surveillance period-of-time audits MUST be conducted and updated
> audit information provided no less frequently than annually from the time
> of CA key pair generation until the CA certificate is no longer trusted by
> Mozilla's root store or until all copies of the CA private key have been
> completely destroyed, as evidenced by a Qualified Auditor's key destruction
> report, whichever occurs sooner. This cradle-to-grave audit requirement
> applies equally to subordinate CAs as it does to root CAs. Successive
> period-of-time audits MUST be contiguous (no gaps)."
> But is the argument that I should also add something along these
> lines--"This cradle-to-grave audit requirement applies equally to ...,  *and
> an audit would be required for all subordinate CAs until their private keys
> have been completely destroyed as well*."?  Is that still the issue
> here?  Or has it already been resolved with the language further above?
>
> Thanks,
>
> Ben
>
> On Sun, Jan 24, 2021 at 12:55 PM Ben Wilson  wrote:
>
>> I agree that we should add language that makes it more clear that the key
>> destruction exception for audit only applies to the CA certificates whose
>> key has been destroyed.  I'm also hoping that a CAO wouldn't destroy a Root
>> CA key if there were still valid subordinate CAs that the CAO might need to
>> revoke.
>>
>> On Fri, Nov 6, 2020 at 10:49 AM Jakob Bohm via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>>> On 2020-11-05 22:43, Tim Hollebeek wrote:
>>> > So, I'd like to drill down a bit more into one of the cases you
>>> discussed.
>>> > Let's assume the following:
>>> >
>>> > 1. The CAO [*] may or may not have requested removal of the CAC, but
>>> removal
>>> > has not been completed.  The CAC is still trusted by at least one
>>> public
>>> > root program.
>>> >
>>> > 2. The CAO has destroyed the CAK for that CAC.
>>> >
>>> > The question we've been discussing internally is whether destruction
>>> alone
>>> > should be sufficient to get you out of audits, and we're very skeptical
>>> > that's desirable.
>>> >
>>> > The problem is that destruction of the CAK does not prevent issuance by
>>> > subCAs, so issuance is still possible.  There is also the potential
>>> > possibility of undisclosed subCAs or cross relationships to consider,
>>> > especially since some of these cases are likely to be shutdown
>>> scenarios for
>>> > legacy, poorly managed hierarchies.  Removal may be occurring
>>> *precisely*
>>> > because there are doubts about the history, provenance, or scope of
>>> previous
>>> > operations and audits.
>>> >
>>> > We're basically questioning whether there are any scenarios where
>>> allowing
>>> > someone to escape audits just because they destroyed the key is likely
>>> to
>>> > lead to good outcomes as opposed to bad ones.  If there aren't
>>> reasonable
>>> > scenarios where it is necessary to be able to remove CACs from audit
>>> scope
>>> > through key destruction while they are still trusted by Mozilla, it's
>>> > probably best to require audits as long as the CACs are in scope for
>>> > Mozilla.
>>> >
>>> > Alternatively, if there really are cases where this needs to be done,
>>> it
>>> > would be wise to craft language that limits this exception to those
>>> > scenarios.
>>> >
>>>
>>> I believe that destruction of the Root CA Key should only end audit
>>> requirements for the corresponding Root CA itself, not for any of its
>>> still trusted SubCAs.
>>>
>>> One plausible (but hypothetical) sequence of events is this:
>>>
>>> 1. Begin Root ceremony with Auditors present.
>>>
>>> 1.1 Create Root CA Key pair
>>> 1.2 Sign Root CA SelfCert
>>> 1.3 Create 5 SubCA Key pairs
>>> 1.4 Sign 5 SubCA pre-certificates
>>> 1.5 Request CT Log entries for the 5 SubCA pre-certificates
>>> 1.6 Sign 5 SubCA certificates with embedded CTs
>>> 1.7 Sign, but do not publish a set of post-dated CRLs for various
>>> contingencies
>>> 1.8 Sign, but do not publish a set of post-dated revocation OCSP
>>> responses for those contingencies
>>> 1.9 Sign, but do not yet publish, a set of post-dated non-revocation
>>> OCSP responses confirming that the SubCAs have not been revoked on each
>>> date during their validity.
>>> 1.10 Destroy Root CA Key pair.
>>>
>>> 2. Initiate audited storage of the unreleased CRL and OCSP signatures.
>>>
>>> 3. End Root 

RE: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name verification to 398 days

2021-02-25 Thread Doug Beattie via dev-security-policy
Ben,

I'd prefer that we tie this to a date related to when the domain validations 
are done, or perhaps 2 statements.  As it stands (and as others have 
commented), on July 1 all customers will immediately need to validate all 
domains that were done between 825 and 397 days ago, so a huge number all at 
once for web site owners and for CAs.

I'd prefer that it says " Domain validations performed from July 1, 2021 may be 
reused for a maximum of 398 days ".  I understand that this basically kicks the 
can down the road for an extra year and that may not be acceptable, so, maybe 
we specify 2 dates:

1)  Domain validations performed on or after July 1, 2021 may be reused for a 
maximum of 398 days.

2)  for server certificates issued on or after Feb 1, 2022, each dNSName or 
IPAddress in a SAN must have been validated within the prior 398 days

Is that a compromise you could consider?

Doug


-Original Message-
From: dev-security-policy  On 
Behalf Of Ben Wilson via dev-security-policy
Sent: Thursday, February 25, 2021 2:08 PM
To: Mozilla 
Subject: Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name 
verification to 398 days

All,

I continue to move this Issue #206 forward with a proposed change to section 
2.1 of the MRSP (along with an effort to modify section 3.2.2.4 or section 
4.2.1 of the CA/B Forum's Baseline Requirements).

Currently, I am still contemplating adding a subsection 5.1 to MRSP section
2.1 that would read,
" 5.1. for server certificates issued on or after July 1, 2021, verify each 
dNSName or IPAddress in a SAN or commonName at an interval of 398 days or less;"

See draft language here
https://github.com/BenWilson-Mozilla/pkipolicy/commit/69bddfd96d1d311874c35c928abdfc13dc11aba3


Ben

On Wed, Dec 2, 2020 at 3:00 PM Ben Wilson  wrote:

> All,
>
> I have started a similar, simultaneous discussion with the CA/Browser 
> Forum, in order to gain traction.
>
> 
>
> https://lists.cabforum.org/pipermail/servercert-wg/2020-December/002382.html
>
> Ben
>
> On Wed, Dec 2, 2020 at 2:49 PM Jeremy Rowley 
> 
> wrote:
>
>> Should this limit on reuse also apply to s/MIME? Right now, the 825 
>> day limit in Mozilla policy only applies to TLS certs with email 
>> verification of s/MIME being allowed for infinity time.  The first 
>> draft of the language looked like it may change this while the newer 
>> language puts back the TLS limitation. If it's not addressed in this 
>> update, adding clarification on domain verification reuse for SMIME 
>> would be a good improvement on the existing policy.
>>
>> -Original Message-
>> From: dev-security-policy 
>> 
>> On Behalf Of Ben Wilson via dev-security-policy
>> Sent: Wednesday, December 2, 2020 2:22 PM
>> To: Ryan Sleevi 
>> Cc: Doug Beattie ; Mozilla < 
>> mozilla-dev-security-pol...@lists.mozilla.org>
>> Subject: Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain 
>> name verification to 398 days
>>
>> See my responses inline below.
>>
>> On Tue, Dec 1, 2020 at 1:34 PM Ryan Sleevi  wrote:
>>
>> >
>> >
>> > On Tue, Dec 1, 2020 at 2:22 PM Ben Wilson via dev-security-policy < 
>> > dev-security-policy@lists.mozilla.org> wrote:
>> >
>> >> See responses inline below:
>> >>
>> >> On Tue, Dec 1, 2020 at 11:40 AM Doug Beattie 
>> >> > >> >
>> >> wrote:
>> >>
>> >> > Hi Ben,
>> >> >
>> >> > For now I won’t comment on the 398 day limit or the date which 
>> >> > you
>> >> propose
>> >> > this to take effect (July 1, 2021), but on the ability of CAs to 
>> >> > re-use domain validations completed prior to 1 July for their 
>> >> > full
>> >> > 825 re-use period.  I'm assuming that the 398 day limit is only 
>> >> > for those domain validated on or after 1 July, 2021.  Maybe that 
>> >> > is your intent, but the wording is not clear (it's never been 
>> >> > all that
>> >> > clear)
>> >> >
>> >>
>> >> Yes. (I agree that the wording is currently unclear and can be 
>> >> improved, which I'll work on as discussion progresses.)  That is 
>> >> the intent - for certificates issued beginning next July--new 
>> >> validations would be valid for
>> >> 398 days, but existing, reused validations would be sunsetted and 
>> >> could be used for up to 825 days (let's say, until Oct. 1, 2023, 
>> >> which I'd advise against, given the benefits of freshness provided 
>> >> by re-performing methods in BR 3.2.2.4 and BR 3.2.2.5).
>> >>
>> >
>> > Why? I have yet to see a compelling explanation from a CA about why 
>> > "grandfathering" old validations is good, and as you note, it 
>> > undermines virtually every benefit that is had by the reduction 
>> > until
>> 2023.
>> >
>>
>> I am open to the idea of cutting off the tail earlier, at let's say, 
>> October 1, 2022, or earlier (see below).  I can work on language that 
>> does that.
>>
>>
>> >
>> > Ben, could you explain the rationale why this is better than the 
>> > simpler, clearer, and immediately beneficial for Mozilla users of 
>> > requiring new validations be 

Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-25 Thread Aaron Gable via dev-security-policy
Sure, happy to provide more details! The fundamental issue here is the
scale at which Let's Encrypt issues, and the automated nature by which
clients interact with Let's Encrypt.

LE currently has 150M certificates active, all (as of March 1st) signed by
the same issuer certificate, R3. In the event of a mass revocation, that
means a CRL with 150M entries in it. At an average of 38 bytes per entry in
a CRL, that means nearly 6GB worth of CRL. Passing around a single 6GB file
isn't good for reliability (it's much better to fail-and-retry downloading
one of a hundred 60MB files than fail-and-retry a single 6GB file), so
sharding seems like an operational necessity.
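A quick back-of-the-envelope check of those numbers; the per-entry size is
the estimate above, and the shard count of 100 is only for illustration:

package main

import "fmt"

func main() {
	const (
		activeCerts   = int64(150_000_000) // active certificates signed by R3
		bytesPerEntry = int64(38)          // average CRL entry size estimated above
		numShards     = int64(100)
	)
	total := activeCerts * bytesPerEntry // 5,700,000,000 bytes
	fmt.Printf("full CRL: about %.1f GB; per shard: about %.0f MB\n",
		float64(total)/1e9, float64(total/numShards)/1e6)
}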

Even without a LE-initiated mass revocation event, one of our large
integrators (such as a hosting provider with millions of domains) could
decide for any reason to revoke every single certificate we have issued to
them. We need to be resilient to these kinds of events.

Once we've decided that sharding is necessary, the next question is "static
or dynamic sharding?". It's easy to imagine a world in which we usually
have only one or two CRL shards, but dynamically scale that number up to
keep individual CRL sizes small if/when revocation rises sharply. There are
a lot of "interesting" (read: difficult) engineering problems here, and
we've decided not to go the dynamic route, but even if we did it would
obviously require being able to change the list of URLs in the JSON array
on the fly.

For static sharding, we would need to constantly maintain a large set of
small CRLs, such that even in the worst case no individual CRL would become
too large. I see two main approaches: maintaining a fully static set of
shards into which our certificates are bucketed, or maintaining rolling
time-based shards (much like CT shards).

Maintaining a static set of shards has the primary advantage of "working
like CRLs usually work". A given CRL has a scope (e.g. "all certs issued by
R3 whose serial number is equal to 1 mod 500"), it has a nextUpdate, and a
new CRL with the same scope will be re-issued at the same path before that
nextUpdate is reached. However, it makes re-sharding difficult. If Let's
Encrypt's issuance rises enough that we want to have 1000 shards instead of
500, we'll have to re-shard every cert, re-issue every CRL, and update the
list of URLs in the JSON. And if we're updating the list, we should have
standards around how that list is updated and how its history is stored,
and then we'd prefer that those standards allow for rapid updates.

The alternative is to have rolling time-based shards. In this case, every X
hours we would create a new CRL, and every certificate we issue over the
next period would belong to that CRL. Similar to the above, these CRLs have
nice scopes: "all certs issued by R3 between AA:BB and XX:YY"). When every
certificate in one of these time-based shards has expired, we can simply
stop re-issuing it. This has the advantage of solving the resharding
problem: if we want to make our CRLs smaller, we just increase the
frequency at which we initialize a new one, and 90 days later we've fully
switched over to the new size. It has the disadvantage from your
perspective of requiring us to add a new URL to the JSON array every period
(and we get to drop an old URL from the array every period as well).
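As a rough sketch of the difference between the two bucketing approaches; the
function names, modulus, and interval below are invented for illustration and
are not a commitment to any particular implementation:

package shards

import (
	"math/big"
	"time"
)

// staticShard implements a fixed bucketing like "all certs issued by R3 whose
// serial number is equal to k mod numShards": the shard is determined at
// issuance, and re-sharding requires re-bucketing every unexpired certificate
// and re-issuing every CRL.
func staticShard(serial *big.Int, numShards int64) int64 {
	return new(big.Int).Mod(serial, big.NewInt(numShards)).Int64()
}

// timeShard implements rolling time-based shards: a certificate belongs to
// the CRL covering its issuance window. Making shards smaller only requires
// shrinking the period, and a shard can stop being re-issued once every
// certificate in it has expired.
func timeShard(issuedAt time.Time, period time.Duration) int64 {
	return issuedAt.Unix() / int64(period.Seconds())
}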

So why would we want to put each CRL re-issuance at a new path, and update
our JSON even more frequently? Because we have reason to believe that
various root programs will soon seek CRL re-issuance on the order of every
6 hours, not every 7 days as currently required; we will have many shards;
and overwriting files is a dangerous operation prone to many forms of
failure. Our current plan is to surface CRLs at paths like
`/crls/:issuerID/:shardID/:thisUpdate.der`, so that we never have to
overwrite a file. Similarly, our JSON document can always be written to a
new file, and the path in CCADB can point to a simple handler which always
serves the most recent file. Additionally, this means that anyone in
possession of one of our JSON documents can fetch all the CRLs listed in it
and get a *consistent* view of our revocation information as of that time.
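For concreteness, a sketch of the never-overwrite path construction described
above; the helper below is hypothetical, and only the
/crls/:issuerID/:shardID/:thisUpdate.der shape reflects the plan as described:

package crlpaths

import (
	"fmt"
	"time"
)

// crlPath builds a static path for one re-issuance of a CRL shard, following
// the /crls/:issuerID/:shardID/:thisUpdate.der shape. Because thisUpdate
// changes on every re-issuance, each re-issuance is written to a new file and
// no existing file is ever overwritten.
func crlPath(issuerID string, shardID int64, thisUpdate time.Time) string {
	return fmt.Sprintf("/crls/%s/%d/%s.der",
		issuerID, shardID, thisUpdate.UTC().Format("20060102T150405Z"))
}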

I believe that there is an argument to be made here that this plan
increases the auditability of the CRLs, rather than decreases it. Root
programs could require that any published JSON document be valid for a
certain period of time, and that all CRLs within that document remain
available for that period as well. Or even that historical versions of CRLs
remain available until every certificate they cover has expired (which is
what we intend to do anyway). Researchers can crawl our history of CRLs and
examine revocation events in more detail than previously available.

Regardless, even without statically-pathed, timestamped CRLs, I believe
that the merits of rolling time-based shards are sufficient to be a strong
argument in favor of dynamic JSON documents.

I hope this helps and that I 

Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name verification to 398 days

2021-02-25 Thread Ben Wilson via dev-security-policy
All,

I continue to move this Issue #206 forward with a proposed change to
section 2.1 of the MRSP (along with an effort to modify section 3.2.2.4 or
section 4.2.1 of the CA/B Forum's Baseline Requirements).

Currently, I am still contemplating adding a subsection 5.1 to MRSP section
2.1 that would read,
" 5.1. for server certificates issued on or after July 1, 2021, verify each
dNSName or IPAddress in a SAN or commonName at an interval of 398 days or
less;"

See draft language here
https://github.com/BenWilson-Mozilla/pkipolicy/commit/69bddfd96d1d311874c35c928abdfc13dc11aba3


Ben

On Wed, Dec 2, 2020 at 3:00 PM Ben Wilson  wrote:

> All,
>
> I have started a similar, simultaneous discussion with the CA/Browser
> Forum, in order to gain traction.
>
> 
>
> https://lists.cabforum.org/pipermail/servercert-wg/2020-December/002382.html
>
> Ben
>
> On Wed, Dec 2, 2020 at 2:49 PM Jeremy Rowley 
> wrote:
>
>> Should this limit on reuse also apply to s/MIME? Right now, the 825 day
>> limit in Mozilla policy only applies to TLS certs with email verification
>> of s/MIME being allowed for infinity time.  The first draft of the language
>> looked like it may change this while the newer language puts back the TLS
>> limitation. If it's not addressed in this update, adding clarification on
>> domain verification reuse for SMIME would be a good improvement on the
>> existing policy.
>>
>> -Original Message-
>> From: dev-security-policy 
>> On Behalf Of Ben Wilson via dev-security-policy
>> Sent: Wednesday, December 2, 2020 2:22 PM
>> To: Ryan Sleevi 
>> Cc: Doug Beattie ; Mozilla <
>> mozilla-dev-security-pol...@lists.mozilla.org>
>> Subject: Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name
>> verification to 398 days
>>
>> See my responses inline below.
>>
>> On Tue, Dec 1, 2020 at 1:34 PM Ryan Sleevi  wrote:
>>
>> >
>> >
>> > On Tue, Dec 1, 2020 at 2:22 PM Ben Wilson via dev-security-policy <
>> > dev-security-policy@lists.mozilla.org> wrote:
>> >
>> >> See responses inline below:
>> >>
>> >> On Tue, Dec 1, 2020 at 11:40 AM Doug Beattie
>> >> > >> >
>> >> wrote:
>> >>
>> >> > Hi Ben,
>> >> >
>> >> > For now I won’t comment on the 398 day limit or the date which you
>> >> propose
>> >> > this to take effect (July 1, 2021), but on the ability of CAs to
>> >> > re-use domain validations completed prior to 1 July for their full
>> >> > 825 re-use period.  I'm assuming that the 398 day limit is only for
>> >> > those domain validated on or after 1 July, 2021.  Maybe that is
>> >> > your intent, but the wording is not clear (it's never been all that
>> >> > clear)
>> >> >
>> >>
>> >> Yes. (I agree that the wording is currently unclear and can be
>> >> improved, which I'll work on as discussion progresses.)  That is the
>> >> intent - for certificates issued beginning next July--new validations
>> >> would be valid for
>> >> 398 days, but existing, reused validations would be sunsetted and
>> >> could be used for up to 825 days (let's say, until Oct. 1, 2023,
>> >> which I'd advise against, given the benefits of freshness provided by
>> >> re-performing methods in BR 3.2.2.4 and BR 3.2.2.5).
>> >>
>> >
>> > Why? I have yet to see a compelling explanation from a CA about why
>> > "grandfathering" old validations is good, and as you note, it
>> > undermines virtually every benefit that is had by the reduction until
>> 2023.
>> >
>>
>> I am open to the idea of cutting off the tail earlier, at let's say,
>> October 1, 2022, or earlier (see below).  I can work on language that does
>> that.
>>
>>
>> >
>> > Ben, could you explain the rationale why this is better than the
>> > simpler, clearer, and immediately beneficial for Mozilla users of
>> > requiring new validations be conducted on-or-after 1 July 2021? Any CA
>> > that had concerns would have ample time to ensure there is a fresh
>> > domain validation before then, if they were concerned.
>> >
>>
>> I don't have anything yet in particular with regard to a
>> pros-cons/benefits-analysis-supported rationale, except that I expect
>> push-back from SSL/TLS certificate subscribers and from CAs on their
>> behalf. You're right, CAs could take the time between now and July 1st to
>> obtain 398-day validations, but my concern is with the potential push-back.
>>
>> Also, as I mentioned before, I am interested in proposing this as a
>> ballot in the CA/Browser Forum and seeing where it goes. I realize that
>> this issue might come with added baggage from the history surrounding the
>> validity-period changes that were previously defeated in the Forum, but it
>> would still be good to see it adopted there first. Nonetheless, this issue
>> is more than ripe enough to be resolved here by Mozilla as well.
>>
>>
>>
>> >
>> > Doug, could you explain more about why it's undesirable to do that?
>> >
>> >
>> >> >
>> >> > Could you consider changing it to read more like this (feel free to
>> >> > edit as needed):
>> >> >
>> >> > CAs may re-use 

Re: Policy 2.7.1: MRSP Issue #218: Clarify CRL requirements for End Entity Certificates

2021-02-25 Thread Ben Wilson via dev-security-policy
As a placeholder in the Mozilla Root Store Policy, I'm proposing the
following sentence for section 6.1 - "A CA MUST ensure that it populates
the CCADB with the appropriate 'full CRL' in the CCADB revocation
information field pertaining to certificates issued by the CA for each
intermediate CA technically capable of issuing server certificates." (The
hyperlink isn't active yet until we have the CCADB language and
implementation clarified, per Kathleen's recent email and responses
thereto.) Here it is on GitHub -
https://github.com/BenWilson-Mozilla/pkipolicy/commit/26c1ee4ea8be1a07f86253e38fbf0cc043e12d48.
Caveat - other browsers, such as Apple, will likely have more encompassing
implementation requirements for when to populate these "full CRL" fields.

On Mon, Jan 25, 2021 at 10:16 AM Aaron Gable  wrote:

> I think that an explicit carve-out for time-scoped CRLs is a very good
> idea.
>
> In the case that this change to the MRSP is adopted, I suspect that LE
> would scope CRLs by notAfter quite tightly, with perhaps one CRL per 24 or
> even 6 hours of issuance. We would pick a small interval such that we could
> guarantee that each CRL would still be a reasonable size even in the face
> of a mass revocation event.
>
> Producing CRLs at that rate, it would be very valuable to be able to
> gracefully age CRLs out once there is no possibility for a revocation
> status update for any certificate in their scope.
>
> Aaron
>
> On Sun, Jan 24, 2021 at 10:22 AM Ben Wilson via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> All,
>>
>> Another suggestion came in for clarification that hasn't been raised on
>> the
>> list yet, so I'll try and explain the scenario here.
>>
>> Normally, a CA must publish and update its CRLs until the end of the CA
>> certificate's expiration. However, I think that some CAs partition their
>> CRLs based on issuance time, e.g., all certificates issued during January
>> 2021. And all of those certificates would expire after the applicable
>> validity period.  I think CAs don't continue to regenerate or reissue
>> those
>> types of partitioned CRLs which only contain certificates that have
>> expired.  So maybe we need to add an express exception that allows CAs to
>> omit those obsolete CRLs from the JSON file -- as long as the JSON file
>> contains the equivalent of  a "full and complete" CRL.
>>
>> Thoughts?
>>
>> Thanks,
>> Ben
>>
>>
>> On Wed, Jan 13, 2021 at 8:57 AM Rob Stradling  wrote:
>>
>> > Hi Ben.
>> >
>> > > *A CA technically capable of issuing server certificates MUST ensure
>> > that
>> > > the CCADB field "Full CRL Issued By This CA" contains either the URL
>> for
>> > > the full and complete CRL or the URL for the JSON file containing all
>> > URLs
>> > > for CRLs that when combined are the equivalent of the full and
>> complete
>> > CRL*
>> >
>> > As a consumer of this data (crt.sh), I'd much prefer to see "Full CRL
>> > Issued By This CA" and "the URL for the JSON file" as 2 separate fields
>> in
>> > the CCADB.  CAs would then be expected to fill in one field or the
>> other,
>> > but not both.  Is that possible?
>> >
>> > To ensure that these JSON files can be programmatically parsed, I
>> suggest
>> > specifying the requirement a bit more strictly.  Something like this:
>> >   "...or the URL for a file that contains only a JSON Array, whose
>> > elements are URLs of DER encoded CRLs that when combined are the
>> equivalent
>> > of a full and complete CRL"
>> >
>> > > I propose that this new CCADB field be populated in all situations
>> where
>> > the CA is enabled for server certificate issuance.
>> >
>> > Most Root Certificates are "enabled for server certificate issuance".
>> > Obviously CAs shouldn't issue leaf certs directly from roots, but
>> > nonetheless the technical capability does exist.  However, currently CAs
>> > can't edit Root Certificate records in the CCADB, which makes populating
>> > these new field(s) "in all situations" rather hard.
>> >
>> > Since OneCRL covers revocations of intermediate certs, may I suggest
>> that
>> > CAs should only be required to populate these new field(s) in
>> intermediate
>> > certificate records (and not in root certificate records)?
>> >
>> > Relatedly, "A CA technically capable of...that the CCADB field" seems
>> > wrong.  CCADB "CA Owner" records don't/won't contain the new field(s).
>> > Similar language elsewhere in the policy (section 5.3.2) says "All
>> > certificates that are capable of being used to..." (rather than "All
>> > CAs...").
>> >
>> > Technically-constrained intermediate certs don't have to be disclosed to
>> > CCADB, but "in all situations where the CA is enabled for server
>> > certificate issuance" clearly includes technically-constrained
>> > intermediates.  How would a CA populate the "Full CRL Issued By This CA"
>> > field for a technically-constrained intermediate cert that has
>> > (legitimately) not been 

Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-25 Thread Ryan Sleevi via dev-security-policy
On Thu, Feb 25, 2021 at 12:33 PM Aaron Gable via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Obviously this plan may have changed due to other off-list conversations,
> but I would like to express a strong preference for the original plan. At
> the scale at which Let's Encrypt issues, it is likely that our JSON array
> will contain on the order of 1000 CRL URLs, and will add a new one (and age
> out an entirely-expired one) every 6 hours or so. I am not aware of any
> existing automation which updates CCADB at that frequency.
>
> Further, from a resiliency perspective, we would prefer that the CRLs we
> generate live at fully static paths. Rather than overwriting CRLs with new
> versions when they are re-issued prior to their nextUpdate time, we would
> leave the old (soon-to-be-expired) CRL in place, offer its replacement at
> an adjacent path, and update the JSON to point at the replacement. This
> process would have us updating the JSON array on the order of minutes, not
> hours.


This seems like a very inefficient design choice, and runs contrary to how
CRLs are deployed by, well, literally anyone using CRLs as specified, since
the URL is fixed within the issued certificate.

Could you share more about the design of why? Both for the choice to use
sharded CRLs (since that is the essence of the first concern), and the
motivation to use fixed URLs.

We believe that earlier "URL to a JSON array..." approach makes room for
> significantly simpler automation on the behalf of CAs without significant
> loss of auditability. I believe it may be helpful for the CCADB field
> description (or any upcoming portion of the MRSP which references it) to
> include specific requirements around the cache lifetime of the JSON
> document and the CRLs referenced within it.


Indirectly, you’ve highlighted exactly why the approach you propose loses
auditability. Using the URL-based approach puts the onus on the consumer to
try and detect and record changes, introduces greater operational risks
that evade detection (e.g. stale caches on the CAs side for the content of
that URL), and encourages or enables designs that put greater burden on
consumers.

I don’t think this is suggested because of malice, but I do think it makes
it significantly easier for malice to go undetected, for accurate historic
information to be hidden or made too complex to maintain.

This is already a known and, as of recent, studied problem with CRLs [1].
Unquestionably, you are right for highlighting and emphasizing that this
constrains and limits how CAs perform certain operations. You highlight it
as a potential bug, but I’d personally been thinking about it as a
potential feature. To figure out the disconnect, I’m hoping you could
further expand on the “why” of the design factors for your proposed design.

Additionally, it’d be useful to understand how you would suggest CCADB
consumers maintain an accurate, CA attested log of changes. Understanding
such changes is an essential part of root program maintenance, and it does
seem reasonable to expect CAs to need to adjust to provide that, rather
than give up on the goal.

[1]
https://arxiv.org/abs/2102.04288

>


Re: CCADB Proposal: Add field called JSON Array of Partitioned CRLs Issued By This CA

2021-02-25 Thread Aaron Gable via dev-security-policy
Hi Kathleen,

It was my impression from earlier discussions
 that
the plan was for the new CCADB field to contain a URL which points to a
document containing only a JSON array of partitioned CRL URLs, rather than
the new CCADB field containing such an array directly.

Obviously this plan may have changed due to other off-list conversations,
but I would like to express a strong preference for the original plan. At
the scale at which Let's Encrypt issues, it is likely that our JSON array
will contain on the order of 1000 CRL URLs, and will add a new one (and age
out an entirely-expired one) every 6 hours or so. I am not aware of any
existing automation which updates CCADB at that frequency.

Further, from a resiliency perspective, we would prefer that the CRLs we
generate live at fully static paths. Rather than overwriting CRLs with new
versions when they are re-issued prior to their nextUpdate time, we would
leave the old (soon-to-be-expired) CRL in place, offer its replacement at
an adjacent path, and update the JSON to point at the replacement. This
process would have us updating the JSON array on the order of minutes, not
hours.

We believe that earlier "URL to a JSON array..." approach makes room for
significantly simpler automation on the behalf of CAs without significant
loss of auditability. I believe it may be helpful for the CCADB field
description (or any upcoming portion of the MRSP which references it) to
include specific requirements around the cache lifetime of the JSON
document and the CRLs referenced within it.

Thanks,
Aaron

On Wed, Feb 24, 2021 at 12:36 PM Kathleen Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> All,
>
> As previously discussed, there is a section on root and intermediate
> certificate pages in the CCADB called ‘Pertaining to Certificates Issued
> by this CA’, and it currently has one field called 'Full CRL Issued By
> This CA'.
>
> Proposal: Add field called 'JSON Array of Partitioned CRLs Issued By
> This CA'
>
> Description of this proposed field:
> When there is no full CRL for certificates issued by this CA, provide a
> JSON array whose elements are URLs of partitioned, DER-encoded CRLs that
> when combined are the equivalent of a full CRL. The JSON array may omit
> obsolete partitioned CRLs whose scopes only include expired certificates.
>
> Example:
>
> [
>    "http://cdn.example/crl-1.crl",
>    "http://cdn.example/crl-2.crl"
> ]
>
>
>
> Additionally, I propose adding a new section to
> https://www.ccadb.org/cas/fields called “Revocation Information”.
>
> The proposed draft for this new section is here:
>
> https://docs.google.com/document/d/1uVK0h4q5BSrFv6e86f2SwR5m2o9Kl1km74vG4HnkABw/edit?usp=sharing
>
>
> I will appreciate your input on this proposal.
>
> Thanks,
> Kathleen
>
>