Similarly, snipping and replying to portions of your message below:

On Thu, Feb 25, 2021 at 12:52 PM Ryan Sleevi <r...@sleevi.com> wrote:

> Am I understanding your proposal correctly that "any published JSON
> document be valid for a certain period of time" effectively means that each
> update of the JSON document also gets a distinct URL (i.e. same as the
> CRLs)?
>

No, the (poorly expressed) idea is this: suppose you fetch our
rapidly-changing document and get version X. Over the next five minutes,
you fetch every CRL URL in that document. But during that same five
minutes, we've published versions X+1 and X+2 of that JSON document at that
same URL. There should be a guarantee that, as long as you fetch the CRLs
in your document "fast enough" (for some to-be-determined value of "fast"),
all of those URLs will still be valid (i.e. not return a 404 or similar),
*even though* some of them are not referenced by the most recent version of
the JSON document.
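
To make the scenario concrete, here's roughly the consumer-side pattern that guarantee is meant to protect, sketched in Go. The document URL and the assumption that the document is a bare JSON array of CRL URLs are mine, for illustration only:

    // Fetch one version of the JSON document, then fetch every CRL URL it
    // lists. The proposed guarantee is that none of these CRL fetches 404,
    // as long as they complete within the agreed window, even if newer
    // versions of the JSON document have been published in the meantime.
    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func fetchAllCRLs(jsonURL string) error {
        resp, err := http.Get(jsonURL)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        // Assumed shape: a bare JSON array of CRL URLs.
        var crlURLs []string
        if err := json.NewDecoder(resp.Body).Decode(&crlURLs); err != nil {
            return err
        }

        for _, u := range crlURLs {
            crlResp, err := http.Get(u)
            if err != nil {
                return err
            }
            body, err := io.ReadAll(crlResp.Body)
            crlResp.Body.Close()
            if err != nil {
                return err
            }
            if crlResp.StatusCode != http.StatusOK {
                // Under the proposed guarantee, this shouldn't happen while
                // we're still inside the "fast enough" window.
                return fmt.Errorf("fetching %s: status %d", u, crlResp.StatusCode)
            }
            fmt.Printf("fetched %s (%d bytes)\n", u, len(body))
        }
        return nil
    }

    func main() {
        // Placeholder URL; not a real endpoint.
        if err := fetchAllCRLs("https://ca.example/crls.json"); err != nil {
            log.Fatal(err)
        }
    }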

This may seem like a problem that arises only in our rapidly-changing JSON
version of things. But I believe it should be a concern even in the system
as proposed by Kathleen: when a CA updates the JSON array contained in
CCADB, how long does a consumer of CCADB have to get a snapshot of the
contents of the previous set of URLs? To posit an extreme hypothetical, can
a CA hide misissuance of a CRL by immediately hosting their fixed CRL at a
new URL and updating their CCADB JSON list to include that new URL instead?
Not to put too fine a point on it, but I believe that this sort of
hypothetical is the underlying worry about having the JSON list live
outside CCADB where it can be changed on a whim, but I'm not sure that
having the list live inside CCADB without any requirements on the validity
of the URLs inside it provides significantly more auditability.

The issue I see with the "URL stored in CCADB" is that it's a reference,
> and the dereferencing operation (retrieving the URL) puts the onus on the
> consumer (e.g. root stores) and can fail, or result in different content
> for different parties, undetectably.
>

If I may, I believe that the problem is less that it is a reference (which
is true of every URL stored in CCADB), and more that it is a reference to
an unsigned object. URLs directly to CRLs don't have this issue, because
the CRL is signed. And storing the JSON array directly doesn't have this
issue, because it is implicitly signed by the credentials of the user who
signed in to CCADB to modify it. One possible solution here would be to
require that the JSON document be signed by the same CA certificate which
issued all of the CRLs contained in it. I don't think I like this solution,
but it is within the possibility space.
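
For the sake of illustration only (again, not a solution I'm advocating), such a scheme might amount to verifying a detached signature over the JSON bytes against the issuing CA certificate. The detached-signature format and the fixed choice of algorithm here are assumptions on my part:

    // Sketch: verify that the JSON document was signed by the same CA
    // certificate that issued the CRLs it lists. Assumes a detached signature
    // and ECDSA-with-SHA256; a real scheme would have to specify both.
    package crljson

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
    )

    func verifySignedJSON(caCertPEM, jsonBytes, sig []byte) error {
        block, _ := pem.Decode(caCertPEM)
        if block == nil || block.Type != "CERTIFICATE" {
            return errors.New("no certificate PEM block found")
        }
        caCert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return err
        }
        // Checks sig over jsonBytes against the CA certificate's public key.
        return caCert.CheckSignature(x509.ECDSAWithSHA256, jsonBytes, sig)
    }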


> If there is an API that allows you to modify the JSON contents directly
> (e.g. a CCADB API call you could make with an OAuth token), would that
> address your concern?
>

If Mozilla and the other stakeholders in CCADB decide to go with this
thread's proposal as-is, then I suspect that yes, we would develop
automation to talk to CCADB's API in exactly this way. This is undesired
from our perspective for a variety of reasons:
* I'm not aware of a well-maintained Go library for interacting with the
Salesforce API.
* I'm not aware of any other automation system with write-access to CCADB
(I may be very wrong!), and I imagine there would need to be some sort of
further design discussion with CCADB's maintainers about what it means to
give write credentials to an automated system, what sorts of protections
would be necessary around those credentials, how to scope those credentials
as narrowly as possible, and more.
* I'm not sure CCADB's maintainers want updates to it to be in the critical
path of ongoing issuance, as opposed to just in the critical path for
beginning issuance with a new issuer.
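
For concreteness, the automation I have in mind above would be roughly this shape. Every endpoint, credential, and payload detail below is invented for illustration; no such public API is specified in this thread:

    // Hypothetical sketch: update the JSON array of CRL URLs via an
    // OAuth-authenticated CCADB API call. The token URL, endpoint, and
    // payload shape are all placeholders.
    package ccadbclient

    import (
        "bytes"
        "context"
        "encoding/json"
        "fmt"
        "net/http"

        "golang.org/x/oauth2/clientcredentials"
    )

    func updateCRLURLs(ctx context.Context, crlURLs []string) error {
        conf := &clientcredentials.Config{
            ClientID:     "example-client-id",                 // placeholder
            ClientSecret: "example-client-secret",             // placeholder
            TokenURL:     "https://ccadb.example/oauth/token", // hypothetical
        }
        client := conf.Client(ctx)

        body, err := json.Marshal(crlURLs)
        if err != nil {
            return err
        }
        req, err := http.NewRequestWithContext(ctx, http.MethodPut,
            "https://ccadb.example/api/v1/crl-urls", // hypothetical endpoint
            bytes.NewReader(body))
        if err != nil {
            return err
        }
        req.Header.Set("Content-Type", "application/json")

        resp, err := client.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("unexpected status: %s", resp.Status)
        }
        return nil
    }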

I think the question was with respect to the frequency of change of those
> documents.
>

Frankly, the least frequently we would be willing to create a new
time-sharded CRL is once every 24 hours (and even then, that's still >60MB
per CRL in the worst case). That's going to require automation no matter what.
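
(For clarity, by "time-sharded" I mean something like the following sketch, where each certificate's entry lands in a CRL shard determined by a 24-hour time window. The choice of the expiration time as the sharding key and the naming scheme are assumptions for illustration, not a committed design.)

    // Illustrative sketch: assign each certificate to a CRL shard based on
    // its expiration time, one shard per 24-hour window, so a new shard is
    // created at most once a day.
    package main

    import (
        "fmt"
        "time"
    )

    func shardForExpiry(notAfter time.Time) string {
        const window = 24 * time.Hour
        idx := notAfter.Unix() / int64(window.Seconds())
        return fmt.Sprintf("crl-shard-%d.crl", idx)
    }

    func main() {
        expiry := time.Date(2021, time.May, 1, 12, 0, 0, 0, time.UTC)
        // All certificates expiring within the same 24-hour window share a shard.
        fmt.Println(shardForExpiry(expiry))
    }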


> There is one thing you mentioned that's also non-obvious to me, because I
> would expect you already have to deal with this exact issue with respect to
> OCSP, which is "overwriting files is a dangerous operation prone to many
> forms of failure". Could you expand more about what some of those
> top-concerns are? I ask, since, say, an OCSP Responder is frequently
> implemented as "Spool /ocsp/:issuerDN/:serialNumber", with the CA
> overwriting :serialNumber whenever they produce new responses. It sounds
> like you're saying that common design pattern may be problematic for y'all,
> and I'm curious to learn more.
>

Sure, happy to expand. For those following along at home, this last bit is
relatively off-topic compared to the other sections above, so skip if you
feel like it :)

OCSP consists of hundreds of millions of small entries. Thus our OCSP
infrastructure is backed by a database, and fronted by a caching CDN. So
the database and the CDN get to handle all the hard problems of overwriting
data, rather than having us reinvent the wheel. But CRLs consist of
relatively few, large entries, which are much better suited to a flat/static
file structure like the one you describe for a naive implementation of OCSP.
For more on why we'd prefer to leave file overwriting to the experts rather
than risk getting it wrong ourselves, see this talk
<https://www.deconstructconf.com/2019/dan-luu-files>.
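
As a tiny illustration of the kind of care involved, here is a minimal sketch of the standard write-then-rename pattern; it assumes nothing about our actual implementation:

    // Replace a static CRL file by writing a temp file, syncing it, and
    // renaming it over the target, so readers never observe a partial write.
    // Even this omits subtleties (fsyncing the directory, crashes between
    // steps) that are exactly why we'd rather not own this problem.
    package crlwriter

    import (
        "os"
        "path/filepath"
    )

    func atomicWriteFile(path string, data []byte) error {
        dir := filepath.Dir(path)
        tmp, err := os.CreateTemp(dir, ".crl-tmp-*")
        if err != nil {
            return err
        }
        // Clean up the temp file on failure; harmless after a successful rename.
        defer os.Remove(tmp.Name())

        if _, err := tmp.Write(data); err != nil {
            tmp.Close()
            return err
        }
        // Flush contents to disk before making the new file visible.
        if err := tmp.Sync(); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        // Atomically replace the old CRL with the new one.
        return os.Rename(tmp.Name(), path)
    }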

Thanks,
Aaron