On Tue, Feb 14, 2017 at 5:47 AM, Steve Medin via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
>
> - The caching I’m talking about is not header directives; I mean how
> CAPI and NSS retain the discovered path for the life of the intermediate.
> One fetch, per person, per CA, for the life of the CA certificate.
>

Right, and that has problematic privacy implications, and is otherwise
not advisable - certainly not advisable in a world of technically
constrained sub-CAs, in which the subscriber can use such caching as a
supercookie. So if a UA doesn't keep such a 'permacache' and instead
respects the HTTP cache, you're back to the repeated fetches you
describe.

(Also, NSS doesn't exhibit that behaviour by default; that was a
Firefox-ism.)


> - Ever since Vista, CAPI’s root store has been pulled over the wire
> upon discovery. Only kernel-mode driver code-signing roots are shipped.
>

No, this isn't accurate.


> - Once the mass-market UAs enable dynamic path discovery as an
> option, server admins can opt in based on analytics.
>

Not really. Again, you're largely ignoring the ecosystem issues, so
perhaps this is where the tennis ball remark comes into play. There are
effectively two TLS communities that matter - the browser community and
the non-browser community. Mozillian and curl maintainer Daniel Stenberg
captures this pretty accurately in
https://daniel.haxx.se/blog/2017/01/10/lesser-https-for-non-browsers/

> - PKCS#7 chains are indeed not a requirement, but see point 1. It’s
> probably no coincidence that IIS supports it, given awareness of the
> demands placed on enterprise IT admins.
>

My point was that PKCS#7 is an abomination of a format in the general
sense, but as the specific technical choice here it's a poor one,
because the format lacks any structure for expressing order or
relationship. A server supporting PKCS#7 must support not just PKCS#7
itself, but the complexity of chain building, in order to reorder the
unstructured PKCS#7. And if the server supports chain building, then it
could be argued just as well that the server should support AIA.
Indeed, if you're taking an ecosystem approach, the set of clouds to
argue at is arguably the TLS server market, which should improve its
support to match IIS's (which, I agree, is quite good). That includes
basic things like OCSP stapling (e.g.
https://gist.github.com/sleevi/5efe9ef98961ecfb4da8 ) and potentially
support for AIA fetching, as you mention. Same effect, but instead of
offloading the issues to the clients, you centralize the work at the
server. But even if you set aside PKCS#7 as the delivery format and set
aside chain-building support, you can accomplish the same goal more
easily by simply using an ordered, PEM-encoded chain file.
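
To make that concrete, here is a minimal sketch, in Go, of the
issuer/subject matching a server would need to do to reorder the
unordered bag of certificates a PKCS#7 bundle provides. The
"bundle.pem" input is hypothetical, and cross-signed or multi-leaf
bundles are deliberately out of scope; with an ordered PEM chain file
(leaf first, then each issuer), none of this logic is needed.

// Sketch only: arrange an unordered bag of certificates (what a
// PKCS#7 bundle hands you) into leaf-to-root order by matching
// Issuer against Subject.
package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "log"
    "os"
)

func orderChain(bag []*x509.Certificate) []*x509.Certificate {
    bySubject := make(map[string]*x509.Certificate, len(bag))
    isIssuer := make(map[string]bool, len(bag))
    for _, c := range bag {
        bySubject[string(c.RawSubject)] = c
    }
    for _, c := range bag {
        if string(c.RawIssuer) != string(c.RawSubject) {
            isIssuer[string(c.RawIssuer)] = true
        }
    }
    // The leaf is the one certificate that issued nothing else here.
    var chain []*x509.Certificate
    for _, c := range bag {
        if !isIssuer[string(c.RawSubject)] {
            chain = append(chain, c)
            break
        }
    }
    // Walk upward, appending each certificate's issuer in turn.
    for len(chain) > 0 && len(chain) <= len(bag) {
        last := chain[len(chain)-1]
        issuer, ok := bySubject[string(last.RawIssuer)]
        if !ok || issuer == last {
            break // self-signed root reached, or issuer not in the bag
        }
        chain = append(chain, issuer)
    }
    return chain
}

func main() {
    data, err := os.ReadFile("bundle.pem") // hypothetical input
    if err != nil {
        log.Fatal(err)
    }
    var bag []*x509.Certificate
    for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
        if block.Type != "CERTIFICATE" {
            continue
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        bag = append(bag, cert)
    }
    for i, c := range orderChain(bag) {
        fmt.Printf("%d: %s\n", i, c.Subject.CommonName)
    }
}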

My point here is that you're advocating a specific technology that's
regrettably poorly suited to the job. You're not wrong - that is, you
can deliver certificate chains as PKCS#7 - but you're not right either
that it represents the low-hanging fruit.


Philosophically, the discussion here is about where the points of
influence lie - with a few hundred CAs, with a few thousand server
software stacks (and a few million deployments), or with a few billion
users. It's a question of whether the solution only needs to consider
the browser (which, by definition, has a fully functioning HTTP stack
and therefore _could_ support AIA) or the ecosystem (which, in many
cases, lacks such a stack - meaning no AIA, no OCSP, and no CRLs
either). We're both right in that these represent technical solutions
to the issues, but we disagree on which offers the best lever for
impact - for end users and for relying parties. This doesn't mean it's
a fruitless argument between intractable positions; it just means we
need to recognize our differences in philosophy and approach.
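
To ground the AIA point, here is a minimal sketch, in Go, of what
client-side AIA chasing involves - the fetch that presumes exactly the
HTTP stack at issue. It assumes the caIssuers URL serves a single DER
certificate (some CAs serve a PKCS#7 bundle there instead, which this
doesn't handle), and "example.com" is just a placeholder:

// Minimal sketch of client-side AIA chasing.
package main

import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "io"
    "log"
    "net/http"
)

// fetchIssuer follows the "CA Issuers" URL from the certificate's
// Authority Information Access extension - the step a browser can
// perform, and a stackless TLS client cannot.
func fetchIssuer(cert *x509.Certificate) (*x509.Certificate, error) {
    if len(cert.IssuingCertificateURL) == 0 {
        return nil, fmt.Errorf("no AIA caIssuers URL in %q", cert.Subject.CommonName)
    }
    resp, err := http.Get(cert.IssuingCertificateURL[0])
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    der, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, err
    }
    return x509.ParseCertificate(der)
}

func main() {
    // Placeholder target; any TLS server works.
    conn, err := tls.Dial("tcp", "example.com:443", nil)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    leaf := conn.ConnectionState().PeerCertificates[0]
    issuer, err := fetchIssuer(leaf)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("fetched issuer:", issuer.Subject.CommonName)
}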

You've highlighted a fair point: if CAs rotate intermediates
periodically, and if (a) CAs do not deliver the full chain (whether as
PEM or PKCS#7) or (b) subscribers use software or hardware that does
not support configuring the full chain when installing a certificate,
then there's a real possibility of increased errors for users because
servers send the wrong intermediate. That's a real problem, and what
we've described are different approaches to solving it, with different
tradeoffs. The question is whether that problem is significant enough
to prevent or block attempts to solve the problem Gerv highlighted -
intermediates with millions of certificates. We may disagree here as
well, but I don't believe it's a blocker.
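
As an illustration of that failure mode, here is a rough sketch, in
Go, that verifies a site's leaf using only the chain the server
actually presented - no AIA fallback, no cached intermediates. The
hostname is a placeholder; a server shipping a stale or wrong
intermediate after a rotation fails this check even though a
permacaching client would still connect:

// Rough sketch: verify the leaf strictly against the served chain.
package main

import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "log"
)

func main() {
    host := "example.com" // placeholder target
    conn, err := tls.Dial("tcp", host+":443", &tls.Config{
        InsecureSkipVerify: true, // skip handshake check; verify by hand below
    })
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    certs := conn.ConnectionState().PeerCertificates
    intermediates := x509.NewCertPool()
    for _, c := range certs[1:] {
        intermediates.AddCert(c)
    }
    // Roots is left nil, so the system trust store is used; only the
    // served intermediates are available for path building.
    _, err = certs[0].Verify(x509.VerifyOptions{
        DNSName:       host,
        Intermediates: intermediates,
    })
    if err != nil {
        fmt.Println("chain as served does NOT verify:", err)
        return
    }
    fmt.Println("chain as served verifies against the system roots")
}

This is the same data that openssl s_client -showcerts shows, just
with verification constrained to what was actually served.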