On Wed, May 8, 2019 at 6:42 AM Fotis Loukos <fot...@ssl.com> wrote:

> I agree with you that technically verifiable controls are always better
> than procedural controls, however we are already relying on procedural
> controls for many of the requirements, such as CAA. In addition, this
> specific change can be monitored in the WebPKI ecosystem using CT.
>

First, as a matter of philosophy, I don't believe we should use the
existence of CT to introduce fundamentally more risk to the ecosystem, or
to push back on much needed risk mitigations. While CT provides
considerable value in detecting process control failures, we should
recognize that the solution to these issues is not to add more supervision
(that is, externalizing the cost onto the ecosystem to both monitor and
respond to these issues), but instead improve the set of controls.

Second, as it relates to your example of CAA, I believe it's a mistake to
suggest it's a procedural control. It is, at its core, a technical control,
and one which can be both disputed and investigated. Yes, it's true,
there's no one from the browser side who is sitting there checking to make
sure a CA is checking every CAA record - but there exists a verifiable way
to do that. The equivalent does not exist for what you're proposing.
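To illustrate the "verifiable way" point: CAA authorization can be checked
mechanically by anyone. Below is a minimal sketch of the core RFC 8659
decision (illustrative only, not any CA's actual implementation; it assumes
the relevant CAA RRset has already been fetched and omits DNS tree-climbing,
CNAME handling, and iodef):

```python
# Sketch of CAA authorization per RFC 8659 (illustrative only; real
# implementations must also handle DNS lookup, CNAMEs, and iodef).

def caa_permits_issuance(records, ca_domain, wildcard=False):
    """records: list of (tag, value) tuples from the relevant CAA RRset."""
    issue = [v for t, v in records if t == "issue"]
    issuewild = [v for t, v in records if t == "issuewild"]
    # For wildcard requests, issuewild records take precedence if present.
    relevant = issuewild if (wildcard and issuewild) else issue
    if not relevant:
        # No relevant property: issuance is not restricted by CAA.
        return True
    # The value before any ";" parameters names the authorized CA.
    allowed = {v.split(";")[0].strip() for v in relevant}
    return ca_domain in allowed

# Example: only example-ca.com may issue for this name.
records = [("issue", "example-ca.com")]
print(caa_permits_issuance(records, "example-ca.com"))  # True
print(caa_permits_issuance(records, "other-ca.net"))    # False
```

Because the inputs (DNS records, issued certificates in CT) are public, a
third party can re-run this check after the fact and dispute the result.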


> > As I just stated above, you're fundamentally relying on a policy-level
> > control, which means increasing the reliance on audits, and creating
> > significantly greater risk of a policy-control failure, whether due to
> > misunderstanding, malice, or incompetence. We can and should work to
> > improve the systems so we have greater technical verifiability - whether
> > that's through EKU constraints or more ecosystem-wide things such as
> > pre-issuance linting and Certificate Transparency-based post-issuance
> > linting.
>
> I agree with you, but should we become "slaves" of technical controls
> and limit functionality in order to accommodate them?
>

Yes. We should have one joint, and keep it well oiled. We have two decades
of history of procedural control failures, and so long as we introduce new
CAs, or fail to promptly remove old CAs, we're always going to run the risk
of industry churn in which best practices and accumulated knowledge wither
over time. The existence of strong technical controls provides a safeguard
against the all-too-human aspects of complacency, stakeholders changing
roles, new parties trying to make sense of things, and so on.

Just as it would be misguided to suggest that the existence of certificate
linting has made us "slaves" of technical controls, it's misguided here.
This is fundamentally about mitigating risk from what is, functionally, an
undetectable and easy mistake to make, but one which can have grave
consequences.

The design I outlined provides for a variety of options to mitigate the
risk and harm to end-users if (when) a PCA screws up, whether it's through
the issuance of a leaf cert directly, or the failure to maintain the
appropriate audits and security controls.
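One of those mitigations, pre-issuance linting, can catch the "leaf cert
issued directly" failure before it happens. A minimal sketch of such a lint
rule (names and the profile shape here are hypothetical, not any existing
linter's API):

```python
# Sketch of a pre-issuance lint rule of the kind discussed above: flag a
# certificate that a policy-level CA ("PCA") is about to issue directly
# to an end entity. The dict shapes are illustrative, not a real linter.

def lint_no_direct_leaf(issuer_profile, tbs_cert):
    """issuer_profile: dict describing the issuing CA's intended role.
    tbs_cert: dict of fields from the to-be-signed certificate."""
    if issuer_profile.get("role") == "policy-ca" and not tbs_cert.get("is_ca"):
        return ["ERROR: policy CA must not issue end-entity certificates"]
    return []

pca = {"role": "policy-ca"}
print(lint_no_direct_leaf(pca, {"is_ca": False}))  # flagged
print(lint_no_direct_leaf(pca, {"is_ca": True}))   # no findings
```

The same rule run over CT entries gives the post-issuance variant: the check
is identical, only the input source differs.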


> The scheme I'm proposing is the following:
>
> Org CA (serverAuth, emailProtection, and possibly others such as
> clientAuth)
>   \- Org SSL CA (serverAuth and possibly clientAuth)
>     \- End-entity cert
>   \- Org Mail CA (emailProtection and possibly others such as clientAuth)
>     \- End-entity cert
>
> The organization can deploy the "Org CA" as a trusted CA in its internal
> systems.
>

Thanks. This clarifies what you meant, and is demonstrably something we
should not be encouraging.

As you note from this example, the organization can deploy "Org CA" as a
trusted CA in its internal systems. There's no reason or benefit to use a
publicly trusted CA hierarchy for this, since you've established the
precondition that they can already manage their own enterprise CAs.

This example - what you term PCA - is what others might call a "Managed
CA", and there's no need to cross-certify this managed CA with the public
hierarchy, if the situation is as you describe.

Of course, if the organization cannot effectively deploy "Org CA", and is
thus incentivized to use public trust, then there is also no deployment
headache for the organization; they're relying on the public trust (of the
root) to be conferred to their various applications.


> Under the current scheme, the "Org SSL CA" and the "Org Mail CA" must be
> issued either by the Root, or by other CAs that chain up to the Root as
> the least common denominator. The organization would have to either
> trust the Root as an internally trusted CA, which would mean that they
> would also place trust in other certificates issued by the same CA (CA
> as an organization issuing certificates) to different organizations, or
> deploy both CAs and duplicate controls, if possible (not all software
> supports this). Of course, they could deploy just a single CA depending
> on the usage. This adds more management cost, and may lead to other
> problems. For example, what if a service needs to authenticate users
> using certificates from both the "Org SSL CA" and the "Org Mail CA"
> (perfectly valid since both can issue certs having the clientAuth EKU)?
>
> Does this clarify why having a single "Org CA" would help in deployment
> in some enterprise environments?
>

Yes. Hopefully my response demonstrates why, based on those preconditions,
there is no necessity for the solution you propose, and that if we alter
those conditions, alternatives such as those currently required are better
suited.
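For readers following along, the EKU-constraint model under discussion can
be sketched as a simple chain-intersection check (a deliberate
simplification; real path validation per RFC 5280 and root-program EKU
chaining rules are considerably more involved):

```python
# Sketch: how EKU constraints on intermediates technically limit what a
# subordinate CA can usefully issue. Simplified; real path validation
# (RFC 5280) and root-program chaining rules do much more.

def effective_ekus(chain):
    """chain: list of EKU sets from the root-most CA down to the leaf.
    The usable EKUs are those permitted at every step of the chain."""
    allowed = set(chain[0])
    for ekus in chain[1:]:
        allowed &= set(ekus)
    return allowed

org_ca = {"serverAuth", "emailProtection", "clientAuth"}
org_ssl_ca = {"serverAuth", "clientAuth"}

# A TLS leaf under "Org SSL CA" works as intended...
print(effective_ekus([org_ca, org_ssl_ca, {"serverAuth"}]))
# ...but an S/MIME leaf mistakenly issued there chains to nothing usable.
print(effective_ekus([org_ca, org_ssl_ca, {"emailProtection"}]))
```

This is the verifiable property at stake: with EKU-constrained
intermediates, a mis-issued leaf fails mechanically, rather than depending
on an auditor noticing a procedural lapse.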
_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
