On Tue, Mar 5, 2019 at 1:58 PM Matthew Hardeman <mharde...@gmail.com> wrote:

> I suppose my initial response to the concern as presented is that it would
> seem to be a fairly trivial (just paperwork, really) matter for DarkMatter
> (or indeed any other applicant) to separate the CA into a fully separate
> legal entity with common ownership interest with any other business they
> may currently have going on.  I put forth the question as to whether or not
> the assurances you reference and the legal structuring you note are an
> actual, effective risk mitigation.
>

Oh, I don't think the corporate structure is really at all an effective
mitigation, certainly not in isolation. That's part of why I tried to
highlight the organization's response itself, rather than fixating on the
structure.

To briefly revisit what I highlighted in [1]: the process of root
inclusion has, from its inception, involved elements of subjectivity that
are inextricably linked with the objective parts of the process. This
includes the subjectivity of audits themselves, which the various auditing
professional standards try to minimize. While I admit my lack of deep
familiarity with how the ISAE approach professionally regulates and
addresses that subjectivity, or with the approach of the Netherlands, where
this particular auditor is based, versus that of, say, the AICPA, the
general intent of such professional standards is to minimize that
subjectivity so as to avoid bringing the profession into disrepute.
However, beyond that audit subjectivity, root inclusion or removal requests
have long considered factors one might argue are subjective due to the lack
of prescriptive guidance - such as how a CA files incident reports and how
"timely" and "thorough" it is perceived to be in its responsiveness.

With this context and framing, it hopefully becomes clear why how the
organization responds, whether through an inclusion request or an incident
report, is an essential part of how one evaluates CAs for continued trust.
It may be that restructuring organizationally, as a response, can
ameliorate concerns or contribute to addressing them, but I don't think
organizational separation alone guarantees anything. To put it at an
extreme: even for entirely distinct organizational entities, related only
through business relationships, if such a relationship creates risk
or the appearance of risk, that may be grounds to prevent or remove trust.

> I see two elements in this which might be said to be the real underlying
> risk mitigation:
>
> 1.  The legal structure and common ownership is truly the safety
> mechanism.  I find this...tenuous.  I'm not sure any piece of paper ever
> really kept a bad actor from acting bad.  This seems very much like "Meet
> the new boss [who's wholly owned by the old boss], same as the old boss."
>  In essence, I think if the matter on which the trust hangs is slightly
> different nuances to first party assertions, that this is so thin and
> consequence free in the violation that I regard it as not really material.
>
> 2.  Maybe the real risk mitigation is self-interested asset appreciation /
> asset protection.  What I mean by this is that quite simply the ownership
> of a hypothetical CA and a hypothetical "bad business" -- however we define
> it but construed such that the "bad business" has an apparent conflict in
> that they'd like to abuse their owner's CA's trust -- will act to defend
> their business interest (in this case the value of each of the business
> segments) by preventing one of their business segments from destroying the
> continued value of the other segment.  (We can agree, I believe, that a CA
> that no one trusts has essentially no value.)
>
> It's pretty clear that I put more faith in a business' "greedy"
> self-interest than I do in legal entity paperwork games.  Which, I believe,
> raises an intriguing concept.  What if the true test of a CA's
> trustworthiness is, in fact, a mutually understandable apparent value build
> / value preservation on a standalone basis of the asset that is the CA?  In
> other words, maybe we can only trust a CA whose value proposition to the
> ownership we can reasonably understand from the perspective of the
> ownership, if we limit that value only to the value that can be derived in
> a fully-legitimate use of the CA determined as a going value of the CA from
> a fully standalone basis in addition to the value of the CA in the overall
> scope of the larger business, constrained to only the legitimate synergies
> that may arise.
>

Indeed, I think your #2 approaches how some may be viewing this. There has
certainly been discussion, in the context of government CAs, about what
their 'incentives' are perceived to be - and how government CAs, lacking
the financial incentive structure that might discourage misbehaviour, are
generally seen as 'less' trustworthy precisely because they have less to
lose and, at an extreme, can use the threat of force (as some theories of
governance go, whether through judicial or extrajudicial means) to enact
their will. That is, of course, a whole discussion in itself - but it is
not at all dissimilar to parallel conversations that happen in the space
of unwanted software (UwS), spam, and phishing, which have a host of
economic concerns just as much as technical ones.

From that longer message in [1], I omitted a much longer discussion about
the goals of a root program, as I fear the discussion would become too
mired in confusion until this particular issue has reached a conclusion. I
will, however, highlight Frank's metapolicy discussion in [2]. At the
time, there were many existing and disparate PKIs, which provided
everything from 'simple' domain authentication to more complex forms of
vetting, or which were restricted to particular communities. In some ways,
the policy (as captured in #10 of the metapolicy) was about evaluating and
comparing these existing PKIs and whether they helped improve the security
of Mozilla users (again, #10). You can see that this view is reflected in
the contemporary work of the time - such as the PKI Assessment Guidelines
[3], which heavily influenced the audit regime and principles, or RFC 3647
[4], as well as the discussion about the policy itself [5]. These views
basically said that there is a "Mozilla PKI", and there are existing PKIs
of a variety of types and assurances, and the goal of the policy is
effectively to determine whether to enter a form of business relationship
with these PKIs and assert that their operations are functionally
equivalent to the expectations of the Mozilla community. Everything that
derived from that - audits, public review, the policy itself - was about
evaluating whether or not to enter into a business relationship with these
third parties, with Mozilla attempting to represent its users as best as
possible.

While I have much more to say on that for a later time, I draw attention to
those particular bits from my earlier message because, as with any
relationship that affects the security of users and products - whether it
be new Web Platform features, evaluating reports of Firefox security issues
(in code or extensions), or, in this case, a CA - Mozilla is behaving
as the user's agent (hence "User Agent"). The process of determining
whether or not to trust a CA is, in part, an attempt to evaluate whether
the CA in question has goals, incentives, and structures that are aligned
with the "typical" Mozilla user's needs ([2] #6 and #7), such that it can
be entrusted by Mozilla, and its users, to promote and encourage the
adoption of TLS ([2] #10).

You can see similarity in how Mozilla evaluates plugins and prevents them
from working in Firefox due to security concerns [6], or prohibits
third-party software that might harm users [7][8][9]. Some users may
legitimately want this software or functionality, but it may be removed or
blocked if it might be abusive to the typical user [10]. Similarly, Mozilla
has taken steps to remove entire classes of functionality [11] in the name
of improving user security.

I mention this to highlight how the discussion that happens here is
intentionally similar ([2] #19) to how Mozilla treats other aspects of
behaving as the user's agent, and how every CA that it adds is functionally
being treated as an extension of Mozilla and similarly expected to behave
as an agent of users (in this case, all Firefox users). This evaluation
necessarily looks at a wide variety of things, some of them 'soft' and
'subjective', such as financial incentives and motivations, that aren't
easily compiled into purely mechanical checklists - whether for
extension policies, third-party software injecting into Firefox, plugins,
or CAs. As much as possible, transparency is desired - but, much like
extensions blocked for abusiveness, absolute transparency or absolute
objectivity is not always possible, nor always in the best interests of
typical users. Discussions such as this try to capture the evaluation and
thought processes that inform the module owners and peers in making a
decision, and to balance as widely as possible the feedback and input about
how these decisions further both Mozilla's mission and the needs of its
'typical' users.

[1]
https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/rNWEMEkUAQAJ

[2] http://hecker.org/mozilla/ca-certificate-metapolicy
[3]
https://www.americanbar.org/content/dam/aba/events/science_technology/2013/pki_guidelines.pdf
[4] https://tools.ietf.org/html/rfc3647
[5] http://hecker.org/mozilla/cert-policy-draft-10
[6] https://support.mozilla.org/en-US/kb/flash-blocklists
[7]
https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/AMO/Policy/Reviews
[8] https://wiki.mozilla.org/Security/Safe_Browsing
[9]
https://support.mozilla.org/en-US/products/firefox/fix-problems/problems-add-ons-plugins-or-unwanted-software
[10]
https://www.bleepingcomputer.com/news/software/firefox-gets-privacy-boost-by-disabling-proximity-and-ambient-light-sensor-apis/
[11]
https://developer.mozilla.org/en-US/docs/Archive/Add-ons/Working_with_multiprocess_Firefox