On Thu, Jan 28, 2021 at 7:33 PM Ryan Sleevi <r...@sleevi.com> wrote:

>
>
> On Thu, Jan 28, 2021 at 1:32 PM Burton <j...@0.me.uk> wrote:
>
>> Hi Ryan,
>>
>> The answers to your questions:
>>
>> A remediation plan is only useful in cases of slight CA non-compliance
>> with the rules set forth by the root store policy.
>>
>> A remediation plan, in cases of slight CA non-compliance, provides
>> assurance of the CA's commitment to compliance.
>>
>
> Sure, and I think (and hopefully I'm stating this fairly) that the goal is
> that these should be provided in the Incident Reports themselves. That is,
> the remediation should address both the immediate and systemic issues, and
> future incidents of the CA will be judged against this.
>
> The intent is certainly that anyone in the community participates and
> reviews these, and I think we see a lot of fantastic activity on the bug
> reports from people who do, which is a healthy sign, even though they're
> often calling out concerns with the remediation or highlighting how it
> fails to meet the expectations.
>
>
>> A CA under investigation for serious non-compliance, with detailed
>> documented evidence of non-compliance incidents, has reached the point of
>> no return.
>>
>> A remediation plan in cases of serious non-compliance serves as a
>> reference document for a new root inclusion request, as documented
>> evidence of commitment to compliance.
>>
>
>> In cases of serious non-compliance, the CA's roots should be removed, and
>> the CA asked to reapply for inclusion in the root store with new roots, a
>> renewed commitment to compliance, new audits from a different auditor, and
>> reformed practices and management.
>>
>
> Right, and I think this might be premature, or give false hope, at least
> to CAs that assume every CA, once removed, can simply reapply with a
> remediation plan. I agree with you: it's incredibly valuable to understand
> how the CA plans to address the issues, and just like incident reports,
> it's useful to understand how the CA views the incidents that might lead up
> to distrust and how it plans to mitigate them before reapplying. Yet we've
> often seen CAs believe that because a remediation plan exists for the
> identified issues, it's sufficient to apply for new roots, when really,
> such CAs are working from a serious trust deficit, and so not only need to
> remediate the identified issues, but show how they're going above and
> beyond addressing the systemic issues, in order to justify the risk of
> trusting them again. Understandably, this is judged on a case-by-case basis.
>
> To your original point, historically CA actions (generally) worked in
> three phases:
>
> 1) A pattern of incidents is believed to exist, or an incident is so
> severe it warrants immediate public discussion. The community is asked to
> provide details - e.g. of incidents that were overlooked, or of other
> relevant data - to ensure that a full and comprehensive picture of the
> relevant facts is gathered and understood. The CA is invited to share
> details (e.g. how they mitigated such issues) or to respond to the facts,
> if they believe they're not accurate.
>
> 2) A discussion about the issues themselves, to evaluate the nature of the
> incidents, as well as to solicit proposals from the community in particular
> (rather than the CA, although the CA is welcome to contribute) about how to
> mitigate the risks these issues and incidents highlight.
>
> 3) At least for Mozilla, a proposed plan for Mozilla products, which is
> often based on suggestions from the community (in #2) as well as Mozilla's
> own product and security considerations. Mozilla may solicit further
> feedback on their plan, from the community and the CA, to make sure they've
> balanced the concerns and considerations raised in #2 accurately, or may
> decide it warrants immediate action.
>
> This is a rough guide; obviously, there are exceptions. For example,
> Mozilla and other browsers blocking MITM roots hasn't always involved all
> three stages. Similarly, in CA compromise events, Step 2 and 3 may be
> skipped entirely, because the only viable solution is obvious.
>
> Other programs, whether Apple, Google, or Microsoft, don't necessarily
> operate the same way. For example, Google, Apple and Microsoft don't
> provide any statement at all about public engagement, although they may
> closely monitor the discussions in #1 and #2.
>
> Step #1 has, intentionally and by design, largely been replaced by the
> Incident Reporting requirements incorporated into the Root Policies of both
> Mozilla and Google Chrome. That is, the incident reports, and the public
> discussions of the incidents, serve to contemporaneously address issues,
> identify remediations, and understand and identify how well the CA
> understands the risks and is able to take meaningful corrective action.
> These days, Step #1 is merely a matter of summarizing the incidents based
> on the information in the incident reports, and thus may not need the same
> lengthy discussion as in the past, prior to the incident disclosure
> requirements (e.g. StartCom, WoSign).
>
> Step #2 is still widely practiced, as we've seen throughout a number of
> recent and past events. Without wanting to put words into Mozilla's mouth,
> certainly it's a reflection of the principles of Mozilla's policy. Browsers
> like Google Chrome, Apple Safari, and Microsoft Edge don't require #2 to
> happen, although it can often provide valuable insight into their own root
> programs and evaluation of the CA. Chrome comes the closest, that I'm aware
> of, to calling this out, at https://g.co/chrome/root-policy, as something
> they consider (they also consider discussions for inclusion, but that's
> separate from this discussion).
>
> Step #3 is fairly unique to Mozilla. I think you're right to highlight that
> the community benefits from a timely transition from Step #2 to Step #3,
> although that's often situational, depending on the nature and complexity
> of incidents, compatibility risks between browsers, etc.
>
> In some cases, Step #3 has called out next steps if the CA wants to pursue
> "#4) Reapply" - or at least, an absolute minimum set of goals that must be
> met (rather than a necessary and sufficient set of goals). But that doesn't
> require an explicit/formal remediation plan - it may be a product decision
> for Mozilla up-front, or it might be something that's deferred until
> if/when a CA decides to reapply.
>
> This is, at least, historically how things have worked in the time I've
> been here, but of course, that's always subject to change, and has changed
> as well throughout the time I've participated (e.g. transitioning #1 to
> primarily formal incident reporting).
>
> It sounds like the main thrust of your suggestion, then, is providing
> clearer timelines about the transitions to these stages. Is that fair to
> say?
>

That's right! My suggestion was about providing clearer timelines for the
transitions between these stages.

Thank you

Burton
_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
