Re: Summary of Camerfirma's Compliance Issues

2021-01-28 Thread Eric Mill via dev-security-policy
Just to build on what Ryan said, and to clarify any confusion around the scope 
of Chrome’s action here - Chrome is no longer accepting Camerfirma certificates 
that are specifically used for *TLS server authentication* for websites. 

Our planned action is related to the certificates Chrome uses and verifies, 
which are only those used for TLS server authentication. This does include any 
type of certificate used in Chrome for TLS server authentication, including 
Qualified Website Authentication Certificates (QWACs) and certificates used to 
comply with the Revised Payment Services Directive (PSD2). However, it does not 
cover other use cases, such as TLS client certificates or the use of Qualified 
Certificates for digital signatures.

In order to ensure Chrome’s response is comprehensive, the list of affected 
roots includes all of those operated by Camerfirma that have the technical 
capability to issue TLS server authentication certificates, even if those roots 
are not currently being used to do so. 
But please note that the changes we announced for Chrome will not impact the 
validity of these roots for other types of authentication, only current and 
future use of those roots for TLS server authentication in Chrome.
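
A side note for readers who want to inspect certificates themselves: one
commonly checked signal for TLS server authentication capability is the
Extended Key Usage (EKU) extension. Below is a minimal sketch in Python using
the widely available "cryptography" package; it assumes the common
interpretation that a certificate carrying no EKU extension is not
constrained away from serverAuth, and "root.pem" is a placeholder path, not a
file from this thread.

from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

def may_serve_tls(cert: x509.Certificate) -> bool:
    # True if the certificate asserts id-kp-serverAuth, or carries no
    # EKU extension at all (and so is not constrained away from it).
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage)
    except x509.ExtensionNotFound:
        return True
    return ExtendedKeyUsageOID.SERVER_AUTH in eku.value

with open("root.pem", "rb") as f:  # placeholder path
    print(may_serve_tls(x509.load_pem_x509_certificate(f.read())))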


On Monday, January 25, 2021 at 12:01:42 AM UTC-8, Ryan Sleevi wrote:
> (Writing in a Google capacity) 
> 
> I personally want to say thanks to everyone who has contributed to this 
> discussion, reviewed or reported past incidents, and continued to provide 
> valuable feedback on current incidents. When 
> considering CAs and incidents, we really want to ensure we’re considering 
> all relevant information, as well as making sure we’ve got a correct 
> understanding of the details. 
> 
> After full consideration of the information available, and in order to 
> protect and safeguard Chrome users, certificates issued by AC Camerfirma SA 
> will no longer be accepted in Chrome, beginning with Chrome 90. 
> 
> This will be implemented via our existing mechanisms to respond to CA 
> incidents, via an integrated blocklist. Beginning with Chrome 90, users 
> that attempt to navigate to a website that uses a certificate that chains 
> to one of the roots detailed below will find that the connection is not 
> considered secure, with a message indicating that the certificate has been 
> revoked. Users and 
> enterprise administrators will not be able to bypass or override this 
> warning. 
> 
> This change will be integrated into the Chromium open-source project as 
> part of a default build. Questions about the expected behavior in specific 
> Chromium-based browsers should be directed to their maintainers. 
> 
> To ensure sufficient time for testing and for the replacement of affected 
> certificates by website operators, this change will be incorporated as part 
> of the regular Chrome release process. Information about timetables and 
> milestones is available at https://chromiumdash.appspot.com/schedule. 
> 
> Beginning approximately the week of Thursday, March 11, 2021, website 
> operators will be able to preview these changes in Chrome 90 Beta. Website 
> operators will also be able to preview the change sooner, using our Dev and 
> Canary channels, while the majority of users will not encounter issues 
> until the release of Chrome 90 to the Stable channel, currently targeting 
> the week of Tuesday, April 13, 2021. 
> 
> In responding to CA incidents in the past, different browser vendors have 
> taken a variety of approaches, determined both by the facts of the 
> incident and the technical options available to the browsers. Our 
> particular decision to actively block all certificates, old and new alike, 
> is based on consideration of the details available, the present technical 
> implementation, and a desire to have a consistent, predictable, 
> cross-platform experience for Chrome users and site operators. 
> 
> For the list of affected root certificates, please see below. Note that we 
> have included a holistic set of root certificates in order to ensure 
> consistency across the various platforms Chrome supports, even when they 
> may not be intended for TLS usage. However, please note that the 
> restrictions are placed on the associated subjectPublicKeyInfo fields of 
> these certificates. 
> 
> Affected Certificates (SHA-256 fingerprint) 
> 
> - 04F1BEC36951BC1454A904CE32890C5DA3CDE1356B7900F6E62DFA2041EBAD51 
> - 063E4AFAC491DFD332F3089B8542E94617D893D7FE944E10A7937EE29D9693C0 
> - 0C258A12A5674AEF25F28BA7DCFAECEEA348E541E6F5CC4EE63B71B361606AC3 
> - C1D80CE474A51128B77E794A98AA2D62A0225DA3F419E5C7ED73DFBF660E7109 
> - 136335439334A7698016A0D324DE72284E079D7B5220BB8FBD747816EEBEBACA 
> - EF3CB417FC8EBF6F97876C9E4ECE39DE1EA5FE649141D1028B7D11C0B2298CED
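
A minimal sketch of how the two identifiers in play here relate, using
Python's "cryptography" package: the list above gives SHA-256 fingerprints
over whole certificates, while the restriction itself keys on the SHA-256
hash of each certificate's subjectPublicKeyInfo. "root.pem" is a placeholder
path.

import hashlib
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization

with open("root.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# SHA-256 over the full DER certificate: the fingerprints listed above.
fingerprint = cert.fingerprint(hashes.SHA256()).hex().upper()

# SHA-256 over the DER-encoded subjectPublicKeyInfo: what the restriction
# keys on, so a reissued certificate reusing the same key stays blocked.
spki_der = cert.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
spki_hash = hashlib.sha256(spki_der).hexdigest().upper()

print("certificate fingerprint:", fingerprint)
print("SPKI hash:", spki_hash)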
> On Thu, Dec 3, 2020 at 1:01 PM Ben Wilson via dev-security-policy < 
> dev-secur...@lists.mozilla.org> wrote: 
> 
> > All, 
> > 
> > We have prepared an issues list as a 

Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2021-01-28 Thread Ryan Sleevi via dev-security-policy
On Thu, Jan 28, 2021 at 3:05 PM Ben Wilson  wrote:

> Thanks.  My current thinking is that we can leave the MRSP "as is" and
> that we write up what we want in
> https://wiki.mozilla.org/CA/Audit_Statements#Auditor_Qualifications,
> which is, as you note, information about members of the audit team and how
> individual members meet #2, #3, and #6.
>

Is this intended as a temporary fix until the issue is meaningfully
addressed? Or are you seeing this as a long-term resolution of the issue?

I thought the goal was to make the policy clearer about the expectations, and
my worry is that this would create more work for you, Kathleen, and the
broader community, because it puts the onus on you to chase down CAs to
provide the demonstration, since they won't have paid attention to a
requirement that isn't in the policy itself. This was the complaint
previously raised about "CA Problematic Practices" and things that are
forbidden, so I'm not sure I understand the distinction/benefit of moving it
out?

I think the relevance to MRSP is trying to clarify whether Mozilla thinks
of auditors as individuals (as it originally did), or whether it thinks of
auditors as organizations. I think that if the MRSP were clarified on that
point, then the path you're proposing may work (at the risk of creating more
work for y'all in requesting that CAs provide information they're required
to provide but didn't know about).

If the issue you're trying to solve is one about whether it's in the audit
letter vs communicated to Mozilla, then I think it should be possible to
achieve that within the MRSP and say so explicitly (i.e. not require it
in the audit letter, but still require it).

Just trying to make sure I'm not overlooking or misunderstanding your
concerns there :)

>


Re: Policy 2.7.1: MRSP Issue #187: Require disclosure of incidents in Audit Reports

2021-01-28 Thread Ryan Sleevi via dev-security-policy
On Sun, Jan 24, 2021 at 11:33 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> All,
>
> Based on the comments received, I am inclined to clarify the proposed
> language under Issues #154 and #187 with reference to a CA's Bugzilla
> compliance bugs rather than "incidents".  The existing language in section
> 2.4 of the MRSP already requires the CA to promptly file an Incident Report
> in Bugzilla for all incidents.
>
> My proposal for Issue #154 is to add a final sentence to MRSP section 2.4
> which would say, "If being audited according to the WebTrust criteria, the
> CA’s Management Assertion letter MUST include a complete list of the CA's
> Bugzilla compliance bugs that were unresolved at any time during the audit
> period."
>
> Under Issue #187, I propose that new item 11 in MRSP section 3.1.4
> (required
> publicly-available audit documentation) would read:  "11.  a complete list
> of the CA’s Bugzilla compliance bugs that were unresolved at any time
> during the audit period."
>

I don't think this is a good change, and it doesn't meet the intent behind
the issue.

This implies that if Mozilla believed an incident was resolved (or, as we've
seen certain CAs do, the CA itself marked its issue as resolved), there would
be no requirement to disclose it to the auditor other than "Hope the CA is
nice" (which, sadly, is not reasonable).

I explicitly think "incident" is the right approach, and disagree that
flagging these as compliance bugs is good or useful for the ecosystem. I
further think that even matters flagged as "Duplicate" or "Invalid" _are_
useful to ensure that the auditor is aware of the relevant discussion. For
example, if there is evidence contrary to the facts stated on the bug (i.e.
it was *not* a duplicate), that is absolutely relevant.

So I guess I'm disagreeing with Jeff and Clemens here, by suggesting that an
incident should be any known or reported violation of Mozilla policy, which
may be manifested as bugs, in order to ensure transparency and confirmation
that the auditor had the necessary information and facts available and that
it was considered as part of the statement. This still permits auditors to,
for example, consider the issue as a duplicate/remediated, but given that
the whole goal is to receive confirmation from the auditors that they were
aware of all the same things the community is, I don't think the proposed
language gets to that.
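
For concreteness, the predicate in Ben's proposed language ("unresolved at
any time during the audit period") is an interval-overlap test. A minimal
sketch follows, with field names assumed for illustration; a real
implementation would pull bug timestamps from the Bugzilla REST API and, per
the point above, arguably also surface bugs later marked Duplicate or
Invalid.

from datetime import date
from typing import Optional

def unresolved_during(created: date, resolved: Optional[date],
                      period_start: date, period_end: date) -> bool:
    # A bug is in scope if it was open at any point within the audit
    # period: filed on or before the period's end, and either still open
    # or resolved on/after the period's start.
    return created <= period_end and (resolved is None
                                      or resolved >= period_start)

# Example: filed mid-period and still open at period end -> in scope.
print(unresolved_during(date(2020, 6, 1), None,
                        date(2020, 1, 1), date(2020, 12, 31)))  # True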


Re: Root Store Policy Suggestion

2021-01-28 Thread Burton via dev-security-policy
On Thu, Jan 28, 2021 at 7:33 PM Ryan Sleevi  wrote:

>
>
> On Thu, Jan 28, 2021 at 1:32 PM Burton  wrote:
>
>> Hi Ryan,
>>
>> The answer to your questions.
>>
>> A remediation plan is only useful in cases of slight CA non-compliance
>> with the rules set forth by the root store policy.
>>
>> A remediation plan in cases of slight CA non-compliance provides
>> assurance of the CA's commitment to compliance.
>>
>
> Sure, and I think (and hopefully I'm fairly stating) that the goal is that
> these should be provided in the Incident Reports themselves. That is, the
> remediation should address both the immediate and systemic issues, and
> future incidents of the CA will be judged against this.
>
> The intent is certainly that anyone in the community participates and
> reviews these, and I think we see a lot of fantastic activity on the bug
> reports from people who do, which is a healthy sign, even though they're
> often calling out concerns with the remediation or highlighting how it
> fails to meet the expectations.
>
>
>> A CA under investigation for serious non-compliance, with detailed
>> documented evidence of non-compliance incidents, has reached the point of
>> no return.
>>
>> A remediation plan in cases of serious non-compliance is a reference
>> document for a new root inclusion request, as documented evidence of
>> commitment to compliance.
>>
>
>> The CA's roots should be removed in the case of serious non-compliance,
>> and the CA asked to reapply for inclusion in the root store with new
>> roots, a new commitment to compliance, new audits from a different
>> auditor, and reformed practices and management.
>>
>
> Right, and I think this might be premature or giving false hope, at least
> to CAs that assume every CA, once removed, can simply reapply with a
> remediation plan. I agree with you, it's incredibly valuable to understand
> how the CA plans to address the issues, and just like incident reports,
> it's useful to understand how the CA views the incidents that might lead up
> to distrust and how it plans to mitigate them before reapplying. Yet we've
> often seen CAs believe that because a remediation plan exists for the
> identified issues, it's sufficient to apply for new roots, when really,
> such CAs are working from a serious trust deficit, and so not only need to
> remediate the identified issues, but show how they're going above and
> beyond addressing the systemic issues, in order to justify the risk of
> trusting them again. Understandably, this is assessed on a case-by-case basis.
>
> To your original point, historically CA actions (generally) worked in
> three phases:
>
> 1) A pattern is believed to exist (of incidents), or an incident is so
> severe it warrants immediate public discussion. The community is asked to
> provide details - e.g. of incidents that were overlooked, of other relevant
> data - to ensure that a full and comprehensive picture of relevant facts
> are gathered and understood. The CA is invited to share details (e.g. how
> they mitigated such issues) or to respond to the facts, if they believe
> they're not accurate.
>
> 2) A discussion about the issues themselves, to evaluate the nature of the
> incidents, as well as solicit proposals from the community in particular
> (rather than the CA, although the CA is welcome to contribute) about how to
> mitigate the risks these issues and incidents highlight.
>
> 3) At least for Mozilla, a proposed plan for Mozilla products, which is
> often based on suggestions from the community (in #2) as well as Mozilla's
> own product and security considerations. Mozilla may solicit further
> feedback on their plan, from the community and the CA, to make sure they've
> balanced the concerns and considerations raised in #2 accurately, or may
> decide it warrants immediate action.
>
> This is a rough guide; obviously there are exceptions. For example,
> Mozilla and other browsers blocking MITM roots hasn't always involved all
> three stages. Similarly, in CA compromise events, Step 2 and 3 may be
> skipped entirely, because the only viable solution is obvious.
>
> Other programs, whether Apple, Google, or Microsoft, don't necessarily
> operate the same way. For example, Google, Apple and Microsoft don't
> provide any statement at all about public engagement, although they may
> closely monitor the discussions in #1 and #2.
>
> Step #1 has, intentionally and by design, largely been replaced by the
> Incident Reporting requirements incorporated into the Root Policies of both
> Mozilla and Google Chrome. That is, the incident reports, and the public
> discussions of the incidents, serve to contemporaneously address issues,
> identify remediations, and understand and identify how well the CA
> understands the risks and is able to take meaningful corrective action.
> These days, Step #1 is merely summarizing the incidents based on the
> information in the incidents, and thus may not need the same lengthy
> discussion in the past, prior to the incident 

Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2021-01-28 Thread Ben Wilson via dev-security-policy
Thanks.  My current thinking is that we can leave the MRSP "as is" and that
we write up what we want in
https://wiki.mozilla.org/CA/Audit_Statements#Auditor_Qualifications, which
is, as you note, information about members of the audit team and how
individual members meet #2, #3, and #6.




On Thu, Jan 28, 2021 at 12:44 PM Ryan Sleevi  wrote:

>
>
> On Thu, Jan 28, 2021 at 1:43 PM Ben Wilson via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> On second thought, I think that Mozilla can accomplish what we want
>> without
>> modifying the MRSP
>> <
>> https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#32-auditors
>> >
>> (which says audits MUST be performed by a Qualified Auditor, as defined in
>> the Baseline Requirements section 8.2), and by instead adding language to
>> https://wiki.mozilla.org/CA/Audit_Statements#Auditor_Qualifications that
>> explains what additional information we need submitted to determine that
>> an
>> auditor is "qualified" under Section 8.2 of the Baseline Requirements.
>>
>> In other words (paraphrasing from BR 8.2), we would need evidence that the
>> persons or entities:
>> 1. Are independent from the subject of the audit;
>> 2. Have the ability to conduct an audit that addresses the criteria;
>> 3. Have proficiency in examining Public Key Infrastructure technology,
>> information security tools and techniques, information technology and
>> security auditing, and the third-party attestation function;
>> 4. Are accredited in accordance with ISO 17065, applying the requirements
>> specified in ETSI EN 319 403, *OR* 5. Are licensed by WebTrust;
>> 6. Are bound by law, government regulation, or professional code of ethics
>> (to render an honest and objective opinion); and
>> 7. Maintain Professional Liability/Errors & Omissions insurance with
>> policy
>> limits of at least one million US dollars in coverage.
>>
>> We do some of this already when we check on an auditor's status to bring
>> an
>> auditor's record current in the CCADB.  The edits that we'll make will
>> just
>> make it easier for us to go through the list above.
>>
>> Thoughts?
>>
>
> I'm not sure this approach makes the edits you're making very clear;
> pull requests or commits might be clearer, as Wayne did in the
> past. If there is a commit, happy to look at it and apologies if I missed
> it.
>
> I'm not sure this addresses the issue as raised, or at least, "or
> entities" seems to create the same issues that are trying to be addressed,
> by thinking in terms of "legal entities" rather than qualified persons.
>
> Your discussion about "auditor's" and "auditor's status" might be misread
> as "Audit firm", when I think the issue raised was thinking about "person
> performing the audit". The individual persons aren't necessarily licensed
> or accredited (e.g. #4/ #5), and may not be the ones that retain PL/E
> insurance (#7). Further, the individuals might be independent while the firm
> is not (#1).
>
> So I think you're really just left with wanting a demonstration as to who
> the members of the audit team are and how individual members meet #2, #3,
> and #6. Is that right? I think Kathleen's proposal from November got close to
> that, and then the remainder is clarifying the language that you've
> proposed for 2.7.1, namely "Individuals have competence, partnerships and
> corporations do not".
>
> I think the expectation is that "Individually, and as an audit team,
> they are independent (#1)" (e.g. you can't have a non-independent party
> running the audit with a bunch of independent parties reporting to them,
> since they're no longer independent), while collectively the audit
> team meets #2/#3, with the burden being to demonstrate how the individuals
> on the team meet that.
>
> Is that what you were thinking? Or is my explanation a jumbled mess :)
>


Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2021-01-28 Thread Ryan Sleevi via dev-security-policy
On Thu, Jan 28, 2021 at 1:43 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On second thought, I think that Mozilla can accomplish what we want without
> modifying the MRSP
> <
> https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#32-auditors
> >
> (which says audits MUST be performed by a Qualified Auditor, as defined in
> the Baseline Requirements section 8.2), and by instead adding language to
> https://wiki.mozilla.org/CA/Audit_Statements#Auditor_Qualifications that
> explains what additional information we need submitted to determine that an
> auditor is "qualified" under Section 8.2 of the Baseline Requirements.
>
> In other words (paraphrasing from BR 8.2), we would need evidence that the
> persons or entities:
> 1. Are independent from the subject of the audit;
> 2. Have the ability to conduct an audit that addresses the criteria;
> 3. Have proficiency in examining Public Key Infrastructure technology,
> information security tools and techniques, information technology and
> security auditing, and the third-party attestation function;
> 4. Are accredited in accordance with ISO 17065, applying the requirements
> specified in ETSI EN 319 403, *OR* 5. Are licensed by WebTrust;
> 6. Are bound by law, government regulation, or professional code of ethics
> (to render an honest and objective opinion); and
> 7. Maintain Professional Liability/Errors & Omissions insurance with policy
> limits of at least one million US dollars in coverage.
>
> We do some of this already when we check on an auditor's status to bring an
> auditor's record current in the CCADB.  The edits that we'll make will just
> make it easier for us to go through the list above.
>
> Thoughts?
>

I'm not sure this approach makes the edits you're making very clear;
pull requests or commits might be clearer, as Wayne did in the
past. If there is a commit, happy to look at it and apologies if I missed
it.

I'm not sure this addresses the issue as raised, or at least, "or entities"
seems to create the same issues that are trying to be addressed, by
thinking in terms of "legal entities" rather than qualified persons.

Your discussion about "auditor's" and "auditor's status" might be misread
as "Audit firm", when I think the issue raised was thinking about "person
performing the audit". The individual persons aren't necessarily licensed
or accredited (e.g. #4/ #5), and may not be the ones that retain PL/E
insurance (#7). Further, the individuals might be independent while the firm
is not (#1).

So I think you're really just left with wanting a demonstration as to who
the members of the audit team are and how individual members meet #2, #3,
and #6. Is that right? I think Kathleen's proposal from November got close to
that, and then the remainder is clarifying the language that you've
proposed for 2.7.1, namely "Individuals have competence, partnerships and
corporations do not".

I think the expectation is that "Individually, and as an audit team,
they are independent (#1)" (e.g. you can't have a non-independent party
running the audit with a bunch of independent parties reporting to them,
since they're no longer independent), while collectively the audit
team meets #2/#3, with the burden being to demonstrate how the individuals
on the team meet that.

Is that what you were thinking? Or is my explanation a jumbled mess :)


Re: Root Store Policy Suggestion

2021-01-28 Thread Ryan Sleevi via dev-security-policy
On Thu, Jan 28, 2021 at 1:32 PM Burton  wrote:

> Hi Ryan,
>
> The answer to your questions.
>
> A remediation plan is only useful in cases of slight CA non-compliance
> with the rules set forth by the root store policy.
>
> A remediation plan in cases of slight CA non-compliance provides
> assurance of the CA's commitment to compliance.
>

Sure, and I think (and hopefully I'm fairly stating) that the goal is that
these should be provided in the Incident Reports themselves. That is, the
remediation should address both the immediate and systemic issues, and
future incidents of the CA will be judged against this.

The intent is certainly that anyone in the community participates and
reviews these, and I think we see a lot of fantastic activity on the bug
reports from people who do, which is a healthy sign, even though they're
often calling out concerns with the remediation or highlighting how it
fails to meet the expectations.


> A CA under investigation for serious non-compliance, with detailed
> documented evidence of non-compliance incidents, has reached the point of
> no return.
>
> A remediation plan in cases of serious non-compliance is a reference
> document for a new root inclusion request, as documented evidence of
> commitment to compliance.
>

> The CA's roots should be removed in the case of serious non-compliance,
> and the CA asked to reapply for inclusion in the root store with new
> roots, a new commitment to compliance, new audits from a different
> auditor, and reformed practices and management.
>

Right, and I think this might be premature or giving false hope, at least
to CAs that assume every CA, once removed, can simply reapply with a
remediation plan. I agree with you, it's incredibly valuable to understand
how the CA plans to address the issues, and just like incident reports,
it's useful to understand how the CA views the incidents that might lead up
to distrust and how it plans to mitigate them before reapplying. Yet we've
often seen CAs believe that because a remediation plan exists for the
identified issues, it's sufficient to apply for new roots, when really,
such CAs are working from a serious trust deficit, and so not only need to
remediate the identified issues, but show how they're going above and
beyond addressing the systemic issues, in order to justify the risk of
trusting them again. Understandably, this is assessed on a case-by-case basis.

To your original point, historically CA actions (generally) worked in three
phases:

1) A pattern is believed to exist (of incidents), or an incident is so
severe it warrants immediate public discussion. The community is asked to
provide details - e.g. of incidents that were overlooked, of other relevant
data - to ensure that a full and comprehensive picture of relevant facts
are gathered and understood. The CA is invited to share details (e.g. how
they mitigated such issues) or to respond to the facts, if they believe
they're not accurate.

2) A discussion about the issues themselves, to evaluate the nature of the
incidents, as well as solicit proposals from the community in particular
(rather than the CA, although the CA is welcome to contribute) about how to
mitigate the risks these issues and incidents highlight.

3) At least for Mozilla, a proposed plan for Mozilla products, which is
often based on suggestions from the community (in #2) as well as Mozilla's
own product and security considerations. Mozilla may solicit further
feedback on their plan, from the community and the CA, to make sure they've
balanced the concerns and considerations raised in #2 accurately, or may
decide it warrants immediate action.

This is a rough guide; obviously there are exceptions. For example, Mozilla
and other browsers blocking MITM roots hasn't always involved all three
stages. Similarly, in CA compromise events, Step 2 and 3 may be skipped
entirely, because the only viable solution is obvious.

Other programs, whether Apple, Google, or Microsoft, don't necessarily
operate the same way. For example, Google, Apple and Microsoft don't
provide any statement at all about public engagement, although they may
closely monitor the discussions in #1 and #2.

Step #1 has, intentionally and by design, largely been replaced by the
Incident Reporting requirements incorporated into the Root Policies of both
Mozilla and Google Chrome. That is, the incident reports, and the public
discussions of the incidents, serve to contemporaneously address issues,
identify remediations, and understand and identify how well the CA
understands the risks and is able to take meaningful corrective action.
These days, Step #1 is merely summarizing the incidents based on the
information in the incidents, and thus may not need the same lengthy
discussion in the past, prior to the incident disclosure requirements (e.g.
StartCom, WoSign).

Step #2 is still widely practiced, as we've seen throughout a number of
recent and past events. Without wanting to put words into Mozilla's mouth,
certainly 

Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2021-01-28 Thread Ben Wilson via dev-security-policy
On second thought, I think that Mozilla can accomplish what we want without
modifying the MRSP
<https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#32-auditors>
(which says audits MUST be performed by a Qualified Auditor, as defined in
the Baseline Requirements section 8.2), and by instead adding language to
https://wiki.mozilla.org/CA/Audit_Statements#Auditor_Qualifications that
explains what additional information we need submitted to determine that an
auditor is "qualified" under Section 8.2 of the Baseline Requirements.

In other words (paraphrasing from BR 8.2), we would need evidence that the
persons or entities:
1. Are independent from the subject of the audit;
2. Have the ability to conduct an audit that addresses the criteria;
3. Have proficiency in examining Public Key Infrastructure technology,
information security tools and techniques, information technology and
security auditing, and the third-party attestation function;
4. Are accredited in accordance with ISO 17065, applying the requirements
specified in ETSI EN 319 403, *OR* 5. Are licensed by WebTrust;
6. Are bound by law, government regulation, or professional code of ethics
(to render an honest and objective opinion); and
7. Maintain Professional Liability/Errors & Omissions insurance with policy
limits of at least one million US dollars in coverage.
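
A minimal sketch of how one might track these seven criteria, reflecting the
distinction discussed elsewhere in this thread between criteria that attach
to individual audit-team members (#1, #2, #3, #6) and those that typically
attach to the audit firm (#4/#5, #7). The structure and field names are
illustrative assumptions, not anything defined by the BRs or the MRSP.

from dataclasses import dataclass

@dataclass
class AuditorMember:
    name: str
    independent: bool          # criterion 1
    can_audit_criteria: bool   # criterion 2
    pki_proficiency: bool      # criterion 3
    bound_by_ethics: bool      # criterion 6

@dataclass
class AuditFirm:
    independent: bool             # criterion 1
    accredited_or_licensed: bool  # criterion 4 or 5
    insured_1m_usd: bool          # criterion 7

def team_qualified(firm: AuditFirm, team: list[AuditorMember]) -> bool:
    # Every individual must be independent, able to audit the criteria,
    # proficient in PKI, and bound by a code of ethics; the firm must be
    # independent, accredited or licensed, and insured.
    individuals_ok = all(
        m.independent and m.can_audit_criteria
        and m.pki_proficiency and m.bound_by_ethics
        for m in team
    )
    return (firm.independent and firm.accredited_or_licensed
            and firm.insured_1m_usd and individuals_ok)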

We do some of this already when we check on an auditor's status to bring an
auditor's record current in the CCADB.  The edits that we'll make will just
make it easier for us to go through the list above.

Thoughts?

Ben

On Tue, Jan 26, 2021 at 1:36 PM Ben Wilson  wrote:

> Thanks, Clemens. I'll take a look.
>
> Also, apparently my redlining was lost when my message was saved to the
> newsgroup.
>
> I'll see if I can re-post without the text formatting of strikeouts and
> underlines.
>
> On Tue, Jan 26, 2021 at 10:24 AM Clemens Wanko via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> Hi Ben,
>> looking at what was suggested so far for section 3.2, it seems that the
>> BRs combine and summarize, under "qualified" in BR section 8.2, what you
>> and Kathleen describe with the definitions of "competent" and
>> "independent" parties.
>>
>> Based upon that, MRSP section 3.2 could be structured in the following
>> way:
>>
>> * 1st: definition of "competent party" **
>> By "competent party" we mean...
>>
>> * 2nd: definition of "independence" **
>> By "independent party" we mean...
>>
>> * 3rd: now refer to the BR, summarizing 1 and 2 in the term
>> "qualified assessor/auditor" *
>> By "qualified party" we mean a person or other entity or group of persons
>> who meet *is meeting * the combination of the requirements defined above
>> for a "competent party" and an "independent party" and as such meets
>> *meeting * the requirements of section 8.2 of the Baseline Requirements.
>>
>>
>> Further following that idea and syncing it with the wording also used by
>> the BR, the current suggestion for MRSP section 3.2 could be
>> revised/amended as follows:
>>
>> *
>> 3.2 Auditors
>> Mozilla requires that audits MUST be performed by a competent,
>> independent, and therefore qualified, party.
>> [...]
>> By "competent party" we mean a person or other entity *group of persons*
>> who has the proficiency and is authorized to perform audits according to
>> the stated criteria (e.g., by the organization responsible for the criteria
>> or by a relevant agency) and for whom sufficient public information is
>> available to determine and evidence that the party is competent *has
>> sufficient education, experience, and ability* to judge the CA’s
>> conformance to the stated criteria.
>> In the latter case, "Public information" referred to SHOULD *** -> SHALL
>> - Why not be more strict here?*** include information regarding the
>> party’s:
>> - evidence of being bound by law, government regulation, or professional
>> code of ethics;
>> - knowledge of CA-related technical issues such as public key
>> cryptography and related standards;
>> - experience in performing security-related audits, evaluations, and risk
>> analyses; and
>> - honesty and objectivity *ability to deliver an opinion as to the CA’s
>> compliance with applicable requirements*.
>> [...]
>> *
>>
>> Best regards
>> Clemens
>>
>>
>


Re: Root Store Policy Suggestion

2021-01-28 Thread Burton via dev-security-policy
Hi Ryan,

The answer to your questions.

A remediation plan is only useful in cases of slight CA non-compliance with
the rules set forth by the root store policy.

A remediation plan in cases of slight CA non-compliance provides assurance
of the CA's commitment to compliance.

A CA under investigation for serious non-compliance, with detailed documented
evidence of non-compliance incidents, has reached the point of no return.

A remediation plan in cases of serious non-compliance is a reference
document for a new root inclusion request, as documented evidence of
commitment to compliance.

The CA's roots should be removed in the case of serious non-compliance, and
the CA asked to reapply for inclusion in the root store with new roots, a
new commitment to compliance, new audits from a different auditor, and
reformed practices and management.

Thank you

Burton

On Wed, 27 Jan 2021, 19:54 Ryan Sleevi,  wrote:

>
>
> On Wed, Jan 27, 2021 at 2:45 PM Burton  wrote:
>
>> I included the remediation plan in the proposal because a CA will almost
>> always include a remediation plan once it reaches the stage of a serious
>> non-compliance investigation by root store policy owners.
>>
>
> Sure, but I was more asking: are you aware of any point in the past where
> the remediation plan has been valuable, useful or appropriate? I'm not.
>

> The expectation is continuous remediation, so any remediation plan at a
> later stage seems too little, too late, right? The very intentional goal of
> the incident reporting was to transition to a continuous improvement
> process, where the CA was evaluated based on its
> contemporaneous remediation of incidents, rather than waiting until things
> get so bad they pile up and a remediation plan is used.
>
> So I'm trying to understand what a remediation plan would include, during
> discussion, that wouldn't (or, more explicitly, shouldn't) have been
> included in the incident reports as they happened?
>
>>


Re: Policy 2.7.1: MRSP Issue #187: Require disclosure of incidents in Audit Reports

2021-01-28 Thread Clemens Wanko via dev-security-policy
Hi Ben, 
that works fine for me from the ETSI auditor's perspective. 
REM: The ETSI Audit Attestation template requires the auditor to include a full 
list of Bugzilla compliance bugs – resolved or unresolved – which are relevant 
for the past audit period.

Best regards
Clemens