RE: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name verification to 398 days

2020-12-02 Thread Jeremy Rowley via dev-security-policy
Should this limit on reuse also apply to S/MIME? Right now, the 825-day limit 
in Mozilla policy only applies to TLS certs, with email verification for S/MIME 
allowed to be reused indefinitely.  The first draft of the language looked like 
it might change this, while the newer language puts back the TLS limitation. If 
it's not addressed in this update, adding clarification on domain verification 
reuse for S/MIME would be a good improvement on the existing policy.

-Original Message-
From: dev-security-policy  On 
Behalf Of Ben Wilson via dev-security-policy
Sent: Wednesday, December 2, 2020 2:22 PM
To: Ryan Sleevi 
Cc: Doug Beattie ; Mozilla 

Subject: Re: Policy 2.7.1: MRSP Issue #206: Limit re-use of domain name 
verification to 398 days

See my responses inline below.

On Tue, Dec 1, 2020 at 1:34 PM Ryan Sleevi  wrote:

>
>
> On Tue, Dec 1, 2020 at 2:22 PM Ben Wilson via dev-security-policy < 
> dev-security-policy@lists.mozilla.org> wrote:
>
>> See responses inline below:
>>
>> On Tue, Dec 1, 2020 at 11:40 AM Doug Beattie 
>> wrote:
>>
>> > Hi Ben,
>> >
>> > For now I won’t comment on the 398 day limit or the date which you
>> propose
>> > this to take effect (July 1, 2021), but on the ability of CAs to 
>> > re-use domain validations completed prior to 1 July for their full 
>> > 825-day re-use period.  I'm assuming that the 398-day limit is only for 
>> > those domains validated on or after 1 July, 2021.  Maybe that is 
>> > your intent, but the wording is not clear (it's never been all that 
>> > clear)
>> >
>>
>> Yes. (I agree that the wording is currently unclear and can be 
>> improved, which I'll work on as discussion progresses.)  That is the 
>> intent - for certificates issued beginning next July--new validations 
>> would be valid for
>> 398 days, but existing, reused validations would be sunsetted and 
>> could be used for up to 825 days (let's say, until Oct. 1, 2023, 
>> which I'd advise against, given the benefits of freshness provided by 
>> re-performing methods in BR 3.2.2.4 and BR 3.2.2.5).
>>
>
> Why? I have yet to see a compelling explanation from a CA about why 
> "grandfathering" old validations is good, and as you note, it 
> undermines virtually every benefit that is had by the reduction until 2023.
>

I am open to the idea of cutting off the tail earlier, say at October 1, 
2022, or sooner (see below).  I can work on language that does that.


>
> Ben, could you explain the rationale why this is better than the 
> simpler, clearer, and immediately beneficial for Mozilla users of 
> requiring new validations be conducted on-or-after 1 July 2021? Any CA 
> that had concerns would have ample time to ensure there is a fresh 
> domain validation before then, if they were concerned.
>

I don't yet have a particular pros-and-cons or benefits-analysis-supported 
rationale, except that I expect push-back from SSL/TLS certificate subscribers 
and from CAs on their behalf. You're right that CAs could take the time between 
now and July 1st to obtain 398-day validations, but my concern is with the 
potential push-back.

Also, as I mentioned before, I am interested in proposing this as a ballot in 
the CA/Browser Forum and seeing where it goes. I realize that this issue might 
come with added baggage from the history surrounding the validity-period 
changes that were previously defeated in the Forum, but it would still be good 
to see it adopted there first. Nonetheless, this issue is more than ripe enough 
to be resolved here by Mozilla as well.



>
> Doug, could you explain more about why it's undesirable to do that?
>
>
>> >
>> > Could you consider changing it to read more like this (feel free to 
>> > edit as needed):
>> >
>> > CAs may re-use domain validation for subjectAltName verifications 
>> > of dNSNames and IPAddresses done prior to July 1, 2021 for up to 
>> > 825 days <in accordance with domain validation re-use in the BRs, 
>> > section 4.2.1>.  CAs MUST limit domain re-use for subjectAltName 
>> > verifications of dNSNames and IPAddresses to 398 days for domains 
>> > validated on or after July 1, 2021.
>> >
>>
>> Thanks. I am open to all suggestions and improvements to the language.
>> I'll
>> see how this can be phrased appropriately to reduce the chance for 
>> misinterpretation.
>>
>
> As noted above, I think adopting this wording would prevent much 
> (any?) benefit from being achieved until 2023. I can understand that 
> 2023 is better than "never", but I'm surprised to see an agreement so 
> quickly to that being desirable over 2021. I suspect there's ample 
> context here, but I think it would benefit from discussion.
>

The language needs to be worked on some more.  As I note above, we should come 
up with a cutoff date that is before 2023. It seems that two years is too long 
to wait for the last 825-day validation to expire when there are domain 
validation methods that work in a matter of seconds.
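To make the dates concrete, here is a minimal sketch (Python, illustrative only 
and not proposed policy text) of the re-use window being discussed; the 
October 1, 2022 tail cutoff is just one of the options floated above:

    from datetime import date, timedelta

    # Assumed dates from the discussion above; the tail cutoff is illustrative.
    NEW_RULE_EFFECTIVE = date(2021, 7, 1)   # validations on/after this get 398 days
    TAIL_CUTOFF = date(2022, 10, 1)         # possible end date for grandfathered re-use

    def validation_reuse_expiry(validation_date: date) -> date:
        """Last day a completed domain validation could be re-used (sketch only)."""
        if validation_date >= NEW_RULE_EFFECTIVE:
            return validation_date + timedelta(days=398)
        # Grandfathered validations keep the 825-day window, capped at the tail cutoff.
        return min(validation_date + timedelta(days=825), TAIL_CUTOFF)

    # Example: a validation completed 2021-06-15 would otherwise run to
    # 2023-09-18 (825 days) but would be cut off at the tail date.
    print(validation_reuse_expiry(date(2021, 6, 15)))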


>
>
>> > From a CA perspect

RE: Mandatory reasonCode analysis

2020-09-30 Thread Jeremy Rowley via dev-security-policy
That's probably true since CRL entries are published instead of issued and they 
don't have a notBefore date.

Regardless, I can see why someone would read it as requiring an update for all 
next published CRLs/OCSP given the historical way the BRs worked.

To be safe, we did update all of the DigiCert CRLs/OCSP for ICAs capable of 
issuing TLS. Looks like your report is flagging the legacy Symantec ICAs that 
are no longer trusted for TLS and are part of a root removal request.

From: Rob Stradling 
Sent: Wednesday, September 30, 2020 10:56 AM
To: Mozilla ; Jeremy Rowley 

Subject: Re: Mandatory reasonCode analysis

> I also read this language:
> If a CRL entry is for a Certificate not subject to these Requirements and was 
> either issued on-or-after 2020-09-30 or has a notBefore on-or-after 
> 2020-09-30, the CRLReason MUST NOT be certificateHold (6).

I think "was either issued on-or-after 2020-09-30 or has a notBefore 
on-or-after 2020-09-30" is talking about "a Certificate not subject to these 
Requirements", not about when the CRL was issued.


From: dev-security-policy <dev-security-policy-boun...@lists.mozilla.org> on 
behalf of Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org>
Sent: 30 September 2020 17:41
To: Mozilla <mozilla-dev-security-pol...@lists.mozilla.org>
Subject: RE: Mandatory reasonCode analysis



This is a good question. I read the requirements as applying only to CRLs and 
OCSP published after the effective date since the BRs always say explicitly 
when they apply to items before the effective date.

I also read this language:
If a CRL entry is for a Certificate not subject to these Requirements and was 
either issued on-or-after 2020-09-30 or has a notBefore on-or-after 2020-09-30, 
the CRLReason MUST NOT be certificateHold (6).

Which made me think the language applied only to CRLs and OCSP issued after 
9-30. However, that language only references certificateHold and not the 
inclusion of a reasonCode.

That was the analysis I had anyway - that any CRLs and OCSP published after 
9-30 had to have a reasonCode.

-Original Message-
From: dev-security-policy <dev-security-policy-boun...@lists.mozilla.org> On 
Behalf Of Rob Stradling via dev-security-policy
Sent: Wednesday, September 30, 2020 9:59 AM
To: dev-security-policy@lists.mozilla.org
Subject: Mandatory reasonCode analysis

Starting today, the BRs require a reasonCode in CRLs and OCSP responses for 
revoked CA certificates.  Since crt.sh already monitors CRLs and keeps track of 
reasonCodes, I thought I would conduct some analysis to determine the level of 
(non)compliance with these new rules.

It's not clear to me if (1) the new BR rules should be applied only to CRLs and 
OCSP responses with thisUpdate timestamps dated today or afterwards, or if (2) 
every CRL and OCSP response currently being served by distribution points and 
responders (regardless of the thisUpdate timestamps) is required to comply.  
(I'd be interested to hear folks' opinions on this).

This gist contains my crt.sh query, the results as .tsv, and a .zip containing 
all of the referenced CRLs:
https://gist.github.com/robstradling/3088dd622df8194d84244d4dd65ffd5f
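For anyone who wants to reproduce a simplified version of this check against a 
single downloaded CRL, the following is a rough sketch using the Python 
cryptography library (the crl.der filename is a placeholder; this is not the 
actual crt.sh query):

    from cryptography import x509
    from cryptography.x509 import ReasonFlags

    # Flag revoked entries that lack a reasonCode or carry certificateHold.
    with open("crl.der", "rb") as f:
        crl = x509.load_der_x509_crl(f.read())

    for entry in crl:
        try:
            reason = entry.extensions.get_extension_for_class(x509.CRLReason).value.reason
        except x509.ExtensionNotFound:
            print(f"serial {entry.serial_number:x}: no reasonCode")
            continue
        if reason == ReasonFlags.certificate_hold:
            print(f"serial {entry.serial_number:x}: certificateHold")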


--
Rob Stradling
Senior Research & Development Scientist
Email: r...@sectigo.com
Bradford, UK
Office: +441274024707
Sectigo Limited





RE: Mandatory reasonCode analysis

2020-09-30 Thread Jeremy Rowley via dev-security-policy
This is a good question. I read the requirements as applying only to CRLs and 
OCSP published after the effective date since the BRs always say explicitly 
when they apply to items before the effective date.

I also read this language: 
If a CRL entry is for a Certificate not subject to these Requirements and was 
either issued on-or-after 2020-09-30 or has a notBefore on-or-after 2020-09-30, 
the CRLReason MUST NOT be certificateHold (6).

Which made me think the language applied only to CRLs and OCSP issued after 
9-30. However, that language only references certificateHold and not the 
inclusion of a reasonCode.

That was the analysis I had anyway - that any CRLs and OCSP published after 
9-30 had to have a reasonCode.

-Original Message-
From: dev-security-policy  On 
Behalf Of Rob Stradling via dev-security-policy
Sent: Wednesday, September 30, 2020 9:59 AM
To: dev-security-policy@lists.mozilla.org
Subject: Mandatory reasonCode analysis

Starting today, the BRs require a reasonCode in CRLs and OCSP responses for 
revoked CA certificates.  Since crt.sh already monitors CRLs and keeps track of 
reasonCodes, I thought I would conduct some analysis to determine the level of 
(non)compliance with these new rules.

It's not clear to me if (1) the new BR rules should be applied only to CRLs and 
OCSP responses with thisUpdate timestamps dated today or afterwards, or if (2) 
every CRL and OCSP response currently being served by distribution points and 
responders (regardless of the thisUpdate timestamps) is required to comply.  
(I'd be interested to hear folks' opinions on this).

This gist contains my crt.sh query, the results as .tsv, and a .zip containing 
all of the referenced CRLs:
https://gist.github.com/robstradling/3088dd622df8194d84244d4dd65ffd5f


--
Rob Stradling
Senior Research & Development Scientist
Email: r...@sectigo.com
Bradford, UK
Office: +441274024707
Sectigo Limited





RE: Question about the issuance of OCSP Responder Certificates by technically constrained CAs

2020-07-02 Thread Jeremy Rowley via dev-security-policy
Thank you for the clarification – and I definitely agree with you that the time 
to talk about this is now, on the Mozilla list, and not in the incident 
reports. It’s not helpful to have the discussion there since it lacks the 
public benefit.

From: Ryan Sleevi 
Sent: Thursday, July 2, 2020 11:51 AM
To: Jeremy Rowley 
Cc: Mozilla 
Subject: Re: Question about the issuance of OCSP Responder Certificates by 
technically constrained CAs

On Thu, Jul 2, 2020 at 1:33 PM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
Threatening distrust over a discussion about applicability of requirements 
seems contrary to Mozilla's open discussion policy, and I don't think that 
should be an answer while there are still open questions about the language in 
4.9.9.

That isn't what was said at all, and it doesn't do anyone any good to 
misrepresent it so egregiously.

A CA using the arguments Corey made as part of their CA incident response? 
Absolutely. Because it's a terrible incident response, and a CA arguing that as 
part of an incident response is a CA that is arguing in bad faith, because it's 
the same argument over "intent" that has been shut down over and over.

Separating out the security risk from the applicability of the BRs is useful 
because it highlights potentially poor language in the BRs. For example:

Section 4.9.9:
OCSP responses MUST conform to RFC6960 and/or RFC5019. OCSP responses MUST 
either:
1. Be signed by the CA that issued the Certificates whose revocation status is 
being checked, or
2. Be signed by an OCSP Responder whose Certificate is signed by the CA that 
issued the Certificate whose revocation status is being checked.
In the latter case, the OCSP signing Certificate MUST contain an extension of 
type id-pkix-ocsp-nocheck, as defined by RFC6960.
The requirement for no-check only applies in the latter case, which is if the 
OCSP response is signed by an OCSP responder. How would the no-check 
requirement apply if no OCSP responses are being signed by the responder? If 
the ICAs aren't signing, why does it apply?

Should the OCSP issue be fixed? Definitely. The security issues are apparent.

Good, that's the focus, and for some CAs, based on discussions had before 
filing this incident report, they did not see it as apparent, even after 
Robin's educational highlighting of the security issues nearly a year ago.

Should the BR language be modified for clarity? I think that conversation is 
still ongoing and shutting that down with threats of distrust is counter 
productive.

And I'm not shutting down that discussion. My examination of this incident in 
the first place was triggered by CAs, among others including GlobalSign and 
HARICA, not realizing that this was an existing requirement when I made an 
explicit proposal to clarify this in the BRs, by prohibiting 
`id-kp-OCSPSigning` from being combined with other EKUs. Would that have fixed 
4.9.9? No, and so I'm making sure to also add that to the "Cleanups and 
Clarifications" ballot. And I'm sure the CABF Validation WG will no doubt, in 
light of this, realize that we need an "OCSP Responder" profile to go with the 
other profiles being worked on.

The threat of distrust isn't over discussing how to make this *better*. That's 
of course welcome to highlight where requirements aren't clear. It would be 
entirely appropriate a CA would, as part of their incident response, argue this 
is *not an incident*. I would hate if CAs, particularly those in the CA 
Security Council, were to try and coordinate and argue it's not an issue. I 
would especially hate if CAs were to point to Corey's arguments in particular, 
as a means of trying to create a parallel construction for why they did what 
they originally did (which is, more likely, explained by just not reading/being 
aware of the requirements), especially if trying to avoid the need to come up 
with a plan to revoke these CAs.

Corey's not making this argument as part of an incident response, and so I *do* 
appreciate his attempt to highlight issues to improve. However, I'm trying to 
make it clear that the argument for why this is not an issue is not valid, and 
if a CA were to try to argue WontFix/Invalid (or, more aptly, "won't revoke, we 
don't think it's relevant"), that'd be an absolutely awful response.


RE: Question about the issuance of OCSP Responder Certificates by technically constrained CAs

2020-07-02 Thread Jeremy Rowley via dev-security-policy
Threatening distrust over a discussion about applicability of requirements 
seems contrary to Mozilla's open discussion policy, and I don't think that 
should be an answer while there are still open questions about the language in 
4.9.9. Separating out the security risk from the applicability of the BRs is 
useful because it highlights potentially poor language in the BRs. For example: 

Section 4.9.9:
OCSP responses MUST conform to RFC6960 and/or RFC5019. OCSP responses MUST 
either:
1. Be signed by the CA that issued the Certificates whose revocation status is 
being checked, or
2. Be signed by an OCSP Responder whose Certificate is signed by the CA that 
issued the Certificate whose revocation status is being checked.
In the latter case, the OCSP signing Certificate MUST contain an extension of 
type id-pkix-ocsp-nocheck, as defined by RFC6960.
The requirement for no-check only applies in the latter case, which is if the 
OCSP response is signed by an OCSP responder. How would the no-check 
requirement apply if no OCSP responses are being signed by the responder? If 
the ICAs aren't signing, why does it apply?
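To make the two requirements being debated concrete, here is a rough sketch 
(Python cryptography library; a hypothetical check, not any CA's or the BRs' 
normative logic) that inspects a certificate asserting the OCSPSigning EKU for 
the digitalSignature KU bit and the id-pkix-ocsp-nocheck extension:

    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    # "responder.pem" is a placeholder for the certificate under review.
    with open("responder.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    def ext_value(cls):
        try:
            return cert.extensions.get_extension_for_class(cls).value
        except x509.ExtensionNotFound:
            return None

    eku = ext_value(x509.ExtendedKeyUsage)
    ku = ext_value(x509.KeyUsage)
    nocheck = ext_value(x509.OCSPNoCheck)

    if eku is not None and ExtendedKeyUsageOID.OCSP_SIGNING in eku:
        print("digitalSignature KU set:", bool(ku and ku.digital_signature))
        print("id-pkix-ocsp-nocheck present:", nocheck is not None)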

Should the OCSP issue be fixed? Definitely. The security issues are apparent. 
Should the BR language be modified for clarity? I think that conversation is 
still ongoing and shutting that down with threats of distrust is counter 
productive. 

-Original Message-
From: dev-security-policy  On 
Behalf Of Ryan Sleevi via dev-security-policy
Sent: Thursday, July 2, 2020 11:05 AM
To: Rob Stradling 
Cc: Mozilla 
Subject: Re: Question about the issuance of OCSP Responder Certificates by 
technically constrained CAs

On Thu, Jul 2, 2020 at 12:51 PM Rob Stradling via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:

> On 02/07/2020 17:13, Ryan Sleevi via dev-security-policy wrote:
> > On Thu, Jul 2, 2020 at 11:36 AM Corey Bonnell wrote:
> 
> >> If there’s no digitalSignature KU, then the certificate is not an 
> >> OCSP responder certificate due to the technical inability to sign 
> >> OCSP responses that would be accepted by clients conforming to 
> >> RFC 5280, section 4.2.1.12. Therefore, section 4.9.9 is not 
> >> applicable for those certificates that do not express the 
> >> digitalSignature KU. This is directly relevant to the topic at hand.
> >
> > Alternatively: If the OCSPSigning EKU is present, and it lacks 
> > DigitalSignature, it's misissued by containing an invalid EKU.
>
> As Ryan already mentioned, RFC6960 very clearly says:
>"OCSP signing delegation SHALL be designated by the inclusion of
> id-kp-OCSPSigning in an extended key usage certificate extension
> included in the OCSP response signer's certificate."
>
> The presence or absence of the DigitalSignature Key Usage bit does not 
> alter this fact.
>
> RFC6960 doesn't mention the Key Usage extension at all, AFAICT.
>
> https://tools.ietf.org/html/rfc5280#section-4.2.1.12 doesn't use any
> RFC2119 keywords when it says:
>"id-kp-OCSPSigningOBJECT IDENTIFIER ::= { id-kp 9 }
> -- Signing OCSP responses
> -- Key usage bits that may be consistent: digitalSignature
> -- and/or nonRepudiation"
>
> The BRs say...
>"If the Root CA Private Key is used for signing OCSP responses, then
> the digitalSignature bit MUST be set."
>and
>"If the Subordinate CA Private Key is used for signing OCSP responses,
> then the digitalSignature bit MUST be set"
> ...but this is obviously intended to refer to OCSP responses signed 
> directly by the CA (rather than OCSP responses signed by a CA that 
> also masquerades as a delegated OCSP response signer!)
>

Even if a CA wanted to argue that there's no 4.9.9 BR violation (which, as I 
suggested, I would strongly advocate for their distrust, due to the logical 
consequences of such an argument), the KU violation itself can be argued a 
Mozilla violation, using the exact language Corey highlighted.

Recall Section 5.2 of Mozilla Root Policy 2.7:
https://github.com/mozilla/pkipolicy/blob/66ac6b888965aefc88a8015b37d2ee6b5b095fba/rootstore/policy.md#52-forbidden-and-required-practices

"""
CAs MUST NOT issue certificates that have:
...
* incorrect extensions
"""

While a list of possible incorrect extensions is included, if the argument is 
that the EKU doesn't matter because the KU is incorrect for that EKU, then it's 
an argument that the CA has issued a certificate with incorrect extensions. 
Which is still a Mozilla Root Store Policy violation.


RE: Clear definition of a "locality"

2020-06-26 Thread Jeremy Rowley via dev-security-policy
That's accurate, but the real question goes back to a discussion we previously 
had at the CAB forum that I don't think was answered - what is a locality vs. a 
state vs. an address? 

In Sept 2019, we put in code that requires this be checked against however map 
software defines it, allowing locality = city or county. However, before that 
the guidance was that locality = whatever the address lookup during 
verification confirmed was the locality.  Before revoking this particular cert, 
I figured we needed a clear working definition of locality since I didn't want 
a subjective review of all certs issued prior to Sept 2019.  Does anyone know 
of a definition to use? We (DigiCert) currently use ISO 3166-2 to define 
states, but I know even that is not universally held (based on the previous 
discussion about adopting it as the definition).



-Original Message-
From: dev-security-policy  On 
Behalf Of George via dev-security-policy
Sent: Friday, June 26, 2020 3:05 PM
To: dev-security-policy@lists.mozilla.org
Subject: Clear definition of a "locality"

I sent a problem report to rev...@digicert.com regarding the locality field in:

https://crt.sh/?q=12EC8C05667173603367E8F93B7FDCA7EC60F9838EF3B72A4483BAF48DE48F4B

Jeremy Rowley replied stating that he believed the locality was correct as 
there was no clear definition of a locality. Can we get a clear definition of 
this?

If these are considered localities then the streetAddress field seems to be 
obsolete? 


Re: crt.sh: CA Issuers monitor (was Re: CA Issuer AIA URL content types)

2020-06-17 Thread Jeremy Rowley via dev-security-policy
Doh - how did I miss that?! Thanks Ryan

From: Ryan Sleevi 
Sent: Wednesday, June 17, 2020 4:11:46 PM
To: Jeremy Rowley 
Cc: Mozilla 
Subject: Re: crt.sh: CA Issuers monitor (was Re: CA Issuer AIA URL content 
types)

It's right there under "Trust Filter". Very top of the page ;)

e.g. 
https://crt.sh/ca-issuers?trustedExclude=expired%2Conecrl&trustedBy=Mozilla&trustedFor=Server+Authentication&dir=v&sort=2&rootOwner=&url=&content=&contentType=

On Wed, Jun 17, 2020 at 5:18 PM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
Is there a way to filter out the revoked and non-TLS/SMIME ICAs?

-Original Message-
From: dev-security-policy <dev-security-policy-boun...@lists.mozilla.org> On 
Behalf Of Rob Stradling via dev-security-policy
Sent: Wednesday, June 17, 2020 5:07 AM
To: dev-security-policy <dev-security-policy@lists.mozilla.org>
Subject: crt.sh: CA Issuers monitor (was Re: CA Issuer AIA URL content types)

Inspired by last month's email threads and Bugzilla issues relating to CA 
Issuers misconfigurations, I've just finished adding a new feature to crt.sh...

https://crt.sh/ca-issuers

Sadly, this highlights plenty of misconfigurations and other problems: PEM 
instead of DER, certs for the wrong CAs, wrong Content-Types, 404s, 
non-existent domain names, connection timeouts.  I encourage CAs to take a look 
and see what they can fix.  (Also, comments welcome :-) ).

While I'm here, here's a quick reminder of some other crt.sh features relating 
to CA compliance issues:
https://crt.sh/ocsp-responders
https://crt.sh/test-websites
https://crt.sh/mozilla-disclosures


From: dev-security-policy <dev-security-policy-boun...@lists.mozilla.org> on 
behalf of Ryan Sleevi via dev-security-policy 
<dev-security-policy@lists.mozilla.org>
Sent: 22 May 2020 21:52
To: Hanno Böck <ha...@hboeck.de>
Cc: r...@sleevi.com; dev-security-policy@lists.mozilla.org
Subject: Re: CA Issuer AIA URL content types



I believe you've still implied, even in this reply, that this is something 
serious or important. I see no reason to believe that is the case, and I wasn't 
sure if there was anything more than a "Here's a SHOULD and here's people not 
doing it," which doesn't seem that useful to me.

On Fri, May 22, 2020 at 2:52 PM Hanno Böck <ha...@hboeck.de> wrote:

> Hi,
>
> On Fri, 22 May 2020 09:55:22 -0400
> Ryan Sleevi via dev-security-policy
> <dev-security-policy@lists.mozilla.org> wrote:
>
> > Could you please cite more specifically what you believe is wrong
> > here? This is only a SHOULD level requirement.
>
> I think I said that more or less:
>
> > > I'm not going to file individual reports for the CAs. Based on
> > > previous threads I don't believe these are strictly speaking rule
> > > violations.
>
> I'm not claiming this is a severe issue or anything people should be
> worried about.
> It's merely that while analyzing some stuff I observed that AIA fields
> aren't as reliable as one might want (see also previous mails) and the
> mime types are one more observation I made where things aren't what
> they probably SHOULD be.
> I thought I'd share this observation with the community.
>
> --
> Hanno Böck
> https://hboeck.de/
>


RE: crt.sh: CA Issuers monitor (was Re: CA Issuer AIA URL content types)

2020-06-17 Thread Jeremy Rowley via dev-security-policy
Is there a way to filter out the revoked and non-TLS/SMIME ICAs?  

-Original Message-
From: dev-security-policy  On 
Behalf Of Rob Stradling via dev-security-policy
Sent: Wednesday, June 17, 2020 5:07 AM
To: dev-security-policy 
Subject: crt.sh: CA Issuers monitor (was Re: CA Issuer AIA URL content types)

Inspired by last month's email threads and Bugzilla issues relating to CA 
Issuers misconfigurations, I've just finished adding a new feature to crt.sh...

https://crt.sh/ca-issuers

Sadly, this highlights plenty of misconfigurations and other problems: PEM 
instead of DER, certs for the wrong CAs, wrong Content-Types, 404s, 
non-existent domain names, connection timeouts.  I encourage CAs to take a look 
and see what they can fix.  (Also, comments welcome :-) ).

While I'm here, here's a quick reminder of some other crt.sh features relating 
to CA compliance issues:
https://crt.sh/ocsp-responders
https://crt.sh/test-websites
https://crt.sh/mozilla-disclosures


From: dev-security-policy  on 
behalf of Ryan Sleevi via dev-security-policy 

Sent: 22 May 2020 21:52
To: Hanno Böck 
Cc: r...@sleevi.com ; dev-security-policy@lists.mozilla.org 

Subject: Re: CA Issuer AIA URL content types



I believe you've still implied, even in this reply, that this is something 
serious or important. I see no reason to believe that is the case, and I wasn't 
sure if there was anything more than a "Here's a SHOULD and here's people not 
doing it," which doesn't seem that useful to me.

On Fri, May 22, 2020 at 2:52 PM Hanno Böck  wrote:

> Hi,
>
> On Fri, 22 May 2020 09:55:22 -0400
> Ryan Sleevi via dev-security-policy
>  wrote:
>
> > Could you please cite more specifically what you believe is wrong 
> > here? This is only a SHOULD level requirement.
>
> I think I said that more or less:
>
> > > I'm not going to file individual reports for the CAs. Based on 
> > > previous threads I don't believe these are strictly speaking rule 
> > > violations.
>
> I'm not claiming this is a severe issue or anything people should be 
> worried about.
> It's merely that while analyzing some stuff I observed that AIA fields 
> aren't as reliable as one might want (see also previous mails) and the 
> mime types are one more observation I made where things aren't what 
> they probably SHOULD be.
> I thought I'd share this observation with the community.
>
> --
> Hanno Böck
> https://hboeck.de/
>


RE: GoDaddy: Failure to revoke certificate with compromised key within 24 hours

2020-05-21 Thread Jeremy Rowley via dev-security-policy
Yes - that's been well established. See 
https://bugzilla.mozilla.org/show_bug.cgi?id=1639801 (where Ryan reminded me 
that this has been discussed and resolved with actual language in the BRs)

-Original Message-
From: dev-security-policy  On 
Behalf Of Kurt Roeckx via dev-security-policy
Sent: Thursday, May 21, 2020 3:25 PM
To: Daniela Hood 
Cc: Mozilla 
Subject: Re: GoDaddy: Failure to revoke certificate with compromised key within 
24 hours

On Thu, May 21, 2020 at 02:01:49PM -0700, Daniela Hood via dev-security-policy 
wrote:
> Hello Sandy,
> 
> GoDaddy received an email on Friday, May 7, 2020 12:06 UTC, reporting a key 
> compromise, by Sandy. Once received our team started working on making sure 
> that the certificate had indeed a compromised key, the investigation on the 
> certificate finished at that same day Friday, May 7th between 16:54 UTC and 
> 16:55 UTC. 
> After that we followed the Baseline Requirements 4.9.1 That says: "The CA 
> obtains evidence that the Subscriber's Private Key corresponding to the 
> Public Key in the Certificate suffered a Key Compromise;" We obtained the 
> evidence that the key was compromised when we finished our investigation at 
> 16:55 UTC, that was the time we set 24 hours revocation of the certificate, 
> the same was revoked at May 8th at 16:55 UTC.
> We communicated with the reporter as soon as we completed our 
> investigation and informed that the affected certificate would be 
> revoked strictly within 24 hours which we have done and can be 
> confirmed here: https://crt.sh/?id=2366734355

From what I understand, you received the evidence at May 7, 2020
12:06 UTC, but it took you until 16:55 UTC to confirm that the evidence you've 
received was valid.

I think the 24 hour starts at the time you receive the evidence, not the time 
that you confirm the evidence is valid. Otherwise you can just delay looking at 
the mail for say a week, and still claim that you revoked it in 24 hours.


Kurt
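As a trivial illustration of the two readings of the 24-hour clock, using the 
timestamps reported in this thread (a sketch only):

    from datetime import datetime, timedelta

    report_received    = datetime(2020, 5, 7, 12, 6)   # key-compromise report arrives (UTC)
    evidence_confirmed = datetime(2020, 5, 7, 16, 55)  # CA finishes its investigation
    revoked            = datetime(2020, 5, 8, 16, 55)

    deadline_from_receipt      = report_received + timedelta(hours=24)
    deadline_from_confirmation = evidence_confirmed + timedelta(hours=24)

    print(revoked <= deadline_from_receipt)       # False: misses the clock-starts-at-receipt reading
    print(revoked <= deadline_from_confirmation)  # True: meets the clock-starts-at-confirmation reading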



RE: Status of the bugzilla bug list

2020-05-18 Thread Jeremy Rowley via dev-security-policy
There are others in this same group of pending Mozilla closure:
https://bugzilla.mozilla.org/show_bug.cgi?id=1496616
https://bugzilla.mozilla.org/show_bug.cgi?id=1463975
https://bugzilla.mozilla.org/show_bug.cgi?id=1532559
https://bugzilla.mozilla.org/show_bug.cgi?id=1502957 (Waiting on Wayne)




-Original Message-
From: dev-security-policy  On 
Behalf Of Jeremy Rowley via dev-security-policy
Sent: Monday, May 18, 2020 1:52 PM
To: Mozilla 
Subject: RE: Status of the bugzilla bug list

I think your list of 23 is wrong. For example, bug 1550645 is just waiting for 
Mozilla closure. It looks like 1605804 is in the same boat.

-Original Message-
From: dev-security-policy  On 
Behalf Of Matthias van de Meent via dev-security-policy
Sent: Monday, May 18, 2020 1:04 PM
To: MDSP 
Subject: Status of the bugzilla bug list

All,

I have looked at the list of open bugs in the CA compliance dashboard [0], and 
I was unpleasantly surprised. There's a total of 75 open issues at the moment of 
writing, of which 31 have not seen an update in 4 weeks, and of which again 23 
[1] are not waiting for a planned future CA or Mozilla action; 30% of the open 
issues, spread over 14 CAs. (These 23 include issues that end with actions like 
"A: We will do this" and "B: We will do that at 'date-long-gone'" when there is 
no indication the action has been taken, and no update since.)

Of those 23, 17 have not seen interactions for over 2 months. (!)

The MRSP (v2.7) requires regular updates for incident reports until the bug is 
marked as resolved. This means that a CA MUST actively keep track of the issue, 
even though this is not always understood by CAs [2]. I can understand that it 
is not always clear what information is still needed to close a bug, but please 
ask for this information on the issue when this is not known, so that there are 
no 'zombie'
tickets.

To remedy the issue of 'many long-standing open CA-Compliance issues with 
unclear state', I would like - as a concerned individual and end user of the 
root store - to ask the relevant CAs and Mozilla to check their issues in the 
ca-compliance board [0], check whether the issues are 'solved' or what 
information they need, and update the relevant issues with the updated 
information or ask for said missing information, so that there is a clear 
understanding which issues are resolved and which issues need more information 
/ actions by some party in the issue. As stated before, this process is not 
always clear to all CAs [2], and in my experience explicit communication helps 
a lot in checking what is needed to solve an issue.


Kind regards,

Matthias van de Meent


[0] 
https://bugzilla.mozilla.org/buglist.cgi?product=NSS&component=CA%20Certificate%20Compliance&bug_status=__open__
[1] 
https://bugzilla.mozilla.org/buglist.cgi?product=NSS&component=CA%20Certificate%20Compliance&bug_id=1593776%2C1605804%2C1623356%2C1550645%2C1625767%2C1502957%2C1620561%2C1575022%2C1590810%2C1578505%2C1463975%2C1496616%2C1614448%2C1559765%2C1606380%2C1532559%2C1599916%2C1551372%2C1610767%2C1575530%2C1597950%2C1597947%2C1597948&bug_id_type=anyexact&list_id=15253621&query_format=advanced
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1613409


RE: Status of the bugzilla bug list

2020-05-18 Thread Jeremy Rowley via dev-security-policy
I think your list of 23 is wrong. For example, bug 1550645 is just waiting for 
Mozilla closure. It looks like 1605804 is in the same boat.

-Original Message-
From: dev-security-policy  On 
Behalf Of Matthias van de Meent via dev-security-policy
Sent: Monday, May 18, 2020 1:04 PM
To: MDSP 
Subject: Status of the bugzilla bug list

All,

I have looked at the list of open bugs in the CA compliance dashboard [0], and 
I was unpleasantly surprised. There's a total of 75 open issues at the moment of 
writing, of which 31 have not seen an update in 4 weeks, and of which again 23 
[1] are not waiting for a planned future CA or Mozilla action; 30% of the open 
issues, spread over 14 CAs. (These 23 include issues that end with actions like 
"A: We will do this" and "B: We will do that at 'date-long-gone'" when there is 
no indication the action has been taken, and no update since.)

Of those 23, 17 have not seen interactions for over 2 months. (!)

The MRSP (v2.7) requires regular updates for incident reports until the bug is 
marked as resolved. This means that a CA MUST actively keep track of the issue, 
even though this is not always understood by CAs [2]. I can understand that it 
is not always clear what information is still needed to close a bug, but please 
ask for this information on the issue when this is not known, so that there are 
no 'zombie'
tickets.

To remedy the issue of 'many long-standing open CA-Compliance issues with 
unclear state', I would like - as a concerned individual and end user of the 
root store - to ask the relevant CAs and Mozilla to check their issues in the 
ca-compliance board [0], check whether the issues are 'solved' or what 
information they need, and update the relevant issues with the updated 
information or ask for said missing information, so that there is a clear 
understanding which issues are resolved and which issues need more information 
/ actions by some party in the issue. As stated before, this process is not 
always clear to all CAs [2], and in my experience explicit communication helps 
a lot in checking what is needed to solve an issue.


Kind regards,

Matthias van de Meent


[0] 
https://bugzilla.mozilla.org/buglist.cgi?product=NSS&component=CA%20Certificate%20Compliance&bug_status=__open__
[1] 
https://bugzilla.mozilla.org/buglist.cgi?product=NSS&component=CA%20Certificate%20Compliance&bug_id=1593776%2C1605804%2C1623356%2C1550645%2C1625767%2C1502957%2C1620561%2C1575022%2C1590810%2C1578505%2C1463975%2C1496616%2C1614448%2C1559765%2C1606380%2C1532559%2C1599916%2C1551372%2C1610767%2C1575530%2C1597950%2C1597947%2C1597948&bug_id_type=anyexact&list_id=15253621&query_format=advanced
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1613409


RE: Digicert issued certificate with let's encrypts public key

2020-05-18 Thread Jeremy Rowley via dev-security-policy
It was just the one system and situation-specific.  

-Original Message-
From: dev-security-policy  On 
Behalf Of Peter Gutmann via dev-security-policy
Sent: Monday, May 18, 2020 6:31 AM
To: Matt Palmer ; Mozilla 
; Jeremy Rowley 

Subject: Re: Digicert issued certificate with let's encrypts public key

Jeremy Rowley via dev-security-policy  
writes:

>For those interested, the short of what happened is that we had an old 
>service where you could replace existing certificates by having 
>DigiCert connect to a site and replace the certificate with a key taken 
>from the site after a TLS connection. No requirement for a CSR since we 
>obtained proof of key control through a TLS connection with the 
>website. Turned out the handshake didn't actually take the key, but 
>allowed the customer to submit a different public key without a CSR. We 
>took down the service a while ago - back in November I think. I plan to 
>put it back up when we work out the kink with it not forcing the key to match 
>the key used in the handshake.

Thanks, that was the info I was after: was this a general problem that we need 
to check other systems for as well, or a situation-specific issue that affected 
just one site/system but no others.  Looks like other systems are unaffected.

Peter.


RE: Digicert issued certificate with let's encrypts public key

2020-05-17 Thread Jeremy Rowley via dev-security-policy
I thought I posted on this a while ago, but I can't seem to find the post. It 
may have been CAB Forum (where the archives are nearly useless). The conclusion 
from that is the CSR isn't required as part of the issuance process because 
there isn't a risk without having actual control over the private key. The 
worst someone can do is demonstrate the existence of a cert with a bad 
public key?

For those interested, the short of what happened is that we had an old service 
where you could replace existing certificates by having DigiCert connect to a 
site and replace the certificate with a key taken from the site after a TLS 
connection. No requirement for a CSR since we obtained proof of key control 
through a TLS connection with the website. Turned out the handshake didn't 
actually take the key, but allowed the customer to submit a different public 
key without a CSR. We took down the service a while ago - back in November I 
think. I plan to put it back up when we work out the kink with it not forcing 
the key to match the key used in the handshake. 

-Original Message-
From: dev-security-policy  On 
Behalf Of Matt Palmer via dev-security-policy
Sent: Sunday, May 17, 2020 10:37 PM
To: Mozilla 
Subject: Re: Digicert issued certificate with let's encrypts public key

On Mon, May 18, 2020 at 03:46:46AM +, Peter Gutmann via dev-security-policy 
wrote:
> I assume this is ACME that allows a key to be certified without any 
> proof that the entity requesting the certificate controls it?

ACME requires a CSR to be submitted in order to get the certificate issued. 
A quick scan doesn't show anything like "the signature on the CSR MUST be 
validated against the key", but it does talk about policy considerations around 
weak signatures on CSRs and such, suggesting that it was at least the general 
intention of ACME to require signatures on CSRs to be validated.
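For reference, the proof-of-possession check being discussed amounts to 
verifying the CSR's signature against the public key embedded in it; a minimal 
sketch with the Python cryptography library (the filename is a placeholder, 
and this says nothing about how any particular CA's pipeline actually works):

    from cryptography import x509

    # Reject a CSR whose signature does not verify against its own public key.
    with open("request.csr", "rb") as f:
        csr = x509.load_pem_x509_csr(f.read())

    if not csr.is_signature_valid:
        raise ValueError("CSR signature does not verify; key possession not demonstrated")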

In any event, given that the certs involved were issued by Digicert, not Let's 
Encrypt, and Digicert's ACME issuance pipeline is somewhat of a niche thing at 
present, I think it's more likely the problem lies elsewhere.

- Matt



RE: CT2 log signing key compromise

2020-05-03 Thread Jeremy Rowley via dev-security-policy
Yes, but only for embedded SCTs. For revocation- or extension-served SCTs, you 
could end up with different timestamps depending on how the CA is set up. On 
top of that, for embedded SCTs, you'd need to route the cert through a separate 
signing service that used the compromised key. For that to happen, either 
another CA compromised the log or another CA was compromised in a manner that 
an attacker could direct issuance through a log running the compromised key. 

Digging into our logs, I think the log should be distrusted for everything 
after 17:00:02 on May 2. This was the last known good tree head. That head was 
published at 5:00:00 on Sunday, May 3. All SCTs after this don't appear in a 
reliable tree.

Jeremy
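A minimal sketch of the client-side rule being proposed here (the log ID is a 
placeholder, and the cutoff assumes the 17:00:02 May 2 tree head is UTC, which 
the post does not state). Because a forged SCT can claim any timestamp, this 
check only helps alongside an inclusion proof against the last good tree head 
or the multi-log requirement discussed below:

    from datetime import datetime, timezone

    CT2_LOG_ID = bytes.fromhex("00" * 32)  # placeholder for the real 32-byte log ID
    CUTOFF_MS = int(datetime(2020, 5, 2, 17, 0, 2, tzinfo=timezone.utc).timestamp() * 1000)

    def sct_untrusted(log_id: bytes, timestamp_ms: int) -> bool:
        """True if the SCT should be treated as untrusted under the proposal.
        A forged SCT can claim any timestamp, so pre-cutoff timestamps still
        need an inclusion proof against the last known good tree head."""
        return log_id == CT2_LOG_ID and timestamp_ms > CUTOFF_MS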
-Original Message-
From: dev-security-policy  On 
Behalf Of Corey Bonnell via dev-security-policy
Sent: Sunday, May 3, 2020 6:42 PM
To: Mozilla 
Subject: Re: CT2 log signing key compromise

On Sunday, May 3, 2020 at 7:35:44 PM UTC-4, Alex Cohn wrote:
> Thank you for the clarification. This would appear to introduce a new 
> requirement for clients verifying SCTs from CT2: a get-proof-by-hash 
> call to the log server (or a mirror) is now required to know if a SCT 
> from before May 2 is valid. Do CT-enforcing clients do this in practice today?
> (I suspect the answer is "no" but don't know off the top of my head)

Alternatively, if SCTs from other trusted CT logs are available for a given 
certificate (which would be the case with certificates that comply with 
Google/Apple policy by embedding the requisite number of SCTs in the final 
certificate), the timestamps of those SCTs could be used to determine if the 
CT2 SCT was signed before the key was compromised.

Thanks,
Corey


RE: CT2 log signing key compromise

2020-05-03 Thread Jeremy Rowley via dev-security-policy
The key could be easily used if the attacker exported the key and started 
signing SCTs. However, they would not be able to use it to sign SCTs in 
DigiCert’s log for fake certs without knowing the full infrastructure.

We will definitely have a full post-mortem on the issue. However, I wanted to 
post early to give everyone a heads-up about the incident and allow the 
browsers to take any action required in protecting relying parties.

I can say we reacted to the vulnerability when we were notified by Salt that it 
impacted our system. However, I’m not sure why we were not notified and did not 
react to the media publication when it first came out. That is a question we 
are digging into.

From: Ian Carroll 
Sent: Sunday, May 3, 2020 5:55 PM
To: Jeremy Rowley 
Cc: Mozilla 
Subject: Re: CT2 log signing key compromise

Hi Jeremy,

Can you clarify why you believe the signing key cannot be easily used? Is there 
a cryptographic limitation in what was disclosed?

Also, do you have plans for a more formal post-mortem? Since vulnerability 
management is usually an organization-wide process, it would be useful to 
understand why it failed here, in the event it could have carried over to other 
DigiCert infrastructure.

Thanks,
Ian Carroll

On Sun, May 3, 2020 at 4:19 PM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
Hey all,

The key used to sign SCTs for the CT2 log was compromised yesterday at 7pm 
through the Salt root bug. The remaining logs remain uncompromised and run on 
separate infrastructure.  We discovered the compromise today and are working to 
turn that log into read only mode so that no new SCTs are issued. We doubt the 
key was used to sign anything as you'd need to know the CT build to do so. 
However, as a precaution, we ask that you consider all SCTs invalid if the SCT 
was issued from CT2 after 7pm MST on May 2nd . Please let me know what 
questions you have.

Jeremy


RE: CT2 log signing key compromise

2020-05-03 Thread Jeremy Rowley via dev-security-policy
Emails crossed paths – I meant 6pm for the last signed head, but I’m double 
checking as I’m not 100% sure on that time.  And you are right – since a 
compromised SCT can have any time it wants, only a real time check on the last 
known good log would be proof of a valid CT. That real time check doesn’t 
exist.  However, there is still the multiple log requirement.


From: Alex Cohn 
Sent: Sunday, May 3, 2020 5:35 PM
To: Jeremy Rowley 
Cc: Mozilla 
Subject: Re: CT2 log signing key compromise

Thank you for the clarification. This would appear to introduce a new 
requirement for clients verifying SCTs from CT2: a get-proof-by-hash call to 
the log server (or a mirror) is now required to know if a SCT from before May 2 
is valid. Do CT-enforcing clients do this in practice today? (I suspect the 
answer is "no" but don't know off the top of my head)

Alex



On Sun, May 3, 2020 at 6:27 PM Jeremy Rowley <jeremy.row...@digicert.com> wrote:
They would already appear in a previous tree where the head was signed by us.

From: Alex Cohn <a...@alexcohn.com>
Sent: Sunday, May 3, 2020 5:22 PM
To: Jeremy Rowley <jeremy.row...@digicert.com>
Cc: Mozilla <mozilla-dev-security-pol...@lists.mozilla.org>
Subject: Re: CT2 log signing key compromise

The timestamp on a SCT is fully controlled by the signer, so why should SCTs 
bearing a timestamp before May 2 still be considered trusted?

Alex

On Sun, May 3, 2020 at 6:19 PM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
Hey all,

The key used to sign SCTs for the CT2 log was compromised yesterday at 7pm 
through the Salt root bug. The remaining logs remain uncompromised and run on 
separate infrastructure.  We discovered the compromise today and are working to 
turn that log into read only mode so that no new SCTs are issued. We doubt the 
key was used to sign anything as you'd need to know the CT build to do so. 
However, as a precaution, we ask that you consider all SCTs invalid if the SCT 
was issued from CT2 after 7pm MST on May 2nd . Please let me know what 
questions you have.

Jeremy


RE: CT2 log signing key compromise

2020-05-03 Thread Jeremy Rowley via dev-security-policy
That is a good question though - I think the last signed head was 7pm. That 
would be the actual time when all other certs shouldn't be trusted... 

There is a problem though if you have a bad-acting CA since the notBefore date 
could be before 7pm and the browsers don't check to see if it was included in 
the tree before that time. However, that is the reason to include multiple SCTs 
in the same log.

-Original Message-
From: dev-security-policy  On 
Behalf Of Jeremy Rowley via dev-security-policy
Sent: Sunday, May 3, 2020 5:27 PM
To: Alex Cohn 
Cc: Mozilla 
Subject: RE: CT2 log signing key compromise

They would already appear in a previous tree where the head was signed by us.

From: Alex Cohn 
Sent: Sunday, May 3, 2020 5:22 PM
To: Jeremy Rowley 
Cc: Mozilla 
Subject: Re: CT2 log signing key compromise

The timestamp on a SCT is fully controlled by the signer, so why should SCTs 
bearing a timestamp before May 2 still be considered trusted?

Alex

On Sun, May 3, 2020 at 6:19 PM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
Hey all,

The key used to sign SCTs for the CT2 log was compromised yesterday at 7pm 
through the Salt root bug. The remaining logs remain uncompromised and run on 
separate infrastructure.  We discovered the compromise today and are working to 
turn that log into read only mode so that no new SCTs are issued. We doubt the 
key was used to sign anything as you'd need to know the CT build to do so. 
However, as a precaution, we ask that you consider all SCTs invalid if the SCT 
was issued from CT2 after 7pm MST on May 2nd . Please let me know what 
questions you have.

Jeremy


RE: CT2 log signing key compromise

2020-05-03 Thread Jeremy Rowley via dev-security-policy
They would already appear in a previous tree where the head was signed by us.

From: Alex Cohn 
Sent: Sunday, May 3, 2020 5:22 PM
To: Jeremy Rowley 
Cc: Mozilla 
Subject: Re: CT2 log signing key compromise

The timestamp on a SCT is fully controlled by the signer, so why should SCTs 
bearing a timestamp before May 2 still be considered trusted?

Alex

On Sun, May 3, 2020 at 6:19 PM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
Hey all,

The key used to sign SCTs for the CT2 log was compromised yesterday at 7pm 
through the Salt root bug. The remaining logs remain uncompromised and run on 
separate infrastructure.  We discovered the compromise today and are working to 
turn that log into read only mode so that no new SCTs are issued. We doubt the 
key was used to sign anything as you'd need to know the CT build to do so. 
However, as a precaution, we ask that you consider all SCTs invalid if the SCT 
was issued from CT2 after 7pm MST on May 2nd . Please let me know what 
questions you have.

Jeremy


CT2 log signing key compromise

2020-05-03 Thread Jeremy Rowley via dev-security-policy
Hey all,

The key used to sign SCTs for the CT2 log was compromised yesterday at 7pm 
through the Salt root bug. The remaining logs remain uncompromised and run on 
separate infrastructure.  We discovered the compromise today and are working to 
turn that log into read only mode so that no new SCTs are issued. We doubt the 
key was used to sign anything as you'd need to know the CT build to do so. 
However, as a precaution, we ask that you consider all SCTs invalid if the SCT 
was issued from CT2 after 7pm MST on May 2nd . Please let me know what 
questions you have.

Jeremy


RE: Auditing of CA facilities in lockdown because of an environmental disaster/pandemic

2020-03-23 Thread Jeremy Rowley via dev-security-policy
Although I’m sure every CA has business continuity plans, I think that extended 
blocked access to every data center they have may not be part of that plan.  
I’m not sure, but I think if the required shelter-in-place orders stay in 
effect for long periods you may start to see problems. Early disclosure sounds 
like the best 
policy, but I thought the early disclosure requirement may be worth calling out 
in the Mozilla policy. Then again, that really should be standard procedure at 
that point.

From: Ryan Sleevi 
Sent: Friday, March 20, 2020 2:57 PM
To: Jeremy Rowley 
Cc: Kathleen Wilson ; Mozilla 

Subject: Re: Auditing of CA facilities in lockdown because of an environmental 
disaster/pandemic

On Fri, Mar 20, 2020 at 4:15 PM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
What about issues other than audits? For example, with certain locations 
closing, key ceremonies may become impossible, leading to downed CRLs/OCSP for 
intermediates. There's also a potential issue with trusted roles even being 
able to access the data center if something goes down and Sub CAs can't be 
revoked. Should that be mentioned, requiring CAs to file an incident report as 
soon as the event becomes likely?

Yes. I think those are, quite honestly, much more concerning, because that's 
not about a CA's relationship with an external party, but about a CA's own 
preparedness for disaster. In any event, as with /any/ incident, the sooner 
it's filed, and the more information and context is provided, the more 
effective a response can be.


For the location issue, I think including the locations audited and the 
locations not audited (to the full criteria) as an emphasis of matter would be 
helpful. So maybe an emphasis like we audited the offices in x, y, and z. 
Office z was inaccessible to evaluate criteria 1-n. It gives you the list of 
locations and where there were issues in getting access due to the emergency.

Yup. That is the model WebTrust is using, and that reasonably meets the 
objective here of informing relying parties when the auditor faced limitations 
that should be considered when evaluating their report.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Digicert: failure to revoke certificate with previously compromised key

2020-03-23 Thread Jeremy Rowley via dev-security-policy
Yeah  - that’s about the sum of it. I’ll file an incident report.

There are two things worth discussing in general:

  1.  I’m very interested in seeing the Let’s Encrypt response to this issue 
since the biggest obstacle in trying to find all of the certs with the same 
private key is the sheer volume of the certs. Trying to do a comprehensive 
search when a private key is provided leaves some window between when we start 
the analysis and when we revoke.


  2.  Another issue in trying to report keys that aren’t affiliated with any 
cert is that the process becomes subject to abuse. Without knowing a cert 
affiliated with a key, someone can continuously generate keys and submit them 
as compromised. You end up just blacklisting random keys, DDOSing the 
revocation system as it kicks off another request to search for those keys.  I 
don’t think it’s feasible. This is why the disclosures need to be affiliated 
with actual certs.




From: Ryan Sleevi 
Sent: Monday, March 23, 2020 10:54 AM
To: Jeremy Rowley 
Cc: Matt Palmer ; Mozilla 

Subject: Re: Digicert: failure to revoke certificate with previously 
compromised key



On Mon, Mar 23, 2020 at 11:01 AM Jeremy Rowley via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:
Hey Matt,

Ryan's post was the part I thought was relevant, but I understood it 
differently. The cert was issued, but we should have now revoked it (24 hours 
after receiving notice). I do see your interpretation though, and the language 
does support 24 hours after issuing the new cert.  What I need is a tool that 
scans after revocation to ensure there are no additional certs with the same 
key.  The frustration is that this was where the cert was issued after our scan 
of all keys but just before revocation.  As a side note, our system blacklists 
the keys when a cert is revoked for key compromise, which means I don't have a 
way to blacklist a key before a cert is ever issued.
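
A minimal sketch of the enhancement described above (an assumed design, not
DigiCert's actual system): maintain a compromised-key blocklist keyed by SPKI
SHA-256 that is consulted at issuance time, rather than only being populated when a
certificate is revoked.

# Assumed sketch of an issuance-time check against a compromised-SPKI blocklist.
# The set would be populated from key compromise problem reports as they arrive,
# independent of whether any certificate with that key has been revoked yet.
COMPROMISED_SPKI_SHA256 = {
    "4310b6bc0841efd7fcec6ba0ed1f36e7a28bf9a707ae7f7771e2cd4b6f31b5af",
}

def issuance_allowed(spki_sha256_hex: str) -> bool:
    """Refuse to issue for a public key previously reported as compromised."""
    return spki_sha256_hex.lower() not in COMPROMISED_SPKI_SHA256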

Matt, Jeremy,

To make sure I understand the timeline correctly:
2020-03-20 02:05:49 UTC - Matt reports SPKI 
4310b6bc0841efd7fcec6ba0ed1f36e7a28bf9a707ae7f7771e2cd4b6f31b5af, associated 
with https://crt.sh/?id=1760024320 , as compromised
2020-03-21 01:56:31 UTC - DigiCert issues https://crt.sh/?id=2606438724 with 
that same SPKI
2020-03-21 02:09:12 UTC - DigiCert revokes https://crt.sh/?id=1760024320
2020-03-23 03:16:18 UTC - DigiCert revokes https://crt.sh/?id=2606438724

Is that roughly correct?

If so, it does seem like an Incident Report is warranted here, so we can 
understand why:
a) https://crt.sh/?id=2606438724 wasn't revoked when 
https://crt.sh/?id=1760024320 was revoked (assuming those timestamps in the CRL 
are accurate)
b) The key wasn't blocklisted as known compromised (if the timestamps are 
incorrect)

That is, it doesn't seem unreasonable that, for situations of key compromise, 
the CA has the necessary data to scan their systems for potential reuse of that 
key. Given DigiCert's data 
lake<https://bugzilla.mozilla.org/show_bug.cgi?id=1526154>, it should be 
possible to scan for issues.

If I've misunderstood the timing here, please feel free to correct. This is 
where the incident report process is useful, and Resolved/Invalid is a 
perfectly fine state to end in.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Digicert: failure to revoke certificate with previously compromised key

2020-03-23 Thread Jeremy Rowley via dev-security-policy
lly do the investigation to show the key is compromised. I think 
there's ambiguity in when that timer starts, and clearing up that ambiguity in 
the CAB forum would be good.  As you pointed out "obtains" can mean different 
things. 

>> The time to say "hey, while we *can* process these SPKIs and CSRs, they are 
>> kinda teh suck, what else you got?" would have been, ideally, when I sent 
>> that first trickle of compromise reports that were acted upon.  At worst, 
>> when the Big Bomb of Keys dropped, you stick your hand up and say "uhm, 
>> these kinda suck to process, any chance it's easy for you to send us X 
>> instead?", while simultaneously working on the pile as best you can.  Worst 
>> case I've gone to bed, and you do what you did anyway, but depending on what 
>> X was, I might have been able to oblige.

To be clear, I'm not complaining about the format. I'm wondering when we 
obtained the private key for the 24 hour purposes. With automation, the time 
between when we get the email and when we confirm key compromise should be 
nearly zero. However, with a more manual process, that time is not 
insignificant. What I don't like about the interpretation that the revocation 
event is 24 hours from when we get an email is that some emails are very vague about 
key compromise.  With that reading, if we get an email without proof that is 
later followed up by proof, the 24 hour period could start when we get the 
initial email even if the proof is provided 25 hours later.  That does happen, 
which is why I think the time period should be 24 hours from when the CA 
receives proof of key compromise. But even that is ambiguous. When did we 
receive proof of key compromise? I'd say it's when all the CSRs finished 
downloading. If that's not the case, then you are encouraging CAs to be myopic 
in the way they accept key compromise information.

Jeremy


-Original Message-
From: dev-security-policy  On 
Behalf Of Matt Palmer via dev-security-policy
Sent: Monday, March 23, 2020 4:45 AM
To: Mozilla 
Subject: Re: Digicert: failure to revoke certificate with previously 
compromised key

On Mon, Mar 23, 2020 at 06:14:29AM +, Jeremy Rowley wrote:
> That's not the visible consensus IMO.  The visible consensus is we 
> need to revoke a cert that is key compromised once we're informed the 
> key is compromised for that cert 
> (https://groups.google.com/forum/m/#!topic/mozilla.dev.security.policy/1ftkqbsnEU4).

I think that link might not be doing what you expect, as it (at least for
me) is collapsing all the replies in that topic before Doug Beattie's post. 
The only response that seems relevant in that topic was Ryan's reply to me 
up-thread from Doug's post, which was, in (I believe) relevant part, when I 
asked the question:

> 3. Can a CA be deemed to have "obtained evidence" of key compromise prior 
>to the issuance of a certificate, via a previously-submitted key
>compromise problem report for the same private key?  If so, it would
>seem that, even if the issuance of the certificate is OK, it is a
>failure-to-revoke incident if the cert doesn't get revoked within 24
>hours...

To which Ryan replied:

> Correct, that was indeed the previous conclusion around this. The CA 
> can issue, but then are obligated to revoke within 24 hours. There’s 
> not a statute of limitation on “obtains evidence” here, precisely 
> because it could allow a host of shenanigans, such as CAs arguing the 
> require per-cert evidence rather than systemic demonstrations.

I don't think that supports your point, though, so I wonder if I've got the 
wrong part.  That last part of Ryan's: "shenanigans, such as CAs arguing 
the[y?] require per-cert evidence rather than systemic demonstrations", seems 
to me like it's describing your statement, above, that you (only?) "need to 
revoke a cert that is key compromised once we're informed the key is compromised *for 
that cert*" (emphasis added).  I don't read Ryan's use of "shenanigans" as 
approving of that sort of thing.

> The certificate you mentioned was issued before the keys were 
> blacklisted and not part of a certificate problem report.  When 
> revoking a cert we scan to see if additional certs are issued with the 
> same key t, but this particular cert one was issued after the scan but 
> before the revocation, largely because the way you are submitting 
> certificate problem reports breaks automation.  We currently don't 
> have a way to blacklist private keys until a certificate is revoked, 
> although that would be a nice enhancement for us to add in the future.  
> Anyway, I don't think anything reported violated the  BR since 1) this 
> cert was no

RE: Digicert: failure to revoke certificate with previously compromised key

2020-03-22 Thread Jeremy Rowley via dev-security-policy
That's not the visible consensus IMO. The visible consensus is we need to 
revoke a cert that is key compromised once we're informed the key is 
compromised for that cert 
(https://groups.google.com/forum/m/#!topic/mozilla.dev.security.policy/1ftkqbsnEU4).
 The certificate you mentioned was issued before the keys were blacklisted and 
not part of a certificate problem report.  When revoking a cert we scan to see 
if additional certs are issued with the same key, but this particular cert 
was issued after the scan but before the revocation, largely because the 
way you are submitting certificate problem reports breaks automation. We 
currently don't have a way to blacklist private keys until a certificate is 
revoked, although that would be a nice enhancement for us to add in the future. 
 Anyway, I don't think anything reported violated the BRs since 1) this cert 
was not part of a certificate problem report and 2) we will be revoking within 
24 hours of your Mozilla posting. 

I support the idea of swift revocation of compromised private keys and do 
appreciate you reporting them. I think this is helpful in ensuring the safety 
of users online. However, using the SPKI to submit information breaks our 
automation, making finding and revoking certs difficult. The more standard way 
(IMO) is the SHA2 thumbprint or serial number or a good old CSR.  Because 
submitting the SPKI breaks automation, getting evidence of key compromise took 
an additional 5 hours after you submitted the report. We still revoked all of 
the current certs with submitted keys within 24 hours of the report (since 
compromised private keys are bad and there is nothing that says we can't revoke 
earlier than 24 hours), but I did want to clarify that I don't think the time 
starts until we can actually get the information necessary to do an 
investigation (because there is not sufficient evidence of a key compromise 
until then). 

Going to the previous discussion, I'd definitely support seeing a standardized 
way to report key compromise. Trying to account for the various formats they 
come in and through the various channels creates a lot of manual work on a 
process that can easily be automated. 

Jeremy


-Original Message-
From: dev-security-policy  On 
Behalf Of Matt Palmer via dev-security-policy
Sent: Saturday, March 21, 2020 11:01 PM
To: Mozilla 
Subject: Digicert: failure to revoke certificate with previously compromised key

Certificate https://crt.sh/?id=2606438724, issued either at 2020-03-21
00:00:00 UTC (going by notBefore) or 2020-03-21 01:56:31 UTC (going by SCTs), 
is using a private key with SPKI 
4310b6bc0841efd7fcec6ba0ed1f36e7a28bf9a707ae7f7771e2cd4b6f31b5af, which was 
reported to Digicert as compromised on 2020-03-20 02:05:49 UTC (and for which 
https://crt.sh/?id=1760024320 was revoked for keyCompromise soon after 
certificate 2606438724 was issued).
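
For reference, the SPKI fingerprint used in this report can be recomputed from any
certificate that shares the key; a short sketch using the Python cryptography
library (the helper name is an assumption, not part of the report) is:

import hashlib
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def spki_sha256_hex(pem_bytes: bytes) -> str:
    """SHA-256 over the DER-encoded SubjectPublicKeyInfo of a PEM certificate."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    spki_der = cert.public_key().public_bytes(
        Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    return hashlib.sha256(spki_der).hexdigest()

# A match against 4310b6bc0841efd7fcec6ba0ed1f36e7a28bf9a707ae7f7771e2cd4b6f31b5af
# identifies certificates using the compromised key discussed here.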

As previously discussed on this list, the visible consensus is that, according 
to the BRs, certificates for which the CA already had evidence of key 
compromise must be revoked within 24 hours of issuance.  That 24 hour period 
has passed for the above certificate, and thus it would appear that Digicert 
has failed to abide by the BRs.

- Matt

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy



RE: Auditing of CA facilities in lockdown because of an environmental disaster/pandemic

2020-03-20 Thread Jeremy Rowley via dev-security-policy
What about issues other than audits? For example, with certain locations 
closing, key ceremonies may become impossible, leading to downed CRLs/OCSP for 
intermediates. There's also a potential issue with trusted roles even being 
able to access the data center if something goes down and Sub CAs can't be 
revoked. Should that be mentioned, requiring CAs to file an incident report as 
soon as the event becomes likely? 

For the location issue, I think including the locations audited and the 
locations not audited (to the full criteria) as an emphasis of matter would be 
helpful. So maybe an emphasis like we audited the offices in x, y, and z. 
Office z was inaccessible to evaluate criteria 1-n. It gives you the list of 
locations and where there were issues in getting access due to the emergency. 
Same city is harder. For example, we have two locations in Utah. You could say 
Utah office 1 and Utah office 2 to obfuscate the information a little.

Jeremy

-Original Message-
From: dev-security-policy  On 
Behalf Of Kathleen Wilson via dev-security-policy
Sent: Friday, March 20, 2020 2:07 PM
To: Mozilla 
Subject: Re: Auditing of CA facilities in lockdown because of an environmental 
disaster/pandemic

All,

I will greatly appreciate your ideas about the following.

In the Minimum Expectations section in
https://wiki.mozilla.org/CA/Audit_Statements#Audit_Delay
I added:
""
* Both ETSI and WebTrust Audits must:
** Disclose each location that was included in the scope of the audit, as well 
as whether the inspection was physically carried out in person.
""

My question: What should "location" mean in the above requirement?

The problem is that we require public-facing audit statements, so I do not want 
sensitive or confidential information in the audit statements, such as the 
exact physical addresses of CA Operations and root cert private key storage.

What information could be added to audit statements to give us a clear sense 
about which CA facilities were and were not audited?

For example, if a CA happens to have two facilities in the same city that 
should be audited, how can the audit statement clearly indicate if all of that 
CA's facilities were audited without providing the exact physical addresses?

Thanks,
Kathleen



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy



RE: About upcoming limits on trusted certificates

2020-03-17 Thread Jeremy Rowley via dev-security-policy
Yeah - I've wanted to do this for a long time. If the domain is only good for 
30 days, why would we issue even a 1-year cert? If it's good for 13 months, why 
not tie the cert validity to that? I guess because they could have transferred 
the domain (which just means you need additional caps)? It's odd not to have 
the domain registration as the maximum cap on the range since that's when you 
know the domain is most at risk for transfer. 
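
A rough sketch of the idea (the 398-day cap matches the upcoming browser limit; the
function and inputs are assumptions, not anyone's actual issuance logic):

from datetime import date, timedelta

POLICY_MAX = timedelta(days=398)  # upcoming maximum certificate lifetime

def max_not_after(issued_on: date, domain_expires_on: date) -> date:
    """Cap notAfter by both the policy maximum and the domain registration expiry."""
    return min(issued_on + POLICY_MAX, domain_expires_on)

# e.g. a domain registered for only 30 more days would yield at most a 30-day cert.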

Jeremy

-Original Message-
From: dev-security-policy  On 
Behalf Of Tim Hollebeek via dev-security-policy
Sent: Tuesday, March 17, 2020 10:00 AM
To: Kathleen Wilson ; Mozilla 

Subject: RE: About upcoming limits on trusted certificates


> On 3/11/20 3:51 PM, Paul Walsh wrote:
> > Can you provide some insight to why you think a shorter frequency in
> domain validation would be beneficial?
>
> To start with, it is common for a domain name to be purchased for one year.
> A certificate owner that was able to prove ownership/control of the 
> domain name last year might not have renewed the domain name. So why 
> should they be able to get a renewal cert without having that re-checked?

This has been a favorite point of Jeremy's for as long as I've been 
participating in the CA/Browser Forum and on this list.  Tying certificate 
lifetimes more closely to the lifetime and validity of the domains they are 
protecting would actually make a lot of sense, and we'd support any efforts to 
do so.

-Tim
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Terms and Conditions that use technical measures to make it difficult to change CAs

2020-03-17 Thread Jeremy Rowley via dev-security-policy
Yes  - please share the details with me as I am very surprised to hear that. I 
know the DigiCert agreements I've seen don't permit revocation because of 
termination so whoever (if anyone) is saying that is contradicting the actual 
agreement. Threatening revocation because of termination or revoking upon 
termination also violates our internal policies - certs issued are good for the 
duration of the cert, even if the console agreement terminates.  

Since I'm sure we haven't actually revoked because of termination, please send 
me the details of the threats and I'll take care of them. 


-Original Message-
From: dev-security-policy  On 
Behalf Of Nick France via dev-security-policy
Sent: Tuesday, March 17, 2020 11:27 AM
To: Mozilla 
Subject: Re: Terms and Conditions that use technical measures to make it 
difficult to change CAs

On Monday, March 16, 2020 at 9:06:33 PM UTC, Tim Hollebeek wrote:
> Hello,
> 
>  
> 
> I'd like to start a discussion about some practices among other 
> commercial CAs that have recently come to my attention, which I 
> personally find disturbing.  While it's perfectly appropriate to have 
> Terms and Conditions associated with digital certificates, in some 
> circumstances, those Terms and Conditions seem explicitly designed to 
> prevent or hinder customers who wish to switch to a different 
> certificate authority.  Some of the most disturbing practices include 
> the revocation of existing certificates if a customer does not renew 
> an agreement, which can really hinder a smooth transition to a new 
> provider of digital certificates, especially since the customer may 
> not have anticipated the potential impact of such a clause when they 
> first signed the agreement.  I'm particularly concerned about this 
> behavior because it seems to be an abuse of the revocation system, and 
> imposes costs on everyone who is trying to generate accurate and efficient 
> lists of revoked certificates (e.g. Firefox).
> 
>  
> 
> I'm wondering what the Mozilla community thinks about such practices.
> 
>  
> 
> -Tim

Tim,

Completely agree on your statement that it's a disturbing practice. We've sadly 
come across it several times in the past 12-18 months, leading to problems for 
the customer and of course lost business for us as they inevitably decide to 
remain with the incumbent CA when faced with a hard deadline for certificate 
revocation - regardless of the natural expiry dates.
Your points about the impact and costs to the wider ecosystem ring true, as 
well.
Revocation should not be used to punish those wishing to migrate CAs. We 
certainly don't do it.

More troubling is that each time it's either been mentioned early in 
discussions or has caused a business discussion to cease at a late stage - it's 
been DigiCert that was the current CA and they/you participated in this 
practice of threatening revocation of certificates well before expiry due to 
contract termination.

I have at least 5 major global enterprises that this has happened to recently.

Am happy to share more details privately if you wish to discuss.


Nick
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy



RE: About upcoming limits on trusted certificates

2020-03-12 Thread Jeremy Rowley via dev-security-policy
I think this statement is not accurate: "As a result, CAs don’t pursue 
automation, or when they support it, neither promote nor require it." I know 
very few CAs who want to spend extra resources on manual validations and just 
as few who don't support some level of automation. The manual methods are 
generally used because so many companies haven't automated this process or 
refuse to. The companies who contribute to these threads tend to be far more 
technical than most other companies (or the majority of cert requesters). The 
assumption that each company has the resources to set up automation ignores 
this. The opposition to shorter lifecycles and validation periods stems from 
knowing these work flows and the painful exercise of changing them. 

The automated methods weren't even codified until ballot 169 which was late 
2016. We're at less than 4 years for automation being a real option. Although I 
don't have empirical data for other CAs, the LE adoption rate (a billion certs 
issued since launch) indicates a fairly rapid adoption of automated methods compared to other 
changes in the industry. 

Jeremy

-Original Message-
From: dev-security-policy  On 
Behalf Of Ryan Sleevi via dev-security-policy
Sent: Thursday, March 12, 2020 7:30 AM
To: Julien Cristau 
Cc: Mozilla ; Kathleen Wilson 

Subject: Re: About upcoming limits on trusted certificates

The Baseline Requirements allow a number of methods that aren’t easily 
automated, such as validation via email. As a result, CAs don’t pursue 
automation, or when they support it, neither promote nor require it. This leads 
CAs to be opposed to efforts to shorten the reuse time, as they have 
historically treated it as the same complexity as identity validation, even 
when it doesn’t need to be.

There’s nothing intrinsically preventing it, although the practical effect is 
it would encourage technically automatable methods, as opposed to manual 
methods.

On Thu, Mar 12, 2020 at 4:45 AM Julien Cristau via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:

> Hi Kathleen, all,
>
> Is there a reason domain validation information needs to be reused for 
> more than, say, 30 days?  For the manual parts of identity validation 
> I understand you don't want to repeat the process too often, but 
> domain validation can be entirely automated so it doesn't seem like 
> long reuse periods are warranted. (It's entirely possible I'm missing 
> something and there are significant hurdles to overcome for CAs and/or 
> applicants in confirming domain ownership more than once a year.)
>
> Thanks,
> Julien
>
> On Wed, Mar 11, 2020 at 11:39 PM Kathleen Wilson via 
> dev-security-policy < dev-security-policy@lists.mozilla.org> wrote:
>
> > All,
> >
> > First, I would like to say that my preference would have been for 
> > this type of change (limit SSL cert validity period to 398 days) to 
> > be agreed to in the CA/Browser Forum and added to the BRs. However, 
> > the ball is already rolling, and discussion here in m.d.s.p is 
> > supportive of updating Mozilla's Root Store Policy to incorporate 
> > the shorter validity period. So...
> >
> > What do you all think about also limiting the re-use of domain
> validation?
> >
> > BR section 3.2.2.4 currently says: "Completed validations of 
> > Applicant authority may be valid for the issuance of multiple 
> > Certificates over time."
> > And BR section 4.2.1 currently says: "The CA MAY use the documents 
> > and data provided in Section 3.2 to verify certificate information, 
> > or may reuse previous validations themselves, provided that the CA 
> > obtained the data or document from a source specified under Section 
> > 3.2 or completed the validation itself no more than 825 days prior 
> > to issuing the Certificate."
> >
> > In line with that, section 2.1 of Mozilla's Root Store Policy 
> > currently
> > says:
> > "CAs whose certificates are included in Mozilla's root program MUST: ...
> > "5. verify that all of the information that is included in SSL 
> > certificates remains current and correct at time intervals of 825 
> > days or less;"
> >
> > When we update Mozilla's Root Store Policy, should we shorten the 
> > domain validation frequency to be in line with the shortened 
> > certificate validity period? i.e. change item 5 in section 2.1 of 
> > Mozilla's Root Store Policy to:
> > "5. limit the validity period and re-use of domain validation for 
> > SSL certificates to 398 days or less if the certificate is issued on 
> > or after September 1, 2020;"
> >
> > I realize that in order to enforce shorter frequency in domain 
> > validation we will need to get this change into the BRs and into the 
> > audit criteria. But CAs are expected to follow Mozilla's Root Store 
> > Policy regardless of enforcement mechanisms, and having this in our 
> > policy would make Mozilla's intentions clear.
> >
> > As always, I will greatly appreciate your thoughtful and 
> > constructive input on this.
> >
> > Thanks,
> > 

RE: DRAFT January 2020 CA Communication

2019-12-19 Thread Jeremy Rowley via dev-security-policy
Should anything be mentioned about the allowed algorithms? That's the largest 
change to the policy and  confirming the AlgorithmIdentifiers in each case may 
take some time.
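
As a starting point for that confirmation work, a rough sketch (the allowed set
below is an illustrative subset only; policy 2.7 also constrains the exact parameter
encodings, which this does not check) could flag certificates whose signature
algorithm falls outside the permitted list:

from cryptography import x509
from cryptography.x509.oid import SignatureAlgorithmOID

# Illustrative subset of permitted signature algorithms; not the authoritative list.
ALLOWED_SIGNATURE_OIDS = {
    SignatureAlgorithmOID.RSA_WITH_SHA256,
    SignatureAlgorithmOID.RSA_WITH_SHA384,
    SignatureAlgorithmOID.RSA_WITH_SHA512,
    SignatureAlgorithmOID.ECDSA_WITH_SHA256,
    SignatureAlgorithmOID.ECDSA_WITH_SHA384,
    SignatureAlgorithmOID.ECDSA_WITH_SHA512,
}

def signature_algorithm_allowed(pem_bytes: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    return cert.signature_algorithm_oid in ALLOWED_SIGNATURE_OIDS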

-Original Message-
From: dev-security-policy  On 
Behalf Of Wayne Thayer via dev-security-policy
Sent: Thursday, December 19, 2019 10:10 AM
To: mozilla-dev-security-policy 
Subject: DRAFT January 2020 CA Communication

All,

I've drafted a new email and survey that I plan to send to all CAs in the 
Mozilla program in early January. It focuses on compliance with the new
(2.7) version of our Root Store Policy. I will appreciate your review and 
feedback on the draft:
https://ccadb-public.secure.force.com/mozillacommunications/CACommunicationSurveySample?CACommunicationId=a051J3waNOW

Note that two deadlines have been added to the communication:
* Action 3 specifies that CAs must agree to update their CP/CPS, if needed to 
comply, prior to April 1, 2020. This is intended to prevent responses that we 
have found unacceptable in the past, e.g. waiting for an annual audit before 
updating the CP/CPS.
* Action 5 requires CAs with failed Intermediate ALV results to publish a plan 
to correct these problems no later than Feb 15, 2020. Kathleen announced that 
we have begun validating audit letters for intermediate certificates back in 
October [1], and the requirement for audit statements to contain the SHA256 
fingerprint of each root and intermediate certificate that was in scope of the 
audit dates back to 2017. CAs should have already taken action to resolve these 
issues, so this deadline is intended to convey the need for an immediate 
response.

- Wayne

[1]
https://groups.google.com/d/msg/mozilla.dev.security.policy/M7NGwCh14DI/ZPDMRvDzBQAJ
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy



RE: Certificate OU= fields with missing O= field

2019-11-01 Thread Jeremy Rowley via dev-security-policy
My view is that the OU field is a subject distinguished name field and that a 
CA must have a process to prevent unverified information from being included in 
the field.

Subject Identity Information is defined as information that identifies the 
Certificate Subject.

I suppose the answer to your question depends on a) what you consider as 
information that identifies the Certificate Subject and b) whether the process 
required establishes the minimum relationship between that information and your 
definition of SII.

From: Ryan Sleevi 
Sent: Friday, November 1, 2019 10:11 AM
To: Jeremy Rowley 
Cc: mozilla-dev-security-policy 
Subject: Re: Certificate OU= fields with missing O= field

Is your view that the OU is not Subject Identity Information, despite that 
being the Identity Information that appears in the Subject? Are there other 
fields and values that you believe are not SII? This seems inconsistent with 
7.1.4.2, the section in which this is placed.

As to the .com in the OU, 7.1.4.2 also prohibits this:
CAs SHALL NOT include a Domain Name or IP Address in a Subject attribute except 
as specified in Section 3.2.2.4 or Section 3.2.2.5.

On Fri, Nov 1, 2019 at 8:41 AM Jeremy Rowley via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:
A mistake in the BRs (I wrote the language, unfortunately, so shame on me for not 
matching the other sections on org name or given name). There's no 
certificate that ever contains all of these fields. How would you ever have 
that?

There's no requirement that the OU field information relate to the O field 
information as long as the information is verified.

-Original Message-
From: dev-security-policy <dev-security-policy-boun...@lists.mozilla.org>
On Behalf Of Alex Cohn via dev-security-policy
Sent: Friday, November 1, 2019 9:13 AM
To: Kurt Roeckx <k...@roeckx.be>
Cc: Matthias van de Meent <matthias.vandeme...@cofano.nl>; MDSP
<dev-security-policy@lists.mozilla.org>
Subject: Re: Certificate OU= fields with missing O= field

On Fri, Nov 1, 2019 at 5:14 AM Kurt Roeckx via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:
>
> On Fri, Nov 01, 2019 at 11:08:23AM +0100, Matthias van de Meent via 
> dev-security-policy wrote:
> > Hi,
> >
> > I recently noticed that a lot of leaf certificates [0] have
> > organizationalUnitName specified without other organizational
> > information such as organizationName. Many times this field is used
> > for branding purposes, e.g. "issued through "
> > or "SomeBrand SSL".
> >
> > BR v1.6.6 § 7.1.4.2.2i has guidance on usage of the OU field: "The
> > CA SHALL implement a process that prevents an OU attribute from
> > including a name, DBA, tradename, trademark, address, location, or
> > other text that refers to a specific natural person or Legal Entity
> > unless the CA has verified this information in accordance with
> > Section 3.2 and the Certificate also contains
> > subject:organizationName, , subject:givenName, subject:surname,
> > subject:localityName, and subject:countryName attributes, also
> > verified in accordance with Section 3.2.2.1."
> >
> > As the organizationName and other related attributes are not set in
> > many of those certificates, even though e.g. "COMODO SSL Unified
> > Communications" is a very strong reference to Sectigo's ssl branding
> > & business, I believe the referenced certificate is not issued in
> > line with the BR.
> >
> > Is the above interpretation of BR section 7.1.4.2.2i correct?
>
> That OU clearly doesn't have anything to do with the subject that was
> validated, so I also consider that a misissue.
>
>
> Kurt

A roughly-equivalent Censys.io query, excluding a couple other unambiguous 
"domain validated" OU values: "not _exists_:
parsed.subject.organization and _exists_:
parsed.subject.organizational_unit and not
parsed.subject.organizational_unit: "Domain Control Validated" and not
parsed.subject.organizational_unit: "Domain Validated Only" and not
parsed.subject.organizational_unit: "Domain Validated" and
validation.nss.valid: true" returns 17k hits.

IMO the "Hosted by .Com" certs fail 7.1.4.2.2i - the URL of a web host is 
definitely "text that refers to a specific ... Legal Entity".

> Certificate also contains subject:organizationName, ,
> subject:givenName, subject:surname, subject:localityName, and
> subject:countryName attributes, also verified in accordance with Section 
> 3.2.2.1.

I'm pretty sure this isn't what the BRs intended, but this appears to forbid 
issuance with a meaningful subject:organizationalUnitName un

RE: Certificate OU= fields with missing O= field

2019-11-01 Thread Jeremy Rowley via dev-security-policy
A mistake in the BRs (I wrote the language, unfortunately, so shame on me for not 
matching the other sections on org name or given name). There's no 
certificate that ever contains all of these fields. How would you ever have 
that?

There's no requirement that the OU field information relate to the O field 
information as long as the information is verified. 

-Original Message-
From: dev-security-policy  On 
Behalf Of Alex Cohn via dev-security-policy
Sent: Friday, November 1, 2019 9:13 AM
To: Kurt Roeckx 
Cc: Matthias van de Meent ; MDSP 

Subject: Re: Certificate OU= fields with missing O= field

On Fri, Nov 1, 2019 at 5:14 AM Kurt Roeckx via dev-security-policy 
 wrote:
>
> On Fri, Nov 01, 2019 at 11:08:23AM +0100, Matthias van de Meent via 
> dev-security-policy wrote:
> > Hi,
> >
> > I recently noticed that a lot of leaf certificates [0] have 
> > organizationalUnitName specified without other organizational 
> > information such as organizationName. Many times this field is used 
> > for branding purposes, e.g. "issued through "
> > or "SomeBrand SSL".
> >
> > BR v1.6.6 § 7.1.4.2.2i has guidance on usage of the OU field: "The 
> > CA SHALL implement a process that prevents an OU attribute from 
> > including a name, DBA, tradename, trademark, address, location, or 
> > other text that refers to a specific natural person or Legal Entity 
> > unless the CA has verified this information in accordance with 
> > Section 3.2 and the Certificate also contains 
> > subject:organizationName, , subject:givenName, subject:surname, 
> > subject:localityName, and subject:countryName attributes, also 
> > verified in accordance with Section 3.2.2.1."
> >
> > As the organizationName and other related attributes are not set in 
> > many of those certificates, even though e.g. "COMODO SSL Unified 
> > Communications" is a very strong reference to Sectigo's ssl branding 
> > & business, I believe the referenced certificate is not issued in 
> > line with the BR.
> >
> > Is the above interpretation of BR section 7.1.4.2.2i correct?
>
> That OU clearly doesn't have anything to do with the subject that was 
> validated, so I also consider that a misissue.
>
>
> Kurt

A roughly-equivalent Censys.io query, excluding a couple other unambiguous 
"domain validated" OU values: "not _exists_:
parsed.subject.organization and _exists_:
parsed.subject.organizational_unit and not
parsed.subject.organizational_unit: "Domain Control Validated" and not
parsed.subject.organizational_unit: "Domain Validated Only" and not
parsed.subject.organizational_unit: "Domain Validated" and
validation.nss.valid: true" returns 17k hits.

IMO the "Hosted by .Com" certs fail 7.1.4.2.2i - the URL of a web host is 
definitely "text that refers to a specific ... Legal Entity".

> Certificate also contains subject:organizationName, , 
> subject:givenName, subject:surname, subject:localityName, and 
> subject:countryName attributes, also verified in accordance with Section 
> 3.2.2.1.

I'm pretty sure this isn't what the BRs intended, but this appears to forbid 
issuance with a meaningful subject:organizationalUnitName unless all of the 
above attributes are populated. EVG §9.2.9 forbids including those attributes 
in the first place. Am I reading this wrong, or was this an oversight in the 
BRs?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Mozilla Policy Requirements CA Incidents

2019-10-15 Thread Jeremy Rowley via dev-security-policy
I like this approach. You could either add a page in the policy document or 
include the information in the management assertion letter (or auditor letter) 
that gives information about the auditor’s credentials and background. I also 
like the idea of summary on what the auditor followed up on from the previous 
year. This could be helpful to document where an auditor changed between years 
to see what they reviewed that another auditor noted or to see where the 
auditor had concerns from year to year. It can track where the CA may have a 
recurring issue instead of something that is a one-off concern.

From: Ryan Sleevi 
Sent: Monday, October 14, 2019 4:12 PM
To: Ryan Sleevi 
Cc: Jeremy Rowley ; Wayne Thayer 
; mozilla-dev-security-policy 

Subject: Re: Mozilla Policy Requirements CA Incidents

In the spirit of improving transparency, I've gone and filed 
https://github.com/mozilla/pkipolicy/issues/192 , which is specific to auditors.

However, I want to highlight this model (the model used by the US Federal PKI), 
because it may also provide a roadmap for dealing with issues like this / those 
called by policy changes. Appendix C of those annual requirements for the US 
Federal PKI includes a number of useful requirements (really, all of them are 
in line with things we've discussed here), but two particularly relevant 
requirements are:

Guidance (previous year findings): Did the auditor review findings from the 
previous year and ensure all findings were corrected as proposed during the 
previous audit?
Commentary: Often, the auditor sees an Audit Correction Action Plan, POA&M, or 
other evidence that the organization has recognized audit findings and intends 
to correct them, but the auditor is not necessarily engaged to assess the 
corrections at the time they are applied. The auditor should review that all 
proposed corrections have addressed the previous year’s findings.

Guidance (changes): Because the FPKI relies on a mapped CP and/or CPS for 
comparable operations, has the auditor been apprised of changes both to 
documentation and operations from the previous audit?
Commentary: CPs change over time and each Participating PKI in the FPKI has an 
obligation to remain in synch with the changing requirements of the applicable 
FPKI CP (either FBCA or COMMON Policy) – has the participating PKI’s CP and CPS 
been updated appropriately? If there have been other major changes in 
operations, has a summary since the last year’s audit been provided or 
discussed with the auditor?


This might be a model to further include/require within the overall audit 
package. This would likely only make sense if also adding "Audit Operational 
Findings" (which Illustrative Guidance for WebTrust now includes guidance on, 
but which ETSI continues to refuse to add) and "Audit MOA Findings" (for which 
"MOA" may be instead seen as "Mozilla Root Certificate Policy" - i.e. the 
things above/beyond the BRs). We've already seen WebTrust similarly developing 
reporting for "Architectural Overview", and they've already updated reporting 
for "Assertion of Audit Scope", thus showing in many ways, WebTrust already has 
the tools available to meet these requirements. It would similarly be possible 
for ETSI-based audits to meet these requirements, since the reports provided to 
browsers need not be as limited as a Certification statement; they could 
include more holistic reporting, in line with the eIDAS Conformity Assessment 
Reports.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


DNS records and delegation

2019-10-10 Thread Jeremy Rowley via dev-security-policy
Question, is there any prohibition against demonstration of domain control 
being delegated to a third party or even the CA itself? I don't think so, but 
figured we've discussed differences in interpretation a lot lately so wanted to 
see if people agreed.


Section 3.2.2.4.7 in the CAB/F requires that the CA verify a domain by 
"confirming the Applicant's control over the FQDN by confirming the presence of 
a Random Value or Request Token for either in a DNS CNAME, TXT or CAA record 
for either 1) an Authorization Domain Name; or 2) an Authorization Domain Name 
that is prefixed with a label that begins with an underscore character."

If the CA is using a random value then the Random Value has to be unique to the 
certificate request.

Could a third party or the CA itself set up a service for entities that hate 
doing domain validation? For example:



_validation.customer.com. 3600 IN CNAME _validation.domain.com.

_validation.domain.com. 3600 IN CNAME _validation.myvalidation.com.

_validation.myvalidation.com. 1 IN CNAME _.myvalidation.com.

Since each domain approval request requires a unique random value, the random 
value could be uploaded each time a certificate request comes in and checked.
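
A minimal sketch of what that check could look like on the CA side (illustrative
only, using the dnspython library; the _validation label and domains mirror the
hypothetical records above):

import dns.resolver

def random_value_present(authorization_domain: str, random_value: str,
                         prefix: str = "_validation") -> bool:
    """Look for the per-request random value in a TXT record reachable from
    <prefix>.<authorization domain>, following any CNAME delegation chain."""
    name = f"{prefix}.{authorization_domain}"
    try:
        answers = dns.resolver.resolve(name, "TXT")  # resolver follows CNAMEs
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    return any(random_value in rdata.to_text() for rdata in answers)

# With the delegation above, a check against customer.com ends up reading whatever
# the validation service publishes under myvalidation.com.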

I mean, the obvious issue is the customer.com domain would need to want to 
delegate this to domain.com. But if you had a pretty non-technical person 
operating the DNS, they could point it at the domain.com name and leave their DNS 
settings that way forever.

This looks allowed under the BRs, but should it be? Or is it like key escrow - 
okay if a reseller does it (but frowned upon). Totally not cool if the CA does 
it.

Jeremy

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Jeremy Rowley via dev-security-policy
I think requiring publication of profiles for certs is a good idea. It’s part 
of what I’ve wanted to publish as part of our CPS. You can see most of our 
profiles here: 
https://content.digicert.com/wp-content/uploads/2019/07/Digicert-Certificate-Profiles.pdf,
 but it doesn’t include ICAs right now. That was an oversight that we should 
fix. Publication of profiles probably won’t prevent issues related to 
engineering snafu’s or more manual procedures. However, publication may 
eliminate a lot of the disagreement on BR/Mozilla policy wording. That’s a lot 
more work though for the policy owners so the community would probably need to 
be more actively involved in reviewing profiles. Requiring publication at least 
gives the public a chance to review the information, which may not exist today.

The manual component definitely introduces a lot of risk in sub CA creation, 
and the explanation I gave is broader than renewals. It’s more about the risks 
currently associated with Sub CAs. The difference between renewal and new 
issuance doesn’t exist at DigiCert – we got caught on that issue a long time 
ago.


From: Ryan Sleevi 
Sent: Tuesday, October 8, 2019 5:49 PM
To: Jeremy Rowley 
Cc: Wayne Thayer ; Ryan Sleevi ; 
mozilla-dev-security-policy 
Subject: Re: Mozilla Policy Requirements CA Incidents



On Tue, Oct 8, 2019 at 6:42 PM Jeremy Rowley
<jeremy.row...@digicert.com> wrote:
Tackling Sub CA renewals/issuance from a compliance perspective is difficult 
because of the number of manual components involved. You have the key ceremony, 
the scripting, and all of the formal process involved. Because the root is 
stored in an offline state and only brought out for a very intensive procedure, 
there is a lot that can go wrong compared to end-entity certs, including bad 
profiles and bad coding. These events are also things that happen rarely enough 
that many CAs might not have well defined processes around. A couple things 
we’ve done to eliminate issues include:


  1.  2 person review over the profile + a formal sign-off from the policy 
authority
  2.  A standard scripting tool for generating the profile to ensure only the 
subject info in the cert changes.  This has some basic linting.
  3.  We issue a demo cert. This cert is exactly the same as the cert we want 
to issue but it’s not publicly trusted and includes a different serial. We then 
review the demo cert to ensure profile accuracy. We should run this cert 
through a linter (added to my to-do list).

We used to treat renewals separate from new issuance. I think there’s still a 
sense that they “are” different, but that’s been changing. I’m definitely 
looking forward to hearing what other CAs do.

It's not clear: Are you suggesting the the configuration of sub-CA profiles is 
more, less, or the as risky as for end-entity certificates? It would seem that, 
regardless, the need for review and oversight is the same, so I'm not sure that 
#1 or #2 would be meaningfully different between the two types of certificates?

That said, of the incidents, only two of those were potentially related to the 
issuance of new versions of the intermediates (Actalis and QuoVadis). The other 
two were new issuance.

So I don't think we can explain it as entirely around renewals. I definitely 
appreciate the implicit point you're making: which is every manual action of a 
CA, or more generally, every action that requires a human be involved, is an 
opportunity for failure. It seems that we should replace all the humans, then, 
to mitigate the failure? ;)

To go back to your transparency suggestion, would we have been better if:
1) CAs were required to strictly disclose every single certificate profile for 
everything "they sign"
2) Demonstrate compliance by updating their CP/CPS to the new profile, by the 
deadline required. That is, requiring all CAs update their CP/CPS prior to 
2019-01-01.

Would this prevent issues? Maybe - only to extent CAs view their CP/CPS as 
authoritative, and strictly review what's on them. I worry that such a solution 
would lead to the "We published it, you didn't tell us it was bad" sort of 
situation (as we've seen with audit reports), which then further goes down a 
rabbit-hole of requiring CP/CPS be machine readable, and then tools to lint 
CP/CPS, etc. By the time we've added all of this complexity, I think it's 
reasonable to ask if the problem is not the humans in the loop, but the wrong 
humans (i.e. going back to distrusting the CA). I know that's jumping to 
conclusions, but it's part of what taking an earnest look at these issues are: 
how do we improve things, what are the costs, are there cheaper solutions that 
provide the same assurances?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Jeremy Rowley via dev-security-policy
Tackling Sub CA renewals/issuance from a compliance perspective is difficult 
because of the number of manual components involved. You have the key ceremony, 
the scripting, and all of the formal process involved. Because the root is 
stored in an offline state and only brought out for a very intensive procedure, 
there is a lot that can go wrong compared to end-entity certs, including bad 
profiles and bad coding. These events are also things that happen rarely enough 
that many CAs might not have well defined processes around. A couple things 
we’ve done to eliminate issues include:


  1.  2 person review over the profile + a formal sign-off from the policy 
authority
  2.  A standard scripting tool for generating the profile to ensure only the 
subject info in the cert changes.  This has some basic linting.
  3.  We issue a demo cert. This cert is exactly the same as the cert we want 
to issue but it’s not publicly trusted and includes a different serial. We then 
review the demo cert to ensure profile accuracy. We should run this cert 
through a linter (added to my to-do list; a rough sketch follows below).
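
A rough sketch of that linting step (assumed glue code that shells out to the
open-source zlint tool and assumes its JSON output format):

import json
import subprocess

def lint_demo_cert(path: str) -> dict:
    """Run zlint over a PEM demo certificate and return any non-passing lints."""
    out = subprocess.run(["zlint", path], capture_output=True, text=True, check=True)
    results = json.loads(out.stdout)
    return {name: r for name, r in results.items()
            if r.get("result") not in ("pass", "NA", "NE", "info")}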

We used to treat renewals separate from new issuance. I think there’s still a 
sense that they “are” different, but that’s been changing. I’m definitely 
looking forward to hearing what other CAs do.

Jeremy


From: Wayne Thayer 
Sent: Tuesday, October 8, 2019 3:20 PM
To: Ryan Sleevi 
Cc: Jeremy Rowley ; mozilla-dev-security-policy 

Subject: Re: Mozilla Policy Requirements CA Incidents

Ryan,

Thank you for pointing out these incidents, and for raising the meta-issue of 
policy compliance. We saw similar issues with CP/CPS compliance to changes in 
the 2.5 and 2.6 versions of policy, with little explanation beyond "it's hard 
to update our CPS" and "oops". Historically, our approach has been to strive to 
communicate policy updates to CAs with the assumption that they will happily 
comply with all of the requirements they are aware of. I don't think that's a 
bad thing to continue, but I agree it is not working.

Having said that, I do recognize that translating "Intermediates must contain 
EKUs" into "don't renew this particular certificate" across an organization 
isn't as easy as it sounds. I'd be really interested in hearing how CAs are 
successfully managing the task of adapting to new requirements and if there is 
something we can do to encourage all CAs to adopt best practices in this 
regard. Our reactive options short of outright distrust are limited- so I think 
it would be worthwhile to focus on new preventive measures.

Thanks,

Wayne

On Tue, Oct 8, 2019 at 11:02 AM Ryan Sleevi via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:
On the topic of root causes, there's also
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3425554 that was
recently published. I'm not sure if that was peer reviewed, but it does
provide an analysis of m.d.s.p and Bugzilla. I have some concerns about the
study methodology (for example, when incident reports became normalized is
relevant, as well as incident reporting where security researchers first
went to the CA), but I think it looks at root causes a bit holistically.

I recently shared on the CA/B Forum's mailing list another example of
"routine" violation:
https://cabforum.org/pipermail/servercert-wg/2019-October/001154.html

My concern is that, 7 years later, while I think that compliance has
marginally improved (largely due to things led by outside the CA ecosystem,
like CT and ZLint/Certlint), I think the answers/responses/explanations we
get are still falling into the same predictable buckets, and that concerns
me, because it's neither sustainable nor healthy for the ecosystem.


   - We misinterpreted the requirements. It said X, but we thought it meant
   Y (Often: even though there's nothing in the text to support Y, that's just
   how we used to do business, and we're CAs so we know more than browsers
   about what browsers expect from us)
   - We weren't paying attention to the updates. We've now assigned people
   to follow updates.
   - We do X by saying our staff should do X. In this case, they forgot.
   We've retrained our staff / replaced our staff / added more staff to
   correct this.
   - We had a bug. We did not detect the bug because we did not have tests
   for this. We've added tests.
   - We weren't sure if X was wrong, but since no one complained, we
   assumed it was OK.
   - Our auditor said it was OK
   - Our vendor said it was OK

and so forth.

And then, in the responses, we generally see:

   - These certificates are used in Very Important Systems, so even though
   we said we'd comply, we cannot comply.
   - We don't think X is actually bad. We think X should be OK, and it
   should be Browsers that reject X if they don't like X (implicit: But they
   should

RE: Mozilla Policy Requirements CA Incidents

2019-10-07 Thread Jeremy Rowley via dev-security-policy
 to evaluate a CA’s issues over the past year and 
how they addressed what went wrong compared to previous years and see what the 
CA is doing that will make the next year even better.


Jeremy



From: Ryan Sleevi 
Sent: Monday, October 7, 2019 6:45 PM
To: Jeremy Rowley 
Cc: mozilla-dev-security-policy 
; r...@sleevi.com
Subject: Re: Mozilla Policy Requirements CA Incidents



On Mon, Oct 7, 2019 at 7:06 PM Jeremy Rowley
<jeremy.row...@digicert.com> wrote:
Interesting. I can't tell with the Netlock certificate, but the other three 
non-EKU intermediates look like replacements for intermediates that were issued 
before the policy date and then reissued after the compliance date.  The 
industry has established that renewal and new issuance are identical (source?), 
but we know some CAs treat these as different instances.

Source: Literally every time a CA tries to use it as an excuse? :)

My question is how we move past “CAs provide excuses”, and at what point the 
same excuses fall flat?

While that's not an excuse, I can see why a CA could have issues with a renewal 
compared to new issuance as changing the profile may break the underlying CA.

That was Quovadis’s explanation, although with no detail to support that it 
would break something, simply that they don’t review the things they sign. Yes, 
I’m frustrated that CAs continue to struggle with anything that is not entirely 
supervised. What’s the point of trusting a CA then?

 However, there's probably something better than "trust" vs. "distrust" or 
"revoke" v "non-revoke", especially when it comes to an intermediate.  I guess 
the question is what is the primary goal for Mozilla? Protect users? Enforce 
compliance?  They are not mutually exclusive objectives of course, but the 
primary drive may influence how to treat issuing CA non-compliance vs. 
end-entity compliance.

I think a minimum goal is to ensure the CAs they trust are competent and take 
their job seriously, fully aware of the risk they pose. I am more concerned 
about issues like this which CAs like QuoVadis acknowledges they would not 
cause.

The suggestion of a spectrum of responses fundamentally suggests root stores 
should eat the risk caused by CAs flagrant violations. I want to understand why 
browsers should continue to be left holding the bag, and why every effort at 
compliance seems to fall on how much the browsers push.

Of the four, only Quovadis has responded to the incident with real information, 
and none of them have filed the required format or given sufficient 
information. Is it too early to say what happens before there is more 
information about what went wrong? Key ceremonies are, unfortunately, very 
manual beasts. You can automate a lot of it with scripting tools, but the 
process of taking a key out, performing a ceremony, and putting things away is 
not automated due to the off-line root and FIPS 140-3 requirements.

Yes, I think it’s appropriate to defer discussing what should happen to these 
specific CAs. However, I don’t think it’s too early to begin to try and 
understand why it continues to be so easy to find massive amounts of 
misissuance, and why policies that are clearly communicated and require 
affirmative consent is something CAs are still messing up. It suggests trying 
to improve things by strengthening requirements isn’t helping as much as 
needed, and perhaps more consistent distrusting is a better solution.

In any event, having CAs share the challenges is how we do better. 
Understanding how the CAs not affected prevent these issues is equally 
important. We NEED CAs to be better here, so what’s the missing part about why 
it’s working for some and failing for others?

I know it seems extreme to suggest to start distrusting CAs over this, but 
every single time, it seems there’s a CA communication, affirmative consent, 
and then failure. The most recent failure to disclose CAs is equally 
disappointing and frustrating, and it’s not clear we have CAs adequately 
prepared to comply with 2.7, no matter how much we try.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Mozilla Policy Requirements CA Incidents

2019-10-07 Thread Jeremy Rowley via dev-security-policy
Interesting. I can't tell with the Netlock certificate, but the other three 
non-EKU intermediates look like replacements for intermediates that were issued 
before the policy date and then reissued after the compliance date.  The 
industry has established that renewal and new issuance are identical (source?), 
but we know some CAs treat these as different instances.  While that's not an 
excuse, I can see why a CA could have issues with a renewal compared to new 
issuance as changing the profile may break the underlying CA.

Note that revoking these CAs puts the CA back on issuing from the legacy ICA 
that was issued before the renewal. Depending on the reason for the reissue, 
that may be a less desirable outcome.  I don't have a good answer on what to do 
in a circumstance like this (and I'm a bit biased probably since we have a 
relationship with Quovadis).  However, there's probably something better than 
"trust" vs. "distrust" or "revoke" v "non-revoke", especially when it comes to 
an intermediate.  I guess the question is what is the primary goal for Mozilla? 
Protect users? Enforce compliance?  They are not mutually exclusive objectives 
of course, but the primary drive may influence how to treat issuing CA 
non-compliance vs. end-entity compliance. 

Of the four, only Quovadis has responded to the incident with real information, 
and none of them have filed in the required format or given sufficient 
information. Is it too early to say what happens before there is more 
information about what went wrong? Key ceremonies are, unfortunately, very 
manual beasts. You can automate a lot of it with scripting tools, but the 
process of taking a key out, performing a ceremony, and putting things away is 
not automated due to the off-line root and FIPS 140-3 requirements. 

BTW, I'm really liking how these issues are raised here in bulk by problem. 
This is a really nice format and gets the community involved in looking at what 
to do.  I think it also helps identify common causes of problems.

Jeremy

-Original Message-
From: dev-security-policy  On 
Behalf Of Ryan Sleevi via dev-security-policy
Sent: Monday, October 7, 2019 12:53 PM
To: mozilla-dev-security-policy 
Subject: Mozilla Policy Requirements CA Incidents

In light of Wayne's many planned updates as part of version 2.7 of the Mozilla 
Root Store Policy, and prompted by some folks looking at adding linters, I 
recently went through and spot-checked some of the Mozilla Policy-specific 
requirements to see how well CAs are doing at following these.

I discovered five issues, below:

# Intermediates that do not comply with the EKU requirements

In September 2018 [1], Mozilla sent a CA Communications reminding CAs about the 
changes in Policy 2.6.1. One specific change, called to attention in ACTION 3, 
required the presence of EKUs for intermediates, and the separation of e-mail 
and SSL/TLS from the intermediates. This requirement, while new to Mozilla 
Policy, was not new to publicly trusted CAs, as it matched an existing 
requirement from Microsoft's Root Program [2]. This requirement was first 
introduced by Microsoft in July 2015, with Version 2.0 of their own 
policy.

It's reasonable to expect that all CAs in both Microsoft's and 
Mozilla's programs would have been conforming to the stricter requirement of 
Microsoft, which goes above and beyond the Baseline Requirements. However, 
Mozilla still allowed a grandfathering in of existing intermediates, setting 
the new requirement for their policy at 2019-01-01. Mozilla also set forth 
certain exclusions to account for cross-signing.
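
As a rough illustration of the kind of lint this requirement implies (not part of the original message), a minimal Python sketch using the cryptography library might look like the following; the file name is hypothetical:

    # Minimal sketch: flag an intermediate that omits an EKU extension, asserts
    # anyExtendedKeyUsage, or combines TLS and S/MIME usages.
    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    def check_intermediate_eku(pem_bytes):
        cert = x509.load_pem_x509_certificate(pem_bytes)
        problems = []
        try:
            eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
        except x509.ExtensionNotFound:
            return ["intermediate has no EKU extension"]
        usages = set(eku)
        if ExtendedKeyUsageOID.ANY_EXTENDED_KEY_USAGE in usages:
            problems.append("intermediate asserts anyExtendedKeyUsage")
        if (ExtendedKeyUsageOID.SERVER_AUTH in usages
                and ExtendedKeyUsageOID.EMAIL_PROTECTION in usages):
            problems.append("intermediate mixes id-kp-serverAuth and id-kp-emailProtection")
        return problems

    with open("intermediate.pem", "rb") as f:  # hypothetical file name
        for problem in check_intermediate_eku(f.read()):
            print(problem)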

Despite that, four CAs have violated this requirement in 2019:
* Microsoft: https://bugzilla.mozilla.org/show_bug.cgi?id=1586847
* Actalis: https://bugzilla.mozilla.org/show_bug.cgi?id=1586787
* QuoVadis: https://bugzilla.mozilla.org/show_bug.cgi?id=1586792
* NetLock: https://bugzilla.mozilla.org/show_bug.cgi?id=1586795

# Authority Key Identifier issues

RFC 5280, Section 4.2.1.1 [3], defines the Authority Key Identifier extension. 
Within RFC 5280, it states that (emphasis added)

   The identification MAY be based on ***either*** the
   key identifier (the subject key identifier in the issuer's
   certificate) ***or*** the issuer name and serial number.

That is, it provides an either/or requirement for this field. Despite this not 
being captured in the updated ASN.1 module defined in RFC 5912 [4], Mozilla 
Root Store Policy has, since Version 1.0 [5], included a requirement that CAs 
MUST NOT issue certificates that have (emphasis added) "incorrect extensions 
(e.g., SSL certificates that exclude SSL usage, or ***authority key IDs that 
include both the key ID and the issuer's issuer name and serial number)***;"
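
As a rough illustration (not part of the original message), a minimal Python sketch of a check for the forbidden AKI form, using the cryptography library, might look like this:

    # Minimal sketch: detect an AKI extension that carries both the key
    # identifier and the issuer name + serial number form.
    from cryptography import x509

    def aki_violates_policy(cert):
        try:
            aki = cert.extensions.get_extension_for_class(
                x509.AuthorityKeyIdentifier).value
        except x509.ExtensionNotFound:
            return False  # no AKI at all; a separate check would flag that
        has_key_id = aki.key_identifier is not None
        has_issuer_and_serial = (aki.authority_cert_issuer is not None
                                 or aki.authority_cert_serial_number is not None)
        # Either form alone is permitted; including both is what the policy forbids.
        return has_key_id and has_issuer_and_serial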

In examining issuance, I found that one CA, trusted by Mozilla, regularly 
violates this requirement:

* Camerfirma: https://bugzilla.mozilla.org/show_bug.cgi?id=1586860

# Thoughts

While I've opened CA incident issues for all

RE: CAs cross-signing roots whose subjects don't comply with the BRs

2019-10-07 Thread Jeremy Rowley via dev-security-policy
For this particular incident, I would like to know why the CA didn’t review the 
profile before signing the root. It seems like a flaw in the key ceremony 
process not to go through a checklist with the profile and ensure each 
field complies with the current version of the BRs. 

Question on browser behavior - will revocation of the cross essentially result 
in revocation of root 2 since the date is newer? Anyone distributing that 
cross will basically see the root as revoked unless they have the root 
embedded by then, right?

-Original Message-
From: dev-security-policy  On 
Behalf Of Jeremy Rowley via dev-security-policy
Sent: Monday, October 7, 2019 10:21 AM
To: r...@sleevi.com
Cc: mozilla-dev-security-policy 
Subject: RE: CAs cross-signing roots whose subjects don't comply with the BRs

Yeah - I like the visibility here since I know I often forget to post the 
incident to the Mozilla list as well as post the bug. 

IMO - it's up to the CA to decide if they want to sign something in violation 
of the BRs and then it's up to the browsers what action is taken in response. 
I acknowledge this is somewhat a non-answer, but I think if the CA 
discloses why they are signing something, works with the community to decide 
the action taken is better than the alternative, accepts the risk to the audit, 
then they should do it, assuming the risk. The BRs are pretty rigid so there 
may be circumstances that merit violation of the requirements, but that 
violation should only be done with as much transparency as possible and 
considering all the positions. 

For example, suppose a root was created before a rule went into place and the 
root needs to be renewed for some reason. If the root was compliant before 
creation and modifying the profile would break something with the root, then 
there's a good argument that you shouldn't modify the root during the re-sign. 
That assumes the reasons are discussed here and alternatives are explored 
fully.  This should then be documented (including the reasons) in an incident 
report and the subsequent audit. 

Tl;dr - No, CAs shouldn't sign things that violate the BRs, even roots. But I 
could see there being reasons for the CA to do so.

(And I haven't scanned CT to discover if it is us. Crossing my fingers it's not 
😊. If I don't scan, it's like a terrible version of Christmas.)

-Original Message-
From: dev-security-policy  On 
Behalf Of Ryan Sleevi via dev-security-policy
Sent: Monday, October 7, 2019 10:07 AM
To: Jeremy Rowley 
Cc: mozilla-dev-security-policy 
Subject: Re: CAs cross-signing roots whose subjects don't comply with the BRs

On Mon, Oct 7, 2019 at 11:54 AM Jeremy Rowley 
wrote:

> Are both roots trusted in the Mozilla root store? If so, could you say 
> that Mozilla has approved of the root not-withstanding the non-compliance?
> If root 2 did go through the public review process and had the public 
> look at the certificate and still got embedded, then Mozilla perhaps 
> signed off on the root.
>

Good question!

Yes, it turns out that a version of this cross-sign is included, and while 
there was a public discussion phase, this non-compliance was not detected 
during the inclusion request nor part of the discussion. In fact, there were 
zero comments during the public discussion phase.


> That said, I don't personally see the harm in incident reports (other 
> than the fact that they can be used for negative marketing). They are 
> there for documenting issues and making the public aware of issues.
> Like qualified audits, they don't necessarily mean something terrible 
> since they represent a disclosure/record of some kind. Even if the 
> incident report is open, discussed, and closed pretty quickly, then 
> you end up with a record that can be pointed to.  Filing more 
> incident reports (as long as they are different issues) is a good thing 
> as it gives extra transparency in the CA's operations that is easily 
> discoverable and catalogable. Makes data analytics easier and you can 
> go back through the incidents to see how things are changing with the CA.
>

Well, the reason I raised it here, rather than as an incident, was to try and 
nail down the expectations here. For example, would it be better to have that 
discussion on the incident, with "Foo" arguing "You approved it, ergo it's not 
a violation to cross-sign it"? Or would it be better to have visibility here, 
perhaps in the abstract (even if it is trivial to scan CT and figure out which 
CA I'm talking about), if only to get folks expectations here on whether or not 
new certificates should be signed that violate the BRs?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

___

RE: CAs cross-signing roots whose subjects don't comply with the BRs

2019-10-07 Thread Jeremy Rowley via dev-security-policy
Yeah - I like the visibility here since I know I often forget to post the 
incident to the Mozilla list as well as post the bug. 

IMO - it's up to the CA to decide if they want to sign something in violation 
of the BRs and then it's up to the browsers what action is taken in response. 
I acknowledge this is somewhat a non-answer, but I think if the CA 
discloses why they are signing something, works with the community to decide 
the action taken is better than the alternative, accepts the risk to the audit, 
then they should do it, assuming the risk. The BRs are pretty rigid so there 
may be circumstances that merit violation of the requirements, but that 
violation should only be done with as much transparency as possible and 
considering all the positions. 

For example, suppose a root was created before a rule went into place and the 
root needs to be renewed for some reason. If the root was compliant before 
creation and modifying the profile would break something with the root, then 
there's a good argument that you shouldn't modify the root during the re-sign. 
That assumes the reasons are discussed here and alternatives are explored 
fully.  This should then be documented (including the reasons) in an incident 
report and the subsequent audit. 

Tl;dr - No, CAs shouldn't sign things that violate the BRs, even roots. But I 
could see there being reasons for the CA to do so.

(And I haven't scanned CT to discover if it is us. Crossing my fingers it's not 
😊. If I don't scan, it's like a terrible version of Christmas.)

-Original Message-
From: dev-security-policy  On 
Behalf Of Ryan Sleevi via dev-security-policy
Sent: Monday, October 7, 2019 10:07 AM
To: Jeremy Rowley 
Cc: mozilla-dev-security-policy 
Subject: Re: CAs cross-signing roots whose subjects don't comply with the BRs

On Mon, Oct 7, 2019 at 11:54 AM Jeremy Rowley 
wrote:

> Are both roots trusted in the Mozilla root store? If so, could you say 
> that Mozilla has approved of the root not-withstanding the non-compliance?
> If root 2 did go through the public review process and had the public 
> look at the certificate and still got embedded, then Mozilla perhaps 
> signed off on the root.
>

Good question!

Yes, it turns out that a version of this cross-sign is included, and while 
there was a public discussion phase, this non-compliance was not detected 
during the inclusion request nor part of the discussion. In fact, there were 
zero comments during the public discussion phase.


> That said, I don't personally see the harm in incident reports (other 
> than the fact that they can be used for negative marketing). They are 
> there for documenting issues and making the public aware of issues. 
> Like qualified audits, they don't necessarily mean something terrible 
> since they represent a disclosure/record of some kind. Even if the 
> incident report is open, discussed, and closed pretty quickly, then 
> you end up with a record that can be pointed to.  Filing more 
> incident reports (as long as they are different issues) is a good thing 
> as it gives extra transparency in the CA's operations that is easily 
> discoverable and catalogable. Makes data analytics easier and you can 
> go back through the incidents to see how things are changing with the CA.
>

Well, the reason I raised it here, rather than as an incident, was to try and 
nail down the expectations here. For example, would it be better to have that 
discussion on the incident, with "Foo" arguing "You approved it, ergo it's not 
a violation to cross-sign it"? Or would it be better to have visibility here, 
perhaps in the abstract (even if it is trivial to scan CT and figure out which 
CA I'm talking about), if only to get folks expectations here on whether or not 
new certificates should be signed that violate the BRs?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CAs cross-signing roots whose subjects don't comply with the BRs

2019-10-07 Thread Jeremy Rowley via dev-security-policy
Are both roots trusted in the Mozilla root store? If so, could you say that 
Mozilla has approved of the root not-withstanding the non-compliance? If root 2 
did go through the public review process and had the public look at the 
certificate and still got embedded, then Mozilla perhaps signed off on the root.

That said, I don't personally see the harm in incident reports (other than the 
fact that they can be used for negative marketing). They are there for 
documenting issues and making the public aware of issues. Like qualified 
audits, they don't necessarily mean something terrible since they represent a 
disclosure/record of some kind. Even if the incident report is open, discussed, 
and closed pretty quickly, then you end up with a record that can be pointed 
to.  Filing more incident reports (as long as they are different issues) is a 
good thing as it gives extra transparency in the CA's operations that is easily 
discoverable and catalogable. Makes data analytics easier and you can go back 
through the incidents to see how things are changing with the CA.  

-Original Message-
From: dev-security-policy  On 
Behalf Of Ryan Sleevi via dev-security-policy
Sent: Monday, October 7, 2019 9:35 AM
To: Jakob Bohm 
Cc: mozilla-dev-security-policy 
Subject: Re: CAs cross-signing roots whose subjects don't comply with the BRs

On Mon, Oct 7, 2019 at 11:26 AM Jakob Bohm via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:

> On 07/10/2019 16:52, Ryan Sleevi wrote:
> > I'm curious how folks feel about the following practice:
> >
> > Imagine a CA, "Foo", that creates a new Root Certificate ("Root 1"). 
> > They create this Root Certificate after the effective date of the 
> > Baseline Requirements, but prior to Root Programs consistently 
> > requiring
> compliance
> > with the Baseline Requirements (i.e. between 2012 and 2014). This 
> > Root Certificate does not comply with the BRs' rules on Subject: 
> > namely, it omits the Country field.
>
> Clarification needed: Does it omit Country from the DN of the root 1 
> itself, from the DN of intermediary CA certs and/or from the DN of End 
> Entity certs?
>

It's as I stated: The Subject of the Root Certificate omits the Country field.


> >
> > Later, in 2019, Foo takes their existing Root Certificate ("Root 
> > 2"), included within Mozilla products, and cross-signs the Subject. 
> > This now creates a cross-signed certificate, "Root 1 signed-by Root 
> > 2", which has
> a
> > Subject field that does not comport with the Baseline Requirements.
>
> Nit: Signs the Subject => Signs Root 1
>

Perhaps it would be helpful if you were clearer about what you believe you were 
correcting.

I thought I was very precise here, so it's useful to understand your
confusion:

Root 2, a root included in Mozilla products, cross-signs Root 1, a root which 
omits the Country field from the Subject.

This creates a certificate, whose issuer is Root 2 (a Root included in Mozilla 
Products), and whose Subject is Root 1. The Subject of Root 1 does not meet the 
BRs' requirements on Subjects for intermediate/root
certificates: namely, the certificate issued by Root 2 omits the C, because 
Root 1 omits the C.
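
As a rough illustration (not part of the original message), a minimal Python sketch of the check in question, using the cryptography library with a hypothetical file name, might look like this:

    # Minimal sketch: does the Subject carried into the cross-sign omit countryName?
    from cryptography import x509
    from cryptography.x509.oid import NameOID

    def subject_omits_country(cert):
        return len(cert.subject.get_attributes_for_oid(NameOID.COUNTRY_NAME)) == 0

    with open("root1-signed-by-root2.pem", "rb") as f:  # hypothetical file name
        cross_sign = x509.load_pem_x509_certificate(f.read())
    print("Subject omits C:", subject_omits_country(cross_sign))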
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Policy 2.7 Proposal: Forbid Delegation of Email Validation for S/MIME Certificates

2019-10-04 Thread Jeremy Rowley via dev-security-policy
I’m thinking more in terms of the potential rule in the Mozilla policy. If the 
rule is “the CA MUST verify the domain component of the email address” then the 
rule potentially prohibits the scenario where the CA verifies the entire email 
address, not the domain component, by sending a random value to each email 
address and requiring the email address holder to approve issuance. I actually 
liked the previous requirement prohibiting delegation of email address 
verification, although the rule lacked clarity on what email address 
verification entailed. I figure that will be defined in the s/MIME working 
group.

Describing the actors is a good way to look at it though. Besides those three 
buckets of issuers, you have the RAs, the email holders, and the organization 
controlling the domain portion of the email address. These entities may not be 
the same as the CA. More often than not, the RA ends up being the organization 
contracting with the CA for the s/MIME services. The RAs are the risky party 
that I think should be restricted on what they can verify since that’s where 
the lack of transparency starts to come in. With the prohibition against 
delegation of email control eliminated, we’re again placing domain/email 
control responsibilities on a party that has some incentive to misuse it (to 
read email of a third party) without audit, technical, or policy controls that 
limit their authority.  Because there is a lack of controls over the RAs, they 
become a hidden layer in the process that can issue certificates without anyone 
looking at how they are verifying the email address or domain name and whether 
these processes are equivalent to the controls found in the BRs.  Similar to 
TLS, the unaudited party should not be the one providing or verifying 
acceptance of the tokens used to approve issuance.

In short, I’m agreeing with the “at least” verifying the domain control 
portion. However, I know we verify a lot of email addresses directly with the 
email owner that doesn’t have control over the domain name. So the rule should 
be something that permits verification by the root CA of either the full email 
address or the domain name but at least eliminates delegation to non-audited 
third parties. For phrasing, “the CA MUST verify either the domain component of 
the email address or the entire email address using a process that is 
substantially similar to the process used to verify domain names as described 
in the Baseline Requirements”, with the understanding that we will rip out the 
language and replace it with the s/MIME requirements once those are complete at 
the CAB Forum.
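
As a rough illustration of the two paths described above (not part of the original message), a minimal Python sketch, with hypothetical names and in-memory storage, might look like this:

    # Minimal sketch: rely on an already-validated domain part, or challenge the
    # full mailbox with a high-entropy random value and require it to be echoed back.
    import secrets

    validated_domains = {"foo.example"}  # domains already validated (hypothetical)
    pending_challenges = {}              # email -> expected random value

    def start_email_validation(email):
        _local_part, _, domain = email.rpartition("@")
        if domain.lower() in validated_domains:
            return "domain part already validated; no mailbox challenge needed"
        token = secrets.token_urlsafe(32)  # comparable entropy to a BR random value
        pending_challenges[email] = token
        return "send %s to %s and require it to be returned" % (token, email)

    def confirm_email_validation(email, presented_token):
        expected = pending_challenges.get(email, "")
        return secrets.compare_digest(expected, presented_token)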

Jeremy

From: Ryan Sleevi 
Sent: Friday, October 4, 2019 10:56 PM
To: Jeremy Rowley 
Cc: Kathleen Wilson ; Wayne Thayer ; 
mozilla-dev-security-policy 
Subject: Re: Policy 2.7 Proposal: Forbid Delegation of Email Validation for 
S/MIME Certificates

Jeremy:

Could you describe a bit more who the actors are?

Basically, it seems that the actual issuance is going to fall into one of 
several buckets:
1) Root CA controls Issuing CAs key
2) Issuing CA controls its own key, but is technically constrained
3) Issuing CA controls its own key, and is not technically constrained

We know #1 is covered by Root CA’s audit, and we know #3 is covered by Issuing 
CA’s audit, and #2 is technically constrained and thus the domain part is 
apriori validated.
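
As a rough illustration of case #2 (not part of the original message), a minimal Python sketch using the cryptography library might check whether an S/MIME issuing CA is technically constrained to specific mail domains:

    # Minimal sketch: an S/MIME issuing CA whose name constraints limit it to
    # specific rfc822 name spaces has its domain part validated up front.
    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    def smime_technically_constrained(cert):
        try:
            eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
            nc = cert.extensions.get_extension_for_class(x509.NameConstraints).value
        except x509.ExtensionNotFound:
            return False
        if ExtendedKeyUsageOID.EMAIL_PROTECTION not in eku:
            return False  # not an S/MIME issuing CA; a different analysis applies
        permitted = nc.permitted_subtrees or []
        return any(isinstance(name, x509.RFC822Name) for name in permitted)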

So when you say “some organizations”, I’m trying to understand which of the 
three cases here they fall under. If I understand correctly, the idea is that 
Customer Foo approaches Root CA (Case #1). Root CA knows Foo’s namespace is 
foo.example through prior verification, and Root CA allows Foo to issue to 
*@foo.example. Then Foo says “oh, hey, we have a 
contractor at user@bar.example, we’d like a cert for 
them too”.

Why can’t Root CA verify themselves? Why would or should Root CA trust Foo to 
do it correctly? I can imagine plenty of verification protocols where Foo can 
be the “face” of the verification, but that it uses Root CAs APIs and systems 
under the hood. I‘m fairly certain DigiCert has experience doing this for their 
customers, such as for white label CAs.

So that’s why I’m struggling to understand the use case, or the challenges, 
with at least requiring domain-part validation by the CA.

On Fri, Oct 4, 2019 at 8:09 PM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
Will this permit either verification of the email address or the domain part? 
For example, some organizations may verify their entire domain space and then 
confirm contractors using a random value sent to the email address itself. They 
don't  need the entire domain space in those cases, but they do need to issue 
certificates for a few email addresses outside of their domain control. 
Verification of email control using a random value seems like it affords 
controls that are equi

RE: Policy 2.7 Proposal:Extend Section 8 to Encompass Subordinate CAs

2019-10-04 Thread Jeremy Rowley via dev-security-policy
I did flag that part as wearing my personal hat 😊.

The Trust Italia Sub CA is an example of where confusion may arise in the 
policy and where the complexity arises in these relationships. This was not 
necessarily a “new” CA from what I understood. This is also why I qualified the 
shut down in 2020 as the non-browser TLS. We aren’t expanding the sub CAs 
beyond what exists while we figure out what to do with existing infrastructures 
that are using them.

That said, still wearing a personal hat, I do think the community should have 
access to this information and review the entities that transitively chain up 
to the Mozilla root program for sMIME or TLS. There should be a review of all 
these sub CAs, including the Trust Italia one we replaced. The only reason I 
can see grandfathering is if there are too many to review at one time. Then 
grandfather the new ones and review before a new CA is required to issue. 
That’ll spur CAs to bring the on-prem Sub CAs forward for review.

The bigger question to me, is how should this review take place? Should the 
root CA sponsor the sub CA and talk about the infrastructure and operations? I 
think there should be an established process of how this occurs, and the 
process is probably slightly different than roots because of the extra party 
(root CA) involved.

From: Wayne Thayer 
Sent: Friday, October 4, 2019 12:40 PM
To: Jeremy Rowley 
Cc: mozilla-dev-security-policy 
Subject: Re: Policy 2.7 Proposal:Extend Section 8 to Encompass Subordinate CAs

Thanks Jeremy.

On Thu, Oct 3, 2019 at 5:06 PM Jeremy Rowley 
<jeremy.row...@digicert.com> wrote:
Hey Wayne,

I think there might be confusion on how the notification is supposed to happen. 
Is notification through CCADB sufficient? We've uploaded all of the Sub CAs to 
CCADB including the technically constrained ICAs. Each one that is 
hosted/operated by itself is marked that way using the Subordinate CA Owner 
field. Section 8 links to emailing 
certifica...@mozilla.org but operationally, 
CCADB has become the default means of providing this notice. If you're 
expecting email, that may be worth clarifying in case CAs missed that an email 
is required. I know I missed that, and because CCADB is the common method of 
notification there is a chance that notice was considered sent but not in the 
expected way.

Considering that section 8 links to an email address where it states "MUST 
notify Mozilla", I'm skeptical that there is 
confusion, but I do agree that it makes sense for the notification to be 
triggered via an update to CCADB rather than an email. I'll look in to this.

There's also confusion over the "new to Mozilla" language I think. I 
interpreted this language as covering organizations issued cross-signs after the policy date. 
For example, Siemens operated a Sub CA through Quovadis prior to the policy date so 
they aren't "new" to the CA space even if they were re-certified.

That's the correct interpretation, barring any further clarifications...

However, they would be new in the sense you identified - they haven't gone 
through an extensive review by the community.  If the goal is to ensure the 
community review happens for each Sub CA, then requiring all recertifications 
to go through an approval process makes sense instead of making an exception 
for new. I'm not sure how many exist currently, but if there are not that many 
organizations, does a grandfathering clause cause unnecessary complexity? I 
realize this is not in DigiCert's best interest, but the community may benefit 
the most by simply requiring a review of all Sub CAs instead of trying to 
grandfather in existing cross-signs.  Do you have an idea on the number that 
might entail? At worst, we waste a bunch of time discovering that all of these 
are perfectly operated and that they could have been grandfathered in the first 
place. At best, we identify some critical issues and resolve them as a 
community.

It appears to be at least 2 dozen organizations, based on the "subordinate CA 
owner" field in CCADB. I say "at least" because, as Dimitris noted, we're just 
now identifying intermediates that are incorrectly labeled as falling under the 
parent certificate's audits.

If there are a significant number of unconstrained on-prem CAs, then language 
that requires a review on re-signing would be helpful.  Perhaps say "As of X 
date, a CA MUST NOT sign a non-technically constrained certificate where 
cA=True for keys that are hosted external to the CA's infrastructure or that 
are not operated in accordance with the issuing CA's policies and procedures 
unless Mozilla has first granted permission for such certificate"? The wording 
needs work of course, but the idea is that they go through the discussion and 
Mozilla signs off. A proce

RE: Policy 2.7 Proposal: Forbid Delegation of Email Validation for S/MIME Certificates

2019-10-04 Thread Jeremy Rowley via dev-security-policy
Will this permit either verification of the email address or the domain part? 
For example, some organizations may verify their entire domain space and then 
confirm contractors using a random value sent to the email address itself. They 
don't  need the entire domain space in those cases, but they do need to issue 
certificates for a few email addresses outside of their domain control. 
Verification of email control using a random value seems like it affords 
controls that are equivalent to the challenge-response mechanisms in the BRs.  

-Original Message-
From: dev-security-policy  On 
Behalf Of Wayne Thayer via dev-security-policy
Sent: Friday, October 4, 2019 2:38 PM
To: Kathleen Wilson 
Cc: mozilla-dev-security-policy 
Subject: Re: Policy 2.7 Proposal: Forbid Delegation of Email Validation for 
S/MIME Certificates

I'd like to revive this discussion. So far we've established that the existing 
"required practice" [1] is too stringent for email address validation and needs 
to be changed. We can do that by removing email addresses from the scope of the 
requirement as Kathleen proposed, or by exempting the local part of the email 
address as I proposed earlier:

"CAs MUST NOT delegate validation of the domain name part of an email address 
to a 3rd party."

We have a fairly detailed explanation from Ryan Hurst of why and how removing 
the requirement entirely is beneficial, but no one else has spoken in favor of 
this need. Kathleen did however point out that this requirement doesn't appear 
to be the result of a thorough analysis. We have Ryan Sleevi arguing that the 
process described by Ryan Hurst is insecure and thus a reason to forbid 
delegation of validation of the domain name part. Pedro Fuentes also wrote in 
favor of this outcome.

One thing that might help to resolve this is a more detailed description of the 
weaknesses that are present in the process described by Ryan Hurst. If we can 
all agree that the process is vulnerable, then it seems that we'd have a strong 
argument for banning it.

- Wayne

[1]
https://wiki.mozilla.org/CA/Forbidden_or_Problematic_Practices#Delegation_of_Domain_.2F_Email_Validation_to_Third_Parties


On Thu, May 23, 2019 at 12:22 PM Kathleen Wilson via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:

> On 5/13/19 10:24 AM, Wayne Thayer wrote:
> > The BRs forbid delegation of domain and IP address validation to 
> > third parties. However, the BRs don't forbid delegation of email 
> > address validation nor do they apply to S/MIME certificates.
> >
> > Delegation of email address validation is already addressed by 
> > Mozilla's Forbidden Practices [1] state:
> >
> > "Domain and Email validation are core requirements of the Mozilla's 
> > Root Store Policy and should always be incorporated into the issuing 
> > CA's procedures. Delegating this function to 3rd parties is not permitted."
> >
> > I propose that we move this statement (changing "the Mozilla's Root 
> > Store Policy" to "this policy") into policy section 2.2 "Validation 
> > Practices".
> >
> > This is https://github.com/mozilla/pkipolicy/issues/175
> >
> > I will appreciate everyone's input on this proposal.
> >
> > - Wayne
> >
> > [1]
> >
> https://wiki.mozilla.org/CA/Forbidden_or_Problematic_Practices#Delegat
> ion_of_Domain_.2F_Email_Validation_to_Third_Parties
> >
>
>
> All,
>
> As the person who filed the Github issue for this, I would like to 
> provide some background and my opinion.
>
> Currently the 'Delegation of Domain / Email Validation to Third Parties'
> section of the 'Forbidden Practices' page says:
> "This is forbidden by the Baseline Requirements, section 1.3.2.
> Domain and Email validation are core requirements of the Mozilla's 
> Root Store Policy and should always be incorporated into the issuing 
> CA's procedures. Delegating this function to 3rd parties is not permitted."
>
> Based on the way that section is written, it appears that domain 
> validation (and the BRs) was the primary consideration, and that the 
> Email part of it was an afterthought, or added later. Historically, my 
> attention has been focused on TLS certs, so it is possible that the 
> ramifications of adding Email validation to this section was not fully 
> thought through.
>
> I don't remember who added this email validation text or when, but I 
> can tell you that when I review root inclusion requests I have only 
> been concerned about making sure that domain validation is not being 
> delegated to 3rd parties. It wasn't until a representative of a CA 
> brought this to my attention that I realized that there has been a 
> difference in text on this wiki page versus the rules I have been 
> trying to enforce. That is when I filed the github issue.
>
> I propose that we can resolve this discrepancy for now by removing "/ 
> Email Validation" from the title of the section and removing "and Email"
> from the contents of the section.
>
> Unless we believe there are significant sec

RE: OCSP responder support for SHA256 issuer identifier info

2019-10-04 Thread Jeremy Rowley via dev-security-policy
(And, for the record, none of the legacy infrastructure that Ryan 
mentions as taking years to update exists anymore. Yay for shutting down legacy 
systems!)

-Original Message-
From: dev-security-policy  On 
Behalf Of Jeremy Rowley via dev-security-policy
Sent: Friday, October 4, 2019 12:35 PM
To: Tomas Gustavsson ; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: RE: OCSP responder support for SHA256 issuer identifier info

The CAB forum specifies that OCSP responses MUST conform to RFC5019 or RFC 
6960.  The requirements do not specify which RFC to follow when processing 
requests, but I think you can infer that either one is required, right?  

Section 2.1.1. specifies that:  
Clients MUST use SHA1 as the hashing algorithm for the CertID.issuerNameHash 
and the CertID.issuerKeyHash values. Anyone implementing the BRs would expect 
SHA1 for both fields. Where does the requirement to support SHA256 come in? As 
Ryan mentioned, there was some discussion, but it seems like there was nothing 
settled. I'd support a ballot clarifying the profile, but I don't understand 
the value of requiring both SHA1 and SHA2 signatures for OCSP. Doesn't it just 
make OCSP more cumbersome? 
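
As a rough illustration of what "supporting SHA256 in CertID" means in practice (not part of the original message), a minimal Python sketch using the cryptography library, with hypothetical file names, might look like this:

    # Minimal sketch: the same certificate/issuer pair yields different
    # issuerNameHash/issuerKeyHash values depending on the hash the client picks.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.x509 import ocsp

    with open("leaf.pem", "rb") as f:       # hypothetical file names
        leaf = x509.load_pem_x509_certificate(f.read())
    with open("issuer.pem", "rb") as f:
        issuer = x509.load_pem_x509_certificate(f.read())

    req_sha1 = ocsp.OCSPRequestBuilder().add_certificate(leaf, issuer, hashes.SHA1()).build()
    req_sha256 = ocsp.OCSPRequestBuilder().add_certificate(leaf, issuer, hashes.SHA256()).build()

    # A responder keyed only on SHA-1 issuerNameHash/issuerKeyHash pairs will not
    # recognize the SHA-256 CertID unless it also indexes the SHA-256 pair.
    print(req_sha1.issuer_name_hash.hex(), req_sha1.issuer_key_hash.hex())
    print(req_sha256.issuer_name_hash.hex(), req_sha256.issuer_key_hash.hex())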

-Original Message-
From: dev-security-policy  On 
Behalf Of Tomas Gustavsson via dev-security-policy
Sent: Friday, October 4, 2019 1:45 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: OCSP responder support for SHA256 issuer identifier info

I was pointed to this interesting discussion. We were forced to support 
requests with SHA256 in CertID back in 2014. Not for any relevant security 
reasons, just because some stubborn auditors saw a red flag at the mention 
of SHA-1.

We've implemented it by having both hashes in the lookup table where we check 
for issuer when a response comes in. 

What to have in the response was an interesting topic.

In the response we use the same certID that the client sent. I would expect 
that any client checking the CertID in the response would expect it to match 
what they sent. 

I'm suspicious of adding two SingleResponse in the response, one for each 
CertID. 
- Clients are used to one response; they may fail verification if the first one 
doesn't have the same CertID
- When auditors start requiring SHA-3, shall we add three? That approach does not 
seem agile.
- It increases the size of responses; we've been told before about the desire 
to keep responses as small as possible (typically to fit in a single Ethernet 
frame)

Regards,
Tomas

On Thursday, September 19, 2019 at 7:45:10 PM UTC+2, Ryan Sleevi wrote:
> Thanks for raising this!
> 
> There was some slight past discussion in the CA/B Forum on this - 
> https://cabforum.org/pipermail/public/2013-November/002440.html - as 
> well as a little during the SHA-1 deprecation discussions ( 
> https://cabforum.org/pipermail/public/2016-November/008979.html ) and 
> crypto agility discussions ( 
> https://cabforum.org/pipermail/public/2014-September/003921.html ), 
> but none really nailed it down to the level you have.
> 
> Broadly, it suggests the need for a much tighter profile of OCSP, 
> either within policies or the BRs. Two years ago, I started work on 
> such a thing -
> https://github.com/sleevi/cabforum-docs/pull/2 - but a certain large 
> CA suggested it would take them years to even implement that, and it 
> wouldn't have covered this!
> 
> I can't see #3 being valid, but I can see and understand good 
> arguments for
> #1 and #4. I don't think #5 works, because of Section 2.3 of RFC 6960.
> 
> The question about whether #2 is valid is about whether or not a 
> client should be expected to be able to match the CertID in the 
> OCSPRequest.requestList to the CertID in the 
> OCSPResponse.BasicOCSPResponse.responses list. 4.2.2.3 requires that 
> the response MUST include a SingleResponse for each certificate in the 
> request, but may include additional, and so a client encountering a
> SHA-1 computed CertID in response to a SHA-256 CertID would have to 
> recompute all the CertIDs to see if it matched. On the other hand, RFC
> 5019 2.2.1 states that "In the case where a responder does not have 
> the ability to respond to an OCSP response containing an option not 
> supported by the server, it SHOULD return the most complete response it can."
> 
> A different question would be whether said responder, in response to a
> SHA-1 request, can and/or should provide a response with both a SHA-1 
> computed CertID AND a SHA-256 computed CertID. This would improve the 
> pre-generation performance that Rob was concerned about, and allow 
> both
> SHA-1 and SHA-2 requests to be satisfied by the same BasicOCSPResponse.
> 
> However, one concern with the pre-generation approach is that 4.2.2.3 
> requires that the res

RE: OCSP responder support for SHA256 issuer identifier info

2019-10-04 Thread Jeremy Rowley via dev-security-policy
The CAB forum specifies that OCSP responses MUST conform to RFC5019 or RFC 
6960.  The requirements do not specify which RFC to follow when processing 
requests, but I think you can infer that either one is required, right?  

Section 2.1.1. specifies that:  
Clients MUST use SHA1 as the hashing algorithm for the CertID.issuerNameHash 
and the CertID.issuerKeyHash values. Anyone implementing the BRs would expect 
SHA1 for both fields. Where does the requirement to support SHA256 come in? As 
Ryan mentioned, there was some discussion, but it seems like there was nothing 
settled. I'd support a ballot clarifying the profile, but I don't understand 
the value of requiring both SHA1 and SHA2 signatures for OCSP. Doesn't it just 
make OCSP more cumbersome? 

-Original Message-
From: dev-security-policy  On 
Behalf Of Tomas Gustavsson via dev-security-policy
Sent: Friday, October 4, 2019 1:45 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: OCSP responder support for SHA256 issuer identifier info

I was pointed to this interesting discussion. We were forced to support 
requests with SHA256 in CertID back in 2014. Not for any relevant security 
reasons, just because some stubborn auditors saw a red flag at the mention 
of SHA-1.

We've implemented it by having both hashes in the lookup table where we check 
for issuer when a response comes in. 

What to have in the response was an interesting topic.

In the response we use the same certID that the client sent. I would expect 
that any client checking the CertID in the response would expect it to match 
what they sent. 

I'm suspicious of adding two SingleResponse in the response, one for each 
CertID. 
- Clients are used to one response; they may fail verification if the first one 
doesn't have the same CertID
- When auditors start requiring SHA-3, shall we add three? That approach does not 
seem agile.
- It increases the size of responses; we've been told before about the desire 
to keep responses as small as possible (typically to fit in a single Ethernet 
frame)

Regards,
Tomas

On Thursday, September 19, 2019 at 7:45:10 PM UTC+2, Ryan Sleevi wrote:
> Thanks for raising this!
> 
> There was some slight past discussion in the CA/B Forum on this - 
> https://cabforum.org/pipermail/public/2013-November/002440.html - as 
> well as a little during the SHA-1 deprecation discussions ( 
> https://cabforum.org/pipermail/public/2016-November/008979.html ) and 
> crypto agility discussions ( 
> https://cabforum.org/pipermail/public/2014-September/003921.html ), 
> but none really nailed it down to the level you have.
> 
> Broadly, it suggests the need for a much tighter profile of OCSP, 
> either within policies or the BRs. Two years ago, I started work on 
> such a thing -
> https://github.com/sleevi/cabforum-docs/pull/2 - but a certain large 
> CA suggested it would take them years to even implement that, and it 
> wouldn't have covered this!
> 
> I can't see #3 being valid, but I can see and understand good 
> arguments for
> #1 and #4. I don't think #5 works, because of Section 2.3 of RFC 6960.
> 
> The question about whether #2 is valid is about whether or not a 
> client should be expected to be able to match the CertID in the 
> OCSPRequest.requestList to the CertID in the 
> OCSPResponse.BasicOCSPResponse.responses list. 4.2.2.3 requires that 
> the response MUST include a SingleResponse for each certificate in the 
> request, but may include additional, and so a client encountering a 
> SHA-1 computed CertID in response to a SHA-256 CertID would have to 
> recompute all the CertIDs to see if it matched. On the other hand, RFC 
> 5019 2.2.1 states that "In the case where a responder does not have 
> the ability to respond to an OCSP response containing an option not 
> supported by the server, it SHOULD return the most complete response it can."
> 
> A different question would be whether said responder, in response to a
> SHA-1 request, can and/or should provide a response with both a SHA-1 
> computed CertID AND a SHA-256 computed CertID. This would improve the 
> pre-generation performance that Rob was concerned about, and allow 
> both
> SHA-1 and SHA-2 requests to be satisfied by the same BasicOCSPResponse.
> 
> However, one concern with the pre-generation approach is that 4.2.2.3 
> requires that the response MUST include a SingleResponse for each 
> certificate in the request. RFC 5019 2.1.1 limits clients using that 
> profile to only include one Request in the OCSPRequest.RequestList 
> (via a MUST). So should responders be permitted to reject requests 
> that include multiple? Or should they be required to do online 
> signing? Similar to extensions.
> 
> This suggests we should actually nail down and define what we expect, 
> perhaps as a clear processing algorithm for how a Responder must 
> respond to various requests. I suspect that "What we want" is a 
> profile of RFC 5019 that nails down the SHOULD / SHOULD NOT and MAY /

RE: Policy 2.7 Proposal:Extend Section 8 to Encompass Subordinate CAs

2019-10-03 Thread Jeremy Rowley via dev-security-policy
Hey Wayne, 

I think there might be confusion on how the notification is supposed to happen. 
Is notification through CCADB sufficient? We've uploaded all of the Sub CAs to 
CCADB including the technically constrained ICAs. Each one that is 
hosted/operated by itself is marked that way using the Subordinate CA Owner 
field. Section 8 links to emailing certifica...@mozilla.org but operationally, 
CCADB has become the default means of providing this notice. If you're 
expecting email, that may be worth clarifying in case CAs missed that an email 
is required. I know I missed that, and because CCADB is the common method of 
notification there is a chance that notice was considered sent but not in the 
expected way. 

There's also confusion over the "new to Mozilla" language I think. I 
interpreted this language as covering organizations issued cross-signs after the policy date. 
For example, Siemens operated a Sub CA through Quovadis prior to the policy date so 
they aren't "new" to the CA space even if they were re-certified. However, they 
would be new in the sense you identified - they haven't gone through an 
extensive review by the community.  If the goal is to ensure the community 
review happens for each Sub CA, then requiring all recertifications to go 
through an approval process makes sense instead of making an exception for new. 
I'm not sure how many exist currently, but if there are not that many 
organizations, does a grandfathering clause cause unnecessary complexity? I 
realize this is not in DigiCert's best interest, but the community may benefit 
the most by simply requiring a review of all Sub CAs instead of trying to 
grandfather in existing cross-signs.  Do you have an idea on the number that 
might entail? At worst, we waste a bunch of time discovering that all of these are 
perfectly operated and that they could have been grandfathered in the first 
place. At best, we identify some critical issues and resolve them as a 
community. 

If there are a significant number of unconstrained on-prem CAs, then language 
that requires a review on re-signing would be helpful.  Perhaps say "As of X 
date, a CA MUST NOT sign a non-technically constrained certificate where 
cA=True for keys that are hosted external to the CA's infrastructure or that 
are not operated in accordance with the issuing CA's policies and procedures 
unless Mozilla has first granted permission for such certificate"? The wording 
needs work of course, but the idea is that they go through the discussion and 
Mozilla signs off. A process for unconstrained Sub CAs that is substantially 
similar to the root inclusion makes sense, but there is documentation on CCADB 
for the existing ones. Still, this documentation should probably made 
available, along with the previous incident reports, to the community for 
review and discussion. After all, anything not fully constrained is essentially 
operating the same as a fully embedded root.

Speaking on a personal, non-DigiCert note, I think on-prem sub CAs are a bad 
idea, and I fully support more careful scrutiny on which entities are 
controlling keys. Looking at the DigiCert metrics, the on-prem Sub CAs are 
responsible for over half of the incident reports, with issues ranging from 
missed audit dates to incorrect profile information. The long cycle in getting 
information, being a middle-man for information gathering, and trying to convey 
both Mozilla and CAB Forum policy makes controlling compliance very difficult, 
and a practice I would not recommend to any CA. Once you've established a 
relationship as a signer CA (or acquired a relationship), extricating yourself 
is... difficult.  The certificates end up embedded on smart cards, used by 
government institutions and pinned in weird places. And the unfortunate part is 
you don't have the direct relationship with the end-user to offer counsel 
against some of the practices. That extra abstraction layer between the CA and 
root store program ends up adding a lot more complexity than you'd initially 
think. Delegating the CA responsibility ends up being a bad idea and takes 
years to fix. DigiCert is finally down to the final few TLS sub CAs (5) and 
each are operating in OCSP signing mode only. They'll all be revoked in 2020. 

Jeremy


-Original Message-
From: dev-security-policy  On 
Behalf Of Wayne Thayer via dev-security-policy
Sent: Thursday, October 3, 2019 2:45 PM
To: mozilla-dev-security-policy 
Subject: Re: Policy 2.7 Proposal:Extend Section 8 to Encompass Subordinate CAs

I'd like to revisit this topic because I see it as a significant change and am 
surprised that it didn't generate any discussion.

Taking a step back, a change [1] to notification requirements was made last 
year to require CAs that are signing unconstrained subordinate CAs (including 
cross-certs) controlled by a different organization to notify Mozilla. We have 
received few, if any, notifications of this nature, so I have to wonder if CAs 
are adhering to th

RE: Next Root Store Policy Update

2019-10-02 Thread Jeremy Rowley via dev-security-policy
One suggestion on incident reports is to define "regularly update" as some 
period of time, since non-responses can result in additional incident reports.  
Maybe something along the lines of "the greater of every 7 days, the time 
period specified in the next update field by Mozilla, or the time period for 
the next update as agreed upon with Mozilla". I'd also change "the 
corresponding bug is resolved by a Mozilla representative" to "the 
corresponding bug is marked as resolved in bugzilla by a Mozilla 
representative" since the CA is resolving the actual bug, and Mozilla is 
managing its perception on the bug's status.

Jeremy

-Original Message-
From: dev-security-policy  On 
Behalf Of Wayne Thayer via dev-security-policy
Sent: Wednesday, October 2, 2019 4:17 PM
To: mozilla-dev-security-policy 
Subject: Re: Next Root Store Policy Update

Over the past 3 months, a number of other projects distracted me from this 
work. Now I'd like to focus on finishing these updates to our Root Store 
policy. There are roughly 6 issues remaining to be discussed, and I will, as 
always, greatly appreciate everyone's input on them. I'll be sending out 
individual emails on each issue.

Meanwhile, you can view a redline of the changes we previously agreed on:
https://github.com/mozilla/pkipolicy/compare/master...2.7

- Wayne

On Wed, Mar 27, 2019 at 4:12 PM Wayne Thayer  wrote:

> I've added a few more issues that were recently created to the list 
> for
> 2.7: https://github.com/mozilla/pkipolicy/labels/2.7
>
> 176 - Clarify revocation requirements for S/MIME certs
> 175 - Forbidden Practices wiki page says email validation cannot be 
> delegated to 3rd parties
>
> I plan to begin posting issues for discussion shortly.
>
>
> On Fri, Mar 8, 2019 at 2:12 PM Wayne Thayer  wrote:
>
>> Later this month, I would like to begin discussing a number of 
>> proposed changes to the Mozilla Root Store policy [1]. I have 
>> reviewed the list of issues on GitHub and labeled the ones that I recommend 
>> discussing:
>> https://github.com/mozilla/pkipolicy/labels/2.7 They are:
>>
>> 173 - Strengthen requirement for newly included roots to meet all 
>> current requirements
>> 172 - Update section 5.3 to include Policy Certification Authorities 
>> as an exception to the mandatory EKU inclusion requirement
>> 171 - Require binding of CA certificates to CP/CPS
>> 170 - Clarify Section 5.1 about allowed ECDSA curve-hash pair 169, 
>> 140 - Extend Section 8 to also encompass subordinate CAs 168, 161, 
>> 158  - Require Incident Reports, move practices into policy
>> 163 - Require EKUs in end-entity certificates (S/MIME)
>> 162 - Require disclosure of CA software vendor/version in incident 
>> report
>> 159 - Clarify section 5.3.1 Technically Constrained
>> 152 - Add EV audit exception for policy constrained intermediates
>> 151 - Change PITRA to Point-in-Time assessment in section 8
>>
>> I will appreciate any feedback on the proposed list of issues to discuss.
>>
>> I do recognize that the current DarkMatter discussions could result 
>> in the need to add some additional items to this list.
>>
>> I have created a new branch for drafting these changes [1] and made 
>> one commit that adds a bullet to the BR Conformance section informing 
>> the reader that Mozilla policy has a more restrictive list of 
>> approved algorithms [3]
>>
>> As we've done in the past, I plan to post individual issues for 
>> discussion in small batches over the next few months, with the goal 
>> of finalizing version 2.7 by June.
>>
>> - Wayne
>>
>> [1]
>> https://www.mozilla.org/en-US/about/governance/policies/security-grou
>> p/certs/policy/ [2] 
>> https://github.com/mozilla/pkipolicy/blob/2.7/rootstore/policy.md
>> [3] https://github.com/mozilla/pkipolicy/issues/167
>>
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Policy 2.7 Proposal: Require EKUs in End-Entity Certificates

2019-10-02 Thread Jeremy Rowley via dev-security-policy
I'm surprised any CA has heartburn over the EKU changes. Microsoft has required 
them in end-entity certificates for quite some time. From the MS policy: 
"Effective February 1, 2017, all end-entity certificates must contain the EKU 
for the purpose that the CA issued the certificate to the customer, and the 
end-entity certificate may not use “any EKU.”" There's a chance that the CA is 
not in Microsoft's program, but I thought Mozilla usually had fewer CAs included 
than Microsoft. 
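
As a rough illustration of the Microsoft-style check described above (not part of the original message), a minimal Python sketch using the cryptography library might look like this:

    # Minimal sketch: an end-entity certificate should carry an EKU extension and
    # should not assert anyExtendedKeyUsage.
    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    def end_entity_eku_ok(cert):
        try:
            eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
        except x509.ExtensionNotFound:
            return False  # no EKU at all: exactly what the requirement forbids
        return ExtendedKeyUsageOID.ANY_EXTENDED_KEY_USAGE not in eku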

-Original Message-
From: dev-security-policy  On 
Behalf Of Wayne Thayer via dev-security-policy
Sent: Wednesday, October 2, 2019 6:05 PM
To: horn...@gmail.com
Cc: mozilla-dev-security-policy 
Subject: Re: Policy 2.7 Proposal: Require EKUs in End-Entity Certificates

On Tue, Jul 9, 2019 at 2:12 AM horn917--- via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:

> On Saturday, March 30, 2019 at 4:48:27 AM UTC+8, Wayne Thayer wrote:
> > The BRs require EKUs in leaf TLS certs, but there is no equivalent 
> > requirement for S/MIME certificates. This leads to confusion such as 
> > [1]
> in
> > which certificates that are not intended for TLS or S/MIME fall 
> > within
> the
> > scope of our policies.
> >
> > Simply requiring EKUs in S/MIME certificates won't solve the problem
> unless
> > we are willing to exempt certificates without an EKU from our 
> > policies,
> and
> > doing that would create a rather obvious loophole for issuing S/MIME 
> > certificates that don't adhere to our policies.
> >
> > The proposed solution is to require EKUs in all certificates that 
> > chain
> up
> > to roots in our program, starting on some future effective date (e.g.
> April
> > 1, 2020). This has the potential to cause some compatibility 
> > problems
> that
> > would be difficult to measure and assess. Before drafting language 
> > for
> this
> > proposal, I would like to gauge everyone's support for this proposal.
> >
> > Alternately, we could easily argue that section 1.1 of our existing
> policy
> > already makes it clear that CAs must include EKUs other than 
> > id-kp-serverAuth and id-kp-emailProtection in certificates that they 
> > wish to remain out of scope for our policies.
> >
> > This is https://github.com/mozilla/pkipolicy/issues/163
> >
> > I will greatly appreciate everyone's input on this topic.
> >
> > - Wayne
> >
> > [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1523221
>
>
> GPKI (Taiwan) will follow Mozilla policy and add EKUs to EE certificates.
> However, the scope of impact of this implementation is very large.
> We need to define our own Document Signing EKU and data encryption 
> EKU, and coordinate all subordinate CAs (five CAs) and application 
> system owners (more than 2,000 application systems).
> It needs a whole year to implement this. Therefore, after multiple 
> evaluations, we have decided to officially add the EKU to user 
> certificates on January 1, 2021.
> It is difficult for us to complete this ahead of April 2020.
> Can we get more buffer time?
>
>
I had expected to have this policy update completed sooner when I proposed 
April 2020 as the date for requiring EKUs in end-entity certificates. Given 
that, I think it's reasonable to push the date back to July 2020, but not to 
January 2021. 2021 seems particularly unreasonable in light of the Microsoft 
requirement [1] that went into effect in January 2017 and appears to apply to 
GPKI.

Will any other CAs find it impossible to meet this requirement for certificates 
issued after June 2020? Also, are there any concerns with this applying to 
certificates issued from technically constrained intermediate CA certificates? 
As-proposed, this applies to those certificates (and it appears to me that 
Microsoft's policy does as well).

- Wayne

[1]
https://docs.microsoft.com/en-us/security/trusted-root/program-requirements#4-program-technical-requirements
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: DigiCert OCSP services returns 1 byte

2019-09-12 Thread Jeremy Rowley via dev-security-policy
But the pre-cert exists and the pre-cert is a cert. I should be able to tell 
the status of a pre-cert because it really is a certificate.

-Original Message-
From: dev-security-policy  On 
Behalf Of Tim Shirley via dev-security-policy
Sent: Thursday, September 12, 2019 6:21 PM
To: r...@sleevi.com; Alex Cohn 
Cc: mozilla-dev-security-pol...@lists.mozilla.org; Jeremy Rowley 
; Wayne Thayer 
Subject: RE: DigiCert OCSP services returns 1 byte

Why would a user agent view a pre-certificate as "evidence that an equivalent 
certificate exists"?  It's evidence that it might exist, and that the CA had the 
intent to make it exist at the time they signed the precertificate. But I can't 
find anything in RFC 6962 that suggests to me that, if the CA doesn't follow 
through on that intent, their systems are required to behave as if it does 
exist.

And of course Mozilla could add a policy to require exactly that, as the 
proposed addition does.  But I'm struggling to see the practical value in doing 
so.

Regards,
Tim

-Original Message-
From: dev-security-policy  On 
Behalf Of Ryan Sleevi via dev-security-policy
Sent: Thursday, September 12, 2019 6:40 PM
To: Alex Cohn 
Cc: mozilla-dev-security-pol...@lists.mozilla.org; Wayne Thayer 
; Jeremy Rowley 
Subject: Re: DigiCert OCSP services returns 1 byte

On Thu, Sep 12, 2019 at 11:25 AM Alex Cohn via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:

> On Wed, Sep 11, 2019 at 10:09 PM Jeremy Rowley via dev-security-policy 
> < dev-security-policy@lists.mozilla.org> wrote:
>
> > This means, for example, that (i) a CA must provide OCSP services 
> > and responses in accordance with the Mozilla policy for all 
> > pre-certificates
> as
> > if corresponding certificate exists and (ii) a CA must be able to 
> > revoke
> a
> > pre-certificate if revocation of the certificate is required under 
> > the Mozilla policy and the corresponding certificate doesn't 
> > actually exist
> and
> > therefore cannot be revoked.
> >
>
> Should a CA using a precertificate signing certificate be required to 
> provide OCSP services for their precertificates? Or is it on the 
> relying party to calculate the proper OCSP request for the final 
> certificate and send that instead? In other words, should we expect a 
> CT-naïve OCSP checker to work normally when presented, e.g., with 
> https://crt.sh/?id=1868433277?
>

I think this may be the wrong framing. The issue is not about ensuring "a 
CT-naïve OCSP checker" can get responses for pre-certs. It's about ensuring 
that, from the point of view of a user agent that views a pre-certificate as 
evidence that an equivalent certificate exists, even if it's not known (or even 
if it was not actually issued), can they verify that OCSP services exist and 
are configured for that equivalent certificate?

In this scenario, because RFC 6962 establishes that, even when using a 
Precertificate Signing Certificate, it will have been directly issued by the CA 
Certificate that will ultimately issue the "final" certificate (...
or would be treated as if it had), then we have the (name-hash, key-hash) that 
Neil was referring to, and we can easily verify using that, for the serial 
number indicated in the pre-certificate, that the OCSP response can be verified 
using the issuer of the Precertificate Signing Certificate.

Have I overlooked some ambiguity?
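
To make the (name-hash, key-hash, serial) point concrete, here is a minimal sketch,
assuming the Python "cryptography" and "requests" packages and hypothetical local
files precert.pem and issuer.pem. Because the precertificate carries the same issuer
and serial number as the presumed final certificate, the request built from it is
the same request a client would send for that final certificate:

    # Minimal sketch; file names and the use of SHA-1 in the CertID are
    # illustrative assumptions, not a statement of any CA's practice.
    import requests
    from cryptography import x509
    from cryptography.x509 import ocsp
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509.oid import ExtensionOID, AuthorityInformationAccessOID

    with open("precert.pem", "rb") as f:
        precert = x509.load_pem_x509_certificate(f.read())
    with open("issuer.pem", "rb") as f:
        issuer = x509.load_pem_x509_certificate(f.read())

    # The CertID is built from the issuer name hash, issuer key hash, and the
    # serial number, all shared between the precertificate and the final cert.
    request = (
        ocsp.OCSPRequestBuilder()
        .add_certificate(precert, issuer, hashes.SHA1())
        .build()
    )

    # Take the OCSP URL from the precertificate's AIA extension.
    aia = precert.extensions.get_extension_for_oid(
        ExtensionOID.AUTHORITY_INFORMATION_ACCESS
    ).value
    ocsp_url = next(ad.access_location.value for ad in aia
                    if ad.access_method == AuthorityInformationAccessOID.OCSP)

    resp = requests.post(ocsp_url,
                         data=request.public_bytes(serialization.Encoding.DER),
                         headers={"Content-Type": "application/ocsp-request"})
    parsed = ocsp.load_der_ocsp_response(resp.content)
    print("response status:", parsed.response_status)
    if parsed.response_status == ocsp.OCSPResponseStatus.SUCCESSFUL:
        print("certificate status:", parsed.certificate_status)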
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: DigiCert OCSP services returns 1 byte

2019-09-12 Thread Jeremy Rowley via dev-security-policy
The language says you have to provide the response for the cert as if it 
exists, but the reality is that sending a response for the precert is the same 
as calculating the result for the certificate as if it exists and sending that. 
They are the same thing because the precert is treated the same as the final 
cert if the final cert doesn’t exist.

I believe the intent is that a CT-naïve OCSP checker would work normally when 
presented with a precert or a certificate. After all, a precert is really just a 
certificate with a special extension.
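
A small sketch of that last point, assuming the Python "cryptography" package and an
assumed DER-encoded certificate blob: structurally, the only thing that distinguishes
a precertificate from an ordinary certificate is the critical CT poison extension.

    # Minimal sketch; "der_bytes" is an assumed DER-encoded certificate blob.
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    def is_precertificate(der_bytes: bytes) -> bool:
        """True if the certificate carries the RFC 6962 poison extension."""
        cert = x509.load_der_x509_certificate(der_bytes)
        try:
            cert.extensions.get_extension_for_oid(ExtensionOID.PRECERT_POISON)
            return True
        except x509.ExtensionNotFound:
            return False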

From: Alex Cohn 
Sent: Thursday, September 12, 2019 9:25 AM
To: Jeremy Rowley 
Cc: Wayne Thayer ; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: DigiCert OCSP services returns 1 byte

On Wed, Sep 11, 2019 at 10:09 PM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
This means, for example, that (i) a CA must provide OCSP services and responses 
in accordance with the Mozilla policy for all pre-certificates as if 
corresponding certificate exists and (ii) a CA must be able to revoke a 
pre-certificate if revocation of the certificate is required under the Mozilla 
policy and the corresponding certificate doesn't actually exist and therefore 
cannot be revoked.

Should a CA using a precertificate signing certificate be required to provide 
OCSP services for their precertificates? Or is it on the relying party to 
calculate the proper OCSP request for the final certificate and send that 
instead? In other words, should we expect a CT-naïve OCSP checker to work 
normally when presented, e.g., with https://crt.sh/?id=1868433277?

Alex
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DigiCert OCSP services returns 1 byte

2019-09-11 Thread Jeremy Rowley via dev-security-policy
I think that's perfectly clear but I wanted to double check in case "perfectly 
clear" was me misreading it. One thing that does come up a lot is whether a CA 
has to revoke a pre-certificate if the certificate doesn't actually issue. I 
think this has been adequately answered on the bug lists but would be good to 
codify.

For the language maybe:

This means, for example, that (i) a CA must provide OCSP services and responses 
in accordance with the Mozilla policy for all pre-certificates as if 
corresponding certificate exists and (ii) a CA must be able to revoke a 
pre-certificate if revocation of the certificate is required under the Mozilla 
policy and the corresponding certificate doesn't actually exist and therefore 
cannot be revoked.



From: Wayne Thayer 
Sent: Wednesday, September 11, 2019 7:22:29 PM
To: Jeremy Rowley 
Cc: mozilla-dev-security-pol...@lists.mozilla.org 

Subject: Re: DigiCert OCSP services returns 1 byte

Correct. That's what I intended to convey with the last sentence:

This means, for example, that the requirements for OCSP for end-entity 
certificates apply even when a CA has issued a precertificate without issuing a 
corresponding certificate.


Do you have any suggestions for how I can improve/clarify?


On Wed, Sep 11, 2019 at 6:19 PM Jeremy Rowley 
<jeremy.row...@digicert.com> wrote:
Hey Wayne - I take it that this "Mozilla recognizes a precertificate as proof 
that a corresponding certificate has been issued" means a CA issuing a precert 
without the final cert must respond "good" unless the pre-cert is revoked? 
Responding unknown means the CA wouldn't know that they issued the cert, which 
means they disagree with the proof that a corresponding cert has been issued.

Jeremy

-Original Message-
From: dev-security-policy <dev-security-policy-boun...@lists.mozilla.org> On 
Behalf Of Wayne Thayer via dev-security-policy
Sent: Wednesday, September 11, 2019 7:08 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: DigiCert OCSP services returns 1 byte

Mozilla has, to-date, not published policies related to Certificate 
Transparency, but this is a case where a clarification would be helpful. I 
propose adding the following language to our "Required Practices" wiki page
[1]:

The current implementation of Certificate Transparency does not provide any
> way for Relying Parties to determine if a certificate corresponding to
> a given precertificate has or has not been issued. It is only safe to
> assume that a certificate corresponding to every precertificate exists.
>
> RFC 6962 states "The signature on the TBSCertificate indicates the
> certificate authority's intent to issue a certificate.  This intent is
> considered binding (i.e., misissuance of the Precertificate is
> considered equal to misissuance of the final certificate)."
>
>
>
> However, BR 7.1.2.5 states "For purposes of clarification, a
> Precertificate, as described in RFC 6962 - Certificate Transparency,
> shall not be considered to be a "certificate" subject to the
> requirements of RFC
> 5280 - Internet X.509 Public Key Infrastructure Certificate and
> Certificate Revocation List (CRL) Profile under these Baseline Requirements."
>
> Mozilla interprets the BR language as a specific exception allowing
> CAs to issue a precertificate containing the same serial number as the
> subsequent certificate [2]. Mozilla recognizes a precertificate as
> proof that a corresponding certificate has been issued.
>
> This means, for example, that the requirements for OCSP for end-entity
> certificates apply even when a CA has issued a precertificate without
> issuing a corresponding certificate.
>

I plan to add this to the wiki next week. I also plan to include this in a 
future update to our Root Store Policy.

I will greatly appreciate your constructive feedback on this proposal.

- Wayne

[1] https://wiki.mozilla.org/CA/Required_or_Recommended_Practices
[2] https://cabforum.org/pipermail/public/2014-January/002694.html

On Thu, Aug 29, 2019 at 12:53 PM Jeremy Rowley via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:

> Let me try that again since I didn't explain my original post very well.
> Totally worth it since I got a sweet Yu-gi-oh reference out of it.
>
> What happened at DigiCert is that the OCSP service failed to return a
> signed response for a certificate where a pre-certificate existed but a
> certificate had not issued for whatever reason. The question asked was
> what type of OCSP services are required for a pre-cert if there is no
> other certificate. The answer we came up with is it should respond "GOOD"
> based on the Mozilla policy, not Unknown or any other response.

RE: DigiCert OCSP services returns 1 byte

2019-09-11 Thread Jeremy Rowley via dev-security-policy
Hey Wayne - I take it that this "Mozilla recognizes a precertificate as proof 
that a corresponding certificate has been issued" means a CA issuing a precert 
without the final cert must respond "good" unless the pre-cert is revoked? 
Responding unknown means the CA wouldn't know that they issued the cert, which 
means they disagree with the proof that a corresponding cert has been issued.

Jeremy

-Original Message-
From: dev-security-policy  On 
Behalf Of Wayne Thayer via dev-security-policy
Sent: Wednesday, September 11, 2019 7:08 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: DigiCert OCSP services returns 1 byte

Mozilla has, to-date, not published policies related to Certificate 
Transparency, but this is a case where a clarification would be helpful. I 
propose adding the following language to our "Required Practices" wiki page
[1]:

The current implementation of Certificate Transparency does not provide any
> way for Relying Parties to determine if a certificate corresponding to 
> a given precertificate has or has not been issued. It is only safe to 
> assume that a certificate corresponding to every precertificate exists.
>
> RFC 6962 states “The signature on the TBSCertificate indicates the 
> certificate authority's intent to issue a certificate.  This intent is 
> considered binding (i.e., misissuance of the Precertificate is 
> considered equal to misissuance of the final certificate).”
>
>
>
> However, BR 7.1.2.5 states “For purposes of clarification, a 
> Precertificate, as described in RFC 6962 – Certificate Transparency, 
> shall not be considered to be a “certificate” subject to the 
> requirements of RFC
> 5280 - Internet X.509 Public Key Infrastructure Certificate and 
> Certificate Revocation List (CRL) Profile under these Baseline Requirements.”
>
> Mozilla interprets the BR language as a specific exception allowing 
> CAs to issue a precertificate containing the same serial number as the 
> subsequent certificate [2]. Mozilla recognizes a precertificate as 
> proof that a corresponding certificate has been issued.
>
> This means, for example, that the requirements for OCSP for end-entity 
> certificates apply even when a CA has issued a precertificate without 
> issuing a corresponding certificate.
>

I plan to add this to the wiki next week. I also plan to include this in a 
future update to our Root Store Policy.

I will greatly appreciate your constructive feedback on this proposal.

- Wayne

[1] https://wiki.mozilla.org/CA/Required_or_Recommended_Practices
[2] https://cabforum.org/pipermail/public/2014-January/002694.html

On Thu, Aug 29, 2019 at 12:53 PM Jeremy Rowley via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:

> Let me try that again since I didn't explain my original post very well.
> Totally worth it since I got a sweet Yu-gi-oh reference out of it.
>
> What happened at DigiCert is that the OCSP service failed to return a 
> signed response for a certificate where a pre-certificate existed but a 
> certificate had not issued for whatever reason. The question asked was 
> what type of OCSP services are required for a pre-cert if there is no 
> other certificate. The answer we came up with is it should respond 
> "GOOD" based on the Mozilla policy, not Unknown or any other response. 
> Note that this was a bug in the DigiCert system but it led to a fun internal 
> discussion.
> What I'm sharing is the outcome of the internal discussion - it's only 
> tangentially related to the bug, not the cause or remediation of it.
>
> Summary: Pre-certs require a standard OCSP response as if the pre-cert 
> was a normal cert. In fact, it's a mistake to ever think of pre-certs 
> as anything other than TLS certs, even if the poison extension exists.
>
> The question comes up because the BRs don't cover pre-certs. However, 
> as Ryan points out, the browsers sort-of cover this as does the 
> Mozilla policy. The browsers say this is a promise to issue the cert 
> and mis-issuance of a pre-cert is the same as mis-issuance of a cert. 
> Although this isn't mis-issuance in the traditional profile sense, the 
> lack of OCSP services for the pre-cert is a violation of the Mozilla 
> policy. I couldn't figure out if it's a violation of the Chrome policy 
> since Chrome says it's a promise to issue a cert. If the cert hasn't 
> issued, then I'm not sure where that puts the OCSP service for Chrome. 
> Regardless, according to Mozilla's policy, the requirement is that 
> regardless of how long the cert takes to issue, the CA must provide 
> OCSP services for the pre-cert. The reason is Mozilla requires an OCSP 
> for each end-entity certificate listing an AIA in the certificate. Pre-certs 
> are end-entity certificates.

EV Jurisdiction of Incorporation

2019-09-11 Thread Jeremy Rowley via dev-security-policy
Hi Everyone,



One of my goals at DigiCert is to provide greater transparency. One of the ideas 
I’ve kicked around is community-driven EV, or EV transparency.  To start that 
off, I thought I’d share the sources we use for verification of the jurisdiction 
of incorporation/registration. This list is available at 
https://www.digicert.com/legal-repository/ (direct: 
https://www.digicert.com/wp-content/uploads/2019/09/DigiCert-Approved-Incorporating-Agencies.xlsx).
Sharing this was suggested by the community, and the DigiCert leadership 
team thought it was a great idea. Not only does it get community feedback on 
the sources we use (or shouldn’t use), but it may identify sources that other 
CAs could use to do the verification. The idea is we could build a definitive 
master list that the CAB forum could use for verification of EV. This would 
further standardize EV. If we start including a reference to the source, then 
someone could easily verify the accuracy of the information and the identity of 
an organization.  This would solve a major headache I’ve had with EV – you 
can’t see where the JOI information originates.



For reference, section 8.5.2 requires a CA to verify the legal existence of an 
entity through “a filing with (or an act of) the Incorporating or Registration 
Agency in its Jurisdiction of Incorporation or Registration (e.g., by issuance 
of a certificate of incorporation, registration number, etc.) or created or 
recognized by a Government Agency (e.g. under a charter, treaty, convention, or 
equivalent recognition instrument)”. This is far broader than an incorporating 
agency, but we use incorporating agencies as the primary source, and we’re 
working to eliminate sources like SEC.   This source list combines information 
from primary and secondary sources (both incorporating and registration 
sources).

 

Sharing this kind of information helps us get to the end-goal of a more 
transparent EV ecosystem and builds a more community-driven EV practice. I’m 
looking forward to your feedback. Also, let me know if this is interesting, and 
what else you’d like to see.



Thanks!



Jeremy

 



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” for Some Precertificates

2019-08-31 Thread Jeremy Rowley via dev-security-policy
Obviously I think good is the best answer based on my previous posts. A precert 
is still a cert. But I can see how people could disagree with me.

From: dev-security-policy  on 
behalf of Jeremy Rowley via dev-security-policy 

Sent: Saturday, August 31, 2019 9:05:24 AM
To: Tomas Gustavsson ; 
mozilla-dev-security-pol...@lists.mozilla.org 

Subject: Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” 
for Some Precertificates

I don't recall the CAB Forum ever contemplating or discussing OCSP for 
precertificates. The requirement to provide responses is pretty clear, but what 
that response should be is a little confusing imo.

From: dev-security-policy  on 
behalf of Tomas Gustavsson via dev-security-policy 

Sent: Saturday, August 31, 2019 9:00:08 AM
To: mozilla-dev-security-pol...@lists.mozilla.org 

Subject: Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” 
for Some Precertificates

On Saturday, August 31, 2019 at 3:13:00 PM UTC+2, Jeremy Rowley wrote:
> From RFC6962:
>
> “As above, the Precertificate submission MUST be accompanied by the 
> Precertificate Signing Certificate, if used, and all additional certificates 
> required to verify the chain up to an accepted root certificate.  The 
> signature on the TBSCertificate indicates the certificate authority's intent 
> to issue a certificate.  This intent is considered binding (i.e., misissuance 
> of the Precertificate is considered equal to misissuance of the final 
> certificate).  Each log verifies the Precertificate signature chain and 
> issues a Signed Certificate Timestamp on the corresponding TBSCertificate.”
>
> From 7.1.2.5 of the baseline requirements:
> “For purposes of clarification, a Precertificate, as described in RFC 6962 – 
> Certificate Transparency, shall not be considered to be a “certificate” 
> subject to the requirements of RFC 5280 - Internet X.509 Public Key 
> Infrastructure Certificate and Certificate Revocation List (CRL) Profile 
> under these Baseline Requirements”
>
> From 6960:
>The "good" state indicates a positive response to the status inquiry.  At 
> a minimum, this positive response indicates that no certificate with the 
> requested certificate serial number currently within its validity interval is 
> revoked.  This state does not necessarily mean that the certificate was ever 
> issued or that the time at which the response was produced is within the 
> certificate's validity interval. Response extensions may be used to convey 
> additional information on assertions made by the responder regarding the 
> status of the certificate, such as a positive statement about issuance, 
> validity, etc.
>
>The "revoked" state indicates that the certificate has been revoked, 
> either temporarily (the revocation reason is certificateHold) or permanently. 
>  This state MAY also be returned if the associated CA has no record of ever 
> having issued a certificate with the certificate serial number in the 
> request, using any current or previous issuing key (referred to as a 
> "non-issued" certificate in this document).
>
>The "unknown" state indicates that the responder doesn't know about the 
> certificate being requested, usually because the request indicates an 
> unrecognized issuer that is not served by this responder.”
>
> Mozilla Policy:
>
> 1.  Does not define a precertificate. Instead Mozilla policy covers 
> everything with serverAuth (1.1)
>
> 2.  Requires functioning OCSP if the certificate contains an AIA (5.2)
>
> 3.  Must provide revocation information via the AIA for the cert (6)
>
> Argument for responding “Good”:
> A pre-certificate is a certificate and contains an AIA. Per the Mozilla 
> policy, the CA must provide services. Providing revoked or unknown is 
> incorrect because the pre-cert did issue. Although the BRs define the 
> pre-cert as out of scope of the BRs for CRLs and 5280, that just means the 
> serial number can repeat. Responding anything other than good would provide 
> wrong information about the status of the pre-cert.
>
> Argument for responding “Revoked”, “Unknown”, or “Invalid”:
> A pre-certificate is not a cert. The BRs define it as a not a cert as does 
> RFC 6962. A pre-cert is an intent to issue a certificate. RFC6960 
> specifically calls out that the CA may use revoke as a response for 
> certificates without a record of being issued and unknown where there isn’t a 
> status yet. Although the Mozilla policy is silent on the status of whether a 
> pre-cert is a certificate, section 2.3 incorporates the baseline 
> requirements. As such, the Mozilla policy implicitly defines a precertificate 
> as “not a certificate”.

Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” for Some Precertificates

2019-08-31 Thread Jeremy Rowley via dev-security-policy
I don't recall the CAB Forum ever contemplating or discussing OCSP for 
precertificates. The requirement to provide responses is pretty clear, but what 
that response should be is a little confusing imo.

From: dev-security-policy  on 
behalf of Tomas Gustavsson via dev-security-policy 

Sent: Saturday, August 31, 2019 9:00:08 AM
To: mozilla-dev-security-pol...@lists.mozilla.org 

Subject: Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” 
for Some Precertificates

On Saturday, August 31, 2019 at 3:13:00 PM UTC+2, Jeremy Rowley wrote:
> From RFC6962:
>
> “As above, the Precertificate submission MUST be accompanied by the 
> Precertificate Signing Certificate, if used, and all additional certificates 
> required to verify the chain up to an accepted root certificate.  The 
> signature on the TBSCertificate indicates the certificate authority's intent 
> to issue a certificate.  This intent is considered binding (i.e., misissuance 
> of the Precertificate is considered equal to misissuance of the final 
> certificate).  Each log verifies the Precertificate signature chain and 
> issues a Signed Certificate Timestamp on the corresponding TBSCertificate.”
>
> From 7.1.2.5 of the baseline requirements:
> “For purposes of clarification, a Precertificate, as described in RFC 6962 – 
> Certificate Transparency, shall not be considered to be a “certificate” 
> subject to the requirements of RFC 5280 - Internet X.509 Public Key 
> Infrastructure Certificate and Certificate Revocation List (CRL) Profile 
> under these Baseline Requirements”
>
> From 6960:
>The "good" state indicates a positive response to the status inquiry.  At 
> a minimum, this positive response indicates that no certificate with the 
> requested certificate serial number currently within its validity interval is 
> revoked.  This state does not necessarily mean that the certificate was ever 
> issued or that the time at which the response was produced is within the 
> certificate's validity interval. Response extensions may be used to convey 
> additional information on assertions made by the responder regarding the 
> status of the certificate, such as a positive statement about issuance, 
> validity, etc.
>
>The "revoked" state indicates that the certificate has been revoked, 
> either temporarily (the revocation reason is certificateHold) or permanently. 
>  This state MAY also be returned if the associated CA has no record of ever 
> having issued a certificate with the certificate serial number in the 
> request, using any current or previous issuing key (referred to as a 
> "non-issued" certificate in this document).
>
>The "unknown" state indicates that the responder doesn't know about the 
> certificate being requested, usually because the request indicates an 
> unrecognized issuer that is not served by this responder.”
>
> Mozilla Policy:
>
> 1.  Does not define a precertificate. Instead Mozilla policy covers 
> everything with serverAuth (1.1)
>
> 2.  Requires functioning OCSP if the certificate contains an AIA (5.2)
>
> 3.  Must provide revocation information via the AIA for the cert (6)
>
> Argument for responding “Good”:
> A pre-certificate is a certificate and contains an AIA. Per the Mozilla 
> policy, the CA must provide services. Providing revoked or unknown is 
> incorrect because the pre-cert did issue. Although the BRs define the 
> pre-cert as out of scope of the BRs for CRLs and 5280, that just means the 
> serial number can repeat. Responding anything other than good would provide 
> wrong information about the status of the pre-cert.
>
> Argument for responding “Revoked”, “Unknown”, or “Invalid”:
> A pre-certificate is not a cert. The BRs define it as a not a cert as does 
> RFC 6962. A pre-cert is an intent to issue a certificate. RFC6960 
> specifically calls out that the CA may use revoke as a response for 
> certificates without a record of being issued and unknown where there isn’t a 
> status yet. Although the Mozilla policy is silent on the status of whether a 
> pre-cert is a certificate, section 2.3 incorporates the baseline 
> requirements. As such, the Mozilla policy implicitly defines a precertificate 
> as “not a certificate”. Because it’s not a certificate the OCSP service does 
> not know how to respond so any response is okay because there is no 
> certificate with that serial number until the ‘intent to issue’ is fulfilled.
>
> Note that even if you argue that “revoked”, “invalid”, or “unknown” are 
> appropriate, the RFC still permits “good” as a response because no 
> certificates with that serial number are revoked. Good is the safe answer.

Was there not a pla

RE: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” for Some Precertificates

2019-08-31 Thread Jeremy Rowley via dev-security-policy
You’re right. It could be any of the responses under RFC 6960.

From: Alex Cohn 
Sent: Friday, August 30, 2019 7:22 PM
To: Jeremy Rowley 
Cc: Jacob Hoffman-Andrews ; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” 
for Some Precertificates

On Fri, Aug 30, 2019 at 10:26 AM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
Is our answer right though? I wasn't sure. I said "Good" because "a promise to 
issue a cert" could be considered the same as issued. In that case the BRs say you 
must respond good. However, if "a promise to issue a certificate" is not the 
same as issuance, the BRs don't apply to the OCSP until the certificate issues 
and the correct response is "Revoked" per the RFC.

The BRs apply for sure to the contents, but do they apply to the OCSP responses 
in the time period between when the pre-cert is logged and the cert is signed.

It seems reasonable to me to assume that if the contents of a precertificate 
are in-scope for the BRs, the OCSP responses would be likewise in-scope.

Seems like a nice simple rule is that the promise to issue is issuance 
regardless of what the BRs say and that you should respond good. This was our 
logic and why we decided on "Good".

I agree. A CA cannot prove they didn't issue the final certificate. Given 
existence of a pre-certificate, it is reasonable for a relying party to assume 
that the final certificate exists and therefore that OCSP services will be 
functional. I personally would view arguments such as "we didn't actually issue 
that, so we don't need to provide sane OCSP responses" with a great deal of 
skepticism, especially from CAs that do not automatically CT log their final 
certificates (nudge nudge DigiCert, Amazon, Entrust).

However, a very strict reading of the RFC and BR interaction means you need to 
respond "Revoked" until the cert issues. I don't like that outcome because it's 
complicated and leads to confusion.

Looking at sections 2.2 of RFC6960 and 4.9.10 of the BRs, I don't see the 
requirement to respond "revoked" for unknown or non-issued certificates - can 
you explain further? 4.9.10 forbids replying "good" for non-issued 
certificates, but I don't see any stipulations surrounding replying "unknown." 
The certificateHold + 1970-1-1 revocation date method of indicating a 
non-issued certificate in 6960 2.2 might be forbidden by an especially strict 
reading of BRs 4.9.13, but it's not mandated by 6960. In the absence of a 
precertificate signing certificate, OCSP queries for precertificate and 
certificate are identical, so it could be argued that the precertificate itself 
means it's not a "non-issued" certificate?

From a RP's perspective, I can easily envision problems resulting from an OCSP 
response for a given serial number transitioning from revoked to good, 
especially if the response is cached by the relying party or a proxy.

It also occurred to me that CAs using a precertificate signing certificate 
(e.g. Trustwave or NetLock) would be able to differentiate OCSP requests for 
precertificates from final certificates. How do these CAs handle OCSP for 
precertificates? Trustwave appears to always answer "unauthorized" 
(https://crt.sh/?id=1827579322&opt=ocsp) and NetLock "malformed" 
(https://crt.sh/?id=1826448700&opt=ocsp), which is... curious.

Alex
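
For anyone reproducing that observation: "unauthorized" and "malformed" are values of
the top-level OCSP response status, so no per-certificate status is available at all
in those cases. A minimal sketch, assuming the Python "cryptography" and "requests"
packages and an OCSP request already built for the precertificate's issuer and serial
(hypothetical variables ocsp_url and request_der):

    # Minimal sketch; "ocsp_url" and "request_der" are assumed inputs.
    import requests
    from cryptography.x509 import ocsp

    def check(ocsp_url: str, request_der: bytes) -> None:
        resp = requests.post(ocsp_url, data=request_der,
                             headers={"Content-Type": "application/ocsp-request"})
        parsed = ocsp.load_der_ocsp_response(resp.content)
        # UNAUTHORIZED / MALFORMED_REQUEST are responder-level errors; only a
        # SUCCESSFUL response carries a good/revoked/unknown certificate status.
        if parsed.response_status == ocsp.OCSPResponseStatus.SUCCESSFUL:
            print("certificate status:", parsed.certificate_status)
        else:
            print("responder error:", parsed.response_status)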
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” for Some Precertificates

2019-08-31 Thread Jeremy Rowley via dev-security-policy
The best way to codify it is at the CAB forum since the CAB Forum language is 
the one that causes the problem (imo). We made a mistake by defining a 
precertificate as “not a certificate” when the intent was mostly to allow CAs 
to issue precertificates that had serial numbers duplicative with the actual 
certificate. I was thinking we should remove overly broad language and replace 
it with language that is more restrictive – something like all certificates 
must conform to 5280 except precertificates which must conform to 6962. Needs 
work of course, but that was my initial idea.

From: Ryan Sleevi 
Sent: Friday, August 30, 2019 12:58 PM
To: Jeremy Rowley 
Cc: Jacob Hoffman-Andrews ; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” 
for Some Precertificates



On Fri, Aug 30, 2019 at 11:26 AM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
Is our answer right though? I wasn't sure. I said "Good" because "a promise to 
issue a cert" could be considered the same as issued. In that case the BRs say you 
must respond good.

Citation needed ;-)

However, if "a promise to issue a certificate" is not the same as issuance, the 
BRs don't apply to the OCSP until the certificate issues and the correct 
response is "Revoked" per the RFC.

That would be... inadvisable, as you'd be unrevoking a certificate, which would 
definitely Be An Issue (e.g. 
https://bugzilla.mozilla.org/show_bug.cgi?id=1532333 )

I definitely want to make sure any confusion is resolved. Can you point to any 
Mozilla policies or comms that use the phrase "promise to issue a certificate" 
so we can clarify? It's unclear to me if it's being used as an unfortunate 
shorthand (and special apologies if I've contributed to that).

RFC 6962, Section 3.1, phrases it as such:
   "The signature on the TBSCertificate indicates the certificate
authority's intent to issue a certificate. This intent is considered
binding (i.e., misissuance of the Precertificate is considered equal
to misissuance of the final certificate)."

(Some past discussion at 
https://groups.google.com/d/topic/mozilla.dev.security.policy/Hk78klSv8AY/discussion
 )

In various discussions, this has been attempted to be further clarified: If a 
precertificate exists, for all intents and purposes of all policy obligations, 
an equivalent certificate is presumed to exist, as the CA has signaled a 
binding intent to do so. As a consequence, regardless of whether or not we 
"see" the certificate, it should be presumed to exist, and the OCSP status and 
any other certificate services, databases, or other should be presumed to exist.

The BRs apply for sure to the contents, but do they apply to the OCSP responses 
in the time period between when the pre-cert is logged and the cert is signed.

Seems like a nice simple rule is that the promise to issue is issuance 
regardless of what the BRs say and that you should respond good. This was our 
logic and why we decided on "Good". However, a very strict reading of the RFC 
and BR interaction means you need to respond "Revoked" until the cert issues. I 
don't like that outcome because it's complicated and leads to confusion.

Agreed that it's nonsense, but I don't see how the strict reading can lead to 
that conclusion. There are more statuses than the binary Good/Revoked, and that's 
extremely relevant (and the BRs Have Opinions on which statuses you can and 
should use for certs you don't know about)

Despite all of the writing above, I'm too lazy to copy/paste my comment from 
the Let's Encrypt issue, but I would hope any CA contemplating things should 
look at https://bugzilla.mozilla.org/show_bug.cgi?id=1577652#c3 in terms of a 
possible 'ideal' flow, and to share concerns or considerations with that. Even 
better would be CAs that have suggestions on how best to codify and memorialize 
that suggestion, if it's sensible and correct.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” for Some Precertificates

2019-08-31 Thread Jeremy Rowley via dev-security-policy
From RFC6962:

“As above, the Precertificate submission MUST be accompanied by the 
Precertificate Signing Certificate, if used, and all additional certificates 
required to verify the chain up to an accepted root certificate.  The signature 
on the TBSCertificate indicates the certificate authority's intent to issue a 
certificate.  This intent is considered binding (i.e., misissuance of the 
Precertificate is considered equal to misissuance of the final certificate).  
Each log verifies the Precertificate signature chain and issues a Signed 
Certificate Timestamp on the corresponding TBSCertificate.”

From 7.1.2.5 of the baseline requirements:
“For purposes of clarification, a Precertificate, as described in RFC 6962 – 
Certificate Transparency, shall not be considered to be a “certificate” subject 
to the requirements of RFC 5280 - Internet X.509 Public Key Infrastructure 
Certificate and Certificate Revocation List (CRL) Profile under these Baseline 
Requirements”

From 6960:
   The "good" state indicates a positive response to the status inquiry.  At a 
minimum, this positive response indicates that no certificate with the 
requested certificate serial number currently within its validity interval is 
revoked.  This state does not necessarily mean that the certificate was ever 
issued or that the time at which the response was produced is within the 
certificate's validity interval. Response extensions may be used to convey 
additional information on assertions made by the responder regarding the status 
of the certificate, such as a positive statement about issuance, validity, etc.

   The "revoked" state indicates that the certificate has been revoked, either 
temporarily (the revocation reason is certificateHold) or permanently.  This 
state MAY also be returned if the associated CA has no record of ever having 
issued a certificate with the certificate serial number in the request, using 
any current or previous issuing key (referred to as a "non-issued" certificate 
in this document).

   The "unknown" state indicates that the responder doesn't know about the 
certificate being requested, usually because the request indicates an 
unrecognized issuer that is not served by this responder.”

Mozilla Policy:

1.  Does not define a precertificate. Instead Mozilla policy covers everything 
with serverAuth (1.1)

2.  Requires functioning OCSP if the certificate contains an AIA (5.2)

3.  Must provide revocation information via the AIA for the cert (6)

Argument for responding “Good”:
A pre-certificate is a certificate and contains an AIA. Per the Mozilla policy, 
the CA must provide services. Providing revoked or unknown is incorrect because 
the pre-cert did issue. Although the BRs define the pre-cert as out of scope of 
the BRs for CRLs and 5280, that just means the serial number can repeat. 
Responding anything other than good would provide wrong information about the 
status of the pre-cert.

Argument for responding “Revoked”, “Unknown”, or “Invalid”:
A pre-certificate is not a cert. The BRs define it as not a cert, as does RFC 
6962. A pre-cert is an intent to issue a certificate. RFC6960 specifically 
calls out that the CA may use revoke as a response for certificates without a 
record of being issued and unknown where there isn’t a status yet. Although the 
Mozilla policy is silent on the status of whether a pre-cert is a certificate, 
section 2.3 incorporates the baseline requirements. As such, the Mozilla policy 
implicitly defines a precertificate as “not a certificate”. Because it’s not a 
certificate the OCSP service does not know how to respond so any response is 
okay because there is no certificate with that serial number until the ‘intent 
to issue’ is fulfilled.

Note that even if you argue that “revoked”, “invalid”, or “unknown” are 
appropriate, the RFC still permits “good” as a response because no certificates 
with that serial number are revoked. Good is the safe answer.
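
Restating the two readings in executable form (purely illustrative, not settled
policy): under the "Good" reading, knowledge of a logged precertificate is treated as
issuance; the alternative readings would return unknown, or revoked with
certificateHold, instead.

    # Minimal sketch of the "respond Good" reading argued above; the
    # alternative readings would change the middle branch.
    from enum import Enum

    class Status(Enum):
        GOOD = "good"
        REVOKED = "revoked"
        UNKNOWN = "unknown"

    def ocsp_status(revoked: bool, cert_issued: bool, precert_logged: bool) -> Status:
        if revoked:
            return Status.REVOKED
        if cert_issued or precert_logged:
            # A precertificate is treated as proof that the certificate exists,
            # so an unrevoked serial gets "good".
            return Status.GOOD
        # Serial never assigned and no precertificate: no record at all.
        return Status.UNKNOWN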





From: dev-security-policy  on 
behalf of Tomas Gustavsson via dev-security-policy 

Sent: Saturday, August 31, 2019 5:01:42 AM
To: mozilla-dev-security-pol...@lists.mozilla.org 

Subject: Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” 
for Some Precertificates

Hi,

I find and hear a few non conclusive, sometimes contradictory, messages about 
OCSP responder handling of pre-certificates without final certificates. Reading 
this thread I don't find a firm conclusion either (albeit I may have missed it).
I'm not saying anything others have not said before, so I don't claim to say 
anything new, just to summarize what I believe to be a safe behavior.

(I'm merely interested in the technical behavior of an OCSP responder)

My position dates back to 2013 during CT implementation. Discussions back then:
https://groups.google.com/forum/m/#!searchin/certificate-transparency/tomas/certificate-transparency/1tWzSXKe3gQ
"a Precertificate is arguably not a Certificate

RE: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” for Some Precertificates

2019-08-30 Thread Jeremy Rowley via dev-security-policy
Is our answer right though? I wasn't sure. I said "Good" because "a promise to 
issue a cert" could be considered the same as issued. In that case the BRs say you 
must respond good. However, if "a promise to issue a certificate" is not the 
same as issuance, the BRs don't apply to the OCSP until the certificate issues 
and the correct response is "Revoked" per the RFC. 

The BRs apply for sure to the contents, but do they apply to the OCSP responses 
in the time period between when the pre-cert is logged and the cert is signed. 

Seems like a nice simple rule is that the promise to issue is issuance 
regardless of what the BRs say and that you should respond good. This was our 
logic and why we decided on "Good". However, a very strict reading of the RFC 
and BR interaction means you need to respond "Revoked" until the cert issues. I 
don't like that outcome because it's complicated and leads to confusion. 

-Original Message-
From: dev-security-policy  On 
Behalf Of Jacob Hoffman-Andrews via dev-security-policy
Sent: Thursday, August 29, 2019 5:37 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” for 
Some Precertificates

Also filed at https://bugzilla.mozilla.org/show_bug.cgi?id=1577652

On 2019.08.28 we read Apple’s bug report at 
https://bugzilla.mozilla.org/show_bug.cgi?id=1577014 about DigiCert’s OCSP 
responder returning incorrect results for a precertificate. This prompted us to 
run our own investigation. We found in an initial review that for 35 of our 
precertificates, we were serving incorrect OCSP results (“unauthorized” instead 
of “good”). Like DigiCert, this happened when a precertificate was issued, but 
the corresponding certificate was not issued due to an error.

We’re taking these additional steps to ensure a robust fix:
  - For each precertificate issued according to our audit logs, verify that we 
are serving a corresponding OCSP response (if the precertificate is currently 
valid).
  - Configure alerting for the conditions that create this problem, so we can 
fix any instances that arise in the short term.
  - Deploy a code change to Boulder to ensure that we serve OCSP even if an 
error occurs after precertificate issuance.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: DigiCert OCSP services returns 1 byte

2019-08-29 Thread Jeremy Rowley via dev-security-policy
Let me try that again since I didn't explain my original post very well. 
Totally worth it since I got a sweet Yu-gi-oh reference out of it. 

What happened at DigiCert is that the OCSP service failed to return a signed 
response for a certificate where a pre-certificate existed but a certificate had 
not issued for whatever reason. The question asked was what type of OCSP 
services are required for a pre-cert if there is no other certificate. The 
answer we came up with is it should respond "GOOD" based on the Mozilla policy, 
not Unknown or any other response. Note that this was a bug in the DigiCert 
system but it led to a fun internal discussion. What I'm sharing is the 
outcome of the internal discussion - it's only tangentially related to the bug, 
not the cause or remediation of it.

Summary: Pre-certs require a standard OCSP response as if the pre-cert was a 
normal cert. In fact, it's a mistake to ever think of pre-certs as anything 
other than TLS certs, even if the poison extension exists.

The question comes up because the BRs don't cover pre-certs. However, as Ryan 
points out, the browsers sort-of cover this as does the Mozilla policy. The 
browsers say this is a promise to issue the cert and mis-issuance of a pre-cert 
is the same as mis-issuance of a cert. Although this isn't mis-issuance in the 
traditional profile sense, the lack of OCSP services for the pre-cert is a 
violation of the Mozilla policy. I couldn't figure out if it's a violation of 
the Chrome policy since Chrome says it's a promise to issue a cert. If the cert 
hasn't issued, then I'm not sure where that puts the OCSP service for Chrome. 
Regardless, according to Mozilla's policy, the requirement is that regardless 
of how long the cert takes to issue, the CA must provide OCSP services for the 
pre-cert. The reason is Mozilla requires an OCSP for each end-entity 
certificate listing an AIA in the certificate. Pre-certs are end-entity 
certificates.
 
Jeremy
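
To make the trigger concrete: the requirement described above applies when the
(pre)certificate actually lists an OCSP AIA or a CRL distribution point. A minimal
sketch, assuming the Python "cryptography" package and a parsed certificate object,
which may equally be a precertificate:

    # Minimal sketch; "cert" is an assumed x509.Certificate (the poison
    # extension does not matter for this check).
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID, AuthorityInformationAccessOID

    def revocation_pointers(cert: x509.Certificate) -> dict:
        """Report whether the certificate advertises OCSP and/or CRL endpoints."""
        ocsp_urls, crl_present = [], False
        try:
            aia = cert.extensions.get_extension_for_oid(
                ExtensionOID.AUTHORITY_INFORMATION_ACCESS
            ).value
            ocsp_urls = [ad.access_location.value for ad in aia
                         if ad.access_method == AuthorityInformationAccessOID.OCSP]
        except x509.ExtensionNotFound:
            pass
        try:
            cert.extensions.get_extension_for_oid(
                ExtensionOID.CRL_DISTRIBUTION_POINTS
            )
            crl_present = True
        except x509.ExtensionNotFound:
            pass
        # If either pointer is present, the corresponding service must be operational.
        return {"ocsp": ocsp_urls, "crl": crl_present}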

-Original Message-
From: dev-security-policy  On 
Behalf Of Jeremy Rowley via dev-security-policy
Sent: Thursday, August 29, 2019 11:55 AM
To: Peter Bowen ; Ryan Sleevi 
Cc: Curt Spann ; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: DigiCert OCSP services returns 1 byte

Yes. That was the point of my post. There is a requirement to return an OCSP 
response for a pre-cert where the cert hasn't issued, because of the Mozilla 
policy. Hence our failure was a Mozilla policy violation even if no practical 
system can use the response because no actual cert (without a poison extension) 
exists.

From: Peter Bowen 
Sent: Thursday, August 29, 2019 11:44:11 AM
To: Ryan Sleevi 
Cc: Jeremy Rowley ; Curt Spann ; 
mozilla-dev-security-pol...@lists.mozilla.org 

Subject: Re: DigiCert OCSP services returns 1 byte



On Thu, Aug 29, 2019 at 10:38 AM Ryan Sleevi via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
On Thu, Aug 29, 2019 at 1:15 PM Jeremy Rowley via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:

> Thanks for posting this Curt.  We investigated and posted an incident 
> report on Bugzilla. The root cause was related to pre-certs and an 
> error in generating certificates for them. We're fixing the issue 
> (should be done shortly).  I figured it'd be good to document here why 
> pre-certs fall under the requirement so there's no confusion for other CAs.
>

Oh, Jeremy, you were going so well on the bug, but now you've activated my trap 
card (since you love the memes :) )

It's been repeatedly documented every time a CA tries to make this argument.

Would you suggest we remove that from the BRs? I'm wholly supportive of this, 
since it's known I was not a fan of adding it to the BRs for precisely this 
sort of creative interpretation. I believe you're now the ... fourth... CA 
that's tried to skate on this?

Multiple root programs have clarified: The existence of a pre-certificate is 
seen as a binding committment, for purposes of policy, by that CA, that it will 
or has issued an equivalent certificate.

Is there a requirement that a CA return a valid OCSP response for a pre-cert if 
they have not yet issued the equivalent certificate?

Is there a requirement that a CA return a valid OCSP response for a serial 
number that has never been assigned?  I know of several OCSP responders that 
return a HTTP error in this case.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy



Re: DigiCert OCSP services returns 1 byte

2019-08-29 Thread Jeremy Rowley via dev-security-policy
Yes. That was the point of my post. There is a requirement to return an OCSP 
response for a pre-cert where the cert hasn't issued, because of the Mozilla 
policy. Hence our failure was a Mozilla policy violation even if no practical 
system can use the response because no actual cert (without a poison extension) 
exists.

From: Peter Bowen 
Sent: Thursday, August 29, 2019 11:44:11 AM
To: Ryan Sleevi 
Cc: Jeremy Rowley ; Curt Spann ; 
mozilla-dev-security-pol...@lists.mozilla.org 

Subject: Re: DigiCert OCSP services returns 1 byte



On Thu, Aug 29, 2019 at 10:38 AM Ryan Sleevi via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
On Thu, Aug 29, 2019 at 1:15 PM Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Thanks for posting this Curt.  We investigated and posted an incident
> report on Bugzilla. The root cause was related to pre-certs and an error in
> generating certificates for them. We're fixing the issue (should be done
> shortly).  I figured it'd be good to document here why pre-certs fall under
> the requirement so there's no confusion for other CAs.
>

Oh, Jeremy, you were going so well on the bug, but now you've activated my
trap card (since you love the memes :) )

It's been repeatedly documented every time a CA tries to make this
argument.

Would you suggest we remove that from the BRs? I'm wholly supportive of
this, since it's known I was not a fan of adding it to the BRs for
precisely this sort of creative interpretation. I believe you're now the
... fourth... CA that's tried to skate on this?

Multiple root programs have clarified: The existence of a pre-certificate
is seen as a binding committment, for purposes of policy, by that CA, that
it will or has issued an equivalent certificate.

Is there a requirement that a CA return a valid OCSP response for a pre-cert if 
they have not yet issued the equivalent certificate?

Is there a requirement that a CA return a valid OCSP response for a serial 
number that has never been assigned?  I know of several OCSP responders that 
return a HTTP error in this case.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DigiCert OCSP services returns 1 byte

2019-08-29 Thread Jeremy Rowley via dev-security-policy
Oh, I wasn't arguing that this isn't an issue. The opposite, in fact. I was 
documenting why it is an issue, i.e., that a CA can't argue this isn't a 
compliance concern. It comes up a lot but I don't remember seeing it here.

From: Ryan Sleevi
Sent: Thursday, August 29, 11:38 AM
Subject: Re: DigiCert OCSP services returns 1 byte
To: Jeremy Rowley
Cc: Curt Spann, mozilla-dev-security-pol...@lists.mozilla.org




On Thu, Aug 29, 2019 at 1:15 PM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
Thanks for posting this Curt.  We investigated and posted an incident report on 
Bugzilla. The root cause was related to pre-certs and an error in generating 
certificates for them. We're fixing the issue (should be done shortly).  I 
figured it'd be good to document here why pre-certs fall under the requirement 
so there's no confusion for other CAs.

Oh, Jeremy, you were going so well on the bug, but now you've activated my trap 
card (since you love the memes :) )

It's been repeatedly documented every time a CA tries to make this argument.

Would you suggest we remove that from the BRs? I'm wholly supportive of this, 
since it's known I was not a fan of adding it to the BRs for precisely this 
sort of creative interpretation. I believe you're now the ... fourth... CA 
that's tried to skate on this?

Multiple root programs have clarified: The existence of a pre-certificate is 
seen as a binding committment, for purposes of policy, by that CA, that it will 
or has issued an equivalent certificate.

1) Has DigiCert reviewed the existing incident reports from other CAs?
2) What process does DigiCert have to review all compliance issues, regardless 
of the CA, so that it can examine its own systems for similar issues or be 
aware of relevant discussions and/or ambiguities?

(And, yes, it's a trap)


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: DigiCert OCSP services returns 1 byte

2019-08-29 Thread Jeremy Rowley via dev-security-policy
Thanks for posting this Curt.  We investigated and posted an incident report on 
Bugzilla. The root cause was related to pre-certs and an error in generating 
certificates for them. We're fixing the issue (should be done shortly).  I 
figured it'd be good to document here why pre-certs fall under the requirement 
so there's no confusion for other CAs.   

It can get confusing because, under BRs section 7.1.2.5, a pre-cert is "not 
considered a certificate subject to the requirements of RFC 5280". The scope of 
the BRs is also questionable since it's still only applicable to "certificates 
intended to be used for authenticating servers" (BRs 1.1). By virtue of the 
poison extension, precerts can never be applicable to authenticating servers. 
Initially, it's easy to think that pre-certs may not require OCSP or the same 
strict compliance.

I reviewed the CT log policy and, unless I missed something, the policy there 
is silent on pre-certs and OCSP.

I think the requirement for pre-certs comes from two places. The requirements 
around revocation information originate from Mozilla policy 5.2 "CAs MUST NOT 
issue certificates that have: cRLDistributionPoints or OCSP 
authorityInfoAccess extensions for which no operational CRL or OCSP service 
exists." Then in Section 6 "For end-entity certificates, if the CA provides 
revocation information via an Online Certificate Status Protocol (OCSP) 
service:"

What this means is that a CA including a CRL distribution point or OCSP service in 
the pre-cert must provide the OCSP/CRL service for the pre-cert, even if 
there's no possible way the pre-cert can be used by Mozilla on a server. The 
idea we had was to simply drop the revocation information from the precert. 
Unfortunately, this doesn't work either because the pre-cert must be identical 
to the certificate plus the poison extension.  This was probably obvious to 
anyone following CT over the years, but the discussion comes up every once in a 
while internally, so I thought I'd document it externally so others can also 
chime in. 

Feel free to substitute SCT for pre-cert if you want to use correct terminology.

Jeremy
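
A rough sketch of the "identical plus the poison extension" constraint, assuming the
Python "cryptography" package and two parsed certificates for the same issuance; a
real check would compare full extension values and the TBS bytes, not just the
fields below, and this assumes the precert was issued directly by the issuing CA
rather than by a precertificate signing certificate.

    # Minimal sketch; "precert" and "final_cert" are assumed x509.Certificate
    # objects for the same issuance.
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    # Extensions that legitimately differ between the two forms: the precert
    # carries the CT poison, the final certificate typically carries SCTs.
    ALLOWED_DIFF = {
        ExtensionOID.PRECERT_POISON,
        ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS,
    }

    def corresponds(precert: x509.Certificate, final_cert: x509.Certificate) -> bool:
        if precert.serial_number != final_cert.serial_number:
            return False
        if precert.issuer != final_cert.issuer or precert.subject != final_cert.subject:
            return False
        pre_oids = {e.oid for e in precert.extensions} - ALLOWED_DIFF
        fin_oids = {e.oid for e in final_cert.extensions} - ALLOWED_DIFF
        # Everything outside the CT-specific extensions should be the same set.
        return pre_oids == fin_oids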

-Original Message-
From: dev-security-policy  On 
Behalf Of Curt Spann via dev-security-policy
Sent: Tuesday, August 27, 2019 2:05 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: DigiCert OCSP services returns 1 byte

Hello,

I created the following bug: 
https://bugzilla.mozilla.org/show_bug.cgi?id=1577014
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy



RE: Symantec migration update

2019-08-29 Thread Jeremy Rowley via dev-security-policy
Yeah - these types of weird continuity requirements are all over the place and 
are the reason the consolidation has taken this long. A lot of the effort has 
been trying to figure out how to replace things tied to old hardware with updated 
systems, essentially rebuilding things (like the timestamp service) so they can 
scale better and use more current technology. 

-Original Message-
From: dev-security-policy  On 
Behalf Of Jakob Bohm via dev-security-policy
Sent: Thursday, August 29, 2019 8:26 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Symantec migration update

Note that while not used by Mozilla, the Time Stamping Authority services 
formerly owned by Symantec have some unique business continuity
requirements:

1. Time stamp signatures, by their very purpose, need to remain valid
   and trusted for decades after signing, even if no more signatures
   are generated.  This means keeping up private key protection and
   revocation checking.

2. The specific Symantec time stamping URLs are baked into thousands of
   scripts, as they have remained the same since before Symantec acquired
   VeriSign's CA business.  But nothing prevents pointing them to
   equivalent DigiCert servers, as Symantec had already pointed them to
   Thawte servers.

3. The unique protocol for generated Microsoft-compatible SHA-1 time
   stamp signatures remain important as long as there are still Vista,
   Windows 7, Server 2008 and older machines around (even though
   Microsoft support for those is going away, that hasn't historically
   stopped users from keeping stuff running and asking 4th party vendors
   for fresh application updates, thus causing those 4th party vendors to
   need old form signatures trusted by those old systems).  CAs would
   instead need to take additional precautions to guard against SHA-1
   weaknesses.


On 29/08/2019 00:58, Jeremy Rowley wrote:
> Hey – I realized it’s been a long time since I posted an update about where 
> we are at on shutting down the legacy Symantec systems. I thought the 
> community might find it interesting to hear what we’re doing to consolidate 
> all the systems.
> 
> 
> When we bought the Symantec CA business, we promised to bring the systems up 
> to modern compliance standards and deprecate systems or improve the customer 
> experience by consolidating storefronts. We soon realized that we may have 
> underestimated the scope of this promise, as the systems were a bit old. As 
> soon as we gained access to the legacy Symantec systems, we audited them for 
> feature use and functionality and for the legal business agreements requiring 
> certain things. From this list, we created a migration roadmap that ensured 
> we could support customer needs and maintain their certificate landscapes 
> without extensive interference, prioritizing systems based on complexity and 
> risk.
> 
> 
> 
> Once we had a preliminary list, engineering started developing solutions to 
> migrate customers to our primary certificate management system, CertCentral. 
> This was a necessary foundational step toward moving customers off legacy 
> Symantec storefronts and APIs so that we can shut down the old Symantec 
> systems.  As everyone knows, we did the migration of all validation first 
> (per the browser requirements), followed by the migration of org details and 
> other information. We are still working on some migration tools.
> 
> 
> 
> Migration to a new system is complicated for customers, and we’ve tried to 
> take an approach that would accommodate their needs. To do this, we’re 
> migrating customers in a staged approach that allows us to target groups as 
> soon as we have system readiness for different customer subsets. We began 
> migration efforts with small accounts and accounts that had outgrown the 
> performance of their old Symantec console. Moving these customers first 
> allowed us to learn and adapt our migration plan. Next, we began targeting 
> larger enterprise customers and partners with more complicated business 
> integrations. Finally, we have begun targeting API-integrated customers with 
> the most difficult migration plans. We’re working with these customers to 
> provide them the resources they need to update integrations as soon as 
> possible.
> 
> 
> 
> Of course, on the backend, we already migrated all legacy Symantec customers 
> to the DigiCert validation and issuance systems for TLS and most code 
> signing. We’re still working on it for SMIME.
> 
> 
> 
> To date, we’ve migrated a total of about 50,000 legacy accounts. This is 
> important progress towards our goal of system consolidation since we need to 
> move customers off legacy systems before we turn them off.   This isn’t the 
> half-way point, but we’ve been ramping up migration as new portions of th

Symantec migration update

2019-08-28 Thread Jeremy Rowley via dev-security-policy
Hey – I realized it’s been a long time since I posted an update about where we 
are at on shutting down the legacy Symantec systems. I thought the community 
might find it interesting to hear what we’re doing to consolidate all the systems.


When we bought the Symantec CA business, we promised to bring the systems up to 
modern compliance standards and deprecate systems or improve the customer 
experience by consolidating storefronts. We soon realized that we may have 
underestimated the scope of this promise, as the systems were a bit old. As soon 
as we gained access to the legacy Symantec systems, we audited them for feature 
use and functionality and for the legal business agreements requiring certain 
things. From this list, we created a migration roadmap that ensured we could 
support customer needs and maintain their certificate landscapes without 
extensive interference, prioritizing systems based on complexity and risk.



Once we had a preliminary list, engineering started developing solutions to 
migrate customers to our primary certificate management system, CertCentral. 
This was a necessary foundational step toward moving customers off legacy 
Symantec storefronts and APIs so that we can shut down the old Symantec systems. 
As everyone knows, we did the migration of all validation first (per the browser 
requirements), followed by the migration of org details and other information. 
We are still working on some migration tools.



Migration to a new system is complicated for customers, and we’ve tried to take 
an approach that would accommodate their needs. To do this, we’re migrating 
customers in a staged approach that allows us to target groups as soon as we 
have system readiness for different customer subsets. We began migration 
efforts with small accounts and accounts that had outgrown the performance of 
their old Symantec console. Moving these customers first allowed us to learn 
and adapt our migration plan. Next, we began targeting larger enterprise 
customers and partners with more complicated business integrations. Finally, 
we have begun targeting API-integrated customers with the most difficult 
migration plans. We’re working with these customers to provide them the 
resources they need to update integrations as soon as possible.



Of course, on the backend, we already migrated all legacy Symantec customers to 
the DigiCert validation and issuance systems for TLS and most code signing. 
We’re still working on it for SMIME.



To date, we’ve migrated a total of about 50,000 legacy accounts. This is 
important progress towards our goal of system consolidation since we need to 
move customers off legacy systems before we turn them off.   This isn’t the 
half-way point, but we’ve been ramping up migration as new portions of the code 
come into completion. Most of the migration has completed since June.



Now that we’re nearly product ready for all customer types to migrate, we have 
a line of sight for end of life of the Symantec systems that I thought would be 
useful to share:



**Timeline of migration events**

November 2017: Acquired the Symantec CA business

December 1 2017: Began revalidation of Symantec certificates

December 2017-December 2018: Symantec business migration (transition service 
agreement exits)

January 2018: Symantec product audit

February-November 2018: Required TSA exits and internal process updates to 
absorb additional business

February 2018: Began planning massive effort to shut down legacy systems

February 2018-December 2019: Developing required customer functionality for all 
markets

January 2018? - April 2019: Data center migrations

February 2019 - November 2019: Developing a means to migrate active certificate 
data into the new systems

April 2019: Began migrating retail customers

June 2019: Began migrating Enterprise customers and Partners

August 2019: Completed DigiCert MPKI migration

August 2019: Begin DigiCert Retail migration for domain validation 
consolidation (target completion Oct 31)



**Coming next**

Q4 2019: End of sale of Symantec Enterprise solutions

Q4 2019: End of life of Symantec Encryption Everywhere, a service for hosting 
providers

Q1 2020: We will begin the shutdown process for legacy systems

Q3 2020: Target completion for all account migrations

Q3 2020: Target for system shutdown for legacy storefronts, including 
migration of legacy DigiCert and Symantec systems. We aim to shut down systems 
sooner, but this largely depends on how the shutdown process proceeds.



We’re currently evaluating any additional time required to migrate API 
customers.



We are constantly working towards these dates and will post updates as things 
change.



** Note that this plan excludes QuoVadis; we will be posting updates on the 
QuoVadis system migration later once we free up resources from the Symantec 
migration.



Looking forward to the questions!



Jeremy






___
dev-security-policy mailing list
dev-secur

RE: GlobalSign: SSL Certificates with US country code and invalid State/Prov

2019-08-28 Thread Jeremy Rowley via dev-security-policy
I've always thought the reason OV/EV ballots haven't been proposed/passed is a 
combination of a lack of interest from the browsers and the fact that 
governance reform seems to get in the way of everything else.  I've proposed 
tons of things over the years that simply failed because I couldn't get enough 
interest - they aren't shiny enough to capture attention. I don't think CAs 
would actually oppose a clean-up ballot - and Hurst's proposal to require the 
BR OIDs for OV/DV wasn't opposed by all CAs.

A ballot standardizing on abbreviated states (for example) would probably pass. 
I think any ballot requiring a standard format for cert profiles would actually 
pass. And I think the validation working group is already talking about 
standardizing a list of allowed sources for verifying incorporation. The CAB 
Forum just moves more slowly than it used to. We can speed it up by simply 
proposing more ballots. There's nothing that requires long waiting periods. 

Heck, if interested parties want to work on ballots with me, I'd be happy to 
propose them at the CAB Forum.  That'd be really fun actually - propose a bunch 
of relying party ballots from the Mozilla community that we put 
forward/sponsor. LMK

-Original Message-
From: dev-security-policy  On 
Behalf Of Ryan Sleevi via dev-security-policy
Sent: Wednesday, August 28, 2019 9:02 AM
To: Corey Bonnell 
Cc: mozilla-dev-security-policy 
Subject: Re: GlobalSign: SSL Certificates with US country code and invalid 
State/Prov

On Wed, Aug 28, 2019 at 7:13 AM Corey Bonnell via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:

> Anyhow, judging from censys.io, it looks like there are far bigger 
> offenders of this particular quirky rule than Digicert and GlobalSign. 
> I'd love to know why the BRs/EVGs are inconsistent with this 
> requirement for jurisST having to be a full name, but ST can seemingly 
> be either. It looks like the existing language in the BRs for ST 
> stemmed from Ballot 88, but unfortunately there was little discussion 
> on the mailing list that I could find about the rationale for this 
> inconsistency.
>
> Ideally, the requirements would be identical so that Relying Parties 
> can more easily extract identity information from these certificates 
> to aid in trust decisions.
>

There's a long list of things that CAs that advocate for OV/EV information 
could be doing to make it reliable and useful to Relying Parties.

Consider this post, from Ryan Hurst in 2012 -
https://unmitigatedrisk.com/?p=203 - talking about the 'simple' challenges just 
in distinguishing DV vs OV certificates. The proliferation of CA-specific 
policy OIDs has, functionally, made this a non-trivial task.
While the Baseline Requirements provide a series of policy OIDs that CAs may 
assert, the use is not mandatory nor widespread.
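
As a rough sketch of what that means for a relying party (illustrative only; 
the file name is hypothetical), checking a certificate against the CA/Browser 
Forum reserved policy OIDs with Python's "cryptography" library comes up empty 
whenever only a CA-specific OID is asserted:

from cryptography import x509
from cryptography.x509.oid import ExtensionOID

# CA/Browser Forum reserved certificate policy OIDs.
CABF_POLICY_OIDS = {
    "2.23.140.1.2.1": "DV",
    "2.23.140.1.2.2": "OV",
    "2.23.140.1.2.3": "IV",
    "2.23.140.1.1": "EV",
}

with open("leaf.pem", "rb") as f:  # hypothetical file name
    cert = x509.load_pem_x509_certificate(f.read())

# Raises ExtensionNotFound if the certificate asserts no policies at all.
policies = cert.extensions.get_extension_for_oid(
    ExtensionOID.CERTIFICATE_POLICIES
).value

labels = {
    CABF_POLICY_OIDS[p.policy_identifier.dotted_string]
    for p in policies
    if p.policy_identifier.dotted_string in CABF_POLICY_OIDS
}
# An empty set means only CA-specific OIDs were asserted, so the validation
# level cannot be determined without a per-CA mapping.
print(labels or "validation level unknown")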

With respect to OV/EV information, it's clear that in the absence of an 
allowlist, and more explicit profiles, the functional value to Relying Parties 
is going to be greatly limited. This is not an exclusive criticism of OV/EV 
either; the lack of technical profiles for both CRLs and OCSP represent 
significant challenges.

The problem, as with all things in the Forum, is that the incentive structures 
are misaligned. There is questionable benefit, to a CA, to develop a ballot to 
normatively specify a requirement on the information sources or the formatting 
of certificates. While some have suggested the costs in determining adequate 
information sources (e.g. "does X meet the criteria of the BRs?") might be 
reduced by such a shared list, most of that cost has already been sunk by the 
extant CAs, so it only benefits new upstarts or those entering new 
jurisdictions. With profiles for certificates, it's even more marked - every 
profile represents a chance for the CA to be accused of violating them, and 
represents additional controls all CAs would need to implement. It's naturally 
in their own self-interest to not only not propose such changes, but oppose 
them when proposed, because it's all risk for benefit that they and their 
Subscribers do not realize.

So it's left to browsers to normatively require things, much as it was with the 
Baseline Requirements. And we know the browsers are a busy lot, in part due to 
the influx of issues and the woeful responses and/or remediations.

So what can be done?

If folks joined the CA/Browser Forum as Interested Parties (and thus executed 
the IP agreement), it's a much quicker path towards writing technical changes 
which browsers might then be able to propose as draft ballots to then place 
into the BRs. Alternatively, folks here could open issues with Mozilla Policy, 
wholly ignoring the CA/Browser due to its many issues, proposing changes to 
Mozilla Policy. Mozilla could eventually propose these as ballots or, more 
pragmatically, CAs who have to follow these rules anyways might be inclined to 
formalize them into the BRs, rather than run the risk of futu

RE: DigiCert OCSP services returns 1 byte

2019-08-27 Thread Jeremy Rowley via dev-security-policy
Our super unpublished RFC.  

Sadly no. We're still investigating, but it looks like it has to do with 
pre-certs and the way the system responds when the actual cert never issued. 
We're working on an incident report. Funny enough (and not in the ha-ha way), 
the system works if the pre-cert was revoked, but not if the pre-cert issued 
and something terrible happened between pre-cert issuance and real cert issuance.

-Original Message-
From: dev-security-policy  On 
Behalf Of Peter Gutmann via dev-security-policy
Sent: Tuesday, August 27, 2019 7:27 PM
To: mozilla-dev-security-pol...@lists.mozilla.org; Curt Spann 
Subject: Re: DigiCert OCSP services returns 1 byte

Curt Spann via dev-security-policy  
writes:

>I created the following bug:
>https://bugzilla.mozilla.org/show_bug.cgi?id=1577014

Maybe it's an implementation of OCSP SuperDietLite, 1 = revoked, 0 = not 
revoked.

In terms of it being unsigned, you can get the same effect by setting 
respStatus = TRYLATER, no signature required.
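
As a hedged illustration of that point (using Python's "cryptography" library; 
not a recommendation for responders), an unsuccessful response such as tryLater 
carries only a status, with nothing to sign and nothing identifying the responder:

from cryptography.hazmat.primitives import serialization
from cryptography.x509 import ocsp

# Build a tryLater response: only the response status is encoded, so the
# response is tiny and completely unsigned.
response = ocsp.OCSPResponseBuilder.build_unsuccessful(
    ocsp.OCSPResponseStatus.TRY_LATER
)
der = response.public_bytes(serialization.Encoding.DER)
print(response.response_status, len(der), "bytes")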

Peter.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Jurisdiction of incorporation validation issue

2019-08-23 Thread Jeremy Rowley via dev-security-policy
>> 1. I believe the BRs and/or underlying technical standards are very
   clear if the ST field should be a full name ("California") or an
   abbreviation ("CA").

This is only true of the EV guidelines, and only for Jurisdiction of 
Incorporation.  There is no formatting requirement for place of business. I 
think requiring a format would help make the data more useful, as you could 
consume it more easily en masse.

>> 2. The fact that a country has subdivisions listed in the general ISO
   standard for country codes doesn't mean that those are always part of
   the jurisdiction of incorporation and/or address.

Right. For the EV Guidelines, what matters is the Jurisdiction of Registration 
or Jurisdiction of Incorporation, as that is what is used to determine the 
Jurisdiction of Incorporation/Registration information in the certificate, 
including what goes into the Registration Number field. 
 
Incorporating Agency is defined as: In the context of a Private Organization, 
the government agency in the Jurisdiction of Incorporation under whose 
authority the legal existence of the entity is registered (e.g., the government 
agency that issues certificates of formation or incorporation). In the context 
of a Government Entity, the entity that enacts law, regulations, or decrees 
establishing the legal existence of Government Entities.

Registration Agency: A Governmental Agency that registers business information 
in connection with an entity's business formation or authorization to conduct 
business under a license, charter or other certification. A Registration Agency 
MAY include, but is not limited to, (i) a State Department of Corporations or a 
Secretary of State; (ii) a licensing agency, such as a State Department of 
Insurance; or (iii) a chartering agency, such as a state office or department 
of financial regulation, banking or finance, or a federal agency such as the 
Office of the Comptroller of the Currency or Office of Thrift Supervision.

This is broad. IMO we should reduce it to the number listed on the certificate 
of formation/incorporation so there is consistency in what the registration 
number means. We should also identify in the certificate the source of the 
registration number, as it provides relying parties with information about the 
actual organization. 

>> 3. The fact that a government data source lists the incorporation
   locality of a company, doesn't mean that this locality detail is
   actually a relevant part of the jurisdictionOfIncorporation.  This
   essentially depends if the rules in that country ensure uniqueness of
   both the company number and company name at a higher jurisdiction
   level (national or state) to the same degree as at the lower level.
For example, in the US the company name "Stripe" is not unique
   nationwide.

Right - this depends on where the formation/registration occurs. That's 
captured in the EV guidelines.

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Jurisdiction of incorporation validation issue

2019-08-23 Thread Jeremy Rowley via dev-security-policy

>>  I'm a little nervous about encouraging wide use of OCR. You may recall at 
>> least one CA was bit by an issue in which their OCR system misidentified 
>> letters - https://bugzilla.mozilla.org/show_bug.cgi?id=1311713

>> That's why I was keen to suggest technical solutions which would verify and 
>> cross-check. My main concern here would be, admittedly, to ensure the 
>> serialNumber itself is reliably entered and detected. Extracting that from a 
>> system, such as you could do via an Extension when looking at, say, the 
>> Handelsregister, is a possible path to reduce both human transcription and 
>> machine-aided transcription issues.

Right – and the OCR there is just to make the initial assessment. The idea is 
to still require validation staff to select the appropriate fields. I like the 
idea of cross-checking. Maybe what we can also do is tie into a non-primary 
source (like D&B or something) to confirm the jurisdiction information. We’ll 
have to evaluate it, but I like the idea of cross-checking against a reliable 
source that has an API, even if we can’t use that source as our primary source 
for that information. I’ll need to investigate, but it should be possible for 
most of the EU and the US. Less so for the Middle East and Asia.

>> Of course, alternative ways of cross-checking and vetting that data may 
>> exist. Alternatively, it may be that the solution would be to only allowlist 
>> the use of validation sources that made their datasets machine readable - 
>> this would/could address a host of issues in terms of quality. I'm 
>> admittedly not sure the extent to which organizations still rely on legacy 
>> paper trails, and I understand they're still unfortunately common in some 
>> jurisdictions, particularly in the Asia/Pacific region, so it may not be as 
>> viable.

Yeah – that means you basically can’t issue in the Middle East and most of 
Asia. Japan would still work. China I’d have to look at. Like I said, there 
could be non-primary sources that could correlate. We’ll spec that out as we 
get closer and see what we can do for cross-correlation. It may be that we can 
find enough sources worldwide that you can always confirm registration with a 
secondary source.

The process right now is that we write a script based on things we can think of 
that might be wrong (abbreviated states, the word “some” in the state field, 
etc.). We usually pull a sampling of a couple thousand certs and review those 
to see if we can find anything wrong that can help identify other patterns. 
We’re in the middle of doing that for the JOI issues. What would be WAY better 
is if we had rule sets for validation information (similar to cablint) that 
checked validation information and how it is stored in our system, and made 
these rule sets run on the complete data set every time we change something in 
validation. Right now, we build quick-and-dirty checks that run one time when 
we have an incident. That’s not great, as it’s a lot of stuff we can’t reuse. 
What we should do is build something (which, fingers crossed, we can open 
source and share) that will be a library of checks on validation information. 
Sure, it’ll take a lot of configuration to work with how other CAs store data, 
but one thing we’ve seen problems with is that changes in one system lead to 
unexpected potential non-compliances in others. Having something that works 
cross-functionally throughout the system helps.
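
As a rough illustration of the kind of reusable check I mean (a hedged sketch 
in Python, not our actual tooling; the record format and rule list are 
assumptions):

# Hypothetical validation-data lint: flag suspect US stateOrProvinceName
# values stored with a validation record. A real rule library would be much
# larger and configurable per CA data model.
US_STATE_NAMES = {"Utah", "California", "Delaware", "New York"}  # truncated for the example
US_STATE_ABBREVS = {"UT", "CA", "DE", "NY"}                      # truncated for the example
PLACEHOLDERS = {"some-state", "state", "n/a", "none", "default"}

def lint_state(record):
    """Return a list of findings for one stored validation record."""
    findings = []
    if record.get("country") != "US":
        return findings
    state = (record.get("state") or "").strip()
    if state.lower() in PLACEHOLDERS:
        findings.append("placeholder state value: %r" % state)
    elif state in US_STATE_ABBREVS:
        findings.append("abbreviated state: %r" % state)
    elif state not in US_STATE_NAMES:
        findings.append("unrecognized state: %r" % state)
    return findings

# Re-run the whole rule library over all records whenever a validation rule
# changes, instead of writing a one-off script per incident.
for rec in ({"country": "US", "state": "Some-State"},
            {"country": "US", "state": "UT"},
            {"country": "US", "state": "Utah"}):
    print(rec["state"], "->", lint_state(rec) or "ok")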


  *   Hugely, and this is exactly the kind of stuff I'm excited to see CAs 
discussing and potentially sharing. I think there are some opportunities for 
incremental improvements here that may be worth looking at, even before that 
final stage.


  *   I would argue a source of (some of) these problems is ambiguity that is 
left to the CA's discretion. For example, is the state abbreviated or not? Is 
the jurisdictional information clear?  Who are the authorized registries for a 
jurisdiction that a CA can use?

I think that’s definitely true. There’s lots of ambiguities in the EV 
guidelines. You and I were talking about Incorporating Agencies, which is not 
really defined as incorporating agencies. Note that CAs can use Incorporating 
Agencies or Registration Agencies to confirm identity, which is very broad, but 
there is no indication in the certificate what that means.

> I can think of some incremental steps here:
> - Disclosing exact detailed procedures via CP/CPS

Maybe an addendum to the CPS. Or RPS. I’ll experiment and post something to see 
what the community thinks.

>  - An emphasis should be on allowlisting. Anything not on the allowlist 
> *should* be an exceptional thing.

This we actually have internally. Or are you saying across the industry? The 
allowlist internally is something pre-vetted by compliance and legal. We’re 
currently (prompted by a certificate problem report) reviewing the entire 
allowlist to see what’s there and removing anything that I don’t like. 
Basically we’re using your suggestion of 

RE: Jurisdiction of incorporation validation issue

2019-08-23 Thread Jeremy Rowley via dev-security-policy


  *   Could you highlight a bit more your proposal here? My understanding is 
that, despite the Handelsregister ("Commercial Register") being available at a 
country level, it's further subdivided into a list of county or region - e.g. 
the Amtsgericht Herne ("Local Court Herne").


  *   It sounds like you're still preparing to allow for manual/human input, 
and simply consistency checking. Is there a reason to not use an 
allowlist-based approach, in which your Registration Agents may only select 
from an approved list of County/Region/Locality managed by your Compliance Team?


  *   That, of course, still allows for human error. Using the excellent 
example of the Handelsregister, perhaps you could describe a bit more the flow 
a Validation Specialist would go through. Are they expected to examine a faxed 
hardcopy? Or do they go to handelsregister.de and 
look up via the registration code?



  *   I ask, because it strikes me that this could be an example where a CA 
could further improve automation. For example, it's not difficult to imagine 
that a locally-developed extension could know the webpages used for validation 
of the information, and extract the salient info, when that information is not 
easily encoded in a URL. For those not familiar, Handelsregister encodes the 
parameters via form POST, a fairly common approach for these company registers, 
and thus makes it difficult to store a canonical resource URL for, say, a 
server-to-server retrieval. This would help you quickly and systematically 
identify the relevant jurisdiction and court, and in a way that doesn't involve 
human error.

I did not know that about Handelsregister. So that’s good info.  Right now, the 
validation staff selects Handelsregister as the source, the system retrieves 
the information, the staff then selects the jurisdiction information and enters 
the registration information. Germany is locked in as the country of 
verification (because Handelsregister is the source), but the staff enters the 
locality/state type information as the system doesn’t know which region is 
correct.

The idea is that everywhere we can, the process should automatically fill in 
jurisdiction information for the validation staff so no typing is required. 
This is being done in three parts:

  1.  Immediate (aka Stop the Hurt): The first step is to put the GeoCode check 
in place to ensure that, no matter what, there will be valid, non-misspelled 
information in the certificate. There will still be user-typed information 
during this phase, since this phase is Aug 18, 2019. The system will work 
exactly as it does now, except that the JOI information will run through the 
GeoCode system to verify that, yes, this information isn’t wrong. If it is 
wrong, the system won’t allow the cert to be approved. At this point, no new 
issues should occur, but I won’t be satisfied, as it’s way too manual – and the 
registration number is still a manual entry. That needs to change.
  2.  Intermediate (aka Neuter the Staff): During this phase we plan to 
eliminate typing of sources. Instead, the sources will be picklists based on 
jurisdiction. This means that if you select Germany and the company type is an 
LLC, you get a list of available sources (a rough sketch of this idea follows 
after this list). Foolproof-ish. There’s still a copy/paste or manual entry of 
the registration number. For those sources that do provide an API, we can tie 
into the API, retrieve the documentation, and populate the information. We want 
to do that as well, provided it doesn’t throw off phase 3. Since the 
intermediate solution is also a stop-gap on the way to the final solution, we 
want it to be a substantial improvement, but one that doesn’t impede our final 
destination.
  3.  The refactor (aka Documents r Us): This is still very much being spec’ed, 
but we’re currently thinking we want to evolve the system into a document 
system. Right now the system works on checklists. For JOI, you enter the JOI 
part, select a document (or two) that you’ll use to verify JOI, and then 
transfer information to the system from the document. The revamp moves it to 
where you have the document and specify on the document which parts of it apply 
to the organization. For example, you specify on the document that a number is 
a registration number or that a name is an org name, highlighting the info. 
With auto-detection of the fields (just based on keywords), you end up with a 
pretty dang automated system. The validation staff is there to review for 
accuracy and highlight things that might be missed. Hence, no typing or 
specifying any information. It’s all directly from the source.
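
To make the phase 2 picklist idea concrete, here is a hedged sketch in Python; 
the names, types, and data are illustrative assumptions rather than the actual 
implementation:

# Hypothetical picklist: validation staff may only choose registration sources
# that compliance/legal have pre-vetted for a given jurisdiction and entity
# type; free-text source entry is no longer possible.
APPROVED_SOURCES = {
    ("DE", "LLC"): ["Handelsregister"],
    ("US-DE", "Corporation"): ["Delaware Division of Corporations"],
}

def sources_for(jurisdiction, entity_type):
    """Return the pre-vetted sources for a jurisdiction, or an empty list."""
    return list(APPROVED_SOURCES.get((jurisdiction, entity_type), []))

print(sources_for("DE", "LLC"))  # ['Handelsregister']
print(sources_for("DE", "AG"))   # [] -> not yet vetted, so validation cannot proceed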

Naming conventions are also not approved yet. Since the engineers watch this 
forum, they’ll probably throw things at me when they see the code names.


  *   I'm curious how well that approach generalizes, and/or what challenges 
may exist. I totally understand that for registries which solely use hard 
copies, this is a far more difficult task than

Re: GlobalSign: SSL Certificates with US country code and invalid State/Prov

2019-08-22 Thread Jeremy Rowley via dev-security-policy
I only know because I was looking at this issue tonight as well, to add an 
update later to the JOI bug I posted.

From: dev-security-policy  on 
behalf of Jeremy Rowley via dev-security-policy 

Sent: Thursday, August 22, 2019 9:07:51 PM
To: Corey Bonnell ; Doug Beattie 
; mozilla-dev-security-pol...@lists.mozilla.org 

Subject: Re: GlobalSign: SSL Certificates with US country code and invalid 
State/Prov

It's a trap. I do wish memes showed up here

Censys shows something like 130 GlobalSign certs with abbreviated JOI info. I 
think we show 16?

From: dev-security-policy  on 
behalf of Corey Bonnell via dev-security-policy 

Sent: Thursday, August 22, 2019 8:57:42 PM
To: Doug Beattie ; 
mozilla-dev-security-pol...@lists.mozilla.org 

Subject: Re: GlobalSign: SSL Certificates with US country code and invalid 
State/Prov

Hi Doug,
Thank you for posting this incident report to the list. I have one 
clarifying question in regard to the correctness criteria for the jurisST field 
when performing the scanning for additional problematic certificates. Is 
GlobalSign allowing state abbreviations in the jurisST field, or only full 
state names?
Thanks,
Corey


From: dev-security-policy  on 
behalf of Doug Beattie via dev-security-policy 

Sent: Thursday, August 22, 2019 11:35
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: GlobalSign: SSL Certificates with US country code and invalid 
State/Prov

Today we opened a bug disclosing misissuance of some certificates that have
invalid State/Prov values:

   
https://bugzilla.mozilla.org/show_bug.cgi?id=1575880



On Tuesday August 20th 2019, GlobalSign was notified by a third party
through the report abuse email address that two certificates were discovered
which contained wrong State information, either in the stateOrProvinceName
field or in the jurisdictionStateOrProvinceName field.



The two certificates in question were:

https://crt.sh/?id=1285639832

https://crt.sh/?id=413247173



GlobalSign started and concluded the investigation within 24 hours. Within
this timeframe GlobalSign reached out to the certificate owners to tell them
that these certificates needed to be replaced, because revocation would need
to happen within 5 days, following the Baseline Requirements. As of the moment
of reporting, these certificates have not yet been replaced, and the offending
certificates have not been revoked. The revocation will happen at the latest
on the 25th of August.



Following this report, GlobalSign initiated an additional internal review
for this problem specifically (unexpected values for US states in values in
the stateOrProvinceName or jurisdictionStateOrProvinceName fields). Expected
values included the full name of the States, or their official abbreviation.
We reviewed all certificates, valid on or after the 21st of August, that
weren't revoked for other unrelated reasons.



To accommodate our customers globally, the stateOrProvinceName and
jurisdictionStateOrProvinceName fields are free-text fields during our ordering
process. The unexpected values were not spotted or not properly corrected.
We have put additional flagging in place to highlight unexpected values in
both of these fields, and are looking at other remedial actions. None of
these certificates were previously flagged for internal audit, which is
completely randomized.



We will update with a full incident report for this and also disclose all
other certificates found based on our research.

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: GlobalSign: SSL Certificates with US country code and invalid State/Prov

2019-08-22 Thread Jeremy Rowley via dev-security-policy
It's a trap. I do wish memes showed up here

Censys shows something like 130 GlobalSign certs with abbreviated JOI info. I 
think we show 16?

From: dev-security-policy  on 
behalf of Corey Bonnell via dev-security-policy 

Sent: Thursday, August 22, 2019 8:57:42 PM
To: Doug Beattie ; 
mozilla-dev-security-pol...@lists.mozilla.org 

Subject: Re: GlobalSign: SSL Certificates with US country code and invalid 
State/Prov

Hi Doug,
Thank you for posting this incident report to the list. I have one 
clarifying question in regard to the correctness criteria for the jurisST field 
when performing the scanning for additional problematic certificates. Is 
GlobalSign allowing state abbreviations in the jurisST field, or only full 
state names?
Thanks,
Corey


From: dev-security-policy  on 
behalf of Doug Beattie via dev-security-policy 

Sent: Thursday, August 22, 2019 11:35
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: GlobalSign: SSL Certificates with US country code and invalid 
State/Prov

Today we opened a bug disclosing misissuance of some certificates that have
invalid State/Prov values:

   
https://bugzilla.mozilla.org/show_bug.cgi?id=1575880



On Tuesday August 20th 2019, GlobalSign was notified by a third party
through the report abuse email address that two certificates were discovered
which contained wrong State information, either in the stateOrProvinceName
field or in the jurisdictionStateOrProvinceName field.



The two certificates in question were:

https://crt.sh/?id=1285639832

https://crt.sh/?id=413247173



GlobalSign started and concluded the investigation within 24 hours. Within
this timeframe GlobalSign reached out to the certificate owners to tell them
that these certificates needed to be replaced, because revocation would need
to happen within 5 days, following the Baseline Requirements. As of the moment
of reporting, these certificates have not yet been replaced, and the offending
certificates have not been revoked. The revocation will happen at the latest
on the 25th of August.



Following this report, GlobalSign initiated an additional internal review
for this problem specifically (unexpected values for US states in values in
the stateOrProvinceName or jurisdictionStateOrProvinceName fields). Expected
values included the full name of the States, or their official abbreviation.
We reviewed all certificates, valid on or after the 21st of August, that
weren't revoked for other unrelated reasons.



To accommodate our customers globally, the stateOrProvinceName and
jurisdictionStateOrProvinceName fields are free-text fields during our ordering
process. The unexpected values were not spotted or not properly corrected.
We have put additional flagging in place to highlight unexpected values in
both of these fields, and are looking at other remedial actions. None of
these certificates were previously flagged for internal audit, which is
completely randomized.



We will update with a full incident report for this and also disclose all
other certificates found based on our research.

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Jurisdiction of incorporation validation issue

2019-08-22 Thread Jeremy Rowley via dev-security-policy
I posted this tonight: https://bugzilla.mozilla.org/show_bug.cgi?id=1576013. 
It's sort of an extension of the "some-state" issue, but with the incorporation 
information of an EV cert.  The tl;dr of the bug is that sometimes the 
information isn't perfect because of user entry issues.

What I was hoping to do is have the system automatically populate the 
jurisdiction information based on the incorporation information. For example, 
if you use the Delaware Secretary of State as the source, then the system 
should auto-populate Delaware as the state and US as the jurisdiction. And it 
does... for some sources.

However, you do have jurisdictions like Germany that consolidate incorporation 
information at www.handelsregister.de, so you can't actually 
tell which area is the incorporation jurisdiction until you do a search. Thus, 
the fields allow some user input. That user input is what hurts. In the 
end, we're implementing an address check that verifies the 
locality/state/country combination.
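
As a hedged sketch of that auto-population idea (the mapping and field names 
are illustrative assumptions in Python, not the actual system):

# Hypothetical source-to-jurisdiction map: when the source itself pins the
# jurisdiction (e.g. the Delaware Secretary of State), the JOI fields are
# derived from the source rather than typed by validation staff.
SOURCE_JOI = {
    "Delaware Secretary of State": {"country": "US", "state": "Delaware"},
    # A consolidated register such as www.handelsregister.de only pins the
    # country; the state/locality still comes from the search result and is
    # then checked against the locality/state/country address check.
    "Handelsregister": {"country": "DE"},
}

def joi_fields(source):
    """Return the jurisdiction fields implied by a validation source."""
    return dict(SOURCE_JOI.get(source, {}))

print(joi_fields("Delaware Secretary of State"))  # fully determined
print(joi_fields("Handelsregister"))              # partial -> needs lookup + address check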

The more interesting part (in my opinion) is how to find and address these 
certs. Right now, every time we have an issue or whenever a guideline changes, 
we write a lot of code, pull a lot of certs, and spend a lot of time reviewing. 
Instead of doing this every time, we're going to develop a tool that will run 
automatically every time we change a validation rule to find everything else 
that will fail the newly updated rules. In essence, building unit tests on the 
data. What I like about this approach is that it ends up building a system that 
lets us see how all the rule changes interplay, since sometimes they may 
intersect in weird ways. It'll also let us more easily measure the impact of 
changes on the system. Anyway, I like the idea. Thought I'd share it here to 
get feedback and suggestions for improvement. Still in the spec phase, but I 
can share more info as it gets developed.

Thanks for listening.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CA handling of contact information when reporting problems

2019-08-22 Thread Jeremy Rowley via dev-security-policy
I'm not sure there should be a strict requirement that you can't provide that 
communication (sometimes there is good reason to get people talking together). 
However, we don't forward this information, as a matter of policy, because we 
like to get the reports. Anything that ends up stifling the flow of information 
is worse for us and hinders getting third-party input on improvements to our 
operations. A Mozilla policy or CAB Forum policy against disclosure seems like 
a bad idea, since there are cases of abuse that could happen given the broad 
range of potential reasons for revocation under the BRs, some of which may 
require corroboration between the reporter and site owner, like "accurate 
information" or "misuse". 

-Original Message-
From: dev-security-policy  On 
Behalf Of Matthew Hardeman via dev-security-policy
Sent: Thursday, August 22, 2019 9:49 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: CA handling of contact information when reporting problems

I'm merely a relying party and subscriber, but it seems quite unreasonable to 
believe that there is or should be any restriction upon a party to a business 
communication (which is what a report / complaint from a third party regarding 
key compromise, etc, is) from further dissemination of said communications.

It seems to me quite a stretch to suggest that the even the GDPR restrains such 
behavior.  Are people seriously suggesting that a third party, with whom you 
have no NDA or agreement in place, may as much as email you and expect you to 
take action based upon said email AND expect that you be enjoined from as 
little as forwarding a copy of that email?  That seems absurd.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Auditor letters and incident reports

2019-08-21 Thread Jeremy Rowley via dev-security-policy
Full disclosure - this was not my idea, but I thought it was a really good one 
and worth bringing up here.

-Original Message-
From: dev-security-policy  On 
Behalf Of Jeremy Rowley via dev-security-policy
Sent: Wednesday, August 21, 2019 10:46 PM
To: mozilla-dev-security-policy 
Subject: Auditor letters and incident reports

Hey all,

An interesting issue came up recently with audits. Because the Mozilla policy 
includes some requirements that diverge from the BRs, the audit criteria don't 
necessarily cover everything Mozilla cares about. Thus, it's possible to have 
an incident that doesn't show up on an audit. It's also possible that the 
auditor determines the incident is not sufficiently important/risky(?) to 
include it in an audit. For example: 
https://bugzilla.mozilla.org/show_bug.cgi?id=1458024. Auditors aren't 
controlled by the CA and operate independently, which means the CA can't 
dictate what goes into the opinion. One solution is to require CAs to list all 
of the incidents that occur during their audit period in the management 
assertion letter. I posted an addendum to the management assertion on that 
thread. Going forward, we'll just include it as part of the main body. I need 
to look into whether I can get our existing audit reissued so the appendix is 
part of the seal as well.

What do you think about just requiring that as part of the Mozilla policy? 
I.e., the management assertion letter must include a list of the incidents 
active/opened during the audit period. Something like that could ensure 
transparency and make sure all incidents are disclosed to the auditor, 
distinguishing the CA's disclosures from the auditor's.

Jeremy
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Auditor letters and incident reports

2019-08-21 Thread Jeremy Rowley via dev-security-policy
Hey all,

An interesting issue came up recently with audits. Because the Mozilla policy 
includes some requirements that diverge from the BRs, the audit criteria don't 
necessarily cover everything Mozilla cares about. Thus, it's possible to have 
an incident that doesn't show up on an audit. It's also possible that the 
auditor determines the incident is not sufficiently important/risky(?) to 
include it in an audit. For example: 
https://bugzilla.mozilla.org/show_bug.cgi?id=1458024. Auditors aren't 
controlled by the CA and operate independently, which means the CA can't 
dictate what goes into the opinion. One solution is to require CAs to list all 
of the incidents that occur during their audit period in the management 
assertion letter. I posted an addendum to the management assertion on that 
thread. Going forward, we'll just include it as part of the main body. I need 
to look into whether I can get our existing audit reissued so the appendix is 
part of the seal as well.

What do you think about just requiring that as part of the Mozilla policy? 
I.e., the management assertion letter must include a list of the incidents 
active/opened during the audit period. Something like that could ensure 
transparency and make sure all incidents are disclosed to the auditor, 
distinguishing the CA's disclosures from the auditor's.

Jeremy
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Change in control event at DigiCert

2019-07-18 Thread Jeremy Rowley via dev-security-policy
Thoma Bravo will no longer be involved once the deal happens.



From: Ryan Sleevi 
Sent: Thursday, July 18, 2019 3:30 PM
To: Jeremy Rowley 
Cc: mozilla-dev-security-policy 

Subject: Re: Change in control event at DigiCert







On Wed, Jul 17, 2019 at 8:09 PM Jeremy Rowley via dev-security-policy 
mailto:dev-security-policy@lists.mozilla.org> > wrote:

Just FYI, there is an upcoming change in control event that will happen at
DigiCert where TA and Clearlake will take equity ownership of the company.
TA is currently a minority shareholder in DigiCert. Details are posted here:
https://www.pehub.com/2019/07/clearlake-and-ta-to-invest-in-digicert/.
Operational personnel remain the same.  We're not sure when this will happen
exactly. Sometime later this year probably.



Let me know if you have any questions.



Jeremy,



Thanks for the disclosure here. To make sure it's clearer:

Thoma Bravo currently has a majority stake, from its 2015 acquisition of 
equity from TA. [1] At that time, TA remained a minority stakeholder.

Based on the announcement [2], TA and Clearlake are becoming equal partners - 
and, from what I understand, Thoma Bravo will no longer be a majority 
stakeholder.



What wasn't clear to me was, from the past tense usage within the 
announcements, whether Thoma Bravo was remaining as a stakeholder. From the 
change in management, it seems there will be a new chairman (Jason Werlin, TA 
Associates) and a new member of the board (Hythem El-Nazer, TA) - it wasn't 
clear whether Thoma Bravo would still be involved.



I realize that discussing terms of such events are often fraught with 
complications in what can be said prior to the deal reaching approval and 
closing, but hopefully that's something that isn't too controversial or 
complicated.



[1] 
https://www.digicert.com/news/2015-08-26-thoma-bravo-majority-stake-in-digicert/

[2] 
https://www.thomabravo.com/media/clearlake-capital-group-and-ta-associates-to-make-a-strategic-growth-investment-in-digicert



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Change in control event at DigiCert

2019-07-17 Thread Jeremy Rowley via dev-security-policy
Just FYI, there is an upcoming change in control event that will happen at
DigiCert where TA and Clearlake will take equity ownership of the company.
TA is currently a minority shareholder in DigiCert. Details are posted here:
https://www.pehub.com/2019/07/clearlake-and-ta-to-invest-in-digicert/.
Operational personnel remain the same.  We're not sure when this will happen
exactly. Sometime later this year probably.

 

Let me know if you have any questions.

 

Jeremy



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Logotype extensions

2019-07-12 Thread Jeremy Rowley via dev-security-policy
The language of the BRs is pretty permissive. Assuming Mozilla didn't update 
its policy, issuance would be permitted if the CA could show that the following 
was false:

b. semantics that, if included, will mislead a Relying Party about the 
certificate information verified by the CA (such as including extendedKeyUsage 
value for a smart card, where the CA is not able to verify that the 
corresponding Private Key is confined to such hardware due to remote issuance).

I think this is the section you are citing as prohibiting issuance, correct? So 
as long as the CA can show that this is not true, then issuance is permitted 
under the current policy.  



-Original Message-
From: dev-security-policy  On 
Behalf Of Ryan Sleevi via dev-security-policy
Sent: Friday, July 12, 2019 3:01 PM
To: Doug Beattie 
Cc: mozilla-dev-security-policy 
; Wayne Thayer 

Subject: Re: Logotype extensions

Alternatively:

There is zero reason these should be included in publicly trusted certs used 
for TLS, and ample harm. It is not necessary nor essential to securing TLS, and 
that should remain the utmost priority.

CAs that wish to issue such certificates can do so from alternate hierarchies. 
There is zero reason to assume the world of PKI is limited to TLS, and 
tremendous harm has been done to the ecosystem, as clearly and obviously 
demonstrated by the failures of CAs to correctly implement and validate the 
information in a certificate, or to revoke them in a timely manner. The fact 
that there were multiple CAs who issued certificates like “Some-State” is a 
damning indictment not just of those CAs, but of an industry that does not see 
such certificates as an existential threat to the CAs’ relevance.

It is trivial to imagine how to issue such certificates from non-TLS 
hierarchies, and to have those still usable by clients. Any CA that can’t think 
of at least three ways to do that has no business in this industry - because it 
is truly basic application of existing technologies.

The BRs do not permit this. Just like they don’t permit a lot of things that 
CAs are unfortunately doing. If the CA portion of the industry wants to improve 
things, such that a single CA could reasonably be believed to be competent 
enough to issue such certificates, let alone reasonably validate them (as this 
has been a global challenge for well over a hundred years), perhaps getting the 
basics right, and formalizing best practices in a way that the whole industry 
can improve, is a better starting point.

I get some folks want to argue this is special, because they want it to be.
This is no different than why it’s problematic to have payment terminals using 
publicly trusted TLS certs, no different than why drone PKI should use a 
different profile than TLS, and why certificate profiles like QWACs or PSD2 
should not be used for TLS. The quicker we internalize that, the better we can 
move to having useful and specialized PKIs, instead of the actively harmful, 
actively dangerous, actively problematic attempts to put it all in a single 
cert, which it was never intended to do.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Logotype extensions

2019-07-05 Thread Jeremy Rowley via dev-security-policy
I think my biggest concern is that there hasn't actually been any proof that 
this would mislead relying parties. You'd actually have to have Mozilla do 
something with it first. The general badness can apply to any extension in a 
cert. No actual risk has been pointed out, other than that a CA may put 
something in a cert and a relying party may view the cert somehow to see the 
information. This is true of every extension, not just logotypes.

From: dev-security-policy  on 
behalf of Wayne Thayer via dev-security-policy 

Sent: Friday, July 5, 2019 5:53:24 PM
To: mozilla-dev-security-policy
Subject: Re: Logotype extensions

Based on this discussion, I propose adding the following statement to the
Mozilla Forbidden Practices wiki page [1]:

** Logotype Extension **
Due to the risk of misleading Relying Parties and the lack of defined
validation standards for information contained in this field, as discussed
here [2], CAs MUST NOT include the RFC 3709 Logotype extension in CA or
Subscriber certificates.

Please respond if you have concerns with this change. As suggested in this
thread, we can discuss removing this restriction if/when a robust
validation process emerges.

- Wayne

[1] https://wiki.mozilla.org/CA/Forbidden_or_Problematic_Practices
[2]
https://groups.google.com/d/msg/mozilla.dev.security.policy/nZoK5akw2c8/ZtF0WZY8AgAJ

On Tue, Jun 18, 2019 at 6:47 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 14/06/2019 18:54, Ryan Sleevi wrote:
> > On Fri, Jun 14, 2019 at 4:12 PM Jakob Bohm via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> >
> >> In such a case, there are two obvious solutions:
> >>
> >> A. Trademark owner (prompted by applicant) provides CA with an official
> >> permission letter stating that Applicant is explicitly licensed to
> >> mark the EV certificate for a specific list of SANs and and Subject
> >> DNs with their specific trademark (This requires the CA to do some
> >> validation of that letter, similar to what is done for domain
> >> letters).
> >
> >
> > This process has been forbidden since August 2018, as it is fundamentally
> > insecure, especially as practiced by a number of CAs. The Legal Opinion
> > Letter (LOL) has also been discussed at length with respect to a number
> of
> > problematic validations that have occurred, due to CAs failing to
> exercise
> > due diligence or their obligations under the NetSec requirements to
> > adequately secure and authenticate the parties involved in validating
> such
> > letters.
> >
>
> Well, that was unfortunate for the case where it is not a straight parent-
> child relationship (e.g. trademark owned by a foundation and licensed to the
> company). But ok, in that case the option is gone, and what follows below is
> moot:
>
> >
> > Letter needs to be reissued for end-of-period cert
> >> renewals, but not for unchanged early reissue where the cause is not
> >> applicant loss of rights to items.  For example, the if the
> Heartbleed
> >> incident had occurred mid-validity, the web server security teams
> >> could get reissued certificates with uncompromised private keys
> >> without repeating this time consuming validation step.
> >
> >
> > EV certificates require explicit authorization by an authorized
> > representative for each and every certificate issued. A key rotation
> event
> > is one to be especially defensive about, as an attacker may be attempting
> > to bypass the validation procedures to rotate to an attacker-supplied
> key.
> > This was an intentional design by CAs, in an attempt to provide some
> value
> > over DV and OV certificates by the presumed difficulty in substituting
> them.
> >
>
> I was considering the trademark as a validated property of the subject
> (similar to e.g. physical address), thus normally subject to the 825 day
> reuse limit.  My wording was intended to require stricter than current
> BR revalidation for renewal within that 825 day limit.
>
>
> Enjoy
>
> Jakob
> --
> Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
> This public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Logotype extensions

2019-06-12 Thread Jeremy Rowley via dev-security-policy
That argument applies to every extension not expressly permitted by the BRs. 
Even if I just put a number in s private extension, a relying party could be 
led to think jts their age.  Can we better define what could constitute as 
potentially misleading extensions? Without that definition,  this is the same 
as saying mo additional extensions are allowed, which is clearly not the intent 
of the existing language.

From: dev-security-policy  on 
behalf of Corey Bonnell via dev-security-policy 

Sent: Wednesday, June 12, 2019 4:52:39 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Logotype extensions

On Tuesday, June 11, 2019 at 7:49:31 AM UTC-4, Jeremy Rowley wrote:
> We wanted to experiment a bit with logotype extensions and trademarks, but
> we heard from the CAB Forum that whether inclusion is allowed is subject a
> bit to interpretation by the browsers.
>
>
>
> From the BRs section 7.1.2.4
>
> "All other fields and extensions MUST be set in accordance with RFC 5280.
> The CA SHALL NOT issue a Certificate that contains a keyUsage flag,
> extendedKeyUsage value, Certificate extension, or other data not specified
> in section 7.1.2.1, 7.1.2.2, or 7.1.2.3 unless the CA is aware of a reason
> for including the data in the Certificate. CAs SHALL NOT issue a Certificate
> with: a. Extensions that do not apply in the context of the public Internet
> (such as an extendedKeyUsage value for a service that is only valid in the
> context of a privately managed network), unless: i. such value falls within
> an OID arc for which the Applicant demonstrates ownership, or ii. the
> Applicant can otherwise demonstrate the right to assert the data in a public
> context; or b. semantics that, if included, will mislead a Relying Party
> about the certificate information verified by the CA (such as including
> extendedKeyUsage value for a smart card, where the CA is not able to verify
> that the corresponding Private Key is confined to such hardware due to
> remote issuance)."
>
>
>
> In this case, the logotype extension would have a trademark included (or
> > link to a trademark). I think this is allowed as:
>
> 1.There is a reason for including the data in the Certificate (to
> identify a verified trademark). Although you may disagree about the reason
> for needing this information, there is a not small number of people
> interested in figuring out how to better use identification information. No
> browser would be required to use the information (of course), but it would
> give organizations another way to manage certificates and identity
> information - one that is better (imo) than org information.
> 2.The cert applies in the context of the public Internet.
> Trademarks/identity information is already included in the BRs.
> > 3.The trademark does not fall within an OID arc for which the
> Applicant demonstrates ownership (no OID included).
> 4.The Applicant can otherwise demonstrate the right to assert the data
> in a public context. If we vet ownership of the trademark with the
> appropriate office, there's no conflict there.
> 5.Semantics that, if included, will not mislead a Relying Party about
> the certificate information verified by the CA (such as including
> extendedKeyUsage value for a smart card, where the CA is not able to verify
> that the corresponding Private Key is confined to such hardware due to
> remote issuance). None of these examples are very close to the proposal.
>
>
>
> What I'm looking for is not a discussion on whether this is a good idea, but
> rather whether it is currently permitted under the BRs per Mozilla's
> interpretation. I'd like to have the "is this a good idea" discussion, but
> in a separate thread to avoid conflating permitted action with ideal
> action.
>
>
>
> Jeremy

Absent policy surrounding the validation and encoding of Logotype data, I 
believe that the use of the Logotype extension to convey identity information 
may be fraught with security and privacy problems. A brief read of RFCs 3709 
and 6170 raises several concerns:

1.  Where are the image and audio assets stored? Will the CA allow for 
Applicants to specify an arbitrary URI for assets, or will the CA host them, or 
will the assets be encoded directly in the certificate using data URIs? The 
first two options have ramifications regarding client tracking, or even 
allowing attackers to suppress the retrieval of logo assets (thus providing for 
a potentially inconsistent UI between clients). This is compounded by the fact 
that the RFCs only allow retrieval via plaintext HTTP and FTP. I’m guessing the 
third option (“data” URI) is a non-starter due to certificate size limitations.
2.  The RFCs do not put a hard requirement o
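
As background for this thread: the extension being discussed is id-pe-logotype 
from RFC 3709, OID 1.3.6.1.5.5.7.1.12. Below is a minimal sketch (assuming the 
Python "cryptography" library; the helper name is illustrative, not from any 
CA's code) of how a relying party or linter could at least detect the 
extension's presence in a certificate. Parsing the logotype body itself would 
still require an RFC 3709 ASN.1 decoder, which the library does not provide.

from cryptography import x509
from cryptography.x509.oid import ObjectIdentifier

ID_PE_LOGOTYPE = ObjectIdentifier("1.3.6.1.5.5.7.1.12")  # RFC 3709 id-pe-logotype

def has_logotype_extension(pem_bytes: bytes) -> bool:
    # Load the certificate and look for the logotype extension by OID.
    cert = x509.load_pem_x509_certificate(pem_bytes)
    try:
        cert.extensions.get_extension_for_oid(ID_PE_LOGOTYPE)
        return True
    except x509.ExtensionNotFound:
        return False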

Logotype extensions

2019-06-11 Thread Jeremy Rowley via dev-security-policy
We wanted to experiment a bit with logotype extensions and trademarks, but
we heard from the CAB Forum that whether inclusion is allowed is subject a
bit to interpretation by the browsers.

 

>From the BRs section 7.1.2.4

"All other fields and extensions MUST be set in accordance with RFC 5280.
The CA SHALL NOT issue a Certificate that contains a keyUsage flag,
extendedKeyUsage value, Certificate extension, or other data not specified
in section 7.1.2.1, 7.1.2.2, or 7.1.2.3 unless the CA is aware of a reason
for including the data in the Certificate. CAs SHALL NOT issue a Certificate
with: a. Extensions that do not apply in the context of the public Internet
(such as an extendedKeyUsage value for a service that is only valid in the
context of a privately managed network), unless: i. such value falls within
an OID arc for which the Applicant demonstrates ownership, or ii. the
Applicant can otherwise demonstrate the right to assert the data in a public
context; or b. semantics that, if included, will mislead a Relying Party
about the certificate information verified by the CA (such as including
extendedKeyUsage value for a smart card, where the CA is not able to verify
that the corresponding Private Key is confined to such hardware due to
remote issuance)."

 

In this case, the logotype extension would have a trademark included (or
link to a trademark). I think this is allowed as:

1.  There is a reason for including the data in the Certificate (to
identify a verified trademark). Although you may disagree about the reason
for needing this information, there is a not small number of people
interested in figuring out how to better use identification information. No
browser would be required to use the information (of course), but it would
give organizations another way to manage certificates and identity
information - one that is better (imo) than org information.
2.  The cert applies in the context of the public Internet.
Trademarks/identity information is already included in the BRs. 
3.  The trademark does not fall within an OID arc for which the
Applicant demonstrates ownership (no OID included).
4.  The Applicant can otherwise demonstrate the right to assert the data
in a public context. If we vet ownership of the trademark with the
appropriate office, there's no conflict there.
5.  Semantics that, if included, will not mislead a Relying Party about
the certificate information verified by the CA (such as including
extendedKeyUsage value for a smart card, where the CA is not able to verify
that the corresponding Private Key is confined to such hardware due to
remote issuance). None of these examples are very close to the proposal.

 

What I'm looking for is not a discussion on whether this is a good idea, but
rather whether it is currently permitted under the BRs per Mozilla's
interpretation. I'd like to have the "is this a good idea" discussion, but
in a separate thread to avoid conflating permitted action with ideal
action.

 

Jeremy

 



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: DigiCert validation issue

2019-06-05 Thread Jeremy Rowley via dev-security-policy
Here's the link: https://bugzilla.mozilla.org/show_bug.cgi?id=1556948


-Original Message-
From: dev-security-policy  On
Behalf Of Jeremy Rowley via dev-security-policy
Sent: Wednesday, June 5, 2019 12:17 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: DigiCert validation issue

I just posted this incident report.  The summary is we had an issue where a
certain path allowed issuance of certs for example.com when only
www.example.com <http://www.example.com>  was verified. This incident
happened previously with Comodo here:
https://groups.google.com/forum/#!msg/mozilla.dev.security.policy/PoMZvss_PR
o/TK8L-lK0EwAJ. At that time we checked our code, but missed a path. 
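
The underlying bug class is a scope check: a validated FQDN can cover itself 
(and, for some validation methods, its subdomains), but never its parent 
domain. A minimal sketch of such a check follows; the names are illustrative, 
not DigiCert's actual implementation.

def is_covered(requested_fqdn: str, validated_fqdn: str) -> bool:
    # A requested name is covered only if it equals the validated name
    # or is a subdomain of it -- never the other way around.
    requested = requested_fqdn.lower().rstrip(".")
    validated = validated_fqdn.lower().rstrip(".")
    return requested == validated or requested.endswith("." + validated)

assert is_covered("www.example.com", "www.example.com")
assert not is_covered("example.com", "www.example.com")      # the reported bug
assert is_covered("app.www.example.com", "www.example.com")  # subdomain is fine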



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


DigiCert validation issue

2019-06-04 Thread Jeremy Rowley via dev-security-policy
I just posted this incident report.  The summary is we had an issue where a
certain path allowed issuance of certs for example.com when only
www.example.com   was verified. This incident
happened previously with Comodo here:
https://groups.google.com/forum/#!msg/mozilla.dev.security.policy/PoMZvss_PR
o/TK8L-lK0EwAJ. At that time we checked our code, but missed a path. 



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CAA record checking issue

2019-05-10 Thread Jeremy Rowley via dev-security-policy
The difference is we actually have the data at time of issuance. It just wasn’t 
correctly relied on for these specific certs. I think this means there is an 
open question on whether the issuance even was a mis-issuance since the CAA 
information was collected…even if it wasn’t perfect. 

 

This is why we’re revising the approach to say “Were the certs actually 
mis-issued? If yes, revoke. If no, then don’t revoke.”

 

I was looking at it like a law.  You may think you trespassed by walking on 
some grass. But if permission was granted at the time to walk on the grass, 
then you never actually violated a rule (even if you didn’t know about the 
permission). If permission was granted later, you still broke that law and are 
accountable, even if no penalty is applied.  Here, we didn’t appropriately 
store the information, but the data may have been stored and checked in a 
process. More succinctly, the difference is that a broken process may result 
in compliantly issued certificates, which is different from broken certs that 
are then remediated.  If I can prove compliance at the time the cert was 
issued, then the certs shouldn’t be revoked. 

 

Does that make sense? I can certainly revoke all 1100 if that’s the preferred 
approach, but I figure with a few days’ time I can better answer the question 
of what resulted from a break in a normally compliant process. 

 

Oh, one other factor is that the system wasn’t exploitable. The break was 
between two internal processes talking to each other so the errors couldn’t 
result in certificates issued to a bad actor.  It was also a very low volume 
compared to normal issuance. Neither of these is a good reason or excuse. Instead, 
they are the reason we thought we should perhaps not revoke all the certs until 
we better understand the compliance implications. 

 

From: Ryan Sleevi  
Sent: Friday, May 10, 2019 2:16 PM
To: Jeremy Rowley 
Cc: r...@sleevi.com; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: CAA record checking issue

 

 

 

On Fri, May 10, 2019 at 3:55 PM Jeremy Rowley <jeremy.row...@digicert.com> wrote:

The analysis was basically that all the verification documents are still good, 
which means if we issued the cert today, the issuance would pass without 
further checks (since the data itself is good for 825 days). Because of this, 
customers with domains that didn’t prohibit Digicert in their CAA record 
(anywhere in the chain) could simply reissue the certificate without a problem. 
We could require this of all customers. For the 16, issuance would fail if the 
CAA check was performed today. Therefore, we want to revoke those. 

 

The one reason I wanted more time to respond is that we think we may have most 
CAA records in our Splunk data for the time of issuance. Our new plan is that 
we will revoke all certs unless we can confirm the CAA record was permissive at 
the time of issuance. I don’t know the number of certs that we will revoke yet. 
I’ll post an update when we compare the Splunk data to the issuance data. 

 

Thanks for answering. I was hoping you had a more thorough analysis ;) I do 
have other questions about the implementation details, but I'll add those to 
the bug, so we can focus this discussion on the immediate remediation steps.

 

I guess my reservation with such an approach (and this is more a metapoint) is 
consider issuing an EV certificate without having the supporting documentation 
and/or without validating the documentation. You later come back to the 
documents, validate them, and find out you got lucky - the information was 
actually correct, even though the controls failed and the process wasn't 
followed. Do you revoke the certificates, on the basis the process failed, or 
do you not revoke them, because they were eventually consistent?

 

This might sound like a hypothetical, but it's a question this industry has 
faced in the past [1][2], and browsers have reached different conclusions than 
CAs. It's not immediately clear to me how the proposed response here differs 
from those past responses, and may highlight some of the difference in 
philosophies here. An analysis that considered these past events, and how they 
were received by the community, and how there may be different facts here that 
lead to different conclusions, would be useful in both validating and 
justifying the proposed course of action.

 

The real problem was the CA would kick off a request to the CAA checker. If the 
CA encountered an error, the request would time out. The CAA checker may still 
have checked the CAA records appropriately, but the CA never pulled the 
information to verify issuance authorization. So it’s a mis-issuance unless we 
can pull the data and prove it wasn’t. Combing through the archive data will 
take a while.

 

[1] 
https://wiki.mozilla.org/CA:Symantec_Issues#Issue_C:_Unauthorized_EV_Issuance_by_RAs_.28January_2014_-_February_2015.29
 

[2]

RE: CAA record checking issue

2019-05-10 Thread Jeremy Rowley via dev-security-policy
The analysis was basically that all the verification documents are still good, 
which means if we issued the cert today, the issuance would pass without 
further checks (since the data itself is good for 825 days). Because of this, 
customers with domains that didn’t prohibit Digicert in their CAA record 
(anywhere in the chain) could simply reissue the certificate without a problem. 
We could require this of all customers. For the 16, issuance would fail if the 
CAA check was performed today. Therefore, we want to revoke those.

 

The one reason I wanted more time to respond is that we think we may have most 
CAA records in our Splunk data for the time of issuance. Our new plan is that 
we will revoke all certs unless we can confirm the CAA record was permissive at 
the time of issuance. I don’t know the number of certs that we will revoke yet. 
I’ll post an update when we compare the Splunk data to the issuance data.  

 

The real problem was the CA would kick off a request to the CAA checker. If the 
CA encountered an error, the request would time out. The CAA checker may still 
have checked the CAA records appropriately, but the CA never pulled the 
information to verify issuance authorization. So it’s a mis-issuance unless we 
can pull the data and prove it wasn’t. Combing through the archive data will 
take a while.
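
For reference, the "anywhere in the chain" check is the RFC 8659 tree climb: 
use the CAA record set of the FQDN itself or, failing that, of its closest 
ancestor that has one, and then see whether an "issue" property names the CA. 
A simplified sketch of that logic is below; it ignores issuewild, critical 
flags, property parameters, and CNAME/DNSSEC handling, and lookup_caa is a 
hypothetical callback, not any CA's real interface.

def caa_permits(fqdn, issuer_domain, lookup_caa):
    # lookup_caa(name) is assumed to return a list of (tag, value) pairs,
    # e.g. [("issue", "digicert.com")], or [] if the name has no CAA records.
    labels = fqdn.rstrip(".").split(".")
    for i in range(len(labels)):
        records = lookup_caa(".".join(labels[i:]))
        if records:  # Relevant Record Set found; stop climbing toward the root
            issue = [value for tag, value in records if tag == "issue"]
            if not issue:
                return True  # no "issue" property restricts non-wildcard issuance
            # "issue" values look like "digicert.com" or "digicert.com; policy=ev"
            return any(v.split(";")[0].strip() == issuer_domain for v in issue)
    return True  # no CAA records anywhere in the chain: issuance permitted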

 

Jeremy

 

From: Ryan Sleevi  
Sent: Friday, May 10, 2019 11:54 AM
To: Jeremy Rowley 
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: CAA record checking issue

 

 

 

On Thu, May 9, 2019 at 10:05 PM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

We checked all the applicable CAA records and found 16 where the CAA record
would not permit us to issue if we were issuing a new cert today. What we
are proposing is to revoke these certificates and reissue them (if they pass
all the proper checks). The rest would pass if we issued today so we were
going to leave these where they are while disclosing them to the Mozilla
community. 

 

Could you share the risk analysis that helped you reach this latter conclusion?

 

That is, CAA is a time of check thing, and, as you note, you can't be sure they 
were appropriately authorized at the time of issuance. Thus, even if the site 
operator is a DigiCert customer now, or might have disabled CAA now, there's no 
ability to determine whether or not they previously approved it - or even 
whether the holder of that old certificate is still the authorized domain 
representative now (e.g. in the event of a domain transfer or sale)

 

In general, the default should be to revoke all. That said, if there's a 
thorough analysis that has considered this, and other scenarios, and that, on 
the whole, has led DigiCert to believe the current path is more appropriate, 
it'd be great if you could share that analysis. I think Tim's questions are 
useful as well, in understanding the reasoning.

 

Basically, without stating a position on whether your analysis is right or 
wrong, I'm hoping you can show your work in detail, and all the factors you 
considered. That sort of analysis is what helps the community build confidence 
that the chosen path, despite being a violation of the BRs, is a reflection of 
a CA thoughtfully considering all perspectives.



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: CAA record checking issue

2019-05-10 Thread Jeremy Rowley via dev-security-policy
Hey Tim, 

The issue was a call between the CA and CAA checker. The CAA checker would 
check the DNS and verify the DNSSEC chain. However, when retrieving the 
information from the CAA checker, the CA had the error, which means the CAA 
check was not evaluated correctly. Under normal operation the CAA check does 
the DNSSEC, CAA, and other DNS queries. Here it wasn't a DNS failure - it was 
a communication failure between the CA and CAA checker. 

I guess you could say there were two failures in this case. First that the CAA 
check timed out internally and second that the DNSSEC check never happened. The 
mis-issuance still amounts to the same thing. 

Normally, even if we get a DNS failure, we can usually check to see if the zone 
is signed (at least at the root zone). If there is a signed root zone, then we 
treat the entire zone as signed (meaning we fail on error).  

Jeremy

-Original Message-
From: Tim Shirley  
Sent: Friday, May 10, 2019 7:30 AM
To: Jeremy Rowley ; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: CAA record checking issue

Jeremy,

Thanks for sharing this.  After reading your description, I'm curious how your 
system was previously (or is now) satisfying the third criteria needed to issue 
in the face of a record lookup failure: confirming that the domain's zone does 
not have a DNSSEC validation chain to the ICANN root.  Wouldn't any issuance 
require at least one successful DNS query in order to confirm the lack of a DS 
record somewhere between the TLD and the domain you're checking so you know the 
domain doesn't have a valid DNSSEC chain?  If the CAA checking service was 
down, wouldn't those have all timed out?  Or are those checks being done from a 
different system that wasn't down?

Regards,
Tim

On 5/9/19, 10:05 PM, "dev-security-policy on behalf of Jeremy Rowley via 
dev-security-policy"  wrote:

FYI, we posted this today:

 


https://bugzilla.mozilla.org/show_bug.cgi?id=1550645

 

Basically we discovered an issue with our CAA record checking system. If the
system timed out, we would treat the failure as a DNS failure instead of an
internal failure. Per the BRs Section 3.2.2:

"CAs are permitted to treat a record lookup failure as permission to issue
if: 

. the failure is outside the CA's infrastructure; 

. the lookup has been retried at least once; and 

. the domain's zone does not have a DNSSEC validation chain to the ICANN
root"

 

The failure was not outside our infrastructure so issuance was improper. 

 

We checked all the applicable CAA records and found 16 where the CAA record
would not permit us to issue if we were issuing a new cert today. What we
are proposing is to revoke these certificates and reissue them (if they pass
all the proper checks). The rest would pass if we issued today so we were
going to leave these where they are while disclosing them to the Mozilla
community. 

 

Other suggestions are welcome. 

 

The issue was put into the code back when CAA record checking became
mandatory (Sept 2017).  We generally have a peer review of our code so that
at least one other developer has looked at the system before release. In
this case, neither PM nor a second reviewer was involved in the development.
We've since implemented more stringent development processes, including
ensuring a PM reviews and brings questions about projects to the compliance
team. 

 

Anyway, let me know what questions, comments, etc you have.

 

Thanks!

Jeremy





smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CAA record checking issue

2019-05-10 Thread Jeremy Rowley via dev-security-policy
Okay. I'm working on something and will post it soon.

From: Ryan Sleevi 
Sent: Friday, May 10, 2019 11:54:14 AM
To: Jeremy Rowley
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: CAA record checking issue



On Thu, May 9, 2019 at 10:05 PM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
We checked all the applicable CAA records and found 16 where the CAA record
would not permit us to issue if we were issuing a new cert today. What we
are proposing is to revoke these certificates and reissue them (if they pass
all the proper checks). The rest would pass if we issued today so we were
going to leave these where they are while disclosing them to the Mozilla
community.

Could you share the risk analysis that helped you reach this latter conclusion?

That is, CAA is a time of check thing, and, as you note, you can't be sure they 
were appropriately authorized at the time of issuance. Thus, even if the site 
operator is a DigiCert customer now, or might have disabled CAA now, there's no 
ability to determine whether or not they previously approved it - or even 
whether the holder of that old certificate is still the authorized domain 
representative now (e.g. in the event of a domain transfer or sale)

In general, the default should be to revoke all. That said, if there's a 
thorough analysis that has considered this, and other scenarios, and that, on 
the whole, has led DigiCert to believe the current path is more appropriate, 
it'd be great if you could share that analysis. I think Tim's questions are 
useful as well, in understanding the reasoning.

Basically, without stating a position on whether your analysis is right or 
wrong, I'm hoping you can show your work in detail, and all the factors you 
considered. That sort of analysis is what helps the community build confidence 
that the chosen path, despite being a violation of the BRs, is a reflection of 
a CA thoughtfully considering all perspectives.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


CAA record checking issue

2019-05-09 Thread Jeremy Rowley via dev-security-policy
FYI, we posted this today:

 

https://bugzilla.mozilla.org/show_bug.cgi?id=1550645

 

Basically we discovered an issue with our CAA record checking system. If the
system timed out, we would treat the failure as a DNS failure instead of an
internal failure. Per the BRs Section 3.2.2:

"CAs are permitted to treat a record lookup failure as permission to issue
if: 

. the failure is outside the CA's infrastructure; 

. the lookup has been retried at least once; and 

. the domain's zone does not have a DNSSEC validation chain to the ICANN
root"

 

The failure was not outside our infrastructure so issuance was improper. 
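
Below is a sketch of the decision the quoted BR text describes, and of the 
misclassification at issue: an internal timeout reaching the CAA service was 
treated as an external DNS lookup failure, so the "permission to issue" branch 
was taken when it should not have been. Field and function names are 
illustrative only, not DigiCert's actual code.

from dataclasses import dataclass

@dataclass
class CaaLookup:
    succeeded: bool
    failure_internal: bool            # e.g. a timeout talking to our own CAA checker
    retried: bool
    dnssec_chain_to_icann_root: bool
    permits_issuer: bool              # result of evaluating the CAA record set

def may_issue(lookup: CaaLookup) -> bool:
    if lookup.succeeded:
        return lookup.permits_issuer
    # Record lookup failure: all three BR 3.2.2 conditions must hold.
    return (not lookup.failure_internal            # failure outside the CA's infrastructure
            and lookup.retried                     # lookup retried at least once
            and not lookup.dnssec_chain_to_icann_root)

# The bug: an internal timeout was recorded with failure_internal=False,
# so may_issue() returned True when it should have returned False.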

 

We checked all the applicable CAA records and found 16 where the CAA record
would not permit us to issue if we were issuing a new cert today. What we
are proposing is to revoke these certificates and reissue them (if they pass
all the proper checks). The rest would pass if we issued today so we were
going to leave these where they are while disclosing them to the Mozilla
community. 

 

Other suggestions are welcome. 

 

The issue was put into the code back when CAA record checking became
mandatory (Sept 2017).  We generally have a peer review of our code so that
at least one other developer has looked at the system before release. In
this case, neither a PM nor a second reviewer was involved in the development.
We've since implemented more stringent development processes, including
ensuring a PM reviews and brings questions about projects to the compliance
team. 

 

Anyway, let me know what questions, comments, etc you have.

 

Thanks!

Jeremy



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Reported Digicert key compromise but not revoked

2019-05-09 Thread Jeremy Rowley via dev-security-policy
No argument from me there. We generally act on them no matter what.
Typically any email sent to supp...@digicert.com requesting revocation is
forwarded to rev...@digicert.com. That's the standard procedure. This one
was missed unfortunately.

-Original Message-
From: dev-security-policy  On
Behalf Of Daniel Marschall via dev-security-policy
Sent: Thursday, May 9, 2019 4:16 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: RE: Reported Digicert key compromise but not revoked

I personally do think that it matters to this forum. A CA - no matter what
kind of certificates it issues - must take revocation requests seriously and
act immediately, even if the email is sent to the wrong address. If an
employee at the help desk is unable to forward revocation requests, or needs
several weeks to reply, then there is something not correct with the CA, no
matter if the revocation request is related to a web certificate or code
signing certificate. That's my opinion on this case.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Reported Digicert key compromise but not revoked

2019-05-09 Thread Jeremy Rowley via dev-security-policy
Thanks Wayne. We’ll update our CPS to keep it clear.

 

From: Wayne Thayer  
Sent: Thursday, May 9, 2019 5:04 PM
To: Andrew Ayer 
Cc: Jeremy Rowley ; Jeremy Rowley via 
dev-security-policy 
Subject: Re: Reported Digicert key compromise but not revoked

 

DigiCert CPS section 1.5.2 [1] could also more clearly state that 
rev...@digicert.com is the correct address for 'reporting suspected Private 
Key Compromise, Certificate misuse, or other types of fraud, compromise, 
misuse, inappropriate conduct, or any other matter related to Certificates.' 
Since both email addresses are listed in that section, it's not difficult to 
mistake supp...@digicert.com as the problem reporting mechanism, even though 
the last sentence in 1.5.2.1 implies that rev...@digicert.com is for problem 
reporting. 

 

- Wayne

 

[1] https://www.digicert.com/wp-content/uploads/2019/04/DigiCert_CPS_v418.pdf

 

On Thu, May 9, 2019 at 3:46 PM Andrew Ayer via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

On Thu, 9 May 2019 14:47:05 +
Jeremy Rowley via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:

> Hi Han - the proper alias is rev...@digicert.com. The support alias
> will sometimes handle these, but not always.

Is that also true of SSL certificates?  supp...@digicert.com is listed
first at
https://ccadb-public.secure.force.com/mozilla/ProblemReportingMechanismsReport

That should be fixed if supp...@digicert.com is not the right place to
report problems with SSL certificates.

Regards,
Andrew
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Reported Digicert key compromise but not revoked

2019-05-09 Thread Jeremy Rowley via dev-security-policy
Thanks Andrew. Yes - it should be rev...@digicert.com

-Original Message-
From: Andrew Ayer  
Sent: Thursday, May 9, 2019 4:46 PM
To: Jeremy Rowley 
Cc: Jeremy Rowley via dev-security-policy

Subject: Re: Reported Digicert key compromise but not revoked

On Thu, 9 May 2019 14:47:05 +
Jeremy Rowley via dev-security-policy
 wrote:

> Hi Han - the proper alias is rev...@digicert.com. The support alias 
> will sometimes handle these, but not always.

Is that also true of SSL certificates?  supp...@digicert.com is listed first
at
https://ccadb-public.secure.force.com/mozilla/ProblemReportingMechanismsRepo
rt

That should be fixed if supp...@digicert.com is not the right place to
report problems with SSL certificates.

Regards,
Andrew



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Reported Digicert key compromise but not revoked

2019-05-09 Thread Jeremy Rowley via dev-security-policy
Hi Han - the proper alias is rev...@digicert.com. The support alias will
sometimes handle these, but not always. We picked up the request from your
post here and are working on it.

Of course, this is out of scope of the Mozilla policy since it's code signing
only. 

-Original Message-
From: dev-security-policy  On
Behalf Of Ryan Sleevi via dev-security-policy
Sent: Thursday, May 9, 2019 8:37 AM
To: Han Yuwei 
Cc: mozilla-dev-security-policy

Subject: Re: Reported Digicert key compromise but not revoked

On Thu, May 9, 2019 at 8:59 AM Han Yuwei via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hi m.d.s.p
> I have reported a key compromise incident to digicert by contacting 
> support(at)digicert.com at Apr.13, 2019 and get replied at same day. 
> But it seems like this certificate is still valid.
> This certificate is a code signing certificate and known for signing 
> malware. So I am here to report this to Digicert. If private key is 
> needed I will attach it.
>
> Certificate Info.
> CN:Beijing Founder Apabi Technology Limited
> SN: 06B7AA2C37C0876CCB0378D895D71041
> SHA1: 8564928AA4FBC4BBECF65B402503B2BE3DC60D4D
>

Typically, we have not dealt with issues related to code signing in this
forum - particularly the evaluation and enforcement of policies. For
example, the information provided doesn't allow us to distinguish whether
there is even a remote chance of overlap with the activity here (e.g. with
respect to audits and the CP/CPS)

Have you considered reporting this to Microsoft, as I presume that's the
platform concern?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Policy 2.7 Proposal: Clarify Revocation Requirements for S/MIME Certificates

2019-05-06 Thread Jeremy Rowley via dev-security-policy
I think it should be added by Mozilla. The CAB Forum is a long way from having 
an s/MIME policy in place (there's not even a working group yet). Having no 
policy for cert revocation related to S/MIME ignores that S/MIME certs are part of 
the Mozilla ecosystem and sequesters them from the rest of the policy.  Without 
a revocation policy, there's no requirement to revoke a mis-issued certificate 
that's non-TLS.

-Original Message-
From: dev-security-policy  On 
Behalf Of Wayne Thayer via dev-security-policy
Sent: Friday, May 3, 2019 12:44 PM
To: mozilla-dev-security-policy 
Subject: Re: Policy 2.7 Proposal: Clarify Revocation Requirements for S/MIME 
Certificates

Kathleen and Pedro,

Thank you for raising these legitimate concerns. I continue to believe that a 
literal reading of the current requirement is that it already does apply to 
S/MIME certificates, and the discussion I mentioned supports that 
interpretation.

I propose two new options to solve this problem:
* Remove S/MIME certificates from the scope of section 6 and wait for the CAB 
Forum to publish baseline requirements for S/MIME. I suspect that is a few 
years away given that the working group is still in the process of being 
chartered. However, this option is consistent with some people's interpretation 
of the existing requirements.
* Add a subsection on revocation specific to S/MIME certificates and populate 
it with a version of the BR requirements tailored to S/MIME. We'd probably need 
to include requirements for S/MIME Intermediates as well as leaf certificates.

A third option would be to specify the parts of the BR revocation requirements 
that don't apply to S/MIME certificates, but after reviewing section 4.9.1, I 
think this would be confusing, at best.

I would appreciate everyone's input on the best way to address this issue.

- Wayne


On Fri, May 3, 2019 at 5:12 AM Pedro Fuentes via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:

> Hello,
> my main concern about applying this would be that this would lead to 
> forbid the option to suspend a personal certificate.
>
> On a side note about suspension... I was not active in the forums when 
> this was discussed and adopted and I'm sure there was a clear benefit 
> in disallowing suspensions, but it's kind of a hassle already because 
> of the application of this rule to the whole hierarchy. We'd like, for 
> example, to have the capability to suspend a subordinate CA that is 
> under investigation or that is pending an audit, but right now we 
> can't do it... extending these rules to personal certificates is not 
> something I'm personally too enthusiastic about.
>
> Best,
> Pedro
>
> On Thursday, May 2, 2019 at 17:32:43 (UTC+2), Kathleen Wilson wrote:
> > Just want to make it very clear to everyone, that the proposal, to 
> > add the following text to section 6 of Mozilla's Root Store Policy 
> > would mean that certs constrained to id-kp-emailProtection 
> > (end-entity and intermediate), i.e. S/MIME certs, would be subject 
> > to the same BR rules and revocation timelines as TLS/SSL certs.
> >
> > > This requirement applies to certificates that are not otherwise
> required
> > >> to comply with the BRs.
> >
> > For example, Section 4.9.1.1 of the BRs says:
> > ""
> > MUST revoke a Certificate within 5 days if one or more of the 
> > following
> > occurs: ...
> >
> > 1. The Certificate no longer complies with the requirements of 
> > Sections
> > 6.1.5 and 6.1.6;
> >  ...
> > 7. The CA is made aware that the Certificate was not issued in 
> > accordance with these Requirements ""
> >
> > I interpret "these Requirements" to mean the BRs. Therefore, my 
> > interpretation of the proposed additional text is that certs that 
> > are constrained to S/MIME would also have to be issued in full 
> > accordance with the BRs and would have to be revoked within the 
> > timeline as specified in the BRs when found to be not in full 
> > compliance with the BR issuance rules.
> >
> > Section 1.1 of Mozilla's root store policy limits the scope of the 
> > policy such that the proposed additional text would only 
> > specifically add the rules to S/MIME certs. Certs with no EKU 
> > extension or anyExtendedKeyUsage are considered technically capable 
> > of issuing TLS certs, so already subject to the rules of the BRs.
> >
> > Therefore, my concern is that the proposed additional text would 
> > mean that all of the BR issuance rules and revocation rules would 
> > also apply to S/MIME certs. I do not think that S/MIME certs have 
> > been taken into account in the BRs, so I do not think we should 
> > impose all the BR issuance and revocation rules on S/MIME certs.
> >
> > Kathleen
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-

RE: Arabtec Holding public key? [Weird Digicert issued cert]

2019-04-15 Thread Jeremy Rowley via dev-security-policy
A possibility. They could have pasted something in the root chain. Note that 
the required handshake would have caught that if it'd been implemented. Overall 
it doesn't matter too much if it was malicious or innocent; the cert holder can't 
do anything without the private key.

-Original Message-
From: dev-security-policy  On 
Behalf Of Jakob Bohm via dev-security-policy
Sent: Monday, April 15, 2019 4:58 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Arabtec Holding public key? [Weird Digicert issued cert]

Thanks for the explanation.

Is it possible that a significant percentage of less-skilled users simply 
pasted in the wrong certificates by mistake, then wondered why their new 
certificates never worked?

Pasting in the wrong certificate from an installed certificate chain or 
semi-related support page doesn't seem an unlikely user error with that design.

On 12/04/2019 18:56, Jeremy Rowley wrote:
> I don't mind filling in details.
> 
> We have a system that permits creation of certificates without a CSR that 
> works by extracting the key from an existing cert, validating the domain/org 
> information, and creating a new certificate based on the contents of the old 
> certificate. The system was supposed to do a handshake with a server hosting 
> the existing certificate as a form of checking control over the private key, 
> but that was never implemented, slated for a phase 2 that never came. We've 
> since disabled that system, although we didn't file any incident report (for 
> the reasons discussed so far).
> 
> -Original Message-
> From: dev-security-policy 
>  On Behalf Of Wayne 
> Thayer via dev-security-policy
> Sent: Friday, April 12, 2019 10:39 AM
> To: Jakob Bohm 
> Cc: mozilla-dev-security-policy 
> 
> Subject: Re: Arabtec Holding public key? [Weird Digicert issued cert]
> 
> It's not clear that there is anything for DigiCert to respond to. Are we 
> asserting that the existence of this Arabtec certificate is proof that 
> DigiCert violated section 3.2.1 of their CPS?
> 
> - Wayne
> 
> On Thu, Apr 11, 2019 at 6:57 PM Jakob Bohm via dev-security-policy < 
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> On 11/04/2019 04:47, Santhan Raj wrote:
>>> On Wednesday, April 10, 2019 at 5:53:45 PM UTC-7, Corey Bonnell wrote:
>>>> On Wednesday, April 10, 2019 at 7:41:33 PM UTC-4, Nick Lamb wrote:
>>>>> (Resending after I typo'd the ML address)
>>>>>
>>>>> At the risk of further embarrassing myself in the same week, while 
>>>>> working further on mimicking Firefox trust decisions I found this 
>>>>> pre-certificate for Arabtec Holding PJSC:
>>>>>
>>>>> https://crt.sh/?id=926433948
>>>>>
>>>>> Now there's nothing especially strange about this certificate, 
>>>>> except that its RSA public key is shared with several other 
>>>>> certificates
>>>>>
>>>>>
>> https://crt.sh/?spkisha256=8bb593a93be1d0e8a822bb887c547890c3e706aad2
>> d
>> ab76254f97fb36b82fc26
>>>>>
>>>>> ... such as the DigiCert Global Root G2:
>>>>>
>>>>> https://crt.sh/?caid=5885
>>>>>
>>>>>
>>>>> I would like to understand what happened here. Maybe I have once 
>>>>> again made a terrible mistake, but if not surely this means either 
>>>>> that the Issuing authority was fooled into issuing for a key the 
>>>>> subscriber doesn't actually have or worse, this Arabtec Holding 
>>>>> outfit has the private keys for DigiCert's Global Root G2
>>>>>
>>>>> Nick.
>>>>
>>>> AFAIK there's no requirement in the BRs or Mozilla Root Policy for 
>>>> CAs
>> to actually verify that the Applicant actually is in possession of 
>> the corresponding private key for public keys included in CSRs (i.e., 
>> check the signature on the CSR), so the most likely explanation is 
>> that the CA in question did not check the signature on the 
>> Applicant-submitted CSR and summarily embedded the supplied public 
>> key in the certificate (assuming Digicert's CA infrastructure wasn't 
>> compromised, but I think that's highly unlikely).
>>>>
>>>> A very similar situation was brought up on the list before, but 
>>>> with
>> WoSign as the issuing CA:
>> https://groups.google.com/d/msg/mozilla.dev.security.policy/zECd9J3KB
>> W
>> 8/OlK44lmGCAAJ
>&

RE: Arabtec Holding public key? [Weird Digicert issued cert]

2019-04-12 Thread Jeremy Rowley via dev-security-policy
Unfortunately yes.  We plan on updating our CPS and bringing it up with our 
auditors, who are on-site next week, during this audit.

 

From: Wayne Thayer  
Sent: Friday, April 12, 2019 11:30 AM
To: Jeremy Rowley 
Cc: Jakob Bohm ; mozilla-dev-security-policy 

Subject: Re: Arabtec Holding public key? [Weird Digicert issued cert]

 

Jeremy: do you consider the fact that DigiCert signed certs without proof of 
private key possession to have been a violation if its CPS?

 

On Fri, Apr 12, 2019 at 10:04 AM Jeremy Rowley <jeremy.row...@digicert.com> wrote:

The net result was that some people created private certs with our root cert public 
key. We signed new certs using that public key after verifying domain control. 
We saw the process happen a few times but didn't worry about it too much as the 
requesters didn't control the private key. We ended up shutting off the no-CSR 
path because we figured the issuance of these certs created a potential PR 
concern, even if there isn't a real security risk.

-Original Message-
From: dev-security-policy <dev-security-policy-boun...@lists.mozilla.org> On 
Behalf Of Jeremy Rowley via dev-security-policy
Sent: Friday, April 12, 2019 10:56 AM
To: Wayne Thayer <wtha...@mozilla.com>; Jakob Bohm <jb-mozi...@wisemo.com>
Cc: mozilla-dev-security-policy <mozilla-dev-security-pol...@lists.mozilla.org>
Subject: RE: Arabtec Holding public key? [Weird Digicert issued cert]

I don't mind filling in details.

We have a system that permits creation of certificates without a CSR that works 
by extracting the key from an existing cert, validating the domain/org 
information, and creating a new certificate based on the contents of the old 
certificate. The system was supposed to do a handshake with a server hosting 
the existing certificate as a form of checking control over the private key, 
but that was never implemented, slated for a phase 2 that never came. We've 
since disabled that system, although we didn't file any incident report (for 
the reasons discussed so far).  

-Original Message-
From: dev-security-policy <dev-security-policy-boun...@lists.mozilla.org> On 
Behalf Of Wayne Thayer via dev-security-policy
Sent: Friday, April 12, 2019 10:39 AM
To: Jakob Bohm <jb-mozi...@wisemo.com>
Cc: mozilla-dev-security-policy <mozilla-dev-security-pol...@lists.mozilla.org>
Subject: Re: Arabtec Holding public key? [Weird Digicert issued cert]

It's not clear that there is anything for DigiCert to respond to. Are we 
asserting that the existence of this Arabtec certificate is proof that DigiCert 
violated section 3.2.1 of their CPS?

- Wayne

On Thu, Apr 11, 2019 at 6:57 PM Jakob Bohm via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:

> On 11/04/2019 04:47, Santhan Raj wrote:
> > On Wednesday, April 10, 2019 at 5:53:45 PM UTC-7, Corey Bonnell wrote:
> >> On Wednesday, April 10, 2019 at 7:41:33 PM UTC-4, Nick Lamb wrote:
> >>> (Resending after I typo'd the ML address)
> >>>
> >>> At the risk of further embarrassing myself in the same week, while 
> >>> working further on mimicking Firefox trust decisions I found this 
> >>> pre-certificate for Arabtec Holding PJSC:
> >>>
> >>> https://crt.sh/?id=926433948
> >>>
> >>> Now there's nothing especially strange about this certificate, 
> >>> except that its RSA public key is shared with several other 
> >>> certificates
> >>>
> >>>
> https://crt.sh/?spkisha256=8bb593a93be1d0e8a822bb887c547890c3e706aad2d
> ab76254f97fb36b82fc26
> >>>
> >>> ... such as the DigiCert Global Root G2:
> >>>
> >>> https://crt.sh/?caid=5885
> >>>
> >>>
> >>> I would like to understand what happened here. Maybe I have once 
> >>> again made a terrible mistake, but if not surely this means either 
> >>> that the Issuing authority was fooled into issuing for a key the 
> >>> subscriber doesn't actually have or worse, this Arabtec Holding 
> >>> outfit has the private keys for DigiCert's Global Root G2
> >>>
> >>> Nick.
> >>
> >> AFAIK there's no requirement in the BRs or Mozilla Root Policy for 
> >> CAs
> to actually verify that the Applicant actually is in possession of the 
> corresponding private key for public keys included in CSRs (i.e., 
> check the signature on the CSR), so the most likely explanation is 
> that the CA in question did not check the signature on the 
> Applicant-submitted CSR and summarily embedded the supplied

RE: Arabtec Holding public key? [Weird Digicert issued cert]

2019-04-12 Thread Jeremy Rowley via dev-security-policy
The net result was that some people created private certs with our root cert public 
key. We signed new certs using that public key after verifying domain control. 
We saw the process happen a few times but didn't worry about it too much as the 
requesters didn't control the private key. We ended up shutting off the no-CSR 
path because we figured the issuance of these certs created a potential PR 
concern, even if there isn't a real security risk.

-Original Message-
From: dev-security-policy  On 
Behalf Of Jeremy Rowley via dev-security-policy
Sent: Friday, April 12, 2019 10:56 AM
To: Wayne Thayer ; Jakob Bohm 
Cc: mozilla-dev-security-policy 
Subject: RE: Arabtec Holding public key? [Weird Digicert issued cert]

I don't mind filling in details.

We have a system that permits creation of certificates without a CSR that works 
by extracting the key from an existing cert, validating the domain/org 
information, and creating a new certificate based on the contents of the old 
certificate. The system was supposed to do a handshake with a server hosting 
the existing certificate as a form of checking control over the private key, 
but that was never implemented, slated for a phase 2 that never came. We've 
since disabled that system, although we didn't file any incident report (for 
the reasons discussed so far).  

-Original Message-
From: dev-security-policy  On 
Behalf Of Wayne Thayer via dev-security-policy
Sent: Friday, April 12, 2019 10:39 AM
To: Jakob Bohm 
Cc: mozilla-dev-security-policy 
Subject: Re: Arabtec Holding public key? [Weird Digicert issued cert]

It's not clear that there is anything for DigiCert to respond to. Are we 
asserting that the existence of this Arabtec certificate is proof that DigiCert 
violated section 3.2.1 of their CPS?

- Wayne

On Thu, Apr 11, 2019 at 6:57 PM Jakob Bohm via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:

> On 11/04/2019 04:47, Santhan Raj wrote:
> > On Wednesday, April 10, 2019 at 5:53:45 PM UTC-7, Corey Bonnell wrote:
> >> On Wednesday, April 10, 2019 at 7:41:33 PM UTC-4, Nick Lamb wrote:
> >>> (Resending after I typo'd the ML address)
> >>>
> >>> At the risk of further embarrassing myself in the same week, while 
> >>> working further on mimicking Firefox trust decisions I found this 
> >>> pre-certificate for Arabtec Holding PJSC:
> >>>
> >>> https://crt.sh/?id=926433948
> >>>
> >>> Now there's nothing especially strange about this certificate, 
> >>> except that its RSA public key is shared with several other 
> >>> certificates
> >>>
> >>>
> https://crt.sh/?spkisha256=8bb593a93be1d0e8a822bb887c547890c3e706aad2d
> ab76254f97fb36b82fc26
> >>>
> >>> ... such as the DigiCert Global Root G2:
> >>>
> >>> https://crt.sh/?caid=5885
> >>>
> >>>
> >>> I would like to understand what happened here. Maybe I have once 
> >>> again made a terrible mistake, but if not surely this means either 
> >>> that the Issuing authority was fooled into issuing for a key the 
> >>> subscriber doesn't actually have or worse, this Arabtec Holding 
> >>> outfit has the private keys for DigiCert's Global Root G2
> >>>
> >>> Nick.
> >>
> >> AFAIK there's no requirement in the BRs or Mozilla Root Policy for 
> >> CAs
> to actually verify that the Applicant actually is in possession of the 
> corresponding private key for public keys included in CSRs (i.e., 
> check the signature on the CSR), so the most likely explanation is 
> that the CA in question did not check the signature on the 
> Applicant-submitted CSR and summarily embedded the supplied public key 
> in the certificate (assuming Digicert's CA infrastructure wasn't 
> compromised, but I think that's highly unlikely).
> >>
> >> A very similar situation was brought up on the list before, but 
> >> with
> WoSign as the issuing CA:
> https://groups.google.com/d/msg/mozilla.dev.security.policy/zECd9J3KBW
> 8/OlK44lmGCAAJ
> >>
> >
> > While not a BR requirement, the CA's CPS does stipulate validating
> possession of private key in section 3.2.1 (looking at the change 
> history, it appears this stipulation existed during the cert 
> issuance). So something else must have happened here.
> >
> > Except for the Arabtec cert, the other certs looks like cross-sign 
> > for
> the Digicert root.
> >
>
> Why still no response from Digicert?  Has this been reported to them 
> directly?
>
>
>
> Enjoy
>
> Jakob
> --
> Ja

RE: Arabtec Holding public key? [Weird Digicert issued cert]

2019-04-12 Thread Jeremy Rowley via dev-security-policy
I don't mind filling in details.

We have a system that permits creation of certificates without a CSR that works 
by extracting the key from an existing cert, validating the domain/org 
information, and creating a new certificate based on the contents of the old 
certificate. The system was supposed to do a handshake with a server hosting 
the existing certificate as a form of checking control over the private key, 
but that was never implemented, slated for a phase 2 that never came. We've 
since disabled that system, although we didn't file any incident report (for 
the reasons discussed so far).  

-Original Message-
From: dev-security-policy  On 
Behalf Of Wayne Thayer via dev-security-policy
Sent: Friday, April 12, 2019 10:39 AM
To: Jakob Bohm 
Cc: mozilla-dev-security-policy 
Subject: Re: Arabtec Holding public key? [Weird Digicert issued cert]

It's not clear that there is anything for DigiCert to respond to. Are we 
asserting that the existence of this Arabtec certificate is proof that DigiCert 
violated section 3.2.1 of their CPS?

- Wayne

On Thu, Apr 11, 2019 at 6:57 PM Jakob Bohm via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:

> On 11/04/2019 04:47, Santhan Raj wrote:
> > On Wednesday, April 10, 2019 at 5:53:45 PM UTC-7, Corey Bonnell wrote:
> >> On Wednesday, April 10, 2019 at 7:41:33 PM UTC-4, Nick Lamb wrote:
> >>> (Resending after I typo'd the ML address)
> >>>
> >>> At the risk of further embarrassing myself in the same week, while 
> >>> working further on mimicking Firefox trust decisions I found this 
> >>> pre-certificate for Arabtec Holding PJSC:
> >>>
> >>> https://crt.sh/?id=926433948
> >>>
> >>> Now there's nothing especially strange about this certificate, 
> >>> except that its RSA public key is shared with several other 
> >>> certificates
> >>>
> >>>
> https://crt.sh/?spkisha256=8bb593a93be1d0e8a822bb887c547890c3e706aad2d
> ab76254f97fb36b82fc26
> >>>
> >>> ... such as the DigiCert Global Root G2:
> >>>
> >>> https://crt.sh/?caid=5885
> >>>
> >>>
> >>> I would like to understand what happened here. Maybe I have once 
> >>> again made a terrible mistake, but if not surely this means either 
> >>> that the Issuing authority was fooled into issuing for a key the 
> >>> subscriber doesn't actually have or worse, this Arabtec Holding 
> >>> outfit has the private keys for DigiCert's Global Root G2
> >>>
> >>> Nick.
> >>
> >> AFAIK there's no requirement in the BRs or Mozilla Root Policy for 
> >> CAs
> to actually verify that the Applicant actually is in possession of the 
> corresponding private key for public keys included in CSRs (i.e., 
> check the signature on the CSR), so the most likely explanation is 
> that the CA in question did not check the signature on the 
> Applicant-submitted CSR and summarily embedded the supplied public key 
> in the certificate (assuming Digicert's CA infrastructure wasn't 
> compromised, but I think that's highly unlikely).
> >>
> >> A very similar situation was brought up on the list before, but 
> >> with
> WoSign as the issuing CA:
> https://groups.google.com/d/msg/mozilla.dev.security.policy/zECd9J3KBW
> 8/OlK44lmGCAAJ
> >>
> >
> > While not a BR requirement, the CA's CPS does stipulate validating
> possession of private key in section 3.2.1 (looking at the change 
> history, it appears this stipulation existed during the cert 
> issuance). So something else must have happened here.
> >
> > Except for the Arabtec cert, the other certs looks like cross-sign 
> > for
> the Digicert root.
> >
>
> Why still no response from Digicert?  Has this been reported to them 
> directly?
>
>
>
> Enjoy
>
> Jakob
> --
> Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com 
> Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10 This 
> public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded 
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: Pre-Incident Report - GoDaddy Serial Number Entropy

2019-03-13 Thread Jeremy Rowley via dev-security-policy
No one wants to paint a target on their back. If I announce we're 100%
compliant with everything, that's asking to be shot in the face. You're
welcome to look at ours. I think we fully comply with 7.1 (I've double
checked everything) and would love to find out if we're not. I like the
feedback and research so feel free to peel away at the DigiCert parfait. 
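
For anyone checking their own implementation against BR 7.1 (non-sequential 
serial numbers greater than zero containing at least 64 bits of CSPRNG 
output), here is a minimal Python sketch; the 159-bit width mirrors what the 
"cryptography" library's helper uses and is an implementation choice, not a 
BR requirement.

import secrets
from cryptography import x509

def br_serial_number() -> int:
    # 159 random bits keep the DER INTEGER positive and within the 20-octet
    # RFC 5280 limit while comfortably exceeding the 64-bit entropy floor.
    serial = 0
    while serial == 0:            # the serial must be greater than zero
        serial = secrets.randbits(159)
    return serial

print(br_serial_number())
print(x509.random_serial_number())  # library helper with the same intent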

-Original Message-
From: dev-security-policy  On
Behalf Of Ryan Sleevi via dev-security-policy
Sent: Wednesday, March 13, 2019 8:03 PM
To: Peter Gutmann 
Cc: mozilla-dev-security-pol...@lists.mozilla.org; Richard Moore

Subject: Re: Pre-Incident Report - GoDaddy Serial Number Entropy

On Wed, Mar 13, 2019 at 6:09 PM Peter Gutmann via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Richard Moore via dev-security-policy < 
> dev-security-policy@lists.mozilla.org> writes:
>
> >If any other CA wants to check theirs before someone else does, then 
> >now
> is
> >surely the time to speak up.
>
> I'd already asked previously whether any CA wanted to indicate 
> publicly that they were compliant with BR 7.1, which zero CAs 
> responded to (I counted them twice).  This means either there are very 
> few CAs bothering with
> dev-security-
> policy, or they're all hunkering down and hoping it'll blow over, 
> which given that they're going to be forced to potentially carry out 
> mass revocations would be the game-theoretically sensible approach to 
> take:


To be fair, this is not an either/or proposition. The third option is that
they could be ignoring you specifically, which may not be an unreasonable
position, game-theoretically speaking of course.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy



smime.p7s
Description: S/MIME cryptographic signature
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


RE: GoDaddy Revocation Disclosure

2019-03-12 Thread Jeremy Rowley via dev-security-policy
Not looking for blanket approval – I stated it’d still be part of the audit 
report. We also aren’t directly impacted by this particular incident (which is 
why I brought it up here). The actual evaluation of the CA would remain up to 
Mozilla of course, but the really good discussion about 63 bits (especially the 
proposed ballot language) got me thinking about how we could apply this more 
generally to incident reports and how CAs can use them before deciding on a 
course of action. The underscore discussion was definitely good as well, and I 
felt it had a great outcome.

 

I think the primary change I’m proposing is that the initial report shouldn’t 
be an incident report. Instead, the initial report can be a short blurb posted to 
Mozilla along with a description of what the CA plans to do. Then the community 
can talk about the plan in addition to the incident, rather than just the 
incident. 

 

Jeremy

 

From: Ryan Sleevi  
Sent: Tuesday, March 12, 2019 2:31 PM
To: Jeremy Rowley 
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: GoDaddy Revocation Disclosure

On Tue, Mar 12, 2019 at 4:17 PM Jeremy Rowley via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

A new flow that includes the community more fully could be:
1) Post to Mozilla, the post must include an initial proposed plan of action
2) Create an incident report (to track bugs)
3) Discuss on the Mozilla forum the proposed plan and post updated plans based 
on member suggestions
4) Post a final draft to Bugzilla
5) Post updates per a timeline set in the incident report
6) Wayne closes the bug.

This is probably a lot more work for the CA, but I know we'd find the community 
feedback on how to resolve issues useful. Maybe it could also change into a 
continuous flow of "How can X CA do better - here's some suggestions" instead 
of "Better put up the lightning rod and get through this". 

 

So, I think many of these elements are already captured in the current process, 
as shown by the lengthy discussion with DigiCert regarding underscores [1], which 
provides a model for engaging with the community and gathering feedback and 
concerns about the response.

 

CAs are responsible for drafting their initial incident reports, gathering 
feedback, and making a decision - much as DigiCert did with underscores. The CA 
is judged on how well it considered and balanced the risks; there is 
opportunity to raise concerns about improving (an area DigiCert encountered with 
its own reports), and we move forward.

 

It would seem, from your broader message, that this is looking for some sort of 
blanket approval, independent of the CA or facts specific to that CA, and I 
think that's something that we've been explicitly trying to avoid - as the 
context matters. There are a number of hazards, which Matt Palmer highlighted 
during the discussion of underscores [2][3][4], and I think those still apply 
now as much as they did two and a half months ago.

 

[1] 
https://groups.google.com/d/msg/mozilla.dev.security.policy/0oy4uTEVnus/pnywuWbmBwAJ
 

[2] 
https://groups.google.com/d/msg/mozilla.dev.security.policy/0oy4uTEVnus/APSWO4SYCgAJ

[3] 
https://groups.google.com/d/msg/mozilla.dev.security.policy/0oy4uTEVnus/voFCTMFVAwAJ

[4] 
https://groups.google.com/d/msg/mozilla.dev.security.policy/0oy4uTEVnus/ZqO9fHZMAwAJ





RE: GoDaddy Revocation Disclosure

2019-03-12 Thread Jeremy Rowley via dev-security-policy
One item from these incident reports that I think could bear a useful 
discussion is how the community can get more involved in discussing and helping 
with incident reports. For example, the 63 bit serial number issue is leading 
to a lot of certs potentially being revoked with little benefit to the 
community (IMO of course). There are definite downsides to revocation that may 
or may not be fully considered when people are responding to incidents. For 
example, adding a bunch of certs to a CRL for a minor issue seems like a 
pointless increase in CRL size. There's also the customer disruption and other 
issues to consider that are probably important for the community to know when 
looking at incident reports.  
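
To put a rough number on the CRL-size point, the sketch below builds throwaway
CRLs with the Python "cryptography" package and measures the per-entry growth.
The issuer name, key type, and entry count are arbitrary, and real figures
depend on the CA's CRL profile, serial lengths, and entry extensions, so treat
the output as an estimate rather than a claim about any particular CA:

    import datetime
    import secrets

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    key = ec.generate_private_key(ec.SECP256R1())
    issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example CA")])
    now = datetime.datetime.now(datetime.timezone.utc)

    def crl_size(n_entries: int) -> int:
        """DER size of a CRL with n_entries revoked serials, no entry extensions."""
        builder = (x509.CertificateRevocationListBuilder()
                   .issuer_name(issuer)
                   .last_update(now)
                   .next_update(now + datetime.timedelta(days=7)))
        for _ in range(n_entries):
            builder = builder.add_revoked_certificate(
                x509.RevokedCertificateBuilder()
                .serial_number(secrets.randbits(64) + 1)
                .revocation_date(now)
                .build())
        crl = builder.sign(private_key=key, algorithm=hashes.SHA256())
        return len(crl.public_bytes(serialization.Encoding.DER))

    per_entry = (crl_size(1000) - crl_size(0)) / 1000
    print(f"~{per_entry:.0f} bytes per revoked certificate")

Multiplying the per-entry figure by the certificate count in a given incident
gives a concrete sense of the CRL growth being weighed against the other
revocation costs mentioned above.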

I'm wondering if we (the community or CABForum) should have some mechanism of 
evaluating these risks and proposed incident plans before/while the plan is 
executed. For example, the pros and cons of revocation of the certs could be 
discussed. Actual revocation would be up to the CA, of course, and any 
non-compliances would be noted on the audit report, but this part of the policy 
could be a community effort: "That you will perform an analysis to determine 
the factors that prevented timely revocation of the certificates, and include a 
set of remediation actions in the final incident report that aim to prevent 
future revocation delays." 
(https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation). We could 
have the CA propose a rough draft to the community where they engage in a Q&A 
about the incident and then have members make a recommendation to the CA about 
remediation. All voluntary on the advice. This is probably the way it is 
supposed to currently work, but right now the flow seems like:
1) Post to Mozilla
2) Create an incident report
3) Community discussion about compliance and why CAs need to do better 😊
4) Update incident report until Wayne closes it

A new flow that includes the community more fully could be:
1) Post to Mozilla, the post must include an initial proposed plan of action
2) Create an incident report (to track bugs)
3) Discuss on the Mozilla forum the proposed plan and post updated plans based 
on member suggestions
4) Post a final draft to Bugzilla
5) Post updates per a timeline set in the incident report
6) Wayne closes the bug.

This is probably a lot more work for the CA, but I know we'd find the community 
feedback on how to resolve issues useful. Maybe it could also change into a 
continuous flow of "How can X CA do better - here's some suggestions" instead 
of "Better put up the lightning rod and get through this". 

Thoughts? Again, probably how this is supposed to work already, but if we can 
turn it into more actionable feedback about what's next, then I'd find that 
super useful. 

Jeremy

 

-Original Message-
From: dev-security-policy  On 
Behalf Of Daymion Reynolds via dev-security-policy
Sent: Monday, August 20, 2018 10:27 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: GoDaddy Revocation Disclosure

On Saturday, August 18, 2018 at 2:27:05 PM UTC-7, Ben Laurie wrote:
> On Fri, 17 Aug 2018 at 18:22, Daymion Reynolds via dev-security-policy 
> < dev-security-policy@lists.mozilla.org> wrote:
> 
> > Revoke Disclosure
> >
> > GoDaddy has been proactively performing self-audits. As part of this 
> > process, we identified a vulnerability in our code that would allow 
> > our validation controls to be bypassed. This bug allowed a 
> > Random Value that was generated for intended use with Methods 
> > 3.2.2.4.6 and 3.2.2.4.7 to be validated using Method 3.2.2.4.2 by 
> > persons who were not confirmed as the domain contact. This bug was 
> > introduced November 2014 and was leveraged to issue a total of 865 
> > certificates. The bug was closed hours after identification, and in 
> > parallel we started the scope and revocation activities.
> >
> > In accordance with CA/B Forum BR, section 4.9.1.1, all mis-issued 
> > certificates were revoked within 24 hours of identification.
> >
> > A timeline of the Events for Revocation are as follows:
> >
> > 8/13 9:30am – Exploit issue surfaced as possible revocation event.
> > 8/13 9:30-4pm – Issue scope identification (at this point it was 
> > unknown), gathering certificate list
> > 8/13 4pm – Certificate list finalized for revoke total 825 certs, 
> > Revoke notification sent to cert owners.
> >
> 
> I presume you mean domain owners?
> 
> Do we know if any of these certs were used? If so, how?
> 
> 
> > 8/14 1:30pm – All certificates revoked.
> >
> > Further research identified 40 certificates which contained re-use 
> > of suspect validation information.
> > 8/15 – 2pm – Additional certificates identified due to re-use.
> > 8/15 – 2:30pm – Customers notified of pending revoke.
> > 8/16 – 12:30pm – All certificates revoked.
> >
> > We stand ready to answer any questions or concerns.
> > Daymion
> >

Yes, domain owners.

Yes, some of the certs were being used as typical se
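
For context on the control that failed in the quoted disclosure: a Random Value
minted for one BR 3.2.2.4 method should not be accepted as evidence under a
different method. A hedged sketch of that binding in Python (field names,
method labels, and helper functions are illustrative, not GoDaddy's code):

    import secrets
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class RandomValue:
        token: str
        intended_method: str  # e.g. "3.2.2.4.6" or "3.2.2.4.7"
        domain: str

    def mint_random_value(domain: str, method: str) -> RandomValue:
        return RandomValue(secrets.token_urlsafe(16), method, domain)

    def accept_validation(rv: RandomValue, presented_token: str,
                          method_used: str, domain: str) -> bool:
        # Reject if the token is wrong, the domain differs, or (the kind of
        # check the reported bug bypassed) the token is replayed under a
        # method it was never generated for.
        return (secrets.compare_digest(rv.token, presented_token)
                and rv.domain == domain
                and rv.intended_method == method_used)

    rv = mint_random_value("example.com", "3.2.2.4.6")
    assert accept_validation(rv, rv.token, "3.2.2.4.6", "example.com")
    assert not accept_validation(rv, rv.token, "3.2.2.4.2", "example.com")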
