Re: Policy 2.7.1: MRSP Issue #153: Cradle-to-Grave Contiguous Audits

2021-03-05 Thread Matt Palmer via dev-security-policy
On Fri, Mar 05, 2021 at 08:46:26AM -0800, Bruce via dev-security-policy wrote:
> At the beginning, I think that CAs will generate one or many keys, but
> will not assign them to CAs.  The gap period could be days to years. 
> Since the requirement says "from the time of CA key pair generation", do
> we want an audit of an unassigned key?  Or should the audit start once the
> key has been assigned and the CA certificate has been generated?

I think it's reasonable that keys that are bound to CA certificates have an
unbroken history of audits demonstrating that the key has always been
managed in a way that minimises the chances of disclosure, along with
evidence that the key being bound was initially generated in a secure manner
(good RNG, etc).

- Matt

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Summary of Camerfirma's Compliance Issues

2021-01-19 Thread Matt Palmer via dev-security-policy
On Tue, Jan 19, 2021 at 07:28:17AM -0800, Ramiro Muñoz via dev-security-policy 
wrote:
> Camerfirma is not the member with the highest number of
> incidents nor the member with the most severe ones.

No, but Camerfirma's got a pretty shocking history of poor incident
response, over an extended period, with no substantive evidence of
improvement.  *That* makes me not want to trust Camerfirma, because I have
no confidence that problems are being handled in a manner befitting a
globally-trusted CA.  Further, Camerfirma's continued insistence that "it's
all better now", in the face of all the contrary evidence, does not inspire
confidence that there will be future improvement.

- Matt


Re: Summary of Camerfirma's Compliance Issues

2021-01-18 Thread Matt Palmer via dev-security-policy
On Sun, Jan 17, 2021 at 12:51:29AM -0800, Ramiro Muñoz via dev-security-policy 
wrote:
> We don’t ask the community to disregard the data, on the contrary we ask
> the community to analyze the data thoroughly including the impacts
> produced.

OK, I'll bite.  As a member of the community, I've analyzed the data
thoroughly, and I'm not impressed.  Camerfirma does not appear to grasp the
fact that "nothing bad has happened yet" is a *bad take*.  "Nothing bad has
happened yet" is how every CA starts its life.  It is not something to be
proud of, it's the absolute bare minimum.  The volume of incidents that
Camerfirma has had is troubling, but it's the repetition of the nature of
the incidents, and the lacklustre way in which they have been responded to,
that causes me to think that Camerfirma has no place in the Mozilla trust
store.

- Matt


Re: Policy 2.7.1: MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-16 Thread Matt Palmer via dev-security-policy
On Mon, Nov 16, 2020 at 02:17:37AM +, Nick Lamb wrote:
> On Mon, 16 Nov 2020 10:13:16 +1100
> Matt Palmer via dev-security-policy
>  wrote:
> > I doubt it.  So far, every CA that's decided to come up with their own
> > method of proving key compromise has produced something entirely
> > proprietary to themselves.
> 
> At least two CAs (and from what I can tell likely more) offer ACME APIs

ACME isn't a CA's "own" method of proving key compromise, hence is not
covered by the above sentence.

> I appreciate that this is less convenient to your preferred method of 
> working, but it doesn't seem proprietary to agree on a standard way to 
> do something and my impression was that you could talk to ACME now? 

Well, it's "less convenient" in the sense that I'd have to figure out how to
securely provide online access to several million private keys, which I
continue to believe is going to end up being a Really Really Bad Idea,
especially when there's such a simple alternate solution (just storing
appropriately-formed CSRs online).  In any event, on the current evidence,
CAs are *not* universally going with "just use ACME", so it's something of a
moot point.
 
> > I have no reason to believe that, absent
> > method stipulation from a trust store, we won't end up with
> > different, mutually-incompatible, unworkable methods for
> > demonstrating key compromise equal to (or, just as likely, exceeding)
> > the number of participating CA organisations.
> 
> OK, so in your opinion the way forward on #205 is for Mozilla policy to
> mandate acceptance of specific methods rather than allowing the CA to
> pick? Or at least, to require them to pick from a small set?

Yes.  The BRs' "any other method" of domain control validation provides a
useful historical analog for why I believe this is going to end in tears for
everyone.

> > Of course, the current way in which key compromise evidence is
> > fracturing into teeny-tiny incompatible shards is, for my purposes, a
> > significant *regression* from the current state of the art.
> > Currently, I can e-mail a (link to a) generic but
> > obviously-not-for-a-certificate CSR containing and signed by the
> > compromised key, and it gets revoked.  No CA has yet to come back to
> > me and say "we can't accept this as evidence of key compromise".
> 
> But your earlier postings on this subject suggest that this is far from
> the whole story on what happens, not least because you sometimes weren't
> immediately able to figure out where to email that CSR to anyway or the
> responses, though not "we can't accept this" were... far from ideal.

Yes, but those issues are not related to providing the evidence of
compromise, and hence not relevant to this discussion.

> If expressions of interest are worth anything I can offer to read an
> Internet Draft and provide feedback but you might not like my feedback.

Bring it on.  Preferably with suggestions for alternate language.

> For example the current text says:
> 
> "Given a valid signature, the subjectPKInfo in the CSR MUST be compared
> against the subjectPublicKey info of the key(s) which are to be checked
> for compromise."
> 
> But formats I've seen for keys (as opposed to certificates) do not
> contain a "subjectPublicKey info" and so I guess what you actually want
> to do here is compare the entire key. Explaining how to do that exactly
> while being neutral about how that key might be stored will be
> difficult, you might have to just pick a format.

An SPKI can be constructed from any other form of the key material, and
(modulo the rather annoying compressed-vs-uncompressed issue with EC keys)
produces the Right Result in every situation I'm aware of (otherwise DER's
got some splainin' to do).
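
For illustration, here's a minimal sketch of the comparison being described, using Python's "cryptography" package (the key and CSR are generated inline purely as stand-ins): the canonical DER-encoded SubjectPublicKeyInfo is derived from the key material and compared byte-for-byte against the SPKI carried in a CSR.

```python
# Sketch: derive a canonical SubjectPublicKeyInfo (SPKI) from key material
# and compare it against the SPKI extracted from a CSR signed by that key.
# Assumes the "cryptography" package; the key and CSR here are generated
# inline for illustration only.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Canonical DER SPKI, constructible from any representation of the key.
spki = key.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

# A CSR signed by the same key carries the same SPKI.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example")]))
    .sign(key, hashes.SHA256())
)
csr_spki = csr.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

assert csr.is_signature_valid
assert spki == csr_spki  # byte-for-byte SPKI match identifies the key
```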

> Also is RFC 7169 really the best available poison extension for this
> purpose? You understand it was intentionally published on April 1st
> right?

Yes, I am aware of the date of publication of that RFC.  What I'm not aware
of is anything that says that RFCs published on April 1st are banned from
being referenced elsewhere.  It's a document that appears to suit the
purpose to which it is being put, hence I used it.  It seems somewhat
wasteful of everyone's time to produce essentially the same document but
with a different publication date.

I'm also aware of the need for standards-track RFCs to not use Informational
or Experimental RFCs as normative references; in the exceedingly unlikely
event that my proposal were to end up on that track, 7169 would need to be
re-issued as a standards-track document as well, but that bridge can be
burned if we come to it.

- Matt


Re: Policy 2.7.1: MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-15 Thread Matt Palmer via dev-security-policy
On Sun, Nov 15, 2020 at 04:52:38AM +, Nick Lamb via dev-security-policy 
wrote:
> This makes clear that the CA must have at least one of these "clearly
> specified" accepted methods which ought to actually help Matt get some
> traction.

I doubt it.  So far, every CA that's decided to come up with their own
method of proving key compromise has produced something entirely proprietary
to themselves.  I have no reason to believe that, absent method stipulation
from a trust store, we won't end up with different, mutually-incompatible,
unworkable methods for demonstrating key compromise equal to (or, just as
likely, exceeding) the number of participating CA organisations.

Of course, the current way in which key compromise evidence is fracturing
into teeny-tiny incompatible shards is, for my purposes, a significant
*regression* from the current state of the art.  Currently, I can e-mail a
(link to a) generic but obviously-not-for-a-certificate CSR containing and
signed by the compromised key, and it gets revoked.  No CA has yet to come
back to me and say "we can't accept this as evidence of key compromise".

This format allows the pre-generation of compromise attestations, so that I
don't need to keep a couple of million (ayup, there's a lot of keys out
there) private keys online-accessible 24x7 to generate a real-time
compromise attestation in whatever hare-brained scheme the affected CA has
come up with -- not to mention the entertainment value of writing code to
generate the compromise attestations for each of those schemes.

In an attempt to keep the madness under some sort of control, I've tried to
codify my thoughts around best practices in a pre-draft RFC
(https://github.com/pwnedkeys/key-compromise-attestation-rfc) but so far it
doesn't look like anyone's interested in it, and every time I think "oh, I
should probably just go and submit it as an Experiment through the IETF
individual stream and see what happens" the datatracker's not accepting
submissions.

- Matt


Re: Policy 2.7.1: MRSP Issue #205: Require CAs to publish accepted methods for proving key compromise

2020-11-14 Thread Matt Palmer via dev-security-policy
On Sat, Nov 14, 2020 at 09:42:48PM +, Nick Lamb via dev-security-policy 
wrote:
> This boilerplate does not actually achieve any of those things, and
> you've offered no evidence that it could do so. If anything it
> encourages CAs *not* to actually offer what we wanted: a clearly
> documented but secure way to submit acceptable proof of key compromise.
> Why not? It will be easier to write only "Any method at our discretion"
> to fulfil this requirement and nothing more, boilerplate which
> apparently makes you happy but doesn't help the ecosystem.

Whilst it wouldn't make me *happy* to see such boilerplate, it would at
least serve to make it clear which CAs were just painting by numbers, as
opposed to those which understand their own operations and are willing to
meaningfully document them.  It would also serve as a suitable jumping-off
point for a discussion amongst trust stores (well, Mozilla at least) when a
key compromise revocation request is rejected by a CA as to how good, bad,
or otherwise a CA's discretion is.

- Matt


Re: TLS certificates for ECIES keys

2020-11-01 Thread Matt Palmer via dev-security-policy
On Thu, Oct 29, 2020 at 05:04:32PM -0700, Bailey Basile via dev-security-policy 
wrote:
> We specifically chose not to issue Apple certificates for these keys
> because we did not want users to have to trust only Apple's assertion that
> this key is for a third party.

Can you explain how a DV certificate demonstrates that the key in the
certificate is for a third party?  Because I can't see how that's possible.

- Matt


Re: TLS certificates for ECIES keys

2020-10-29 Thread Matt Palmer via dev-security-policy
On Thu, Oct 29, 2020 at 01:56:53PM -0500, Matthew Hardeman via 
dev-security-policy wrote:
> IFF the publicly trusted certificate for the special domain name is
> acquired in the normal fashion and is issued from the normal leaf
> certificate profile at LE, I don't see how the certificate could be claimed
> to be "misused" _by the subscriber_.

The way I read Jacob's description of the process, the subscriber is
"misusing" the certificate because they're not going to present it to TLS
clients to validate the identity of a TLS server, but instead they (the
subscriber) presents the certificate to Apple (and other OS vendors?) when
they know (or should reasonably be expected to know) that the certificate is
not going to be used for TLS server identity verification -- specifically,
it's instead going to be presented to Prio clients for use in some sort of
odd processor identity parallel-verification dance.

Certainly, whatever's going on with the certificate, it most definitely
*isn't* TLS, and so absent an EKU that accurately describes that other
behaviour, I can't see how it doesn't count as "misuse", and since the
subscriber has presented the certificate for that purpose, it seems
reasonable to describe it as "misuse by the subscriber".

Although misuse is problematic, the concerns around agility are probably
more concerning, IMO.  There's already more than enough examples where
someone has done something "clever" with the WebPKI, only to have it come
back and bite everyone *else* in the arse down the track -- we don't need to
add another candidate at this stage of the game.  On that basis alone, I
think it's worthwhile to try and squash this thing before it gets any more
traction.

Given that Apple is issuing another certificate for each processor anyway, I
don't understand why they don't just embed the processor's SPKI directly in
that certificate, rather than a hash of the SPKI.  P-256 public keys (in
compressed form) are only one octet longer than a SHA-256 hash.  But
presumably there's a good reason for not doing that, and this isn't the
relevant forum for discussing such things anyway.
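
The length claim is easy to check; a quick sketch using Python's "cryptography" package (the key is generated inline for illustration):

```python
# Sketch: verify that a compressed P-256 public key is only one octet
# longer than a SHA-256 hash.  Uses the "cryptography" package; the key
# is generated inline for illustration.
import hashlib

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())

# Compressed SEC1/X9.62 point: one prefix octet plus the 32-byte x-coordinate.
compressed = key.public_key().public_bytes(
    serialization.Encoding.X962,
    serialization.PublicFormat.CompressedPoint,
)

# A SHA-256 hash of the DER SPKI, as a stand-in for "a hash of the SPKI".
spki = key.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
digest = hashlib.sha256(spki).digest()

assert len(compressed) == 33  # 0x02/0x03 prefix + 32-byte x-coordinate
assert len(digest) == 32      # SHA-256 output
```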

- Matt


Re: Sectigo to Be Acquired by GI Partners

2020-10-12 Thread Matt Palmer via dev-security-policy
On Fri, Oct 09, 2020 at 06:33:22AM -0700, Tim Callan via dev-security-policy 
wrote:
> We anticipate no meaningful changes required to policies, operations, or 
> personnel.

[...]

> In this case the required changes are virtually nothing.

These statements concern me somewhat, as reasonable people may have
differing thresholds for "meaningful" and "virtually".  Whilst publicly
enumerating every possible change is impossible, I would urge Sectigo to err
on the side of caution when it comes to evaluating whether a change is
"meaningful".  Given Sectigo's long and storied history of failures to
meaningfully engage with the Mozilla community on Sectigo's misadventures, I
doubt there is much appetite for a future in which "oh, we didn't think
*that* was a meaningful change" figures heavily in incident reports.

- Matt


Re: Proposal for a standard proof of compromised key/revocation request format

2020-08-12 Thread Matt Palmer via dev-security-policy
On Wed, Aug 12, 2020 at 06:25:00PM -0700, cbon...--- via dev-security-policy 
wrote:
> > I'm yet to have a CA baulk at accepting a CSR as proof of compromise. It
> > has the benefit of not having nearly as many superfluous fields as a
> > certificate, as well. In terms of being able to deal with the format, I'd
> > expect that a CA would be at least as able to deal with validating a CSR
> > as a certificate, given that their job is to validate CSR signatures, but
> > not necessarily to validate certificate signatures...
> 
> The rationale for using a self-signed certificate as opposed to a CSR was
> so that the proof/attestation of compromise could not be confused as input
> into the certificate request process.  Ideally, CA support personnel
> wouldn't manually be analyzing the revocation tokens but instead be using
> some sort of automated checking (such as the proof of concept I
> implemented).  Given this, I didn't feel the overhead of specifying and
> verifying additional fields in the certificates as opposed to the simpler
> structure of the CSR would be an issue if automation were in place.

I think we're a *long* way away from being able to assume pervasive
automation.  So far as I can determine, we've got one CA that's possibly
automating processing CSRs, and one CA that's using automation to try and
get around the need to investigate key compromises.  Beyond that...
crickets.

> > The "standardised" key compromise attestation format I've been leaning
> > towards is a CSR whose subject DN is something like "CN=This key is
> > compromised!", and which includes a critical "poison" extension, to minimise
> > the risks of accidental acceptance still further. I'm still unsure about
> > how to control for the possibility of social engineered mis-signing, because
> > the idea of someone managing to pull such a trick concerns me, but there's
> > an argument to be made that if someone can be tricked into signing arbitrary
> > content, that key is kinda busted anyway.
> 
> I don't believe the critical poison extension will help much to prevent
> the CSR from being used as input for a lot of CAs, as the typical CSR
> input process only consumes the public key and perhaps a list of
> SANs/subject CN.  For example, it appears that Boulder doesn't look at the
> criticality bit of requested extensions at all when parsing a CSR [1].

Yeah, CSR processing is disturbingly hit-and-miss.  My thinking with adding
the extension is to make it as unambiguous as possible that this CSR is
weird and special and not for ordinary use.  Beyond that...

At any rate, I'm yet to come up with a *new* attack that a standardised
CSR-based attestation of compromise creates.  I've got:

* Use a key-compromise attestation to get a certificate for a compromised
  key: the key's compromised, if you *really* want to use that key, you can
  probably use the actual key, and CAs shouldn't be issuing certs for
  compromised keys.

* Find a "real" CSR and trick the CA into using that as "proof" of
  compromise: that's at least as likely to happen now, since CAs already
  accept CSRs as proof of compromise.  To the extent that CAs are automating
  CSR processing for key compromise already, a standardised format can only
  help, because there's fewer corner cases they'd have to worry about
  catching in their automation.


> Nonetheless, it sounds like we're both in agreement that the ideal format
> is a standard container as opposed to something bespoke.

Yes, my early experiments with using something that was unambiguously
unrelated to the X.509 ecosystem did not go well.

As an aside, I've got a first draft of my own key compromise attestation
format at https://github.com/pwnedkeys/key-compromise-attestation-rfc; I'm
probably going to ask LAMPS if they want to recharter to take it on,
otherwise I'll just run it through as an independent experimental
submission.  Commentary gratefully received (probably best as GitHub
PRs/issues).

- Matt


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-12 Thread Matt Palmer via dev-security-policy
On Sun, Jul 12, 2020 at 10:13:59PM +0200, Oscar Conesa via dev-security-policy 
wrote:
> Some CAs may want to assume a leadership role in the sector and unilaterally
> assume more additional strict security controls. That is totally legitimate.
> But it is also legitimate for other CAs to assume a secondary role and limit
> ourselves to complying with all the requirements of the Root Program. You
> cannot remove a CA from a Root Program for not meeting fully SUBJECTIVE
> additional requirements.

I fear that your understanding of the Mozilla Root Store Policy is at odds
with the text of that document.

"Mozilla MAY, at its sole discretion, decide to disable (partially or fully)
or remove a certificate at any time and for any reason."

I'd like to highlight the phrase "at its sole discretion", and also "for any
reason".

If the CA Module owner wakes up one day and, having had a dream which causes
them to dislike the month of July, decides that all CAs whose root
certificates have a notBefore in July must be removed, the impacted CAs do
not have any official cause for complaint.  I have no doubt that such an
arbitrary decision would be reversed, and the consequences would not make it
into production, but the decision would not be reversed because it "cannot"
happen, but rather because it is contrary to the interests of Mozilla and
the user community which Mozilla serves.

- Matt


Re: New Blog Post on 398-Day Certificate Lifetimes

2020-07-10 Thread Matt Palmer via dev-security-policy
On Fri, Jul 10, 2020 at 10:48:39AM -0600, Ben Wilson via dev-security-policy 
wrote:
> Some people have asked whether two-year certificates existing on August 31
> would remain valid.  The answer is yes. Those certificates will remain
> valid until they expire. The change only applies to certificates issued on
> or after Sept. 1, 2020.

A histogram of the number of certificates grouped by their notBefore date is
going to show a heck of a bump on August 31, I'll wager.  Will be
interesting to correlate notBefore with SCTs.

- Matt


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-07 Thread Matt Palmer via dev-security-policy
On Mon, Jul 06, 2020 at 10:53:50AM -0700, zxzxzx9--- via 
dev-security-policy wrote:
> Can't the affected CAs decide on their own whether to destroy the
> intermediate CA private key now, or in case the affected intermediate CA
> private key is later compromised, revoke the root CA instead?

No, because there's no reason to believe that a CA would follow through on
their decision, and rapid removal of trust anchors (which is what "revoke
the root CA" means in practice) has all sorts of unpleasant consequences
anyway.

- Matt


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-05 Thread Matt Palmer via dev-security-policy
On Mon, Jul 06, 2020 at 03:48:06AM +, Peter Gutmann wrote:
> Matt Palmer via dev-security-policy  
> writes:
> >If you're unhappy with the way which your interests are being represented by
> >your CA, I would encourage you to speak with them.
> 
> It's not the CAs, it's the browsers, and many other types of clients.

How, exactly, is it not CAs fault that they claim to represent their
customers in the CA/B Forum, and then fail to do so effectively?

> Ever tried connecting to a local (RFC1918 LAN) IoT device that has a
> self-signed cert?

If we expand "IoT device" to include, say, IPMI web-based management
interfaces, then yes, I do so on an all-too-regular basis.  But mass-market
web browsers are not built specifically for that use-case, so the fact that
they don't do a stellar job is hardly a damning indictment on them.

That IoT/IPMI devices piggyback on mass-market web browsers (and the Web PKI
they use) is, as has been identified previously, an example of externalising
costs, which doesn't always work out as well as the implementers might have
liked.  That it doesn't end well is hardly the fault of the Web PKI, the
BRs, or the browsers.

Your question is roughly equivalent to "ever tried fitting a screw with a
hammer?", or perhaps "ever tried making a request to https://google.com
using telnet and a pen and paper?".  That your arithmetic skills might not
be up to doing a TLS negotiation by hand is not the fault of TLS, it's that
you're using the wrong tool for the job.

- Matt


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Matt Palmer via dev-security-policy
On Sat, Jul 04, 2020 at 07:42:12PM -0700, Peter Bowen wrote:
> On Sat, Jul 4, 2020 at 7:12 PM Matt Palmer via dev-security-policy
>  wrote:
> >
> > On Sat, Jul 04, 2020 at 08:42:03AM -0700, Mark Arnott via 
> > dev-security-policy wrote:
> > > I was informed yesterday that I would have to replace just over 300
> > > certificates in 5 days because my CA is required by rules from the CA/B
> > > forum to revoke its subCA certificate.
> >
> > The possibility of such an occurrence should have been made clear in the
> > subscriber agreement with your CA.  If not, I encourage you to have a frank
> > discussion with your CA.
> >
> > > In the CIA triad Availability is as important as Confidentiality.  Has
> > > anyone done a threat model and a serious risk analysis to determine what a
> > > reasonable risk mitigation strategy is?
> >
> > Did you do a threat model and a serious risk analysis before you chose to
> > use the WebPKI in your application?
> 
> I think it is important to keep in mind that many of the CA
> certificates that were identified are constrained to not issue TLS
> certificates.  The certificates they issue are explicitly excluded
> from the Mozilla CA program requirements.

Yes, I'm aware of that.

> I don't think it is reasonable to assert that everyone impacted by
> this should have been aware of the possibly of revocation

At the limits, I agree with you.  However, to whatever degree there is
complaining to be done, it should be directed at the CA(s) which sold a
product that, it is now clear, was not fit for whatever purpose it has been
put to, and not at Mozilla.

> it is completely permissible under all browser programs to issue
> end-entity certificates with infinite duration that guarantee that they
> will never be revoked, even in the case of full key compromise, as long as
> the certificate does not assert a key purpose in the EKU that is covered
> under the policy.  The odd thing in this case is that the subCA
> certificate itself is the certificate in question.

And a sufficiently[1] thorough threat modelling and risk analysis exercise
would have identified the hazard of a subCA certificate that needed to be
revoked, assessed the probability of that hazard occurring, and either
accepted the risk (and thus have no reasonable cause for complaint now), or
would have controlled the risk until it was acceptable.

That there are people cropping up now demanding that Mozilla do a risk
analysis for them indicates that they themselves didn't do the necessary
risk analysis beforehand, which pegs my irony meter.

I wonder how these Masters of Information Security have "threat modelled"
the possibility that their chosen CA might get unceremoniously removed from
trust stores.  Show us yer risk register!

- Matt

[1] one might also substitute "impossibly" for "sufficiently" here; I've
done enough "risk analysis" to know that trying to enumerate all possible
threats is an absurd notion.  The point I'm trying to get across is
that someone asking Mozilla to do what they can't is not the iron-clad,
be-all-and-end-all argument that some appear to believe it is.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Matt Palmer via dev-security-policy
On Sat, Jul 04, 2020 at 08:42:03AM -0700, Mark Arnott via dev-security-policy 
wrote:
> I was informed yesterday that I would have to replace just over 300
> certificates in 5 days because my CA is required by rules from the CA/B
> forum to revoke its subCA certificate.

The possibility of such an occurrence should have been made clear in the
subscriber agreement with your CA.  If not, I encourage you to have a frank
discussion with your CA.

> In the CIA triad Availability is as important as Confidentiality.  Has
> anyone done a threat model and a serious risk analysis to determine what a
> reasonable risk mitigation strategy is?

Did you do a threat model and a serious risk analysis before you chose to
use the WebPKI in your application?

- Matt


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Matt Palmer via dev-security-policy
On Sat, Jul 04, 2020 at 12:51:32PM -0700, Mark Arnott via dev-security-policy 
wrote:
> I think that the lack of fairness comes from the fact that the CA/B forum
> only represents the view points of two interests - the CAs and the Browser
> vendors.  Who represents the interests of industries and end users? 
> Nobody.

CAs claim that they represent what I assume you mean by "industries" (that
is, the entities to which WebPKI certificates are issued).  If you're
unhappy with the way which your interests are being represented by your CA,
I would encourage you to speak with them.  Alternately, anyone can become an
"Interested Party" within the CA/B Forum, which a brief perusal of the CA/B
Forum website will make clear.

- Matt


Re: Proposal for a standard proof of compromised key/revocation request format

2020-06-28 Thread Matt Palmer via dev-security-policy
On Sun, Jun 28, 2020 at 05:14:47AM -0700, Corey Bonnell via dev-security-policy 
wrote:
> Feedback and suggestions for improvements are greatly appreciated.  Even
> if this format is not adopted as a standard mechanism for providing proof
> of key compromise, I hope that by posting it here there will be robust
> discussion on the topic of developing and adopting such a standard
> mechanism.

I'm yet to have a CA baulk at accepting a CSR as proof of compromise.  It
has the benefit of not having nearly as many superfluous fields as a
certificate, as well.  In terms of being able to deal with the format, I'd
expect that a CA would be at least as able to deal with validating a CSR as a
certificate, given that their job is to validate CSR signatures, but not
necessarily to validate certificate signatures...

Also, in an *extreme* nit-pick, something about the use of the word
"proof" in the name doesn't sit right.  I refer to the artifacts that I
produce as "attestations of compromise", as I feel that's a better
description than a "proof".  However, it doesn't produce nearly as good an
acronym.

The "standardised" key compromise attestation format I've been leaning
towards is a CSR whose subject DN is something like "CN=This key is
compromised!", and which includes a critical "poison" extension, to minimise
the risks of accidental acceptance still further.  I'm still unsure about
how to control for the possibility of social engineered mis-signing, because
the idea of someone managing to pull such a trick concerns me, but there's
an argument to be made that if someone can be tricked into signing arbitrary
content, that key is kinda busted anyway.
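For concreteness, such an attestation CSR might be generated along these
lines.  This is only a sketch, using the third-party `cryptography` library;
the subject string is the one suggested above, and the critical "poison"
extension is described in a comment rather than implemented, since its exact
form was never settled.

```python
# Sketch: an "attestation of compromise" CSR signed with the
# compromised key.  A CA receiving this can verify the signature
# against the certificate's public key to confirm compromise.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Stand-in for the compromised key; in practice you'd load it from disk.
compromised_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "This key is compromised!"),
    ]))
    # The critical "poison" extension floated above (an extension no CA
    # recognises, marked critical so the CSR can never be mistakenly used
    # for issuance) is omitted here, as its format was never pinned down.
    .sign(compromised_key, hashes.SHA256())
)

assert csr.is_signature_valid
pem = csr.public_bytes(serialization.Encoding.PEM).decode()
print(pem.splitlines()[0])  # -----BEGIN CERTIFICATE REQUEST-----
```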

- Matt



Re: Use of information collected from problem reporting addresses for marketing?

2020-06-02 Thread Matt Palmer via dev-security-policy
On Tue, Jun 02, 2020 at 06:38:12PM -0700, Benjamin Seidenberg via 
dev-security-policy wrote:
> Today, I received a marketing email from one of the CAs in Mozilla's
> program (Sectigo). As far as I know, the only interactions I've ever had
> with this CA where they would have gotten my name and email address would
> be from me submitting problem reports to them (for compromised private
> keys). Therefore, I can only assume that they mined their problem report
> submissions in order to generate their marketing contact lists.

I've sent several hundred certificate problem reports to a number of CAs in
the past few months, and I'm yet to get marketing spam from Sectigo as a
result.  I have had one (suspected) scrape-from-problem-report incident from
a different CA, but I can't be 100% sure, since I was at that time still
sending out problem reports from my personal address.  I now use per-report
plus-addressed addresses that go to a dedicated account -- it's possible that
the spamcannons don't recognise + as a valid local-part character, though. 


> 1.) Is anyone aware of any policies that speak to this practice? I'm not
> aware of anything in the BRs or Mozilla policy that speak to this, but
> there are many other standards, documents, audit regimes, etc., which are
> incorporated by reference that I am not familiar with, and so it's possible
> one of them has something to say on this issue.

No, I am not aware of anything specific to CAs/PKIs that would prohibit such
a practice.  You'd need to fall back to general data-handling legislation
like GDPR, California's new statute, and so on (as relevant to your
jurisdiction).

> 2.) While I felt like this practice (if it happened the way I assumed) is
> inappropriate, is there a consensus from others that that is the case? If
> so, is there any interest in adding requirements to Mozilla's Policy about
> handling of information from problem reports received by CAs?

It's certainly dumb as rocks, because the sort of people who are reporting
problems to CAs are not, by and large, the sort of people who are going to
be purchasing managers for things like managed PKI, and those same people
are also probably going to be the sort of people who are not fans of getting
spammed.  However, Rule 1, I believe, is that spammers are dumb.  If they
weren't, they wouldn't scrape whois data for abuse reporting addresses...

As far as making requirements in Mozilla Policy, I have my doubts that it'd
really fly.  As you note, the far more risky problem of having problem
reporters exposed to potential unpleasantness from incompetent subscribers
being unhappy at the wrong people:

> I do recall a discussion a while back on this list where a reporter had
> their information forwarded on to the certificate owner and got
> unpleasant emails in response and was asking whether the CAs were obligated
> to protect the identity of the reporters, but I don't recall any
> conclusions being reached.

was not conclusively addressed, and so I doubt there would be much interest
in a rule that said "thou shalt not spam people who report problems".

For all those reasons and more, I've switched to a separate e-mail account
and per-report addresses -- no (obvious) human to threaten with spurious
lawsuits, and if I get spam it's blindingly obvious where it came from.  The
automated reporting system I've setup also watches OCSP for revocation times
and keeps full and complete records of all correspondence and timestamps, so
I can tell exactly what (for example) the reporting timeframes were, and
whether the BR requirements were met.
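A sketch of how such per-report plus-addresses might be minted -- the
mailbox name and domain here are placeholders, not my actual setup:

```python
# Minting a unique sender address per problem report, so any later spam
# (or legal nastygram) is traceable to the exact report that leaked it.
import hashlib

def report_address(cert_der: bytes, mailbox="reports", domain="example.org") -> str:
    # Derive a short, stable tag from the reported certificate itself.
    tag = hashlib.sha256(cert_der).hexdigest()[:12]
    return f"{mailbox}+{tag}@{domain}"

addr = report_address(b"\x30\x82...")  # stand-in for the reported cert's DER
print(addr)  # e.g. reports+<12 hex chars>@example.org
```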

On that front, actually, would it be of any use to you (or others) if there
was a way to route your problem reports through my Revokinator system?  It'd
give you some amount of protection against spam and the such like, and
built-in OCSP / revocation time tracking.

- Matt



Re: GoDaddy: Failure to revoke certificate with compromised key within 24 hours

2020-05-21 Thread Matt Palmer via dev-security-policy
On Thu, May 21, 2020 at 02:01:49PM -0700, Daniela Hood via dev-security-policy 
wrote:
> After that we followed the Baseline Requirements 4.9.1 That says: "The CA
> obtains evidence that the Subscriber's Private Key corresponding to the
> Public Key in the Certificate suffered a Key Compromise;" We obtained the
> evidence that the key was compromised when we finished our investigation
> at 16:55 UTC, that was the time we set 24 hours revocation of the
> certificate, the same was revoked at May 8th at 16:55 UTC.

BRs 4.9.5:

"The period from receipt of the Certificate Problem Report or
revocation-related notice to published revocation MUST NOT exceed the time
frame set forth in Section 4.9.1.1".

> can be confirmed here: https://crt.sh/?id=2366734355

Can you explain why the revocation reason is "cessationOfOperation", rather
than "keyCompromise"?  To not provide a revocation reason at all is one
thing, but to indicate a factually incorrect one is... something else
entirely.

- Matt



Re: GoDaddy: Failure to revoke certificate with compromised key within 24 hours

2020-05-20 Thread Matt Palmer via dev-security-policy
On Tue, May 19, 2020 at 07:33:00PM -0700, sandybar497--- via 
dev-security-policy wrote:
> Here are the original headers (omitting my email)
> 
> ***
> 
> MIME-Version: 1.0
> Date: Thu, 7 May 2020 12:07:07 +
> Message-ID: 
> 
> Subject: Certificate Problem Report - compromised key
> From: sandy 
[...]
> https://crt.sh/?spkisha256=e92984ace6f80c75b092df972962f2d3f1365ba08c8bbf9b98cdf3aec20d2d2d

crt.sh sez:

Revoked (cessationOfOperation)  2020-05-08  16:55:17 UTC

Got to say, that definitely does look like over 24 hours from e-mail to
revocation.  Unfortunately, because you're using gmail, it's tricky to be
able to demonstrate when GoDaddy *actually* received the e-mail -- I don't
know of a way to get at the MTA logs to show when it was delivered to the
remote MTA.

I'd be curious to hear from GoDaddy as to why the revocation reason here is
marked as "cessationOfOperation", rather than "keyCompromise".  That
seems... fishy.

> Content-Type: application/octet-stream; 
> name="e92984ace6f80c75b092df972962f2d3f1365ba08c8bbf9b98cdf3aec20d2d2d.pem"
> Content-Disposition: attachment; 
> filename="e92984ace6f80c75b092df972962f2d3f1365ba08c8bbf9b98cdf3aec20d2d2d.pem"
> Content-Transfer-Encoding: base64
> X-Attachment-Id: f_k9wq5sjj0
> Content-ID: 

Somewhere along the line this got lost.  It'd be good to have a copy of it,
for completeness.  Since it's in PEM format, you can include it in the body
of an e-mail -- the Mozilla lists are a bit finicky with attachments.

- Matt



Re: Digicert issued certificate with let's encrypts public key

2020-05-17 Thread Matt Palmer via dev-security-policy
On Mon, May 18, 2020 at 03:46:46AM +, Peter Gutmann via dev-security-policy 
wrote:
> I assume this is ACME that allows a key to be certified without any proof that
> the entity requesting the certificate controls it?

ACME requires a CSR to be submitted in order to get the certificate issued. 
A quick scan doesn't show anything like "the signature on the CSR MUST be
validated against the key", but it does talk about policy considerations
around weak signatures on CSRs and such, suggesting that it was at least the
general intention of ACME to require signatures on CSRs to be validated.

In any event, given that the certs involved were issued by Digicert, not
Let's Encrypt, and Digicert's ACME issuance pipeline is somewhat of a niche
thing at present, I think it's more likely the problem lies elsewhere.

- Matt



Re: AIA CA Issuers URL gives 403 (Microsoft)

2020-05-13 Thread Matt Palmer via dev-security-policy
On Wed, May 13, 2020 at 08:28:03AM -0400, Ryan Sleevi wrote:
> On Tue, May 12, 2020 at 11:47 PM Matt Palmer via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> > 1. As Hanno said, it's a public resource, and as such it should, in
> > general,
> > be available to the public.
> 
> This is worded as a statement of fact, but it’s really an opinion, right?

It's a statement of fact, in my opinion.  

> You might think I’m nitpicking, but this is actually extremely relevant
> and meaningful. The requirements in 7.1.2 are only a SHOULD level, and do
> not currently specify access requirements. Your position seems to be that
> they’re better by omitting AIA than including a URL you can’t access, or
> that they’re prohibited from including URLs you can’t access, and neither
> of those requirements actually exist.

On the contrary, unless there's an override of RFC5280 4.2.2.1 in the BRs
that I can't find, the requirement of universal access does exist.  RFC5280
4.2.2.1 says, in relevant part:

"Where the information is available via HTTP or FTP, accessLocation MUST be
a uniformResourceIdentifier and the URI MUST point to either a single DER
encoded certificate [or a "certs-only" CMS]"

A CA is permitted to carefully weigh "the full implications" before deciding
to not abide by the SHOULD in BRs 7.1.2.3(c), but if they choose to put
caIssuers into AIA, it is an "absolute requirement" that the URI provide a
DER-encoded certificate (or a "certs-only" CMS).  A URI that returns a 403
is not a URI that "point[s] to [...] a single DER encoded certificate".

That sounds to me an *awful* lot like a requirement that the URL be
accessible and return a DER-encoded certificate (and, by construction, "that
they’re prohibited from including URLs [I] can’t access").

Of course, one could argue that blocking attackers violates this.  If you
block an attacker, but nobody else hears, does it make a violation?  I guess
we'll find out when a botnet operator complains to m.d.s.p that they got
blocked.

> > 2. wget is a legitimate tool for downloading files, thus blocking the
> > wget user agent is denying legitimate users access to the resource.
> 
> This seems to be saying that there can be zero negative side-effects,
> regardless of the abuse. I don’t find this compelling either.

You appear to be trying to extract a general rule from a specific
statement.

I stated that the apparently-permanent blocking of a general purpose UA from
being able to access a caIssuers URL was unacceptable.  I gave several
reasons why, in my professional experience dealing with Internet abuse,
blocking a general purpose UA has both noticeable impact on legitimate
users, and a very limited impact on attackers.

If you'd like a more general rule to go by, here's my take: if a CA's method
of blocking abuse negatively impacts legitimate users, the CA has an
obligation to demonstrate that no other method of blocking abuse, one with
less negative impact on legitimate users, is capable of reasonably dealing
with the threat.

I use the present tense in the preceding paragraph, but it's past tense in
this particular case -- as far as I can discern, the UA blocking has been
removed from the URLs listed in the OP.

> If we say it’s ok to not be accessible to any, because it’s not present,
> where’s the harm in not being accessible to some, when it is?

Conversely, if there's no harm in it not being available to some, where's
the harm in it not being available to any?

- Matt



Re: AIA CA Issuers URL gives 403 (Microsoft)

2020-05-12 Thread Matt Palmer via dev-security-policy
On Tue, May 12, 2020 at 11:37:23PM -0400, Ryan Sleevi wrote:
> On Tue, May 12, 2020 at 10:30 PM Matt Palmer via dev-security-policy
>  wrote:
> >
> > On Tue, May 12, 2020 at 07:35:50AM +0200, Hanno Böck via 
> > dev-security-policy wrote:
> > > After communicating with Microsoft it turns out this is due to user
> > > agent blocking, the URLs can be accessed, but not with a wget user
> > > agent.
> > > Microsoft informed me that "the wget agent is explicitly being blocked
> > > as a bot defense measure."
> > >
> > > I leave it up to the community to discuss whether this is acceptable.
> >
> > I'm firmly on the "nope, unacceptable" side of the fence on this one.
> 
> Could you share your reasoning?

Sure, plenty of reasons:

1. As Hanno said, it's a public resource, and as such it should, in general,
be available to the public.

2. wget is a legitimate tool for downloading files, thus blocking the wget
user agent is denying legitimate users access to the resource.

3. For a miscreant, blocking by user agent is barely a speed bump, as
changing UA to something innocuous / harder to block is de rigeur.

- Matt



Re: AIA CA Issuers URL gives 403 (Microsoft)

2020-05-12 Thread Matt Palmer via dev-security-policy
On Tue, May 12, 2020 at 07:35:50AM +0200, Hanno Böck via dev-security-policy 
wrote:
> After communicating with Microsoft it turns out this is due to user
> agent blocking, the URLs can be accessed, but not with a wget user
> agent.
> Microsoft informed me that "the wget agent is explicitly being blocked
> as a bot defense measure."
> 
> I leave it up to the community to discuss whether this is acceptable.

I'm firmly on the "nope, unacceptable" side of the fence on this one.

- Matt



Re: AIA CA Issuers field

2020-05-11 Thread Matt Palmer via dev-security-policy
On Mon, May 11, 2020 at 02:50:19PM +, Corey Bonnell via dev-security-policy 
wrote:
> > * Are there rules that CAs must adhere to in regards to referencing the
> >   intermediate in the AIA field? Does it need to be available? Does it
> >   need to be there at all?
> 
> It's optional (SHOULD-level), as Baseline Requirements 7.1.2.3 (c) [1] states:
>   It (AIA extension) SHOULD also contain the HTTP URL of the Issuing
>   CA’s certificate (accessMethod = 1.3.6.1.5.5.7.48.2).
> 
> I'd think it's a reasonable expectation/implicit requirement that if the
> caIssuers field is present, the issuing CA cert should be generally
> available on the global internet at the specified URL.

I read RFC5280 4.2.2.1 as *requiring* a URL in caIssuers to return the
issuing CA cert.  So a cert SHOULD have caIssuers, and if caIssuers is
present and contains a HTTP URL that URL MUST return a DER-encoded cert (or
"certs-only" CMS).
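A relying party checking that requirement can at least sniff what comes back
from a caIssuers URL.  A minimal heuristic sketch -- DER structures start
with a SEQUENCE tag, PEM with its armour line, and anything else (like an
HTML 403 page) is clearly non-conformant:

```python
# Heuristic sniff of a caIssuers response body: RFC 5280 4.2.2.1 requires
# a DER-encoded certificate (or "certs-only" CMS), and DER structures
# start with a SEQUENCE tag (0x30).  This is a sketch, not a full parser.
def classify_ca_issuers_body(body: bytes) -> str:
    if body.startswith(b"-----BEGIN"):
        return "pem"      # readable, but not what 4.2.2.1 asks for
    if body[:1] == b"\x30":
        return "der"      # plausibly a single cert or a "certs-only" CMS
    return "unknown"      # e.g. an HTML error page

print(classify_ca_issuers_body(b"\x30\x82\x05\x0a..."))          # der
print(classify_ca_issuers_body(b"-----BEGIN CERTIFICATE-----"))  # pem
print(classify_ca_issuers_body(b"<html>403 Forbidden</html>"))   # unknown
```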

The only corner case I can find is that if the URL returns a DER-encoded
cert (etc), I can't see anything that explicitly requires that DER-encoded
cert (etc) to be the issuing CA certificate.  It's strongly implied by "the
additional information lists certificates that were issued to the CA that
issued the certificate containing this extension", but it's not as clear and
obvious as the rest of the requirements (no "MUST" in there, for instance). 
I don't encourage anyone to try *making* that argument, though...

> > * RfC 5280 says certificates should be served as
> >   "application/pkix-cert". Is it a violation of any rule if they are
> >   not? (application/x-x509-ca-cert is common, no content type and
> >   completely bogus content types linke text/html also happen.)
> 
> Since this a SHOULD-level requirement, it's not prohibited to use other
> content-types (although discouraged).

I wonder if it's worth starting to require that violations of SHOULDs be
explained.  After all, "SHOULD" indicates that

> there may exist valid reasons in particular circumstances to ignore a
> particular item, but the full implications must be understood and
> carefully weighed before choosing a different course.

(RFC2119, natch)

As a result, if someone violates a SHOULD, then it is reasonable to assume
that the violator can explain the thought processes that caused them to
carefully understand and weigh the implications before rejecting the
recommendation.

- Matt



Re: GRCA: Out-of-date CPS provided in CCADB

2020-05-10 Thread Matt Palmer via dev-security-policy
On Sun, May 10, 2020 at 09:16:41AM -0700, irvinfly--- via dev-security-policy 
wrote:
> Hi, I'm researching the status of Taiwan GCA and coincidence to find this
> issue.  I will try to find a relative staff at National Development
> Council to get back.

Coincidentally, I happened to stumble over
https://bugzilla.mozilla.org/show_bug.cgi?id=1463975, which if I'm reading
it correctly, indicates that the GRCA has more-or-less ceased operating. 
I'm of the opinion that that their pending removal does not absolve them of
the need to abide by Mozilla Policy and community norms in the meantime,
however practically speaking I'd be surprised if you got much of a useful
response.

- Matt



GRCA: Out-of-date CPS provided in CCADB

2020-05-07 Thread Matt Palmer via dev-security-policy
In trying to validate the problem reporting e-mail address for
https://crt.sh/?id=657220608, I grovelled through the CCADB CSV-o'-Doom
(freshly downloaded for that "new CSV" smell), and the CPS link
therein refers to http://grca.nat.gov.tw/download/GPKI_CP_eng_v1.7.pdf
which, at the time of writing, is dated "January 31, 2013".

It also has no Section 1.5.2 (at all), and Section 1.4, "Contact Details",
does not have any contact details in it, but merely refers the interested
reader to http://grca.nat.gov.tw/, which... is in (I assume) Chinese, which
I sadly cannot read.

This all makes it rather difficult to report a key compromise, and I'd
really appreciate it if (a) GRCA could fix this up ASAP, and (b) other CAs
could cast an eye over their CPSes to make sure they're not six years
out-of-date.

- Matt



Filtering on problem reporting e-mail addresses

2020-05-07 Thread Matt Palmer via dev-security-policy
This has happened twice now, with two different CAs, so I'm going to raise
it as a general issue.

I've had one CA reject e-mails because the HELO name wasn't to their liking
(which I was able to fix).  The other, which has just started happening now,
is utterly inscrutible -- "550 Administrative prohibition - envelope
blocked".  Given that the envelope sender and recipient hasn't changed from
the numerous other problem reports I've sent over the past month or so, I'm
really at a loss as to how to proceed.

Questions that arise in my mind:

1. To what extent is it reasonable for a CA to reject properly-formed
e-mails to the CPS-published e-mail address for certificate problem
reporting?

2. What is a reasonable response from a problem reporter to a rejected
problem report e-mail?

3. In what ways are the required timelines for revocation impacted by the
rejection of a properly-formed certificate problem report?

- Matt



Re: Sectigo: Failure to revoke certificate with compromised key

2020-05-05 Thread Matt Palmer via dev-security-policy
On Mon, May 04, 2020 at 08:45:34AM -0700, sandybar497--- via 
dev-security-policy wrote:
> Additionally, Sectigo referred to pwnedkeys as
> some sort of authority that they say it’s not compromised.

Bless their little cotton socks, pwnedkeys is now such an authority that
Sectigo thinks I've got every compromised key in existence.  I feel so
validated.

> The necessary evidence was provided to Sectigo and they have thus far
> failed to deal with the evidence or clearly articulate reasons for
> concluding this case to not be a compromise.

What I've found works best when reporting these cases to m.d.s.p is to
provide all the (substantive) correspondence, exactly as it was
sent/received, along with UTC timestamps.  That allows for independent
assessment that Sectigo has, in fact, fallen down on the job, rather than it
being possible that there's just a big ol' misunderstanding going on. 
Here's an example of the sort of thing I mean:

https://groups.google.com/forum/#!topic/mozilla.dev.security.policy/wtM7uX1stIA

- Matt



Re: DRAFT May 2020 CA Communication/Survey

2020-05-01 Thread Matt Palmer via dev-security-policy
On Fri, May 01, 2020 at 04:48:28PM +, Corey Bonnell via dev-security-policy 
wrote:
> I have briefly reviewed and would like to ask what is the intent of Item 4
> and the associated sub-items?  The Browser Alignment draft ballot is under
> discussion in the CAB Forum, so the intent behind the shift of the
> location of discourse to the Mozilla forum is unclear.

I also had a similar "hmm, that seems odd" reaction to point 4 of the draft
CA communication.  At first glance, it does seem somewhat redundant to ask
CAs to tell Mozilla about their concerns rather than tell the CA/B Forum
directly.  However, when I reflected on it at some length, I came to the
conclusion that it is a good thing for Mozilla to survey the CAs in its root
program in this manner, for several reasons:

1. It is my understanding that not all CAs in Mozilla's root store are
   members of the CA/B Forum.  You could wonder why that is, you can even
   think that those CAs that aren't members of the Forum are being various
   forms of irresponsible, however the fact remains, and Mozilla reaching
   out to gather the opinions of all CAs in its root store helps to surface
   the opinions of those CAs that do not have a voice in the CA/B Forum
   directly.

2. There are multiple examples of CA members of the CA/B Forum failing to
   express their objections to a ballot until the voting period, which has
   on at least one occasion let to the failure of a ballot.  As there is no
   mechanism within the CA/B Forum to "force" members to express their
   objections to a draft ballot in a timely manner, it seems likely that
   this will occur again in the future.  Mozilla's CA communication and
   survey, being mandatory for all CAs in Mozilla's trust store to complete,
   requires those CAs to carefully consider the issues and voice their
   objections, which reduces the chances of objectionable draft ballots
   making it to the voting stage only to fail at that hurdle.

Thus, despite having initial reservations about Mozilla's action here, I've
come to the conclusion that it is a Good Thing(TM) that Mozilla is doing
this, and it can only be to the benefit of Mozilla, relying parties, and the
Web PKI for Section 4 of the draft May CA Communication to go out to CAs
as-is.

- Matt



Re: Proposal: Make readable CPSes easier to find

2020-04-20 Thread Matt Palmer via dev-security-policy
On Tue, Apr 21, 2020 at 01:23:49AM -0400, Ryan Sleevi wrote:
> On Mon, Apr 20, 2020 at 10:04 PM Matt Palmer via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> > 1. Make cPSuri mandatory
> 
> We really don’t need to be stuffing everything into subscriber
> certificates, especially when it’s relevant to who the issuer is. We also
> need to make sure we are optimizing for the right case - the vast majority
> of certificates who, for their entire lifetime, have no need to express the
> CPS URI, and would waste countless bytes (and electrons and fossil fuel)
> unnecessarily.

That ship sailed so very, very long ago, though.  Practically every
certificate out there already provides a (far less useful) cPSuri, and many
certificates are also jammed full of all sorts of other cruft, like Explicit
Text.

> 2. Make the cPSuri actually point to the relevant CPS
> 
> That doesn’t really capture what a CPS is. There can be many relevant CPSes
> to a single certificate, both for a single path and multiple paths. That’s
> literally how audits came to be - to support the model of multiple CPSes.

From what I can see in the CCADB CSV o' Doom, a CA can only provide a single CPS
link for a given intermediate.  That does rather suggest that there's only
one CPS for a given certificate.

> The problem is that a CA's repository, or "online information provided by
> > the CA", typically looks something like this:
> >
> >  * CPS for Device PKI
> >  * Frambingaling CP and CPS v2.1
> >  * Latest Certificate Practice Statement for Small Furry Creatures
> >  * Subscriber Agreement and Addendum for Something Something
> >
> > ... and so on.  How I get from "I have a certificate that I need to
> > report",
> > which contains an issuer CN and not much else, to the correct document out
> > of that list above, is a non-trivial problem.  Having the cPSuri point *to
> > the CPS* would completely solve that.
> 
> Do you disagree? If Mozilla Policy made normative that there be some form
> of binding problem reporting statement for each issuer certificate, would
> that address your problem or not?

Not particularly, because while problem reporting addresses are the major
part of why I have gone looking for CPSes in the recent past, it is not the
only reason.

- Matt



Proposal: Make readable CPSes easier to find

2020-04-20 Thread Matt Palmer via dev-security-policy
A major difficulty I found in trying to report compromised keys to CAs was
in finding a reporting address to use.  Now, by itself, that could be solved
by making CCADB reporting addresses be authoritative, but that would also
require standardisation of reporting types, and it's a whole rabbit hole. 
There are also many other reasons why someone might want to examine the CPS
that pertains to a particular certificate, so it makes sense to be able to
easily find that document as and when required.

Being unable to find the correct CPS for a CA had several causes:

1. Certificates don't *have* to have a cPSuri (although the vast majority
do);

2. cPSuri, when present, doesn't point at a CPS, but rather at a CA's
repository, which often contains a myriad of documents which often are
unclearly related to the certificate in hand;

3. When a relevant-looking CPS is found, it can be in a non-English
language, with no clear pointers to the location of the (Mozilla-required)
non-authoritative English translation of that document.

My various sub-proposals are, unsurprisingly, closely aligned to these
sub-issues.

1. Make cPSuri mandatory

I assume there was a Very Good Reason for cPSuri to be optional, but for the
life of me I can't think of what it would be.  Unless someone comes up with
the killer argument against it (and no, "it bloats certificates" doesn't
count, IMAO), I think this one's a no brainer.

When there's no link to a CPS, in any way shape or form, I'm left flailing
around in the giant CCADB CSV o' doom to figure out where the CPS lives, and
that...  ain't easy.  In one case, I sent a problem report to *completely*
the wrong CA.  If each certificate had a link to the right place, that
problem, at least, couldn't happen.

2. Make the cPSuri actually point to the relevant CPS

It seems odd to me that the BRs have relaxed what RFC5280 has to say about
cPSuri, which is "The CPS Pointer qualifier contains a pointer to a
Certification Practice Statement (CPS) published by the CA", to say "HTTP
URL for the Subordinate CA's Certification Practice Statement, Relying Party
Agreement or other pointer to online information provided by the CA".  I'm
not a fan of 5280's laxity regarding specifying *which* CPS published by the
CA should be linked to, but at least it's pretty clear that the content at
the end of the link should be *a* CPS, rather than the BRs allowing a CA to
link to basically anything they like.

The problem is that a CA's repository, or "online information provided by
the CA", typically looks something like this:

 * CPS for Device PKI
 * Frambingaling CP and CPS v2.1
 * Latest Certificate Practice Statement for Small Furry Creatures
 * Subscriber Agreement and Addendum for Something Something

... and so on.  How I get from "I have a certificate that I need to report",
which contains an issuer CN and not much else, to the correct document out
of that list above, is a non-trivial problem.  Having the cPSuri point *to
the CPS* would completely solve that.
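For reference, this is all a reporter can mechanically extract today: the
cPSuri qualifiers from certificatePolicies.  This sketch uses the
third-party `cryptography` library; the self-signed certificate is built
here purely so the example is self-contained, and its policy OID and URL
are illustrative.

```python
# Extracting every CPS Pointer (cPSuri) qualifier from a certificate's
# certificatePolicies extension.
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import ExtensionOID, NameOID

def cps_uris(cert: x509.Certificate) -> list:
    """Return every cPSuri qualifier found in certificatePolicies."""
    try:
        ext = cert.extensions.get_extension_for_oid(ExtensionOID.CERTIFICATE_POLICIES)
    except x509.ExtensionNotFound:
        return []
    uris = []
    for policy in ext.value:
        for qualifier in policy.policy_qualifiers or []:
            if isinstance(qualifier, str):  # cPSuri arrives as a plain string
                uris.append(qualifier)
    return uris

# Demonstration certificate carrying a single cPSuri qualifier.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "demo")])
now = datetime.now(timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=1))
    .add_extension(
        x509.CertificatePolicies([
            x509.PolicyInformation(
                x509.ObjectIdentifier("2.23.140.1.2.1"),  # CA/B domain-validated
                ["https://example.com/cps"],
            ),
        ]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

print(cps_uris(cert))  # ['https://example.com/cps']
```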

There is a bit of a side point here, about whether the correct CPS to link
to is CPS-at-time-of-issuance, or CPS-at-time-of-retrieval.  Given that CAs
don't seem to ever provide old CPSes in their repository, I assume that the
general consensus is that the appropriate CPS to review is *always* the
current version.  Presumably, if a CA publishes a non-backwards-compatible
CPS (ie the new CPS would not permit issuance of a certificate that was
OK under the old CPS), the CA is obliged to revoke all those now-invalid
certificates.

3. Require non-English language CPSes to link to the English translation

I am sympathetic to CAs which operate primarily in a non-English market, in
that they want their primary materials to be in the language of their
market.  That's no problem.

However, there is a problem when I need to go from "here is a CPS I found in
a language I don't speak" to "here is the corresponding CPS in English". 
So, I'd like to see it a requirement that the "primary language" CPS have,
in some easily-findable-by-monoglots-like-myself spot, a link to the
corresponding English translation for *this CPS*.

A link to the "English language translations" repository doesn't help, for
much the same reason that cPSuri linking to the CA's repository doesn't
help.  if I don't know what the translation of the relevant CPS' title is,
I'm not going to be able to pick out the right English language CPS from the
list.

If the word "English" is in the text surrounding the link, that'd
make it easy enough: ^F, "English", copy-paste URL, done.

- Matt



Re: Proposal: prohibit issuance of new certificates with known-compromised keys, and for related purposes

2020-04-09 Thread Matt Palmer via dev-security-policy
On Thu, Apr 09, 2020 at 04:55:51PM +0100, Nick Lamb via dev-security-policy 
wrote:
> Right-sizing of Bloom filters is an issue, but you only need to get
> ballpark accuracy. If we genuinely aren't sure if there will be a
> thousand or a billion RSA private keys compromised next year then yup
> that's a problem to address early.

"Like bloom filters?  Then you'll *love* new Scalable Bloom Filters!"

http://gsd.di.uminho.pt/members/cbm/ps/dbloom.pdf

They are a rather neat trick.

Bucketing keys isn't a bad idea, either.
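To make the mechanics concrete, here's a toy Bloom filter over key
fingerprints.  The sizing (m, k) is illustrative only; a production filter
would be dimensioned for the expected key count and false-positive budget.

```python
# Toy Bloom filter: k hash positions derived from salted SHA-256 over
# one shared bit array.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=8192, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter()
# The SPKI fingerprint from the GoDaddy thread above, as sample data.
fingerprint = bytes.fromhex("e92984ace6f80c75b092df972962f2d3"
                            "f1365ba08c8bbf9b98cdf3aec20d2d2d")
bf.add(fingerprint)
print(fingerprint in bf)  # True -- no false negatives, ever
# Lookups for keys never added are *probably* False; false positives are
# possible, which is why a filter hit still warrants a real lookup.
```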

> A Bloom filter doesn't solve the whole problem unless you're
> comfortable being a bit savage. You *can* say "If it matches the bloom
> filter, reject as possibly compromised" and set your false positive
> ratio in the sizing decision as a business policy. e.g. "We accept
> that we'll reject 1-in-a-million issuances for false positive". But I'd
> suggest CAs just slow-path these cases, if it's a match to the Bloom
> filter you do the real check, and maybe that's not fast enough for goal
> response times in your customer service, but in most cases issuance
> fails anyway because somebody was trying to re-use a bad key. Customers
> who just got tremendously unlucky get a slightly slower issuance. "Huh,
> these are normally instant. What's up with... oh, there is goes".

If you're storing all the keys on local disk anyway, providing an index of
all the keys that's going to be Fast Enough(TM) isn't hard.  The database
isn't *that* big, and the lookup rate isn't *that* huge, that you need to
have a bloom filter in the middle to cut down on the lookup rate.  I'm
interested in bloom filters for pwnedkeys because of the additional latency
that a lookup over HTTP introduces.
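For anyone wanting to experiment with the slow-path scheme Nick describes, here's a minimal stdlib sketch (the class name, sizing formulas, and counter-prefixed hashing are my own illustrative choices, not the pwnedkeys .pkf format): a filter miss means "definitely not compromised", while a hit falls through to the authoritative lookup.

```python
import hashlib
import math


class BloomFilter:
    """Minimal Bloom filter sketch; illustrative only, not the .pkf format."""

    def __init__(self, capacity, error_rate=1e-6):
        # Standard sizing: m = -n*ln(p)/ln(2)^2 bits, k = (m/n)*ln(2) hashes
        self.m = math.ceil(-capacity * math.log(error_rate) / math.log(2) ** 2)
        self.k = max(1, round(self.m / capacity * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item):
        # Derive k bit positions by hashing the item with a per-hash counter
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))


def key_is_compromised(spki_fp, bloom, authoritative_lookup):
    # Slow path: the filter can only say "definitely not" or "maybe";
    # a "maybe" falls through to the real (slower) database check.
    return spki_fp in bloom and authoritative_lookup(spki_fp)
```

With the error rate set as a business-policy knob, the only cost of a false positive is one extra authoritative lookup, not a rejected issuance.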

> Is it necessary to spell out that even though _Private_ key compromise
> is what we care about the things you need to be keeping in filters and
> databases to weed out compromised keys are the corresponding _Public_
> keys?

You don't even need to keep the actual public keys; the (SHA256) SPKI
fingerprint is entirely sufficient.
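As a sketch of why the fingerprint alone is sufficient: it's just SHA-256 over the DER-encoded SubjectPublicKeyInfo, so a compromised-key blocklist reduces to a set of hex digests. The function names are illustrative, and extracting the SPKI bytes from a certificate or key requires an X.509/ASN.1 parser, which is assumed rather than shown here.

```python
import hashlib


def spki_fingerprint(spki_der):
    """SHA-256 fingerprint over a DER-encoded SubjectPublicKeyInfo.

    Getting the SPKI bytes out of a certificate needs an X.509 parser
    (e.g. the third-party `cryptography` package); here the DER bytes
    are assumed to be in hand already.
    """
    return hashlib.sha256(spki_der).hexdigest()


def seen_before(spki_der, revoked_fps):
    # The whole pre-issuance check: is this key's digest in the set?
    return spki_fingerprint(spki_der) in revoked_fps
```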

- Matt



Re: Proposal: prohibit issuance of new certificates with known-compromised keys, and for related purposes

2020-04-06 Thread Matt Palmer via dev-security-policy
On Mon, Apr 06, 2020 at 12:56:02PM -0400, Ryan Sleevi wrote:
> On Mon, Mar 30, 2020 at 5:32 PM Matt Palmer via dev-security-policy
>  wrote:
> > Righto, the goals are:
> >
> > * Make it a policy violation for CAs to issue a certificate using a public
> >   key they've revoked before.
> >
> > * Clarify the language around key compromise revocation to make it obvious
> >   that CAs have to revoke everything with a given private key.
> >
> > * Clarify the language around revocation time limits to make it obvious that
> >   the time period starts when the communication leaves the reporter.
> 
> I've attempted the first two points with
> https://github.com/sleevi/cabforum-docs/pull/12 . You can seem some of
> the discussion at
> https://github.com/sleevi/cabforum-docs/pull/12/files#r401919547 and
> https://github.com/sleevi/cabforum-docs/pull/12/files#diff-7f6d14a20e7f3beb696b45e1bf8196f2R1425

Thanks, I've made a comment on part of that change.

> > I, personally, do not have a problem with mandating that CAs keep records of
> > compromised keys in perpetuity.
> 
> I've not yet addressed this part in the above PR, because I think
> there's still work to sort out the objectives and goals. We've seen
> some CAs express concerns (not unreasonably) at the potential of an
> unbounded growth of key sizes creating a disproportionate load on CAs.
> I think more napkin math is needed here in terms of load and lookups.

Well, I can say that indexing 1.3M private keys, at least, isn't a
particular challenge, even comparing that against the entire corpus of known
certificates.  I don't know how many private keys CAs' customers are
planning on dumping onto the public Internet, though...

> It's not as easy as saying "use a bloom filter" if a bloom filter
> takes X amount of time to generate.

A bloom filter could potentially be a *part* of an approach, but it
certainly can't be the entire solution (for a great many reasons).  As an
aside, if anyone wants to try out a bloom filter, I've generated one
containing all the Debian weak keys I've generated so far:
https://assets.pwnedkeys.com/public/debian-weak-keys.pkf (file format
documented at https://pwnedkeys.com/filter.html).

> > > While I appreciate the suggestion, I'm worried this does more to sow
> > > confusion rather than bring clarity. As you note, 4.9.1.1 is linked to
> > > evidence about the Private Key, not the Certificate, so this is
> > > already a "clear" requirement.
> >
> > What about it sows (additional) confusion?
> 
> Every time we've seen an existing requirement stated in two places,
> CAs have tried to argue they're disjoint requirements OR they become
> disjoint requirements through a lack of consistent updating. If a CA
> can't read 4.9.1.1 and reach the conclusion, I've got worries, but if
> a CA has reached that, they should be offering suggestions here as to
> why.

Yes, well, I suppose we should ask the CAs that have come to that conclusion
as to how they managed that.  I'm not hopeful, however; whenever I've asked
questions about how a CA came to a particular conclusion, the best I've
gotten back is "lol we dunno".  The most common response is radio silence.

> Unfortunately, I think that's where we might disagree. More words
> often creates more opportunity here for getting creative, so I want to
> figure out how to tighten up or entirely reframe requirements, rather
> than merely state them multiple times in slightly-different ways that
> might be seen as offering slightly different requirements. In short,
> I'd like to avoid the CA-equivalent of the Genesis creation narrative
> debate :)

I see your point.

> > > > Step 3: Insert a new paragraph at the end of section 4.9.1.1, with the
> > > > following contents (or a reasonable facsimile thereof):
> > >
> > > This isn't where the suggested requirements go. That's covered in 4.9.5
> >
> > I'm not sure what you mean by this.  Could you expand a little?
> 
> Yeah, 4.9.5 is the "Time within which CA must process the revocation
> request" - that's where requirements for timeliness go.

Hmm, I think the cat's nose is poking out of the bag on that one, because of
all the mentions of timeframes for different sorts of revocation already in
4.9.1.1.  I don't have a problem with a clarification of exactly what
"receipt" means into 4.9.5, though, if it works better there.  The important
thing is that there *is* a clear definition of such things, because there
are CAs who aren't clear on what that means.

- Matt



Revocation from the fuuutuuuuuure

2020-04-05 Thread Matt Palmer via dev-security-policy
I've recently been taking careful notice of the revocation information that
CAs are publishing, because what little life I did have is currently under
lockdown.  I've come across a rather curious behaviour that seems fairly
endemic: declarations of revocation that are effectively in the future.  I'm
not sure what to make of this, from many perspectives: standards compliance,
interoperability, as well as "what the heck does that even *mean*?"

Take, for example, these revocations:

 firstcheck                 | producedat          | thisupdate          | revocationtime
----------------------------+---------------------+---------------------+---------------------
 2020-04-04 21:34:02.65324  | 2020-04-04 21:33:00 | 2020-04-04 21:00:00 | 2020-04-04 21:33:06
 2020-04-04 21:20:56.018248 | 2020-04-04 21:19:00 | 2020-04-04 21:00:00 | 2020-04-04 21:19:54

firstcheck is the earliest timestamp at which the revokinator got a
"revoked" response from the OCSP responder; the other timestamps are all
taken from the validated OCSP response.

In each case, while the OCSP response was received by the revokinator after
the revocation time (which is good), the revocation time is later than both
the producedAt and thisUpdate times in the OCSP responses.  This behaviour
doesn't appear to match the semantics of these fields, as defined in
RFC6960:

thisUpdate  The most recent time at which the status being
indicated is known by the responder to have been
correct.

producedAt  The time at which the OCSP responder signed this
response.

revocationTime  The time at which the certificate was revoked or
placed on hold.

(Just in case anyone wants to object with "but RFC5019!", I'll save you the
effort and just note that the definitions in RFC5019 are substantively
similar to those above.)

Working from the bottom, the use of the word "was" in the definition of
revocationTime suggests that revocation is not supposed to happen in the
future.  It seems quite clearly past tense.

producedAt seems fairly straightforward -- time of the signature.  Not a lot
of nuance there.  Except... how can you sign something saying that a
certificate "***was*** revoked" before the time at which the revocation took
place?

The same logic, only more so, exists for thisUpdate -- how can the responder
*know* that the status of a certificate is "revoked at $time", twenty
minutes (or more) before that revocation took place?

Now, what I *think* is happening is simply that OCSP responders are
backdating producedAt and thisUpdate in OCSP responses.  For time skew
reasons, moving back the timestamps might indeed be a good idea, much as
Mozilla permits for certificates' notBefore ("Minor tweaking for technical
compatibility reasons is accepted").

The only problem with this sort of "compatibility backdating" is that (a)
the definitions for the timestamps in OCSP responses are more precise than
that of notBefore, and (b) there is no exception that I can find in Mozilla
Policy or the BRs that allow this practice -- and as such, regardless of
whether it's a good idea or not, it would appear to be something that CAs
are not supposed to be doing.
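A sketch of the consistency check implied by this reading of RFC 6960 (field names follow the RFC; the timestamps are assumed to have already been parsed out of a validated response, and the function name is my own):

```python
from datetime import datetime


def ocsp_time_anomalies(produced_at, this_update, revocation_time):
    """Flag a 'revocation from the future' per RFC 6960 field semantics:
    a responder cannot have signed (producedAt), nor known as of
    (thisUpdate), a revocation that had not yet happened."""
    problems = []
    if revocation_time > produced_at:
        problems.append("revocationTime is later than producedAt")
    if revocation_time > this_update:
        problems.append("revocationTime is later than thisUpdate")
    return problems
```

Run against the first row of the table earlier in this message, both checks fire.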

As a result of all this, I have a great many questions:

1. Does my analysis and reasoning above appear correct?  Does anyone have a
differing interpretation of the various documents and requirements, and/or
have I failed to find something that could bear upon the reasoning or
conclusions I've come to?

2. What could a revocationTime "in the future" even mean, within the context
of the RFC definitions as they currently exist?

3. What do common user agents do, in the real world, if they encounter a
revocation time which appears to be in the future?  Do common user agents
even check the revocationTime, or do they just look at cert_status and move
on?  I assume that user agents are at most only checking revocationTime
against their local clock, because if they were doing these "logic checks"
between revocationTime and thisUpdate/producedAt, they'd be rejecting these
backdated OCSP responses.

4. The horse has probably well and truly bolted on the client
interoperability front, but could a revocation date in the future, if
defined, help with, say, the problem of signalling to subscriber issuance
automation that a certificate is "pending revocation"?

5. (for CAs) why do you produce OCSP responses with these sorts of
timestamps?  Do you have a different interpretation of the definitions in
RFC6960, such that this behaviour is compliant?  If so, what is this
alternate interpretation?  If not, what was the reason for failing to abide
by the RFC?  Did you bring this conflict between the RFCs and your practice
to the attention of Mozilla or another body, such as your auditor or the
CA/B Forum?  If not, why not?

6. (for everyone) should Mozilla (or the BRs) make an explicit allowance for
a certain amount of "backdating" in 

Re: Let's Encrypt: Failure to revoke key-compromised certificates within 24 hours

2020-03-30 Thread Matt Palmer via dev-security-policy
On Tue, Mar 31, 2020 at 01:34:27PM +1100, Matt Palmer wrote:
> If someone would like to make the argument that it's a gray area because I
> submitted the revocation requests via ACME, rather than the CPS-provided
> e-mail address, well, I can switch to sending e-mails, but having a human
> process all the revocation requests is unlikely to be a better use of
> everyone's time.

... aaand they did
(https://bugzilla.mozilla.org/show_bug.cgi?id=1625322#c2).  Oh well, I guess
someone at Let's Encrypt is going to have a bit more to do from now on.

- Matt



Re: Let's Encrypt: Failure to revoke key-compromised certificates within 24 hours

2020-03-30 Thread Matt Palmer via dev-security-policy
On Mon, Mar 30, 2020 at 06:01:58PM -0400, Ryan Sleevi wrote:
> On Mon, Mar 30, 2020 at 5:43 PM Matt Palmer via dev-security-policy
>  wrote:
> >
> > On Mon, Mar 30, 2020 at 01:48:28PM -0700, Josh Aas via dev-security-policy 
> > wrote:
> > > Matt - It would be helpful if you could report issues like this to the CA
> > > in question, not just to mdsp.
> >
> > Helpful to *whom*, exactly?  I don't write up these reports to be helpful to
> > the CA in question; I write them to be helpful to the community.  I don't
> > see how reporting these problems to an individual CA is helpful to anyone
> > except that one CA -- which, as I said, is not a goal I am aiming for here.
> 
> I don't think that's quite a particularly helpful stance to take :)
> Or, put differently, "why not both"

Yes, I might have put it a bit harshly, and I apologise for that.  I was
somewhat taken aback by the implication of hiding problems by reporting to
them to the CA, rather than to the community.

> That said, your specific incident was in the gray area, where you'd
> already previously reported compromise and the CA issued certs with
> known compromised keys. You shouldn't "have" to report those new keys,
> but it's still good form.

I *have* been reporting additional certificates using the same compromised
key, using the ACME revocation endpoint provided by the CA, and indicating
that the reason for requesting certificate revocation was key compromise.  I
don't see how it's a "gray area", though, except insofar as multiple CAs
have misinterpreted the BRs in roughly the same way.

If someone would like to make the argument that it's a gray area because I
submitted the revocation requests via ACME, rather than the CPS-provided
e-mail address, well, I can switch to sending e-mails, but having a human
process all the revocation requests is unlikely to be a better use of
everyone's time.
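For reference, the ACME revocation requests mentioned here carry a payload defined in RFC 8555 §7.6, where reason 1 is keyCompromise (RFC 5280 CRLReason). This is a sketch of just the payload; the surrounding JWS signing and POST are omitted, and the helper names are illustrative.

```python
import base64


def b64url(data):
    # ACME uses base64url without padding (RFC 8555 / RFC 4648 s.5)
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


def revoke_cert_payload(cert_der, reason=1):
    """Payload for an RFC 8555 s.7.6 revokeCert request.

    reason=1 is keyCompromise.  In a real request this dict travels inside
    a JWS signed either by an account key or by the certificate's own
    (compromised) key -- that machinery is omitted here.
    """
    return {"certificate": b64url(cert_der), "reason": reason}
```

Signing with the compromised key itself is what makes the request self-proving, and is why no human needs to be in the loop.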

> > At any rate, since (as I understand it) all CAs are supposed to be watching
> > mdsp anyway, sending a report here should be equivalent to sending it to all
> > CAs -- including Let's Encrypt -- anyway.
> 
> Ish? https://wiki.mozilla.org/CA/Incident_Dashboard specifically
> encourages reporters to file a new incident bug.

I don't see that that page *discourages* reporters from posting to mdsp.  My
point, though, is that Josh asked for a report to be sent to Let's Encrypt,
all CAs are supposed to watch mdsp, therefore sending to mdsp should satisfy
Josh's request for a copy to be sent to Let's Encrypt.

> https://wiki.mozilla.org/CA/Responding_To_An_Incident#Incident_Report
> allows CAs to post to m.d.s.p. and a member will convert to a bug, but
> I don't think it should be, nor do I want, m.d.s.p. to be the general
> catch-all reporting mechanism for general users :) For one, as you can
> see by my timeliness to the threads, it makes it hard to respond and
> triage appropriately.

I did look into creating CA Compliance bugs directly, however I'm not 100%
confident of what counts as a compliance issue (as you've seen with some of
my past posts).  Also, bugzilla uses a mail relay which is blocked on my
mail server (AWS' Spam Emission Service), and there's nothing in the SMTP
transaction I can use to whitelist *just* Bugzilla's emails.  So I can't
create an account, and so I can't create bugs (or make snarky comments about
why CAs haven't updated their open bugs in three weeks -- which you may
count as a positive, perhaps?)

- Matt



Re: Let's Encrypt: Failure to revoke key-compromised certificates within 24 hours

2020-03-30 Thread Matt Palmer via dev-security-policy
On Mon, Mar 30, 2020 at 01:48:28PM -0700, Josh Aas via dev-security-policy 
wrote:
> Matt - It would be helpful if you could report issues like this to the CA
> in question, not just to mdsp.

Helpful to *whom*, exactly?  I don't write up these reports to be helpful to
the CA in question; I write them to be helpful to the community.  I don't
see how reporting these problems to an individual CA is helpful to anyone
except that one CA -- which, as I said, is not a goal I am aiming for here.

At any rate, since (as I understand it) all CAs are supposed to be watching
mdsp anyway, sending a report here should be equivalent to sending it to all
CAs -- including Let's Encrypt -- anyway.

- Matt



Re: Proposal: prohibit issuance of new certificates with known-compromised keys, and for related purposes

2020-03-30 Thread Matt Palmer via dev-security-policy
On Mon, Mar 30, 2020 at 10:59:02AM -0400, Ryan Sleevi wrote:
> On Mon, Mar 30, 2020 at 6:28 AM Matt Palmer via dev-security-policy
>  wrote:

> It's useful to focus on the goal, rather than the precise language, or
> where you see folks getting confused or misunderstanding things. That
> is, making sure we have a common understanding of the problems here.

Righto, the goals are:

* Make it a policy violation for CAs to issue a certificate using a public
  key they've revoked before.

* Clarify the language around key compromise revocation to make it obvious
  that CAs have to revoke everything with a given private key.

* Clarify the language around revocation time limits to make it obvious that
  the time period starts when the communication leaves the reporter.

> > Step 1: add a new section 4.2.3 (pushing down the existing 4.2.3 to be
> > 4.2.4, because RFC3647 says that "Finally, this subcomponent sets a time
> > limit", which I assume means that the time limit section needs to be last),
> > worded something like this:
> 
> Well, no, that doesn't work with https://tools.ietf.org/html/rfc3647#section-6

Drat.  Makes it hard to fit key checks into there anywhere, then.  Shoehorn
it into 4.2.2, perhaps?

> > I know that Section 5.5.2 only requires storage of records for seven years;
> > I figure if someone's going to hold onto a compromised private key for seven
> > years just so they can bring it out for another cert, then they've earned
> > the right to get their cert revoked again.  Requiring CAs to maintain a list
> > of keys in perpetuity may be considered an overly onerous burden.
> 
> I don't think we want the explicit dependency on 5.5.2, because it
> would be good to reduce that time in line with reducing certificate
> lifetimes.
> 
> The 7 years is derived from the validity period of the certificates
> being issued (at the time, up to 10 year certs; 7 years was a
> compromise between the 5 years the BRs were phasing in and the 10y+
> existing practice)

Huh, that's interesting; I figured the 7 year requirement was in line with
other standard business record-keeping requirements, like tax records.

> By this logic, Debian weak keys would not need to be blocked, because
> that even occurred in 2006.

Hmm, no, unless the CA had previously issued a certificate for every Debian
weak key and subsequently revoked them for key compromise.  The existing
provisions regarding Debian weak keys (as potentially to-be-amended in the
near future) would still be in force, with no expiry time limit.

I, personally, do not have a problem with mandating that CAs keep records of
compromised keys in perpetuity.

> Broadly, it seems your problem that this first proposal is trying to
> solve is that CAs don't see it as logical that they must maintain a
> database of revoked keys. Is that a fair statement?

Close, but I'll quibble with "logical", and I dislike talking about "revoked
keys" because it gives people the wrong mental shorthand -- you can't
"revoke" a key, as such.  Although I suppose published attestations of
compromise are pretty close to a kind of OKSP, if you wanted to think that
way.

I'd rather word it as: "CAs don't see it as necessary that they must
maintain a database of keys from *all* certificates they revoked for key
compromise, and that they must check that database before issuance."

> > Step 2: Replace the existing text of section 4.9.12 with something like the
> > following:
> >
> > > In the event that the CA obtains evidence of Key Compromise, all
> > > Certificates issued by the CA which contain the compromised key MUST be
> > > revoked as per the requirements of Section 4.9.1.1, including the time
> > > period requirements therein, even if no Certificate Problem Report for a
> > > given Certificate has been received by the CA.
> >
> > In a perfect world, this sentence wouldn't be necessary, because 4.9.1.1
> > doesn't say that the certificate has to be revoked if a problem report comes
> > in alleging key compromise, but since CAs don't appear to have interpreted
> > 4.9.1.1 in that way, we may as well use 4.9.12 for a good purpose and
> > clarify the situation.
> 
> While I appreciate the suggestion, I'm worried this does more to sow
> confusion rather than bring clarity. As you note, 4.9.1.1 is linked to
> evidence about the Private Key, not the Certificate, so this is
> already a "clear" requirement.

What about it sows (additional) confusion?

> I think we'd want to understand why/how CAs are misinterpreting this.
> I think we've seen a common thread, which is CAs thinking about
> systems in terms of Certificates, rather than thinking about Keys and
> Issuers. We've seen tha

Re: Musings on mass key-compromise revocations

2020-03-30 Thread Matt Palmer via dev-security-policy
On Sat, Mar 28, 2020 at 07:11:43PM +1100, Matt Palmer wrote:
> In concert with my (human-mediated) revocation notifications, I have been
> sending semi-automated revocation requests to Let's Encrypt, using the ACME
> protocol.  This has been extremely smooth and straightforward, and my life
> -- and, I presume, the lives of the staff at the CAs I've reported
> revocations to -- would be a lot easier if every CA had an equivalent
> facility available.  I think this is so useful, in fact, that I have started
> coding a program capable of receiving key compromise revocations and
> forwarding them via e-mail, which will be released as open source when it is
> in a fit state for deployment.

A follow-up to this part of my musings: I got a rush of the blood to the
head over the weekend, and an initial release of this software is now
available from https://github.com/tobermorytech/acmevoke.

- Matt



Proposal: prohibit issuance of new certificates with known-compromised keys, and for related purposes

2020-03-30 Thread Matt Palmer via dev-security-policy
In my recent forays into mass-revocation for key compromise, one aspect that
was particularly frustrating and unnecessary was having to send revocation
requests for new certificates, issued by a CA using a private key which I
had previously reported as compromised to that same CA.  Once a key is
compromised, it's never going to get *less* compromised, so there is no
reason why a CA should ever be issuing another certificate using that same
key.  Also, the requirement to revoke all certificates with a compromised
private key, regardless of whether a given certificate is explicitly listed
in a certificate problem report, does not appear to be clear.  Finally, CAs
appear to have a variety of interpretations of the start time of the 24 hour
period in which revocation must be completed, and so I thought I'd include a
small change for that.

Thus, I would like to propose that either Mozilla Policy, or (preferably)
the Baseline Requirements be amended to prohibit issuance of a certificate
where the CA has previously revoked a certificate using the same key, and to
clarify that all certificates using a compromised key are subject to
revocation.

If such a modification were deemed appropriate for the BRs, I would suggest
that the following changes would fit the bill.  All sections, etc taken from
version 1.6.7 of the BRs.

Step 1: add a new section 4.2.3 (pushing down the existing 4.2.3 to be
4.2.4, because RFC3647 says that "Finally, this subcomponent sets a time
limit", which I assume means that the time limit section needs to be last),
worded something like this:

> In accordance with Section 5.5.2, the CA SHALL maintain an internal
> database of all Public Keys contained in Certificates which have been
> revoked due to the CA obtaining evidence of Key Compromise.  This
> requirement exists regardless of the revocation reason (if any) published
> in Certificate status information.
>
> The CA SHALL NOT issue a Certificate containing a Public Key which the CA
> has previously recorded as having been used in a Certificate revoked due
> to the CA obtaining evidence of Key Compromise.

I know that Section 5.5.2 only requires storage of records for seven years;
I figure if someone's going to hold onto a compromised private key for seven
years just so they can bring it out for another cert, then they've earned
the right to get their cert revoked again.  Requiring CAs to maintain a list
of keys in perpetuity may be considered an overly onerous burden.

Step 2: Replace the existing text of section 4.9.12 with something like the
following:

> In the event that the CA obtains evidence of Key Compromise, all
> Certificates issued by the CA which contain the compromised key MUST be
> revoked as per the requirements of Section 4.9.1.1, including the time
> period requirements therein, even if no Certificate Problem Report for a
> given Certificate has been received by the CA.

In a perfect world, this sentence wouldn't be necessary, because 4.9.1.1
doesn't say that the certificate has to be revoked if a problem report comes
in alleging key compromise, but since CAs don't appear to have interpreted
4.9.1.1 in that way, we may as well use 4.9.12 for a good purpose and
clarify the situation.

Step 3: Insert a new paragraph at the end of section 4.9.1.1, with the
following contents (or a reasonable facsimile thereof):

> The time periods specified in this Section SHALL BE measured from the time
> that the communication which requests, notifies, makes the CA aware,
> provides evidence which the CA later deems valid, or as otherwise
> described above, is first received by a system or person acting on the
> CA's behalf for the receipt of such communications.

I know that's a phat sentence, but I wanted to try and get all the
circumstances in there, to prevent another round of "BUT BUT BUT" in the
future.  Essentially, what I'm trying to get across is, "once it hits your
MTA or phone tree, the clock starts ticking".  No leaving a problem report
in the inbox over the weekend and then claiming that they didn't "obtain
evidence" until monday morning.
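A trivial sketch of that reading, assuming `received_at` is the timestamp at which the CA's MTA (or phone tree) accepted the report; the function name and defaults are illustrative, and 4.9.1.1 also has a five-day tier for the lesser revocation reasons.

```python
from datetime import datetime, timedelta, timezone


def revocation_deadline(received_at, hours=24):
    """Deadline under the proposed reading of BR 4.9.1.1: the clock starts
    when the report first hits a system acting on the CA's behalf, not
    when a human gets around to reading it."""
    return received_at + timedelta(hours=hours)
```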

... and that's about it.  Please tear my wording, rationale, and choice of
font to pieces as you see fit.

- Matt



Musings on mass key-compromise revocations

2020-03-28 Thread Matt Palmer via dev-security-policy
I've been asked to provide some "big-picture" thoughts on how the process
for key compromise revocations works, doesn't work, and could be improved. 
This is based on the work that I've done over the past month or so,
requesting revocation of certificates which have had their private keys
disclosed by being posted to some public location.

This e-mail is intended to provide a summary of what I did and how it all
went, with a summary of the things that I came across which I feel could
stand to be improved.  Most of these improvements will likely come through
changes to the Mozilla or CCADB policies, or via changes to the BRs.  A
couple are things that CAs themselves need to take on board, as they aren't
"policy" matters as such, but are still issues of concern.

As the exact nature of how a problem may be solved will, no doubt, engender
no small amount of debate, I intend (unless someone tells me I'm once again
being a goose), to provide separate e-mail thread-starters on the details
of the issues I found, along with my proposals for how the issues might be
solved.  I will be providing a summary of the issues I found, but all the
gory minutiae will be provided in later e-mail threads.

One final thing before I get started: I'd like to give a big shout-out to
Rob Stradling and anyone else who is involved in making crt.sh happen, and
Sectigo for sponsoring its existence.  Everything I've done here would have
been a zillion times harder if there wasn't already a database of every
CT-logged certificate available for me to hammer on.  It is an Internet
Treasure.

So to kick things off, let's have some stats.

In the past month, I've requested revocation of over 2,800 certificates and
pre-certs[1], across 11 different "CA organisations" (I'm counting one "CA
organisation" as any number of issuing CAs that share a common problem
reporting address).  These certificates were using a total of 1,217 distinct
private keys.  These keys come from multiple sources, but based on an
analysis of a sample of around 3% of those keys, the overwhelming majority
come from GitHub repositories which were at one time -- and in many cases
still are -- publicly available.

As a bit of an aside, at the time of writing, there are a further 52 SPKIs,
representing an unknown number of certificates, for which I have yet to
request revocation from the relevant CAs.  These are keys which have entered
the pwnedkeys database since around the 23rd March.  In addition, since the
23rd, I've automatically requested revocation of 17 certificates (from 16
SPKIs) through Let's Encrypt's automated ACME-based revocation API (and also
deactivated about eight Let's Encrypt accounts whose private keys were posted
publicly...)

An interesting thing to do is to compare "issuance volume" against
"disclosed keys".  This isn't a reflection of the CAs themselves, because a
CA can't control what their customers do with their keys.  Given the
differences in issuance methodologies, target markets, and business
practices between CAs, it's worth taking a look at different CAs'
"disclosure rate", I guess you'd call it, and considering what impact, if
any, the differences between CAs' operations might have on the likelihood of
their customers disclosing their keys.

I've taken issuance volume as being the total number of unexpired
certificates (as of a few days ago) issued by a "CA organisation" (again,
all the issuing CAs that share a common problem reporting address). 
Pre-certs get in the way, unfortunately, but it's not trivial to say "only
count a pre-cert if there isn't a corresponding cert" in crt.sh, so we have
what we have.

Of the 11 CA orgs that I sent at least one revocation request to, here are
their numbers:

CA org          SPKIs   Issued      Issued / SPKI

Digicert        832     34029920    40901
QuoVadis        23      73184       3181
GlobalSign      47      1650873     35124
DFN-Verein      3       52945       17648
GoDaddy         38      4264928     112234
Sectigo         128     41718165    325923
Entrust         6       576093      96015
SECOM           1       118748      118748
Certum          5       329047      65809
Let's Encrypt   133     122438321   920588

I took the opportunity, also, to look at a couple of larger CAs (by issuance
volume) which have had zero certificates with keys I found: pki.goog (issued
1284676), and Amazon (issued 2308004).  Clearly, the best approach to
avoiding key disclosure is never giving the subscriber the key in the first
place...
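The "Issued / SPKI" column is plain integer division over the per-CA figures quoted above (reading Digicert as 832 keys / 34029920 certs and Sectigo as 128 / 41718165, which is consistent with the printed ratios); a small sketch reproducing and ranking it:

```python
# (disclosed SPKIs, unexpired certs) per "CA organisation", from the
# figures quoted in the table earlier in this message.
CA_STATS = {
    "Digicert":      (832, 34029920),
    "QuoVadis":      (23, 73184),
    "GlobalSign":    (47, 1650873),
    "DFN-Verein":    (3, 52945),
    "GoDaddy":       (38, 4264928),
    "Sectigo":       (128, 41718165),
    "Entrust":       (6, 576093),
    "SECOM":         (1, 118748),
    "Certum":        (5, 329047),
    "Let's Encrypt": (133, 122438321),
}


def certs_per_disclosed_key(stats):
    """Rank CAs by certificates issued per disclosed key; a higher number
    means disclosure is rarer relative to issuance volume."""
    return sorted(
        ((ca, issued // spkis) for ca, (spkis, issued) in stats.items()),
        key=lambda kv: kv[1],
    )
```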

Finally, I thought it worth checking as to how many certificates were issued
after the private key was already known to have been compromised by being
included in the pwnedkeys database.  Based on the apparently common
behaviour of "request cert, then stick cert+key into a public GitHub repo",
I didn't think there would be many of these.  Turns out I 

Sectigo: Failure to revoke certificate with previously-compromised key within 24 hours

2020-03-27 Thread Matt Palmer via dev-security-policy
At 2020-03-20 03:02:43 UTC, I sent a notification to sslab...@sectigo.com
that certificate https://crt.sh/?id=1659219230 was using a private key with
SPKI fingerprint
4c67cc2eb491585488bab29a89899e4e997648c7047c59e99a67c6123434f1eb, which was
compromised due to being publicly disclosed.  My e-mail included a link to a
PKCS#10 attestation of compromise, signed by the key at issue.  An MX server
for sectigo.com accepted this e-mail at 2020-03-20 03:02:50 UTC.

This certificate was revoked by Sectigo, with a revocation timestamp of
2020-03-20 19:37:48 UTC.

Subsequently, certificate https://crt.sh/?id=2614798141 was issued by
Sectigo, and uses a private key with the same SPKI as that previously
reported.  This certificate has a notBefore of Mar 23 00:00:00 2020 GMT, and
embeds two SCTs issued at 2020-03-23 05:55:53 UTC.  At the time of writing,
the crt.sh revocation table does not show this certificate as revoked either
via CRL or OCSP:

Mechanism   Provider   Status        Revocation Date   Last Observed in CRL   Last Checked (Error)
OCSP        The CA     Good          n/a               n/a                    2020-03-27 06:27:23 UTC
CRL         The CA     Not Revoked   n/a               n/a                    2020-03-27 04:44:26 UTC

Based on previous discussions on m.d.s.p, I believe Sectigo's failure to
revoke this certificate within 24 hours of its issuance is a violation of
the BRs, and hence Mozilla policy.

- Matt

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Digicert: failure to revoke certificate with previously compromised key

2020-03-23 Thread Matt Palmer via dev-security-policy
On Mon, Mar 23, 2020 at 06:15:00PM +0000, Jeremy Rowley wrote:
> There are two things worth discussing in general:
> 
> 1. I’m very interested in seeing the Let’s Encrypt response to this issue
> since the biggest obstacle in trying to find all of the keys with the same
> private key is the sheer volume of the certs.  Trying to do a
> comprehensive search when a private key is provided leaves some window
> between when we start the analysis and when we revoke.

Well, Let's Encrypt has committed to automatically blacklisting keys
reported for keyCompromise in its CA software
(https://community.letsencrypt.org/t/116762), so it won't be nearly as
problematic for them as it appears to be for Digicert.  At any rate, though,
I can get a list of all certs with a certain SPKI out of crt.sh in a matter
of a couple of seconds, and crt.sh has a *lot* more certs in it than
Digicert has issued.  The schema for the certwatch database is publicly
available, so, if nothing else, you could stand up a copy of that database,
skip deploying the CT scrapers, and just stuff a copy of every cert you
issue into the certificate table, and you're off to the races.
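
As an aside for anyone following along: the SPKI fingerprints used in these
reports are SHA-256 digests of the DER-encoded SubjectPublicKeyInfo, which is
what crt.sh's `spkisha256` parameter takes. A minimal stdlib-only sketch,
assuming the PEM body is the bare SPKI (as a "BEGIN PUBLIC KEY" block from
`openssl pkey -pubout` is):

```python
import base64
import hashlib

def spki_sha256(pem: str) -> str:
    """Hex SHA-256 over the DER SubjectPublicKeyInfo.

    Assumes `pem` is a "BEGIN PUBLIC KEY" block, whose base64
    body is exactly the DER-encoded SPKI.
    """
    body = "".join(
        line for line in pem.splitlines()
        if line and not line.startswith("-----")
    )
    return hashlib.sha256(base64.b64decode(body)).hexdigest()
```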

> 1. Another issue in trying to report keys that aren’t affiliated with
> any cert is that the process becomes subject to abuse.  Without knowing a
> cert affiliated with a key, someone can continuously generate keys and
> submit them as compromised.  You end up just blacklisting random keys,
> DDOSing the revocation system as it kicks off another request to  search
> for those keys.  I don’t think it’s feasible.  This is why the disclosures
> need to be affiliated with actual certs.

I don't think that anyone has, as yet, claimed that it is a BR violation for
a CA to issue a certificate with a key for which they have not yet received
a valid certificate problem report.  Nor do I believe that anyone has
claimed that a certificate problem report without any indication of a
problematic certificate is valid.  So, it appears to me that you're arguing
against something that nobody has proposed.

- Matt



Re: Digicert: failure to revoke certificate with previously compromised key

2020-03-23 Thread Matt Palmer via dev-security-policy
On Mon, Mar 23, 2020 at 12:53:43PM -0400, Ryan Sleevi wrote:
> To make sure I understand the timeline correctly:
> 2020-03-20 02:05:49 UTC - Matt reports SPKI 4310b6bc0841efd7fcec6ba0ed1f36
> e7a28bf9a707ae7f7771e2cd4b6f31b5af, associated with
> https://crt.sh/?id=1760024320 , as compromised
> 2020-03-21 01:56:31 UTC - DigiCert issues https://crt.sh/?id=2606438724 with
> that same SPKI
> 2020-03-21 02:09:12 UTC - DigiCert revokes https://crt.sh/?id=1760024320
> 2020-03-23 03:16:18 UTC - DigiCert revokes https://crt.sh/?id=2606438724
> 
> Is that roughly correct?

Yes, that appears to be a correct summary of the timeline as I see it.

- Matt



Re: Digicert: failure to revoke certificate with previously compromised key

2020-03-23 Thread Matt Palmer via dev-security-policy
On Mon, Mar 23, 2020 at 03:01:34PM +0000, Jeremy Rowley wrote:
> Ryan's post was the part I thought was relevant, but I understood it
> differently.  The cert was issued, but we should have now revoked it (24
> hours after receiving notice).  I do see your interpretation though, and
> the language does support 24 hours after issuing the new cert.

Aha, righto.  Glad we've gotten on the same page there.

> What I need is a tool that scans after revocation to ensure there are no
> additional certs with the same key.

I can give you a certwatch SQL query that'll do that, if you like -- "show
me all certs with this SPKI (or set of SPKIs) which aren't expired or
OCSP-revoked".  I use pretty much that query to get periodic reports of new
certificates that have appeared with keys already in the dungeon.  It's not
ideal for your purposes, though -- you might get some false positives
because your OCSP responders aren't up-to-date, and false negatives are
possible if certwatch is backlogged (or a cert wasn't logged).

Beyond that, though, if your internal certificate archive isn't indexed on
SPKI fingerprint, and updated in near-real-time, those are problems that I
think you'll have to fix, because the Internet's propensity to post their
private keys on the Internet, and then reuse them to get new certs after the
old one got revoked for key compromise, does not seem to be one that is
going away any time soon.
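
The near-real-time SPKI index described above is cheap to sketch: a hash map
from fingerprint to certificate identifiers makes "any other certs with this
key?" a single lookup. Illustrative only; the names are made up, not any CA's
actual schema:

```python
from collections import defaultdict

class CertArchive:
    """Toy certificate archive indexed on SPKI fingerprint."""

    def __init__(self):
        self._by_spki = defaultdict(list)

    def record_issuance(self, cert_id: str, spki: str) -> None:
        # Called in near-real-time as each certificate is issued.
        self._by_spki[spki].append(cert_id)

    def certs_with_key(self, spki: str) -> list:
        # Everything ever issued for this key, in one lookup.
        return list(self._by_spki[spki])

archive = CertArchive()
archive.record_issuance("cert-1", "4c67cc2e...")
archive.record_issuance("cert-2", "4c67cc2e...")
print(archive.certs_with_key("4c67cc2e..."))
```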

> The frustration is that this was
> where the cert was issued after our scan of all keys but just before
> revocation.  As a side note, our system blacklists the keys when a cert is
> revoked for key compromise, which means I don't have a way to blacklist a
> key before a cert is ever issued.

To the software developers!  *blows trumpets*

> >> I don't think that supports your point, though, so I wonder if I've got
> >> the wrong part.  That last part of Ryan's: "shenanigans, such as CAs
> >> arguing the[y?] require per-cert evidence rather than systemic
> >> demonstrations", seems to me like it's describing your statement,
> >> above, that you (only?) "need to revoke a cert that is key compromised
> >> once we're the key is compromised *for that cert*" (emphasis added).  I
> >> don't read Ryan's use of "shenanigans" as approving of that sort of
> >> thing.
> 
> I don't think its shenanigans, but I do think it's a pretty common
> interpretation.  Such information would help determine the common
> interpretation of this section.  I agree that CAs should scan all certs
> for the same key and revoke where they find them.  Is that actually
> happening?

A lot to unpack here, let me make up some specific questions and answer them
as best I can.

"Have any other CAs failed to revoke certificates issued for keys for
which they had previously revoked a certificate for key compromise?"

Yes, one CA has failed to do so, and I've reported that to this list.

"Have any other CAs successfully revoked a certificate within the
BR-mandated (am I OK using that phrase now?) time period, that they issued
for a private key for which they had previously revoked a certificate due to
key compromise?"

I don't know, I haven't checked (yet).  It's on my (lengthy) list of Interesting
Questions For Which I Need To Write Insanely Complicated SQL Queries.

"Have any other CAs blacklisted a private key that was reported as
compromised and prevented issuance before they had issued a certificate for
that key?"

Naturally I can't answer that one directly, because I don't have internal
access to CA systems.  *But*, I have one test case, in which a private key
known to have "hopped" between CAs after revocation was reported to a number
of CAs proactively, but there hasn't been a resolution to that test case one
way or the other.  No new certs have been issued for the same name, with the
compromised key or any other, so it's impossible to infer what the outcome
may be.

Other very specific questions welcomed.  The answers will probably be "I
haven't looked yet", but there are a lot of questions I've got at least a
vague idea of how to answer, once I have time2SQL.

> Do other CAs object to there being a lack of specificity if you give the
> keys without a cert attached?

Since I have been sending (links to) CSR format compromise attestations, no
CA has communicated an objection to the format of my reports, nor have they
failed to act (in some fashion) to any of my reports, as far as I am aware.

> >> Bim bam bom, all done and dusted, and we can get back to washing our 
> >> hands. 
>
> That you're *not* doing that is perplexing, and a little disconcerting.

My wife is immuno-suppressed -- about the only time I'm not washing my
hands is when I'm typing at my keyboard (the suds get in the switches and
cause all sorts of problems).  If I could get a reliable supply of
sanitizer, I'd be swimming in the stuff.

> That's an oversimplification of the incident report process.  I'm not
> resisting the incident report itself since incident reports are a 

Re: Digicert: failure to revoke certificate with previously compromised key

2020-03-23 Thread Matt Palmer via dev-security-policy
On Mon, Mar 23, 2020 at 06:14:29AM +0000, Jeremy Rowley wrote:
> That's not the visible consensus IMO.  The visible consensus is we need to
> revoke a cert that is key compromised once we're informed the key is
> compromised for that cert
> (https://groups.google.com/forum/m/#!topic/mozilla.dev.security.policy/1ftkqbsnEU4).
>  

I think that link might not be doing what you expect, as it (at least for
me) is collapsing all the replies in that topic before Doug Beattie's post. 
The only response that seems relevant in that topic to was Ryan's reply to
me up-thread from Doug's post, which was, in (I believe) relevant part, when
I asked the question:

> 3. Can a CA be deemed to have "obtained evidence" of key compromise prior 
>to the issuance of a certificate, via a previously-submitted key
>compromise problem report for the same private key?  If so, it would
>seem that, even if the issuance of the certificate is OK, it is a
>failure-to-revoke incident if the cert doesn't get revoked within 24
>hours...

To which Ryan replied:

> Correct, that was indeed the previous conclusion around this. The CA can 
> issue, but then are obligated to revoke within 24 hours. There’s not a 
> statute of limitation on “obtains evidence” here, precisely because it 
> could allow a host of shenanigans, such as CAs arguing the require per-cert 
> evidence rather than systemic demonstrations. 

I don't think that supports your point, though, so I wonder if I've got the
wrong part.  That last part of Ryan's: "shenanigans, such as CAs arguing
the[y?] require per-cert evidence rather than systemic demonstrations",
seems to me like it's describing your statement, above, that you (only?)
"need to revoke a cert that is key compromised once we're the key is
compromised *for that cert*" (emphasis added).  I don't read Ryan's use of
"shenanigans" as approving of that sort of thing.

> The certificate you mentioned was issued before the keys were blacklisted
> and not part of a certificate problem report.  When revoking a cert we
> scan to see if additional certs are issued with the same key t, but this
> particular cert one was issued after the scan but before the revocation,
> largely because the way you are submitting certificate problem reports
> breaks automation.  We currently don't have a way to blacklist private
> keys until a certificate is revoked, although that would be a nice
> enhancement for us to add in the future.  Anyway, I don't think anything
> reported violated the  BR since 1) this cert was not part of a certificate
> problem report and 2) we will be revoking within 24 hours of your Mozilla
> posting.

The way I see it, once a CA is provided with accepted evidence of key
compromise, every certificate issued by that CA using the same key -- past,
present, and future -- is implicitly part of that certificate problem
report.  *I* think that's supported by Ryan's confirmation of my question,
quoted above, but presumably you disagree?

At the end of the day the only call that matters is that of the CA module
owner and peers, so I guess we'll have to leave it up to them to make the
call as to whether Digicert's behaviour that I described -- and the facts of
which you don't seem to dispute -- constitutes a BR violation.

Honestly, I'm a bit surprised you're trying so hard to argue against taking
responsibility for this occurrence (probably shouldn't use "incident" yet, I
guess).  Based purely on what you wrote in the above paragraph, it seems
like it was a simple oversight in your systems for what is, I won't deny, a
bit of a corner case.  Even I, as gung-ho as I am about throwing out
misbehaving CAs, wouldn't argue that this in and of itself is anything worth
a rap over the knuckles for.

As far as I can see, the meat of an incident report for this occurrence
could be something like "our key blacklist is only populated at the time
revocation occurs; the certificate in question was issued between problem
report and revocation; we've added a story to the backlog to modify our CA
systems to allow keys to be blacklisted before revocation, and until then
we've modified our procedures to do a sweep of our certificate archive after
the initial revocation has taken place, to catch a similarly situated
certificate in the future."
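
The key point of that remediation, blacklisting at problem-report time rather
than at revocation time, is easy to see in miniature (hypothetical helper
names, not Digicert's actual system):

```python
blacklist = set()   # SPKIs reported as compromised
issued = []         # (cert_id, spki) pairs, in issuance order

def handle_problem_report(spki: str) -> list:
    """Blacklist first, then sweep; nothing issued later escapes."""
    blacklist.add(spki)  # takes effect before any revocation work starts
    return [cid for cid, key in issued if key == spki]

def try_issue(cert_id: str, spki: str) -> bool:
    if spki in blacklist:  # refuse reuse of a reported key
        return False
    issued.append((cert_id, spki))
    return True

try_issue("old-cert", "spki-A")
to_revoke = handle_problem_report("spki-A")      # sweep finds "old-cert"
assert try_issue("new-cert", "spki-A") is False  # the corner case is closed
```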

Bim bam bom, all done and dusted, and we can get back to washing our hands. 
That you're *not* doing that is perplexing, and a little disconcerting.

> I support the idea of swift revocation of compromised private keys and do
> appreciate you reporting them.  I think this is helpful in ensuring the
> safety of users online.  However, using the SPKI to submit information
> breaks our automation, making finding and revoking certs difficult.  The
> more standards way (IMO) is the SHA2 thumbprint or serial number or a good
> old CSR.

This confuses me a bit; an SPKI *is* a "SHA2[56] thumbprint" of the
compromised key, and keys don't have serial numbers.  As for a CSR, I
provide one, signed 

Re: QuoVadis: Failure to revoke key-compromised certificates within 24 hours

2020-03-22 Thread Matt Palmer via dev-security-policy
On Mon, Mar 23, 2020 at 02:02:18AM +0000, Stephen Davidson via
dev-security-policy wrote:
> Summary:  The certificates noted in Matt Palmer's email below were not in
> his original problem report to QuoVadis.

While this may be true in an extremely narrow and literal sense, I don't
believe this is a reasonable description of the situation.  It is true that
I did not list the certificates that QuoVadis failed to revoke in my
certificate problem report.  However, I did not list *any* certificates in
my initial problem report.  Despite that, QuoVadis were able to revoke a
number of certificates based on the information I did provide, including
other certificates with the same public key to those that they did not
revoke.

What I did provide was a list of SPKI fingerprints of private keys in my
possession, along with a method of constructing crt.sh URLs which could be
used to lookup impacted certificates by SPKI fingerprint and a method of
constructing URLs which would provide CSR-format attestations of compromise. 
This appeared to be sufficient for QuoVadis to revoke the vast majority of
the certificates impacted, and I do not have any record of QuoVadis
objecting to the form or substance of the information I provided.
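
For concreteness, the SPKI-to-URL construction referred to here follows the
same pattern as the crt.sh links that appear elsewhere in this thread; a
trivial sketch:

```python
def crtsh_spki_url(spki_sha256_hex: str) -> str:
    """crt.sh lookup of all logged certificates sharing one public key."""
    return f"https://crt.sh/?spkisha256={spki_sha256_hex}"

print(crtsh_spki_url(
    "69fc5edbd904577629121b09c49b711e201c46213e5b175bbee08a4d1d30b3c7"
))
```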

> The certificates he reported
> were revoked in a time manner, and we acknowledged that additional
> certificates existed using the compromised private keys, and that they
> would be revoked as we identified them.

I'm not sure that "we know there are more here somewhere, we'll revoke them
as we find them, and we'll take 24 hours from when we find them to do it"
meets the letter of the BRs, let alone the spirit.

- Matt



Re: Paessler (was Re: Let's Encrypt: Failure to revoke key-compromised certificates within 24 hours)

2020-03-22 Thread Matt Palmer via dev-security-policy
On Sun, Mar 22, 2020 at 07:47:49AM +0100, Hanno Böck via dev-security-policy 
wrote:
> FWIW: Given that with the private key it's easily possible to revoke
> certificates from Let's Encrypt I took the key yesterday and iterated
> over all of them and called the revoke command of certbot.

Yes, I play revocation whack-a-mole every day or two.  I hammer crt.sh's
pgsql replicas each time to get an up-to-date list of all new certs with
keys in the pwnedkeys database, and do the needful.

> I strongly recommend Let's Encrypt (and probably all other CAs)
> blacklists that key if they haven't already done so.

That'll always be the dream... but since at least one CA can't seem to
prevent a customer from getting a new certificate for the same key *while
they're revoking a cert for the same name with the same key because it's
compromised*, I think it's going to take a BR change that forbids reusing a
reported-compromised key before CAs bake in any sort of sensible key
blacklisting.

- Matt



QuoVadis: Failure to revoke key-compromised certificates within 24 hours

2020-03-21 Thread Matt Palmer via dev-security-policy
Three certificates were reported as having private keys which had
been publicly disclosed, by e-mailing complia...@quovadisglobal.com at
2020-03-20 03:05:14 UTC.  E-mail was received by a QuoVadis server at
2020-03-20 03:05:18 UTC.  As of 2020-03-22 05:17:37, OCSP still shows all of
these certificates as being "Good".

The unrevoked certificates are:

https://crt.sh/?id=2605016622
https://crt.sh/?id=1757153116
https://crt.sh/?id=1432019792

Interestingly, at least one other certificate using the same private key as
each of the above certificates, and also issued by QuoVadis, is now showing
as revoked, suggesting that (a) QuoVadis did indeed consider the private
keys as compromised, and (b) there are no caching or delayed publishing
issues at play here.

- Matt



Digicert: failure to revoke certificate with previously compromised key

2020-03-21 Thread Matt Palmer via dev-security-policy
Certificate https://crt.sh/?id=2606438724, issued either at 2020-03-21
00:00:00 UTC (going by notBefore) or 2020-03-21 01:56:31 UTC (going by
SCTs), is using a private key with SPKI
4310b6bc0841efd7fcec6ba0ed1f36e7a28bf9a707ae7f7771e2cd4b6f31b5af, which was
reported to Digicert as compromised on 2020-03-20 02:05:49 UTC (and for
which https://crt.sh/?id=1760024320 was revoked for keyCompromise soon after
certificate 2606438724 was issued).

As previously discussed on this list, the visible consensus is that,
according to the BRs, certificates for which the CA already had evidence of
key compromise must be revoked within 24 hours of issuance.  That 24 hour
period has passed for the above certificate, and thus it would appear that
Digicert has failed to abide by the BRs.

- Matt



Re: Paessler (was Re: Let's Encrypt: Failure to revoke key-compromised certificates within 24 hours)

2020-03-21 Thread Matt Palmer via dev-security-policy
On Sat, Mar 21, 2020 at 07:20:27PM +0000, Nick Lamb wrote:
> On Sat, 21 Mar 2020 13:40:21 +1100
> Matt Palmer via dev-security-policy wrote:
> > There's also this one, which is another reuse-after-revocation, but
> > the prior history of this key suggests that there's something *far*
> > more interesting going on, given the variety of CAs and domain names
> > it has been used for (and its current residence, on a Taiwanese
> > traffic stats server):
> > 
> > 
> > https://crt.sh/?spkisha256=69fc5edbd904577629121b09c49b711e201c46213e5b175bbee08a4d1d30b3c7
> > 
> > If anyone figures out the story with that last key, I'd be most
> > pleased to hear about it.
> 
> Sure.

[snip story]

Ha ha!  Nice detective work.  It was the old wildcard for `*.new-access.net`
that threw me for a loop, but I suppose if someone's going to reuse a key,
why not reuse one for a wildcard?

Thanks, I can now sleep a little bit sounder now that I know there isn't
another Debian-style weak PRNG out there.

- Matt



Re: Let's Encrypt: Failure to revoke key-compromised certificates within 24 hours

2020-03-20 Thread Matt Palmer via dev-security-policy
On Sat, Mar 21, 2020 at 01:53:31AM +0000, Nick Lamb wrote:
> On Sat, 21 Mar 2020 09:25:26 +1100
> Matt Palmer via dev-security-policy wrote:
> 
> > These two certificates:
> > 
> > https://crt.sh/?id=2602048478&opt=ocsp
> > https://crt.sh/?id=2601324532&opt=ocsp
> > 
> > Were issued by Let's Encrypt more than 24 hours ago, and remain
> > unrevoked, despite the revocation of the below two certificates,
> > which use the same private key, for keyCompromise prior to the above
> > two certificates being issued:
> > 
> > https://crt.sh/?id=2602048478&opt=ocsp
> > https://crt.sh/?id=2599226028&opt=ocsp
> > 
> > As per recent discussions here on m.d.s.p, I believe this is a breach
> > of BR s4.9.1.1.
> 
> I haven't looked at the substance of your concern yet, but the 1st and
> 3rd links you gave above both look identical to me whereas your text
> implies they should differ. Perhaps this is a copy-paste error?

Oh the facepalm, it burns (probably too much hand sanitizer)... let me try
that again.

Recently issued and as-yet-unrevoked certificate, the first:

https://crt.sh/?id=2602048478&opt=ocsp

Previously revoked certificate for the same key:

https://crt.sh/?id=2599363087&opt=ocsp

Recently issued and as-yet-unrevoked certificate, the second:

https://crt.sh/?id=2601324532&opt=ocsp

Previously revoked certificate for the same key:

https://crt.sh/?id=2599226028&opt=ocsp

I've also, since my initial report, come across some more keys that have
been successfully re-used by Let's Encrypt customers after being revoked for
key compromise.  You can pull the details out of the recent history:


https://crt.sh/?spkisha256=c5b2c5acc5a35409cb18c7f820b93a3d53e2fd17d99df165875881d60ff91ca2

https://crt.sh/?spkisha256=35e61785dc449d235568dc5919f9f4bca31a234f0768e6c057f1d9e39491d76d

https://crt.sh/?spkisha256=bb84a7d81dafd4e59877bb31595545eb5a205a4cc7db881b027fa499c5086c1c

There's also this one, which is another reuse-after-revocation, but the
prior history of this key suggests that there's something *far* more
interesting going on, given the variety of CAs and domain names it has been
used for (and its current residence, on a Taiwanese traffic stats server):


https://crt.sh/?spkisha256=69fc5edbd904577629121b09c49b711e201c46213e5b175bbee08a4d1d30b3c7

If anyone figures out the story with that last key, I'd be most pleased to
hear about it.

- Matt



Let's Encrypt: Failure to revoke key-compromised certificates within 24 hours

2020-03-20 Thread Matt Palmer via dev-security-policy
These two certificates:

https://crt.sh/?id=2602048478&opt=ocsp
https://crt.sh/?id=2601324532&opt=ocsp

Were issued by Let's Encrypt more than 24 hours ago, and remain unrevoked,
despite the revocation of the below two certificates, which use the same
private key, for keyCompromise prior to the above two certificates being
issued:

https://crt.sh/?id=2602048478&opt=ocsp
https://crt.sh/?id=2599226028&opt=ocsp

As per recent discussions here on m.d.s.p, I believe this is a breach of BR
s4.9.1.1.

- Matt



Re: DFN-Verein: CPS/CP link in CCADB not in English

2020-03-19 Thread Matt Palmer via dev-security-policy
On Thu, Mar 19, 2020 at 12:33:29PM -0400, Ryan Sleevi wrote:
> I'm not sure an incident report is necessary. The CCADB policy allows both
> to be provided, and the mechanisms that CCADB uses (both for CAs and for
> Root Stores) permit a host of expressiveness (and further changes are being
> made).

I guess we're working on different meanings for "provide", in this
sentence of the CCADB policy:

> CAs must provide English versions of any Certificate Policy, Certification
> Practice Statement and Audit documents which are not originally in English

The way I was looking at it was that a CPS is "provided" to the CCADB by
linking to it.  If a translated CPS exists, but it isn't linked to from the
CCADB (or, as far as I can tell, anywhere sensible on the CA's site), can it
really be said to have been "provided"?  Especially when (as is the case for
DFN-Verein) the cert itself doesn't include cPSuri, indicating where the CPS
repository even is?

Perhaps the CCADB needs to be augmented, to specifically include an "English
language version" of CP/CPS/Audit statements?

> This is something that the proposed Browser Alignment ballots in the CA/B
> Forum,
> https://github.com/cabforum/documents/compare/master...sleevi:2019-10-Browser_Alignment
> ,
> would address. It incorporates the Mozilla Policy, Microsoft Policy, and
> CCADB policy within the BRs itself.
> 
> In that branch, see the revised Section 8.6

As far as I can see, s8.6 only discussed audit reports, not CP/CPS.  Which
is fine and necessary, but when I'm trying to figure out where to send
"y'all have a pile of certs that need revoking because your customers leave
their keys on pastebin" e-mails, a CPS that I can read is what I need.

- Matt



Re: DFN-Verein: CPS/CP link in CCADB not in English

2020-03-19 Thread Matt Palmer via dev-security-policy
On Thu, Mar 19, 2020 at 11:10:05AM +0000, arnold.ess...@t-systems.com wrote:
> Thanks for pointing it out.  We changed the links so that they now refer
> to the English version of the CP and CPS.

Thanks for the quick update.  Do you have an ETA for the preliminary
incident report?

- Matt



Re: Is issuing a certificate for a previously-reported compromised private key misissuance?

2020-03-19 Thread Matt Palmer via dev-security-policy
On Thu, Mar 19, 2020 at 05:30:31AM -0500, Ryan Sleevi wrote:
> On Thu, Mar 19, 2020 at 1:02 AM Matt Palmer via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> > 2. If there are not explicit prohibitions already in place, *should* there
> >be?  If so, should it be a BR thing, or a Policy thing?
> 
> https://github.com/cabforum/documents/issues/171 is filed to explicitly
> track this. That said, I worry the same set of negligent and irresponsible
> CAs will try to advocate for more CA discretion when revocation, such as
> allowing the CA to avoid revoking when they’ve mislead the community as to
> what they do (CP/CPS violations) or demonstrated gross incompetence (such
> as easily detected spelling issues in jurisdiction information).
> 
> I would hope no CA would be so irresponsible as to try to bring that up
> during such a discussion.

I shall fire up the popcorn maker in preparation.

> > 3. Can a CA be deemed to have "obtained evidence" of key compromise prior
> >to the issuance of a certificate, via a previously-submitted key
> >compromise problem report for the same private key?  If so, it would
> >seem that, even if the issuance of the certificate is OK, it is a
> >failure-to-revoke incident if the cert doesn't get revoked within 24
> >hours...
> 
> Correct, that was indeed the previous conclusion around this. The CA can
> issue, but then are obligated to revoke within 24 hours.

Excellent, thanks for that confirmation.  Incident report inbound.

- Matt



DFN-Verein: CPS/CP link in CCADB not in English

2020-03-19 Thread Matt Palmer via dev-security-policy
As I understand the CCADB Policy (which is included by reference in the
Mozilla Root Store Policy), CAs are required to provide an English
translation of their CP/CPS documents, and link to them in the CCADB.

At the time of writing, the "AllCertificateRecordsReport" CSV shows the
link for the "DFN-Verein Certification Authority 2" CP as being
https://www.pki.dfn.de/fileadmin/PKI/DFN-PKI_CP.pdf, which at present loads
a non-English PDF.  Similarly, the link for that same CA's CPS is
https://www.pki.dfn.de/fileadmin/PKI/DFN-PKI_CPS.pdf, which is also a
non-English document.
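
Spot-checking this sort of thing against the published CSV is straightforward;
a sketch with inline sample data (the column headers here are an assumption,
not the exact CCADB ones):

```python
import csv
import io

# Sample rows standing in for the AllCertificateRecordsReport download;
# the column names are illustrative, not the exact CCADB headers.
sample = io.StringIO(
    "CA Owner,Certificate Name,CP URL,CPS URL\n"
    "T-Systems,DFN-Verein Certification Authority 2,"
    "https://www.pki.dfn.de/fileadmin/PKI/DFN-PKI_CP.pdf,"
    "https://www.pki.dfn.de/fileadmin/PKI/DFN-PKI_CPS.pdf\n"
)

for row in csv.DictReader(sample):
    if "DFN-Verein" in row["Certificate Name"]:
        print(row["CP URL"], row["CPS URL"])
```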

What is the procedure for poking DFN-Verein (or their parent CA, T-TeleSec)
to get them to provide links to suitably translated documents?

- Matt



Is issuing a certificate for a previously-reported compromised private key misissuance?

2020-03-19 Thread Matt Palmer via dev-security-policy
Since I started requesting revocation for certificates with
known-compromised private keys, I've noticed a rather disturbing pattern
emerging in a few cases:

1. I find a private key on the Internet.

2. I request revocation from the CA on the basis that the private key is
   compromised, and provide suitable evidence thereof.

3. The certificate is revoked.

4. Some time later, I discover that a new certificate, using the same
   private key, has been issued by the same CA.  (Mad props to CT!)

5. "Da wah?!?" I say, and scurry off to the BRs and Mozilla Root Store
   Policy, only to find that there doesn't appear to be anything explicitly
   covering this rather disconcerting situation.

So, I'm asking the combined wisdom of this esteemed community the following
questions:

1. *Are* there explicit prohibitions on issuing a certificate for a private
   key which has been previously submitted *to that CA* as compromised 
   (assuming, of course, that the prior submission was valid), and I'm just
   not good at finding said prohibitions?

2. If there are not explicit prohibitions already in place, *should* there
   be?  If so, should it be a BR thing, or a Policy thing?

3. Can a CA be deemed to have "obtained evidence" of key compromise prior to
   the issuance of a certificate, via a previously-submitted key compromise
   problem report for the same private key?  If so, it would seem that, even
   if the issuance of the certificate is OK, it is a failure-to-revoke
   incident if the cert doesn't get revoked within 24 hours...

I greatly appreciate answers and general commentary from the learned members
of this community.

Thanks,
- Matt



Re: Acceptable forms of evidence for key compromise

2020-03-17 Thread Matt Palmer via dev-security-policy
On Tue, Mar 17, 2020 at 03:51:13PM +, Tim Hollebeek wrote:
> For what it's worth, while we generally try to accept any reasonable proof
> of key compromise, we have seen quite a large variety of things sent to
> us.  This includes people actually sending us private keys in various
> forms, which is completely unnecessary and something we'd like to avoid.

You probably want to tell the people handling rev...@digicert.com to stop
asking for them, then.  I stopped counting after the first four instances I
found in my archive of revocation-related communications of Digicert
representatives asking for a copy of the private key.

> When we are unable to verify a proof, we provide explicit instructions on
> how to create a proof in a standardized form that's easy to very.  I
> believe it currently involves signing something with openssl, so it should
> be easy to carry out for anyone who is involved in these sorts of
> discussions and has access to the private key.

There's "access", and then there's "immediate and easy access".  I very
deliberately don't keep the 1.3M keys I've collected just lying around to be
poked at with openssl at a moment's notice.  Dragging a key out of cold
storage is deliberately not an easy operation.

(Incidentally, that also makes ACME's revocation-by-private-key operation
difficult when a cert is issued using a previously compromised key, but
that's a separate discussion I need to have with the ACME WG).

> I also think it's potentially useful to discuss standardizing lists of
> known compromised keys, and how to check them before issuance.  The
> problem of revoking them could be avoided entirely if they were never
> issued in the first place.

I don't have conclusive data (yet), but the impression I'm getting so far
is that most keys are, in fact, compromised after the certificate has been
issued, because people put the key+cert into a public GitHub repo or other
public online location (like pastebin), for use in their app.  I'm yet to
find the time to do an analysis of "certificates whose notBefore is later
than when they were discovered by the keyfinder".  Once that happens, I'll
be able to give a better indication of how much value there is in strict
pre-issuance checks for key compromise.
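A minimal sketch of the comparison I have in mind, using hypothetical timestamps (the real analysis would pull notBefore from the certificate and first-seen from the keyfinder's records):

```python
from datetime import datetime, timezone

def key_compromised_before_issuance(not_before, first_seen):
    # True if the key was already publicly discoverable when the
    # certificate was issued, i.e. a pre-issuance check could have
    # caught it.
    return first_seen <= not_before

# Hypothetical records: (cert notBefore, timestamp the key was first found)
records = [
    (datetime(2020, 1, 10, tzinfo=timezone.utc), datetime(2019, 12, 1, tzinfo=timezone.utc)),
    (datetime(2019, 6, 1, tzinfo=timezone.utc), datetime(2020, 2, 15, tzinfo=timezone.utc)),
]

pre_issuance = sum(key_compromised_before_issuance(nb, fs) for nb, fs in records)
print(f"{pre_issuance} of {len(records)} certs used a key already public at issuance")
```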

- Matt



Re: ssl.com: Certificate with Debian weak key

2020-03-16 Thread Matt Palmer via dev-security-policy
On Mon, Mar 16, 2020 at 12:11:57PM -0700, Chris Kemmerer via 
dev-security-policy wrote:
> On Wednesday, March 11, 2020 at 5:41:00 PM UTC-5, Matt Palmer wrote:
> > On Wed, Mar 11, 2020 at 10:46:05AM -0700, Chris Kemmerer via 
> > dev-security-policy wrote:
> > > On Tuesday, March 10, 2020 at 8:44:49 PM UTC-5, Matt Palmer wrote:
> > > > On Tue, Mar 10, 2020 at 01:48:49PM -0700, Chris Kemmerer via 
> > > > dev-security-policy wrote:
> > > For what it's worth, we believe that the current language in the BRs could
> > > be less ambiguous as far as the Debian weak keys are concerned.  For
> > > example, it seems that the community's expectations are for CAs to detect
> > > and block weak Debian keys generated by vulnerable RNG using OpenSSL in
> > > popular architectures.
> > 
> > The problem with using the argument that "the BRs are ambiguous" to try and
> > defend a breach of them is that there are always potential ambiguities in
> > all language -- in many ways, "ambiguity is in the eye of the beholder"
> > ("ambiguer"?).  My understanding of the consensus from past discussions on
> > this list is that if a CA believes there is an ambiguity in the BRs, the
> > correct action is to raise that in the CA/B Forum *before* they fall foul of
> > it.
> > 
> > CAs should be reading the BRs, as I understand it, in a "defensive" mode,
> > looking for requirements that could be read multiple ways, and when they are
> > found, the CA needs to ensure either that they are complying with the
> > strictest possible reading, or else bringing the ambiguity to the attention
> > of the CA/B Forum and suggesting less ambiguous wording.
> > 
> > At any rate, it would be helpful to know what, precisely, SSL.com's
> > understanding of this requirement of the BRs prior to the commencement of
> > this incident.  Can you share this with us?  Presumably SSL.com did a
> > careful analysis of all aspects of the BRs, and came to a conclusion as to
> > precisely what was considered "compliant".  With regards to this
> > requirement, what was SSL.com's position as to what was necessary to be
> > compliant with this aspect of the BRs?
> 
> We have already described our understanding of the expectations expressed
> in BR 6.1.1.3 and the steps we took to comply with it.

Sorry, I must have missed the description of SSL.com's understanding.  Could
you quote or reference it here, for clarity?

> Our implementation did not meet these expectations, as it was missing
> direct checks of keys matching the "openssl-blacklist" package. 
> Immediately upon our coming to this understanding,

This is SSL.com's post-incident understanding; what I believe is important
to also know is SSL.com's *pre*-incident understanding of the BR
requirements.

> > > That could be added in the BRs:
> > > 
> > > Change:
> > > "The CA SHALL reject a certificate request if the requested Public Key
> > > does not meet the requirements set forth in Sections 6.1.5 and 6.1.6 or if
> > > it has a known weak Private Key (such as a Debian weak key, see
> > > http://wiki.debian.org/SSLkeys)"
> > > 
> > > to something like:
> > > "The CA SHALL reject a certificate request if the requested Public Key
> > > does not meet the requirements set forth in Sections 6.1.5 and 6.1.6 or if
> > > it has a known weak Private Key using, as a minimum set the Debian weak
> > > keys produced by OpenSSL in i386 and x64 architectures (see
> > > http://wiki.debian.org/SSLkeys)"
> > 
> > It would appear that SSL.com is a member in good standing of the CA/B 
> > Forum. 
> > Is there any intention on the part of SSL.com to propose this change as a
> > ballot?  While you're at it, if you could include a fix for the issue
> > described in https://github.com/cabforum/documents/issues/164, that would be
> > appreciated, since it is the same sentences that need modification, and for
> > much the same reasons.
> 
> Yes, this is reasonable, and we treated such key as compromised, revoking
> it within 24 hours.
> 
> We would support a ballot that makes this clear.  We also monitor the
> discussion in https://github.com/cabforum/documents/issues/164.

As Ryan mentioned, "we would support a ballot" is not the same as "we intend
to propose a ballot".  Thank you for clarifying that SSL.com intends to
propose a ballot in your reply to Ryan.

> > > Then, it would be clear that all CAs would need to block "at least" the
> > > vulnerable keys produced by OpenS

Re: Terms and Conditions that use technical measures to make it difficult to change CAs

2020-03-16 Thread Matt Palmer via dev-security-policy
On Mon, Mar 16, 2020 at 09:06:17PM +, Tim Hollebeek via dev-security-policy 
wrote:
> I'd like to start a discussion about some practices among other commercial
> CAs that have recently come to my attention, which I personally find
> disturbing.  While it's perfectly appropriate to have Terms and Conditions
> associated with digital certificates, in some circumstances, those Terms and
> Conditions seem explicitly designed to prevent or hinder customers who wish
> to switch to a different certificate authority.  Some of the most disturbing
> practices include the revocation of existing certificates if a customer does
> not renew an agreement, which can really hinder a smooth transition to a new
> provider of digital certificates, especially since the customer may not have
> anticipated the potential impact of such a clause when they first signed the
> agreement.  I'm particularly concerned about this behavior because it seems
> to be an abuse of the revocation system, and imposes costs on everyone who
> is trying to generate accurate and efficient lists of revoked certificates
> (e.g. Firefox).
> 
> I'm wondering what the Mozilla community thinks about such practices.

Utterly reprehensible, and should be called out loudly whenever it's found.

However, it might be tricky for Mozilla itself to create and enforce such a
prohibition, since it gets deep into the relationship between a CA and its
customer.  I know there are already several requirements around what must go
into a Subscriber Agreement in the BRs, etc, but they're a lot narrower than
a blanket "thou shalt not put anything in there that restricts a customer's
ability to move to a competitor", and a narrow ban on individual practices
would be easily gotten around by a CA that was out to lock in their
customers.

I recognise that it can be tricky for a CA to (be seen to) criticise their
competitors' business practices, but this really is a case where public
awareness of these kinds of shady practices are probably the best defence
against them.  Get enough people up in arms, hopefully hit the shonkster in
the hip pocket, and it'll encourage them to rethink the wisdom of this kind
of thing.

- Matt

-- 
A polar bear is a rectangular bear after a coordinate transform.



Re: ssl.com: Certificate with Debian weak key

2020-03-11 Thread Matt Palmer via dev-security-policy
On Wed, Mar 11, 2020 at 10:46:05AM -0700, Chris Kemmerer via 
dev-security-policy wrote:
> On Tuesday, March 10, 2020 at 8:44:49 PM UTC-5, Matt Palmer wrote:
> > On Tue, Mar 10, 2020 at 01:48:49PM -0700, Chris Kemmerer via 
> > dev-security-policy wrote:
> > > For the purpose of identifying whether a Private Key is weak, SSL.com uses
> > > a set of Debian weak keys that was provided by our CA software vendor as
> > > the basis for our blacklist.
> > 
> > I think it's worth getting additional, *very* detailed, information from
> > your CA software vendor as to where *they* got their Debian weak key list
> > from.  That appears to be the fundamental breakdown here -- you relied on a
> > third-party to give you good service, and they didn't.  So I think that
> > digging into your vendor's practices is an important line of enquiry to go
> > down.
> 
> As mentioned on our report, we used that list as a basis, and paid
> attention to augment it with other weak keys from available blacklists,

So presumably if there are other Mozilla-trusted CAs using the same CA
vendor, who *are* doing the bare minimum and just using the CA vendor's key
list, they're even more vulnerable to a potential misissuance.  As you
mentioned that your CA software vendor does read this list, I *really* hope
they speak up soon so we can figure out how they got their key list.

> weak keys from available blacklists, even for the ROCA vulnerability.

Sidenote: my understanding of ROCA is that it is of a different form to the
Debian weak key problem, in that you can't a priori enumerate ROCA-impacted
keys, but can only identify them as you find them.  As such, my
understanding is that there isn't, and in fact *cannot*, be a comprehensive
"blacklist", as such, of keys affected by ROCA.

Is your understanding of the ROCA vulnerability different to my description
above, and if not, can you explain how a "blacklist"-based approach is a
suitable mitigation for avoiding issuance of certificates using
ROCA-impacted private keys?

(Conversely, if it *is* possible to get a comprehensive list of ROCA-impacted
keys, I know what I'm doing this weekend...)

> For what it's worth, we believe that the current language in the BRs could
> be less ambiguous as far as the Debian weak keys are concerned.  For
> example, it seems that the community's expectations are for CAs to detect
> and block weak Debian keys generated by vulnerable RNG using OpenSSL in
> popular architectures.

The problem with using the argument that "the BRs are ambiguous" to try and
defend a breach of them is that there are always potential ambiguities in
all language -- in many ways, "ambiguity is in the eye of the beholder"
("ambiguer"?).  My understanding of the consensus from past discussions on
this list is that if a CA believes there is an ambiguity in the BRs, the
correct action is to raise that in the CA/B Forum *before* they fall foul of
it.

CAs should be reading the BRs, as I understand it, in a "defensive" mode,
looking for requirements that could be read multiple ways, and when they are
found, the CA needs to ensure either that they are complying with the
strictest possible reading, or else bringing the ambiguity to the attention
of the CA/B Forum and suggesting less ambiguous wording.

At any rate, it would be helpful to know what, precisely, SSL.com's
understanding of this requirement of the BRs prior to the commencement of
this incident.  Can you share this with us?  Presumably SSL.com did a
careful analysis of all aspects of the BRs, and came to a conclusion as to
precisely what was considered "compliant".  With regards to this
requirement, what was SSL.com's position as to what was necessary to be
compliant with this aspect of the BRs?

> That could be added in the BRs:
> 
> Change:
> "The CA SHALL reject a certificate request if the requested Public Key
> does not meet the requirements set forth in Sections 6.1.5 and 6.1.6 or if
> it has a known weak Private Key (such as a Debian weak key, see
> http://wiki.debian.org/SSLkeys)"
> 
> to something like:
> "The CA SHALL reject a certificate request if the requested Public Key
> does not meet the requirements set forth in Sections 6.1.5 and 6.1.6 or if
> it has a known weak Private Key using, as a minimum set the Debian weak
> keys produced by OpenSSL in i386 and x64 architectures (see
> http://wiki.debian.org/SSLkeys)"

It would appear that SSL.com is a member in good standing of the CA/B Forum. 
Is there any intention on the part of SSL.com to propose this change as a
ballot?  While you're at it, if you could include a fix for the issue
described in https://github.com/cabforum/documents/issues/164, that would be
appreciated, since it is the same sentences that need modification, a

Re: GoDaddy: Failure to revoke key-compromised certificate within 24 hours

2020-03-10 Thread Matt Palmer via dev-security-policy
On Tue, Mar 10, 2020 at 05:53:13PM -0500, Matthew Hardeman via 
dev-security-policy wrote:
> Isn't the evident answer, if reasonable compromise is not forthcoming, just
> to publish the compromised private key.  There's no proof of a compromised
> private key quite as good as providing a copy of it.

Yes, going full-disclosure is one option.  I'm hopeful that there is a happy
middle ground somewhere that means I don't have to drop keys in full public
view, though.

- Matt



Re: ssl.com: Certificate with Debian weak key

2020-03-10 Thread Matt Palmer via dev-security-policy
On Tue, Mar 10, 2020 at 01:48:49PM -0700, Chris Kemmerer via 
dev-security-policy wrote:
> We have updated https://bugzilla.mozilla.org/show_bug.cgi?id=1620772 with
> the findings of our current investigation.

Thanks for this update.  I have... comments.

Before I get into the nitty-gritty, though, I'd just like to say that I
found your response to be unnecessarily defensive.  So much of it reads --
to me, at least -- as "please don't blame us, it wasn't our fault!".  Fault
isn't really the issue at hand.  Discovering what went wrong, and making
sure that the issues are fixed, both for SSL.com and, if appropriate, other
CAs, is what I, at least, am trying to achieve.

Therefore, while a lot of my responses below address specific points that
you've made in your defence, please understand that I'm not trying to rebut
them in an attempt to say "nee nah nee nah, it *is* your fault!", but rather
to try and provide further information that SSL.com and other CAs could
benefit from considering for improvement in the future, given my experience
dealing with Debian weak keys.

> For the purpose of identifying whether a Private Key is weak, SSL.com uses
> a set of Debian weak keys that was provided by our CA software vendor as
> the basis for our blacklist.

I think it's worth getting additional, *very* detailed, information from
your CA software vendor as to where *they* got their Debian weak key list
from.  That appears to be the fundamental breakdown here -- you relied on a
third-party to give you good service, and they didn't.  So I think that
digging into your vendor's practices is an important line of enquiry to go
down.
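For anyone wanting to sanity-check a list themselves: my understanding of the openssl-blacklist format (an assumption worth verifying against the package itself) is that each entry is the last 20 hex digits of the SHA-1 of the `openssl rsa -noout -modulus` output line. A sketch under that assumption:

```python
import hashlib

def blacklist_fingerprint(modulus_hex):
    # Fingerprint in the style of Debian's openssl-blacklist package:
    # SHA-1 over the literal "Modulus=<HEX>\n" line, keeping the
    # trailing 20 hex digits of the digest.
    line = ("Modulus=%s\n" % modulus_hex.upper()).encode("ascii")
    return hashlib.sha1(line).hexdigest()[20:]

def is_weak(modulus_hex, blacklist):
    # `blacklist` is a set of fingerprints loaded from the package's files.
    return blacklist_fingerprint(modulus_hex) in blacklist
```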

Also, given that there is a non-zero chance that other CAs trusted by
Mozilla may be using the same software, with the same incomplete list of
weak keys, I think it's important that the fact that the vendor is using a
demonstrably incomplete list be circulated to the other customers of this
vendor.  How would you suggest this is best accomplished?

> This information was disclosed on 2020-03-07 to the person that submitted
> the Certificate Problem Report.

I don't see anything from SSL.com that looks relevant in my inbox, but it's
not an important point, just thought I'd mention it in passing.

> The above practices comply with our CP/CPS, version 1.8, section 6.1.1.2,
> which states:
>
> "SSL.com shall reject a certificate request if the request has a known
> weak Private Key".

Hmm, "known" is doing a lot of work in that sentence.  Known to *whom* is
the important, but unanswered, question.  It may be a question which should
be answered explicitly in SSL.com's CPS, as well as the CPS of all other CAs
(and even, potentially, in the BRs -- although my understanding is that that
little bit of the BRs may at some point in the near future get a tidy-up,
per https://github.com/cabforum/documents/issues/164).

If the appropriate answer is "known to SSL.com", then you could run your CA
software with an empty blacklist, issue certs for all manner of known-weak
keys, and still be compliant with your CPS.  As such, that's probably not an
acceptable answer to Mozilla.  "Known to SSL.com's CA vendor" is similarly
problematic.

If, on the other hand, the appropriate interpretation is "known to anyone",
then because the key was known to me, and I think I count as "anyone", your
CPS was not followed.

"Known to the vendor of the software which generated the known-bad key" is
also an answer that results in your violating your CPS and the BRs, because
the key you issued a certificate for was, as previously mentioned, included
in the `openssl-blacklist` Debian package.

Do you have another answer to the question "known to whom?" which SSL.com is
using in that sentence?

> Our understanding is that there is no single or complete list with all
> known Debian weak keys, either one that is normative for use by the CAs
> included in the Mozilla Root Program, nor one specified in the Baseline
> Requirements.

That is, I believe, correct, however (at the risk of tooting my own horn)
there is quite a comprehensive collection of Debian weak keys in the
pwnedkeys.com database.  You are welcome to encourage your CA software
vendor to perform lookups against the public API if you wish.  I don't claim
that any possible key you could generate from a buggy Debian system is
already in pwnedkeys, but I've accumulated a fair collection of likely
candidates, at great cost of (mostly emulated) CPU cycles.
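The lookup identifier is just a SHA-256 over the DER-encoded SubjectPublicKeyInfo -- the same value crt.sh's `spkisha256=` parameter uses. A sketch (the exact API URL shape is an assumption on my part; check the site's documentation for the current form):

```python
import hashlib

def spki_sha256_fingerprint(spki_der):
    # Hex SHA-256 of the DER-encoded SubjectPublicKeyInfo.
    return hashlib.sha256(spki_der).hexdigest()

def pwnedkeys_query_url(spki_der):
    # Assumed URL shape for the pwnedkeys.com lookup API.
    return "https://v1.pwnedkeys.com/" + spki_sha256_fingerprint(spki_der)
```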

> This can be demonstrated by submitting the following CSR

That CSR uses a 2048 bit key generated on an i386 system, using OpenSSH with
an exponent of 3, with PID 23747.  It might not be in certwatch, but it's in
pwnedkeys.  :grin:

> We have strong indications that there are several different lists of
> precalculated vulnerable keys whose precise populations depend on
> combinations of:
> 
> architecture

Yes, different CPU word sizes and endianness produce different keys. 
That is covered in 

Re: key compromise revocation methods [was: GoDaddy: Failure to revoke key-compromised certificate within 24 hours]

2020-03-10 Thread Matt Palmer via dev-security-policy
On Tue, Mar 10, 2020 at 05:18:51PM -0400, Ryan Sleevi via dev-security-policy 
wrote:
> I'm sympathetic to CAs wanting to filter out the noise of shoddy reports
> and shenanigans, but I'm also highly suspicious of CAs that put too
> unreasonable an onus on reporters.

If CAs want a 100% reliable and trustworthy means of receiving key
compromise reports, they can stand up a server which implements RFC8555
s7.6.  The backend doesn't have to immediately revoke the cert; it can
create a ticket in the CA's workflow management system saying "this cert has
been demonstrated to have a compromised private key, do the needful".  No
need for compliance specialists, PKI experts, or anyone to be on hand to
check what's going on.  Put a link to the ACME directory in s4.9.12 of their
CPS and in CCADB.  Job done.
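The inner payload of such a request is tiny; a hedged sketch (the full request still needs the JWS wrapping per RFC 8555, signed by an account key or the certificate's own key):

```python
import base64
import json

def b64url(data):
    # RFC 8555 uses base64url encoding with padding stripped.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def revoke_payload(cert_der):
    # Inner payload of an RFC 8555 s7.6 revokeCert request;
    # reason code 1 is keyCompromise.
    return json.dumps({"certificate": b64url(cert_der), "reason": 1})
```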

- Matt



Re: CSRs as a means of attesting key compromise [was: GoDaddy: Failure to revoke key-compromised certificate within 24 hours]

2020-03-10 Thread Matt Palmer via dev-security-policy
On Tue, Mar 10, 2020 at 01:25:11PM -0700, bif via dev-security-policy wrote:
> Voluntarily providing CSR is not an ideal way to prove key compromise,
> because you could've simply found this CSR somewhere (I know, I know,
> super unlikely with your Subject...  but still could happen.)

Feel free to trawl the Internet looking for CSRs with a subject that
indicates that the private key is compromised, where that key is used in a
publicly-trusted certificate issued to a third party, and for which the
private key hasn't been compromised (feel free to confirm that last part via
the pwnedkeys.com search API).

When you find one -- just one -- we can continue debating the merits of
using a static CSR as a method of attesting key compromise.  Until then,
I'll keep using them as the least-worst available option.

> And while "compromised" is way too short (one can sign up to 32 bytes
> using it as a nonce in regular TLS session) to prove the key compromise,
> in the absence of the actual compromised private key, about the only way
> to ensure the possession is to get the reporter to sign some data chosen
> by the CA.  It very well may be a random CN in the CSR, or plain old
> openssl dgst.

That would require the database of all pwnedkeys to be available live, to
permit real-time generation of signatures in response to CA requests. 
Having that many keys online is a *spectacular* security risk, given the
importance of some of the certificates whose private keys have been publicly
disclosed (there's another gao.gov certificate yet to be revoked, not to
mention the one for *.avast.com!).  I deliberately and *very* purposely keep
the database of actual private keys out of harm's way, and requiring the
database to be kept online to respond to challenge-response interactions
with CAs seems like a spectacularly bad security compromise.

Further, given the lack of any sort of standardisation of revocation
workflows, even within a single CA, a challenge-response mechanism for key
compromise attestation would almost certainly require manual intervention on
my part.  Given the large backlog of certificates still to be revoked, and
the ongoing stream of new certificates and private keys that will,
seemingly, never stop, requiring manual steps from me for every one of those
is, I'm sure you can understand, not something I'd be keen on.

- Matt



GlobalSign: Failure to revoke certificate with compromised private key within 24 hours

2020-03-09 Thread Matt Palmer via dev-security-policy
A certificate with a publicly-disclosed private key was reported to
GlobalSign for revocation within the BR-mandated 24 hour period, however the
revocation took place over 46 hours after the report was sent.  Several
requests for information I had already provided were made by GlobalSign,
however the revocation eventually took place without any further information
being required.  Communication from GlobalSign then appeared to suggest that
the certificate had "already" been revoked, despite timestamps in the CRL
indicating otherwise.

I believe an incident report for this event is warranted, given that
GlobalSign was provided with sufficient information to revoke the
certificate in the initial problem report (based on the fact that revocation
eventually took place with no further information being provided by myself),
but failed to do so within the BR-mandated time period.

Excruciatingly detailed timeline follows.

2020-03-06 21:48:53Z E-mail sent to report-ab...@globalsign.com:

-8<-
Date: Sat, 7 Mar 2020 08:48:53 +1100
From: Matt Palmer 
To: report-ab...@globalsign.com
Subject: Problem Report for certificate(s) with compromised private key

One or more certificates issued by your CA are using a private key which has
been publicly disclosed.  The list of affected certificates can be retrieved
from

https://crt.sh/?spkisha256=6a02703a7a2ba3f368a2915305383549cf8ada8262422697d62d5ba410e4d93f

Included below is a CSR, signed by the compromised private key,
demonstrating proof of possession:

-BEGIN CERTIFICATE REQUEST-
MIIE0TCCArkCAQAwgYsxaTBnBgNVBAMMYFRoZSBrZXkgdGhhdCBzaWduZWQgdGhp
cyBDU1IgaGFzIGJlZW4gcHVibGljbHkgZGlzY2xvc2VkLiBJdCBzaG91bGQgbm90
IGJlIHVzZWQgZm9yIGFueSBwdXJwb3NlLjEeMBwGA1UECgwVaHR0cHM6Ly9wd25l
ZGtleXMuY29tMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA2OMM6yti
3q+GhnZsMPYrACVrZWYqn2yz2fH5J6kPONDvHm3P4UgPJb5j0OFUbmng3e41FwWf
QhD7UFbiEtH/fCJLnxuhAlCBZkVTwIBIwIYRpBmSp/shtNBJZvHBPgktF78qQBr5
HaX9jZOl/z0rLVw42wnzHlMyyeJNCQzBgRqA+Lcgig/9I2qxQvm3C53868i0EE3k
B418D63cEhz6hldoxELt7twoYulwyLk/PXWj/I0qHQZGT1weLD6UXINuxhmcFUDj
4i5V9UqNWhP4LT/QWjNtqE5y1OOT5qtkczjmSd3TS3GCik3o7v2M7JxwME1T/e/z
unTqhCarZF3HkrN5MxDB/28HsPaSRUpbxzmIUt+GApuVjNWnRW0awlzp8i5wQnmo
x7nNtSSht44DhlWETpPeT3n27LKM64no97aN0NS0LEKc5sFuOcS5sCj5FvsxNm/8
RhqfQkHXjkhZByTPhYvkQZTTA8Gxsh52Pnr0aTKrNz/fNpcJWzlKvbSmQn7i1Nmn
z6f9cTB3gW9+DjgSq/XjgVZJdGAWD9k5/i+v8b0zSbpprGNh2gkn39QYmWLlS2eu
XhtAhdWAroEBxm5pLA3T50KWcfM1IHsZSHIeneIcR3anUhqnA1vMjZdFdFkX+TCE
n/c6cotq/fESE+ieMdc7NjpTn4w2a+10xHECAwEAAaAAMA0GCSqGSIb3DQEBCwUA
A4ICAQCnPqJFlaTaNTz0ldS+PepRa8cpf4DXJ/shKBf8ChJ7ivY8+Q6qQWLU4WTM
DSChT+5K2Zlr5LRoIBeTsgyl3345agsPI8BKjw1OpRlxgVsMKlKOd6nCSJPw2NDl
+Ud+s/LbnZJsIn9nb4fQdF+mC4L6Q1GikCkTfQ1SD8RykVgwojiQFwsdaNRy1U2z
uw3QtlYXZ1s/zdgEITBB4x5js1r8+njue3X4hbgmTrnppEpxeaiuKIImLxFCOveo
pv6evi9g8mYCZ2hqvLO2RTO3iTSvbDAgbImr6D0Asem1qdCdNPbhiGXj/kxJNNUQ
P5hb1KmbcdCLIjvMz0+Z6TkIW0q4MowUpUeKx8Y18Pjt9D+nLN9sRLi8vfjvlnt4
eLENX2156CWMmJQg4n16UjYKaf6dSCvWJYC2TzYJzs+ZEKU71LCkUl/hdj7ZNLtZ
o3Z3C892nPZ56LdJES2wBMFgfMV5EWo4MrriFO7yhpkVp3NlOWkWVjIuTPDsm0gK
fLVgHQPfgpVR6LT/e2HWISdiogUrACsVFrb5vfehXY2PAewPghkD5Cn3LG6hnXYn
hmjgXDwz2dK5ud3ABJT1UxJtn82o3z3okUDISdeioxw43HBhCQ84p3G+JoRq9x6+
2ncweNmCQQ66tsX386ywKpPQJ4/1DrRsOKdSSy7siwwtR437Rg==
-END CERTIFICATE REQUEST-

Please revoke all affected certificates within 24 hours, as per the Baseline
Requirements.

- Matt
->8-

2020-03-06 21:49:04Z E-mail is accepted for delivery by a GlobalSign MX:

-8<-
Mar  6 21:49:04 minotaur postfix/smtp[26026]: 75BC71857EE:
to=,
relay=globalsign-com.mail.protection.outlook.com[104.47.93.36]:25,
delay=6.8, delays=0.47/0.01/0.9/5.4, dsn=2.6.0, status=sent (250 2.6.0
<20200306214853.kpohtnh5y2m3k...@hezmatt.org> [InternalId=34857954577034,
Hostname=HK0PR03MB2755.apcprd03.prod.outlook.com] 10967 bytes in 3.479,
3.078 KB/sec Queued mail for delivery)
->8-

2020-03-06 21:49:15Z Auto-ack e-mail received from GlobalSign:

-8<-
Dear Matt Palmer,

Thank you for reporting this issue to GlobalSign.  Case #04076325: "Problem
Report for certificate(s) with compromised private key" has been created and
a GlobalSign representative will investigate this immediately.  If requested
you will receive a response from a designated representative as soon as
possible.

Thank you,
Customer Service Team  GlobalSign
->8-

2020-03-06 22:08:06Z Human response from GlobalSign:

-8<-
Hello,

Thank you for contacting GlobalSign.

We have received your report of certificate abuse.  GlobalSign takes these
accusations very seriously.  We will be opening an investigation and will
keep you updated on any advances we make.

Sincerely,
Akshit Bhambota
GlobalSign Support Team
->8-

2020-03-06 22:21:22Z A rather odd form-looking e-mail is sent from
GlobalSign:

-8<-
Hello,

Thank you for submitting your report regarding the suspected fraudulent
activity or misuse of a GlobalSign certificate.  In furthe

Re: GoDaddy: Failure to revoke key-compromised certificate within 24 hours

2020-03-09 Thread Matt Palmer via dev-security-policy
Hi Joanna,

Thanks for responding.  When can this list, or Bugzilla, expect GoDaddy's
incident report?  Also, for the avoidance of further doubt, can you give an
exact timestamp at which GoDaddy considers that evidence of key compromise
was "obtained" for this certificate?

- Matt

On Mon, Mar 09, 2020 at 01:46:17PM -0700, Joanna Fox via dev-security-policy 
wrote:
> Matt,
> 
> Thank you for sharing your experience with our problem reporting mechanism on 
> this forum. It is due to this that we were able to get to the root of the 
> issue. Here is some detail into what we saw.   
> 
> Yesterday, we launched an investigation which included various members of the 
> team researching this issue. We took this investigation as far as we could 
> with the information we had and concluded that the CSR provided, as we read 
> it, was malformed. We ran this CSR through various tools but were unable to 
> successfully confirm validity.  
> 
> This morning, based on the statements in this forum, we discovered that our 
> email system had misinterpreted the CSR formatting due to it being pasted in 
> the body of the email. When we fix Base64 encoding, the CSR verifies.  
> 
> Upon this discovery we have initiated revocation to occur within the 
> guidelines of 24 hours from obtaining evidence that the private key was 
> compromised.  We take key compromises very seriously and recognize the 
> importance to the industry and health of the ecosystem. 
> 
> Lastly, we also noticed that the email you received was malformed, missing 
> some of the required content for the OpenSSL command.  This event has led to 
> a review of our email system to learn how we can avoid malformed encoding 
> issues in the future.
> 
> Thank you,
> Joanna Fox
> GoDaddy



Re: Request to Include Microsec e-Szigno Root CA 2017 and to EV-enable Microsec e-Szigno Root CA 2009

2020-03-09 Thread Matt Palmer via dev-security-policy
On Mon, Mar 09, 2020 at 11:48:40AM -0700, Kathleen Wilson via 
dev-security-policy wrote:
> ==Bad==

This is a *very* long list of bad things.  Based on this list alone I think
it would be reasonable for Mozilla to reject this application.  I'd like to
highlight the things that are practically problematic, based on my recent
work attempting to revoke certificates for compromised keys.

> * BR section 4.9.3 requires CPS section 1.5.2 to contain instructions for
> reporting an issue such as key compromise to the CA. The Microsec CPS’ only
> state that questions related to the policy may be reported via the info in
> that section, and other email addresses
> (“highprioritycertificateproblemrep...@e-szigno.hu”,
> “revocat...@e-szigno.hu") are found in other sections of some documents.

Section 1.5.2 is where I go looking for contact information to revoke a
certificate.  If it's not there... I'm outta luck.

> Section 4.9.5 then states that revocation requests are only accepted at the
> address listed in section 1.2, but there is no email address in this
> section.

I like their clarity that they don't accept revocation requests to other
addresses, but then not listing any valid addresses does make it tricky. 
Especially since the previous paragraph gives an e-mail address to contact.

On that subject:

s4.9.5 of the DV/OV CPS states:

> In case of applications sent by electronic mail, the time of arrival is
> when the email is received to the dedicated email address
> revocat...@e-szigno.hu on the server of the Trust Service Provider. 
> Emails arriving out of office hours are considered as arrived at the
> beginning of the next business day.

I don't believe this is in alignment with the BRs, Mozilla Policy, or
general expectations around the availability of a CA.  Most of the CPSes
I've dug into recently make mention of maintaining "24x7x365" availability
for accepting and processing problem reports (and, to be fair, I do often
get responses from CAs at all hours).  "Next business day" processing is
woefully inadequate.

Rolling back to the beginning of s4.9 of the DV/OV CPS, let's go through it
in order.

s4.9:

> The usage of the private key belonging to the revoked Certificate shall be
> eliminated immediately.  If possible, the private key belonging to the
> revoked Certificate shall be destroyed immediately after revocation.

This is a bit odd to see in a CPS.  I am assuming there is something in the
subscriber agreement that makes this binding on a subscriber.  In any event,
I, personally, would consider any issuance of a new certificate using the
same private key as a revoked certificate to be misissuance (I don't know if
Mozilla would feel the same way).

s4.9.2 does not allow me, as an Internet Rando who happens to have an
unhealthy fascination with collecting published private keys, from
requesting revocation.  The CPS really must not dissuade third parties from
reporting problems with certificates, IMO.

s4.9.3, subsection "High-Priority Certificate Problem Report", does not
define what constitutes a "high priority" report, but intimates that it
involves requests where law enforcement may need to become involved.  If you
follow enough links, you get to a page that suggests that key compromise may
be considered a "high priority" report, however were I to be looking at this
CPS to find a way to report a key compromise, I would not consider using
this "high priority" channel, based on the information in this CPS.

Based on the above points, and the lengthy set of "bad" points previously
identified by Wayne, I would ask Mozilla to reject this application.

- Matt



GoDaddy: Failure to revoke key-compromised certificate within 24 hours

2020-03-08 Thread Matt Palmer via dev-security-policy
Following a problem report sent to the CPS-specified address, evidence of
key compromise for a private key used in a certificate issued by GoDaddy has
not been revoked within the 24 hour timeframe required by the Baseline
Requirements s4.9.1.1.

Whilst GoDaddy did provide a response within 24 hours, I do not believe it
meets the spirit of the requirement to provide a "preliminary report", as
per BRs s4.9.5.

A detailed timeline of communications, as well as my specific concerns about
GoDaddy's response, are given below.

2020-03-08 02:47:54Z Problem report sent:

-8<-
Date: Sun, 8 Mar 2020 13:47:54 +1100
From: Matt Palmer 
To: practi...@starfieldtech.com
Subject: Problem Report for certificate(s) with compromised private key

One or more certificates issued by your CA are using a private key which has
been publicly disclosed.  The list of affected certificates can be retrieved
from

https://crt.sh/?spkisha256=c7701317f3cdab8d47ae581dc9e92ac249e9cea65765a4e4ef1100246d003f16

Included below is a CSR, signed by the compromised private key,
demonstrating proof of possession:

-BEGIN CERTIFICATE REQUEST-
MIIC0TCCAbkCAQAwgYsxaTBnBgNVBAMMYFRoZSBrZXkgdGhhdCBzaWduZWQgdGhp
cyBDU1IgaGFzIGJlZW4gcHVibGljbHkgZGlzY2xvc2VkLiBJdCBzaG91bGQgbm90
IGJlIHVzZWQgZm9yIGFueSBwdXJwb3NlLjEeMBwGA1UECgwVaHR0cHM6Ly9wd25l
ZGtleXMuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAmIykJVXM
hnFnYQjuDD3uQw2DU7x4taco5sV0s5yzhVWoEOuBRjikAfXU0umzYv1mx80SzXy8
092U9E0noLVtRbZS1Z53WJonUcvtxyqYt1syCM2OuUEYwXoSMmBIK11YZSaBK91/
HECnJbJ5gAqddUUNcsVcKj9LZOEwiwQnVan42FCqLYJT6w9t2rbcMPJyajRCHaHI
do0KkzWY1cg4oc4S21/7sszVz4IFLMNvT5NoJ768D6atkqTVduMsOPEaourni8kl
4ANE3dCkyYh24/Y0tbrB6Vo2lTTVOUpYEfMPDgkaVMtEPtU03SfmfqZrJMp2KwmG
G4rgFne6SZ1yMQIDAQABoAAwDQYJKoZIhvcNAQELBQADggEBADTZKcOXd6cf/Zs/
keZv8hoTPalYsIjfBlsDyhkdzIRXlBhow+bJQxgA1X49E1qxht7YAuKuzJqH/1KE
18S7+e4krWSh6Thp9bv8UrRWVCyGGoXe0DQrmVQuA9jb32i0S8SioyQ/rrnQZM5L
gb0sFb+keHxOYFK7vaK+Iq0GTVOLq2M0dOzvsvzw1u1BfXLxXI9UryxDkKb6I9uy
QWfigG50G7CifguPyxeRnK4Js/8cAzXREJF1D9UUDj6rByEWSxEEnc9NyAqOmjsS
1Q8DxrSK6uMOPy3icngD+Ykq6PIWKeXe0eTme0W7feGjrsmVZChhvrbFjkPaFmrZ
83FAPKc=
-END CERTIFICATE REQUEST-

Please revoke all affected certificates within 24 hours, as per the Baseline
Requirements.

- Matt
->8-

2020-03-08 02:48:01Z E-mail is accepted by remote mail server:

-8<-
Mar  8 02:48:01 minotaur postfix/smtp[2204]: 9D7421859AB:
to=,
relay=smtp.secureserver.net[68.178.213.37]:25, delay=3.2,
delays=0.45/0.01/1.4/1.3, dsn=2.0.0, status=sent (250 2.0.0 Alyxjg6mq8nXu -
Alyxjg6mq8nXuAlyyjakiR mail accepted for delivery)
->8-

2020-03-08 02:48:42Z Auto-ack response from donore...@secureserver.net,
whose substantive content, once the HTML gak has been filed off, is as
follows:

-8<- 
Your inquiry has been received.  You should expect a response
within 72 hours.

This is your Incident ID: 41360505

If you wish to speak with a customer service representative, call +1 (480)
505-8825 and reference the Incident ID above.

Thanks,
Customer Service
->8-

2020-03-08 22:33:51Z Followup e-mail received from
donotre...@secureserver.net:

-8<-
We have received a certificate problem report for tell.gao.gov and are
required to investigate the facts and circumstances related to this problem
report.  Contingent on our findings, the affected certificate(s) may require
revocation within a strict timeline of 24 hours or up to 5 days, depending
upon severity.  Consequently, the current certificate(s) will cease to
function immediately upon revocation.

Our findings regarding this problem report conclude that we are unable to
proceed with a revocation as the provided CSR is not conclusive.  We are
able to duplicate this CSR without having access to the private key.  At
this time, we are requesting additional information from you in the form of
either Option A or Option B below.  Please respond within 24 hours to ensure
timely resolution.

Option A: Send us the private key in PEM format.

Option B: Run the command below and send us the output.  Please replace
PRIVATEKEY-FILE with the path to the compromised private key.  Please be
sure to use the exact same string so we can verify accordingly.
openssl dgst -sha256 -sign PRIVATEKEY-FILE <(echo -n 'compromised') openssl enc 
-base64

[my original e-mail, with formatting mangled, elided]

If you need to follow up, please reference incident ID 41360505 when you
contact our support team at +1 (480) 505-8825.

Please do not reply to this email.  Emails sent to this address will not be
answered.
->8-
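
[Editor's note: the "Option B" command as received was mangled in transit (a
pipe is missing before the second openssl invocation), as GoDaddy later
acknowledged.  With the pipe restored, the round trip it describes can be
exercised end-to-end with a throwaway key.  The filenames below are
arbitrary, and bash is assumed for the <(...) process substitution.]

```shell
#!/bin/bash
set -e

# Reporter's side: sign the fixed string with the (compromised) private
# key.  A freshly generated throwaway key stands in for it here.
openssl genrsa -out demo-key.pem 2048 2>/dev/null
openssl dgst -sha256 -sign demo-key.pem <(echo -n 'compromised') \
  | openssl enc -base64 > sig.b64

# CA's side: verify using only the public key (as found in the
# certificate), which demonstrates possession of the private key.
openssl rsa -in demo-key.pem -pubout -out demo-pub.pem 2>/dev/null
openssl enc -base64 -d < sig.b64 > sig.bin
openssl dgst -sha256 -verify demo-pub.pem -signature sig.bin \
  <(echo -n 'compromised')
```

On success the final command prints "Verified OK" and exits zero.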

As of the time of writing, which is approximately 25-1/2 hours after the
initial problem report was sent, https://crt.sh/?id=1166030780&opt=ocsp
shows the certificate as having not been revoked.

I have three concerns about the nature of the GoDaddy response to this
problem report:

1. The CSR I provided in the original report is functionally identical to the
   signature that is being req

Re: ssl.com: Certificate with Debian weak key

2020-03-07 Thread Matt Palmer via dev-security-policy
On Sat, Mar 07, 2020 at 09:07:11AM -0500, Ryan Sleevi wrote:
> Thanks. I filed  https://bugzilla.mozilla.org/show_bug.cgi?id=1620772

I'll give points to SSL.com for a speedy initial response, but I'm a bit
disconcerted about this:

> The fingerpint of the claimed Debian weak key was not included in our 
> database.

I think it's worth determining exactly where SSL.com obtained their
fingerprint database of weak keys.  The private key in my possession, which
I generated for inclusion in the pwnedkeys.com database, was obtained by
using the script provided in the `openssl-blacklist` source package, with no
special options or modifications.

The key used in this certificate is not one for a niche architecture or
unusual configuration -- i386 was the dominant architecture at the time of
the flaw, and 2048 bits is a standard key size.  It would be somewhat more
understandable if the key was, say, a 4096 bit key generated on a MIPS
machine (it took *ages* to generate all of those), although it
would still be a Debian weak key and thus still be a BR violation to issue a
certificate for it.

On the off-chance that there *was* a mistake in my key generation procedure,
*and* a one-in-a-trillion collision of private keys, I've confirmed that the
public key of the certificate in question is in the `openssl-blacklist`
Debian package (https://packages.debian.org/jessie/openssl-blacklist,
uploaded in 2011) with the following command:

grep $(wget -O - -q https://crt.sh/?d=2531502044 \
| openssl x509 -noout -pubkey \
| openssl rsa -pubin -noout -modulus \
| sha1sum | cut -d ' ' -f 1 | cut -c 21-) \
  /usr/share/openssl-blacklist/blacklist.RSA-2048
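
[Editor's note: the fingerprint computed by that pipeline, and stored in the
openssl-blacklist files, can be sketched as a small function.  The toy
modulus below is purely illustrative, not a real key.]

```shell
#!/bin/bash
# The blacklist entry is the last 20 hex characters of the SHA-1 of the
# literal line "Modulus=<UPPERCASE HEX>" plus a trailing newline, i.e.
# of the output of `openssl rsa -pubin -noout -modulus`.
weak_fingerprint() {
  printf 'Modulus=%s\n' "$(printf '%s' "$1" | tr a-f A-F)" \
    | sha1sum | cut -c 21-40
}

# Illustrative only; a real check would grep this value against
# /usr/share/openssl-blacklist/blacklist.RSA-2048.
weak_fingerprint c0ffee
```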

As further independent confirmation, the crt.sh page for the certificate
shows that crt.sh *also* identifies the certificate as having a Debian weak
key.  My understanding is that crt.sh uses a database of keys that was
independently generated by the operator of the crt.sh service.

- Matt



When is a "weak key" a "compromised key"?

2020-03-06 Thread Matt Palmer via dev-security-policy
The BRs, s4.9.1.1, state that a CA has up to five days to revoke a
certificate where:

> The CA is made aware of a demonstrated or proven method that exposes the
> Subscriber's Private Key to compromise, methods have been developed that
> can easily calculate it based on the Public Key (such as a Debian weak
> key, see http://wiki.debian.org/SSLkeys), or if there is clear evidence
> that the specific method used to generate the Private Key was flawed.

My intuition is that this clause is meant to encompass key generation flaws
such as ROCA, in which you can identify "weak" keys by the properties of
their public key.  Such "weak" keys may still take some time to recover the
private key given the public key, however the usual cryptographic security
guarantees cannot be assumed to apply.

However, the specific example given in the BRs, that of Debian weak keys,
does not fit the general pattern above.  As far as I am aware, there is no
way to identify a Debian weak key purely by examining the properties of the
public key.  Debian weak keys can (only) be identified by referring to a
list of keys generated using the flawed technique.  Whilst you can, in
theory, provide such a lookup table without also providing the private keys
themselves, in practice anyone who has a lookup table will also have (access
to) the full private key material.

The problem is that a private key which is, for all intents and purposes,
public information, is a compromised key, and should presumably come under
the 24 hour revocation deadline part of s4.9.1.1.  However, a Debian weak
key is *explicitly* listed as coming under the five day revocation deadline. 

Therefore, the question I'm asking is: should Mozilla (aka the community and
CA module owner and peers) make a policy decision to treat certificates
issued with a known Debian weak key differently to that of the BRs, and
insist on revocation within 24 hours (as a compromised key) rather than
within five days (as a "Debian weak key")?

Thanks,
- Matt



ssl.com: Certificate with Debian weak key

2020-03-06 Thread Matt Palmer via dev-security-policy
(Pre) Certificate https://crt.sh/?id=2531502044 has been issued with a known
weak key, specifically Debian weak key 2048/i386/rnd/pid17691.  I believe
this issuance to be in contravention of SSL.com's CPS, version 1.8, section
6.1.1.2, which states "SSL.com shall reject a certificate request if the
request has a known weak Private Key".

- Matt



Re: About upcoming limits on trusted certificates

2020-03-03 Thread Matt Palmer via dev-security-policy
On Tue, Mar 03, 2020 at 01:53:49PM -0800, Clint Wilson wrote:
> On Mar 3, 2020, at 1:41 PM, Matt Palmer via dev-security-policy 
>  wrote:
> > On Tue, Mar 03, 2020 at 11:55:24AM -0800, Clint Wilson via 
> > dev-security-policy wrote:
> >> For additional information, please see 
> >> https://support.apple.com/en-us/HT211025.
> > 
> > I have a question regarding this part:
> > 
> >> TLS server certificates issued on or after September 1, 2020 00:00 GMT/UTC
> >> must not have a validity period greater than 398 days.
> > 
> > How is Apple determining when a certificate was issued?  That's
> > traditionally been pretty tricky to determine, exactly, so I'm curious to
> > know how Apple has solved it.
>
> This is determined using the notBefore value in the certificate; if the
> notBefore value is greater than or equal to September 1, 2020 00:00
> GMT/UTC, then the updated policy will apply.

It may be worth clarifying that in the support article.  Are Apple intending
on taking any active steps to dissuade CAs from backdating certificates? 
Relatedly, does Apple have a similar stance against backdating to that of
Mozilla, which lists it as a "potentially problematic practice"?
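
[Editor's note: the notBefore value being keyed off here is directly
visible with openssl.  The certificate below is a throwaway self-signed one
generated purely for illustration.]

```shell
#!/bin/bash
set -e

# Generate a throwaway self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" \
  -keyout demo.key -out demo.crt -days 1 2>/dev/null

# Print the issuance timestamp the policy is based on,
# e.g. "notBefore=Sep  1 00:00:00 2020 GMT"
openssl x509 -in demo.crt -noout -startdate
```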

- Matt



Re: About upcoming limits on trusted certificates

2020-03-03 Thread Matt Palmer via dev-security-policy
On Tue, Mar 03, 2020 at 01:27:59PM -0700, Wayne Thayer via dev-security-policy 
wrote:
> I'd like to ask for input from the community: is this a requirement that we
> should add to the Mozilla policy at this time (effective September 1, 2020)?

I don't see any reason not to.

- Matt



Re: About upcoming limits on trusted certificates

2020-03-03 Thread Matt Palmer via dev-security-policy
On Tue, Mar 03, 2020 at 11:55:24AM -0800, Clint Wilson via dev-security-policy 
wrote:
> For additional information, please see 
> https://support.apple.com/en-us/HT211025.

I have a question regarding this part:

> TLS server certificates issued on or after September 1, 2020 00:00 GMT/UTC
> must not have a validity period greater than 398 days.

How is Apple determining when a certificate was issued?  That's
traditionally been pretty tricky to determine, exactly, so I'm curious to
know how Apple has solved it.

- Matt



Re: Acceptable forms of evidence for key compromise

2020-03-02 Thread Matt Palmer via dev-security-policy
On Mon, Mar 02, 2020 at 07:48:23PM +, Corey Bonnell wrote:
> I do think there's value in developing some standard mechanism to request 
> revocation/demonstrate possession of the private key.

Interestingly, there (more-or-less) is one these days, as part of ACME.  It
requires the usual amount of faffing around that's required to do anything
non-trivial with a JWS, but it does get the job done.

- Matt



Re: Acceptable forms of evidence for key compromise

2020-03-02 Thread Matt Palmer via dev-security-policy
On Mon, Mar 02, 2020 at 07:35:06PM +, Nick Lamb wrote:
> On Mon, 2 Mar 2020 13:48:55 +1100
> Matt Palmer via dev-security-policy
>  wrote:
> > In my specific case, I've been providing a JWS[1] signed by the
> > compromised private key, and CAs are telling me that they can't (or
> > won't) work with a JWS, and thus no revocation is going to happen.
> > Is this a reasonable response?
> 
> I don't hate JWS, but I can see Ryan's point of view on this. Not every
> "proof" is easy to definitively assess, and a CA doesn't want to get
> into the game of doing detailed forensics on (perhaps) random unfounded
> claims.
> 
> Maybe it makes sense for Mozilla to provide in its policy (without
> limiting what else might be accepted) an example method of
> demonstrating Key Compromise which it considers definitely sufficient ?

I think it would be useful if Mozilla were to require that CPS have details
of acceptable methods of demonstrating key compromise.  There's even a
section which it would fit into nicely: 4.9.12, "Special Requirements for
Key Compromise".  It wouldn't solve the primary problem that I have --
having to special case every CA's pet method for requiring evidence -- but
it would, at least, close the "oh no wait we need *this* evidence" loophole,
and give reporting parties something to go off when reporting key
compromises.

Requiring that a CA's standards of evidence didn't require the use of one
specific tool (`openssl dgst` I'm looking at *you*) would be icing on the
cake.

- Matt



Sectigo: Failure to process revocation request within 24 hours

2020-03-01 Thread Matt Palmer via dev-security-policy
Between 26 Feb 2020 00:48:11 UTC and 26 Feb 2020 21:10:18 UTC, I sent three
Certificate Problem Reports to sslab...@sectigo.com, reporting that
certificates issued by them were using keys which have been compromised due
to being publicly disclosed.  As of the time of writing, I have not received
a preliminary report of Sectigo's findings, as I believe is required by
section 4.9.5 of the Baseline Requirements.

In each case, I received an auto-acknowledgement e-mail containing a case
number, which indicates that Sectigo did, in fact, receive my problem
report.

Due to a mistake on my part, the evidence I provided to Sectigo was not
sufficient to verify that the key was in fact compromised, so I am not
claiming that Sectigo has fallen foul of BR s4.9.1.1.  However, as BR s4.9.5
require a report to be provided within 24 hours, I still believe Sectigo
has an operational deficiency which requires investigation.

The times of the e-mails I sent, the Sectigo case number I received in
response, and the further responses I have received from Sectigo, if any,
are detailed below.  All times are taken from the `Date` header of the
relevant e-mail, adjusted to UTC if required.

Case #00572387
  https://crt.sh/?id=2455920199
  Sent: 26 Feb 2020 00:48:11 +
  Auto-ack: 26 Feb 2020 00:48:24 +

  At 27 Feb 2020 19:15:10 +, I received an e-mail purporting to be from
  Sectigo Security, quoting my initial report, and saying "we will look into
  this right away".  Note that even this response, which I do not consider
  qualifies as a "preliminary report", was sent over 24 hours after the
  initial problem report.

  No further response has been received since then.


Case #00572465
  https://crt.sh/?id=2413850414
  Sent: 26 Feb 2020 05:07:34 +
  Auto-ack: 26 Feb 2020 05:07:45 +

  No further response has been received since the auto-acknowledgement.


Case #00573105
  https://crt.sh/?id=683622319
  Sent: Wed, 26 Feb 2020 21:10:18 +
  Auto-ack: Wed, 26 Feb 2020 21:10:32 +

  No further response has been received since the auto-acknowledgement.

- Matt



Re: Acceptable forms of evidence for key compromise

2020-03-01 Thread Matt Palmer via dev-security-policy
On Sun, Mar 01, 2020 at 11:14:12PM -0500, Ryan Sleevi wrote:
> On Sun, Mar 1, 2020 at 9:49 PM Matt Palmer via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> > The BRs, in s4.9.1.1, say:
> >
> > > The CA SHALL revoke a Certificate within 24 hours if one or more of the
> > > following occurs:
> > >
> > > [...]
> > > 3. The CA obtains evidence that the Subscriber's Private Key
> > > corresponding to the Public Key in the Certificate suffered a Key
> > > Compromise
> >
> > I've come to have some concerns about this clause of the BRs, in that it
> > seems very much up to the CA to decide what constitutes valid "evidence" of
> > key compromise, and there doesn't appear to be anything (other than
> > possible
> > mdsp censure, after the fact) preventing a CA using "your evidence is
> > insufficient" as a way of delaying revocation, potentially indefinitely.
> 
> That’s not quite correct. They also have to convince their auditor,
> although whether or not you see that as a barrier depends.

No, I don't see auditors as a barrier to anything.

> However, I get the feeling that you don’t put much stock into incident
> reports and browsers' dim view of shenanigans. That might be worth expanding
> upon, if you believe the incident reporting process is not adequately
> protecting users or balancing tradeoffs.

No, it's not that.  I like the incident report system, and Mozilla does a
reasonable job of enforcing what rules there already are.  It's just that CAs
often argue that they didn't *know* that doing a bad thing was bad because
the rules didn't *say* that it was a bad thing, and when I started operating
in this area I found something that I thought was potentially a loophole,
and I wanted to discuss it before standing up and shouting "HOUSTON WE HAVE
AN INCIDENT!" -- because *that* is the sort of thing that devalues the
incident reporting system.

> > In my specific case, I've been providing a JWS[1] signed by the compromised
> > private key, and CAs are telling me that they can't (or won't) work with a
> > JWS, and thus no revocation is going to happen.  Is this a reasonable
> > response?
> 
> Honestly? Yes.

Fair enough.  Your honesty is appreciated.

> There’s no reason to expect a CA to be prepared to support Arbitrary Format
> Foo. While my distaste for the JOSE suite of boondoggles is deeply
> entrenched, I do think there’s benefit in using common PDUs that CA tooling
> already deals with. As discussed in the past, the CSR approach is not
> without merit.

I found the argument that it was risky to use anything that could possibly
be mistaken for a "real" web PKI artifact, to be quite compelling.  That is
why I kept away from using a CSR, self-signed cert ("poisoned" or
otherwise), or anything of that nature.

At any rate, it is at least easy enough to test whether CAs are able to
handle a CSR any better than a JWS.  There's plenty of fish still in the
barrel; I'll pull a few keys out of cold storage, generate CSRs from them,
and see if CAs are able to process those any easier.

> > Relatedly, if a CA were to receive a key compromise notification which
> > truly *doesn't* have sufficient evidence of compromise, but which at
> > least has some degree of legitimacy (rather than "I haxxed teh keys to
> > your CA, better pay me dogecoin!!!11!one!"), is there (or should there
> > be) any onus on the CA to respond to that notification within a certain
> > timeframe?
> 
> CAs are required to make a preliminary incident report available within 24
> hours, including the “sod off your report is garbage”, as covered in 4.9.5.

Aha, I'd forgotten about that bit.  That's pretty cut and dried.

> I ask this because I accidentally sent a couple of compromise notifications
> > with an incorrect URL.  While one notification got what appeared to be a
> > human saying "we'll look into it" (which itself was sent more than 24 hours
> > after I received the corresponding auto-ack), others have been greeted with
> > complete radio silence (other than the auto-ack).  This seems...
> > sub-optimal.
> 
> Did you use the CPS documented problem reporting mechanism?

I've been using the "Report a problem" data from crt.sh, which is populated,
I believe, from CCADB.  I just checked the CPS of the two CAs at issue, and
in both cases the initial reports were sent to the correct e-mail address.

- Matt



Acceptable forms of evidence for key compromise

2020-03-01 Thread Matt Palmer via dev-security-policy
The BRs, in s4.9.1.1, say:

> The CA SHALL revoke a Certificate within 24 hours if one or more of the
> following occurs:
>
> [...]
> 3. The CA obtains evidence that the Subscriber's Private Key
> corresponding to the Public Key in the Certificate suffered a Key
> Compromise

I've come to have some concerns about this clause of the BRs, in that it
seems very much up to the CA to decide what constitutes valid "evidence" of
key compromise, and there doesn't appear to be anything (other than possible
mdsp censure, after the fact) preventing a CA using "your evidence is
insufficient" as a way of delaying revocation, potentially indefinitely.

In my specific case, I've been providing a JWS[1] signed by the compromised
private key, and CAs are telling me that they can't (or won't) work with a
JWS, and thus no revocation is going to happen.  Is this a reasonable
response?  If it is a reasonable response, what properties are necessary to
differentiate valid evidence from invalid?  Is it appropriate for a CA to be
able to unilaterally decide what constitutes valid evidence, and should they
be able to change that decision at any time?

Relatedly, if a CA were to receive a key compromise notification which truly
*doesn't* have sufficient evidence of compromise, but which at least has
some degree of legitimacy (rather than "I haxxed teh keys to your CA, better
pay me dogecoin!!!11!one!"), is there (or should there be) any onus on the
CA to respond to that notification within a certain timeframe?

I ask this because I accidentally sent a couple of compromise notifications
with an incorrect URL.  While one notification got what appeared to be a
human saying "we'll look into it" (which itself was sent more than 24 hours
after I received the corresponding auto-ack), others have been greeted with
complete radio silence (other than the auto-ack).  This seems...
sub-optimal.

I'd appreciate commentary and feedback on this area of the BRs and its
ramifications, to help me to decide whether to report these situations as
failures to revoke, or perhaps modify my procedures for reporting
publicly-trusted certificates whose keys have been published.

Thanks,
- Matt

[1] "JSON Web Signature", as specified in the standards-track RFC7515.  It
is used in a variety of places, including, but not limited to, the Automatic
Certificate Management Environment (ACME, RFC8555) -- although I wasn't
consciously aware of that at the time I chose to generate key compromise
attestations in that format.
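
[Editor's note: the compact serialization an RFC 7515 JWS boils down to is
straightforward to sketch.  HMAC-SHA256 ("HS256") is used below only so the
example is self-contained; a key compromise attestation would instead sign
with the compromised private key itself (e.g. alg "RS256").  The payload
text and secret are illustrative.]

```shell
#!/bin/bash
set -e

# base64url without padding, as RFC 7515 requires
b64url() { openssl enc -base64 -A | tr '+/' '-_' | tr -d '='; }

header=$(printf '%s' '{"alg":"HS256"}' | b64url)
payload=$(printf '%s' 'This key is compromised.' | b64url)

# The signature covers the ASCII string "<header>.<payload>"
sig=$(printf '%s.%s' "$header" "$payload" \
  | openssl dgst -sha256 -hmac 'demo-secret' -binary | b64url)

jws="$header.$payload.$sig"
echo "$jws"
```

A verifier splits the token on ".", recomputes the signature over
header.payload, and compares the result against the third segment.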



Re: Apple: Patch Management

2019-12-09 Thread Matt Palmer via dev-security-policy
On Fri, Dec 06, 2019 at 07:08:46PM -0800, Apple CA via dev-security-policy 
wrote:
> On Saturday, November 23, 2019 at 3:28:10 PM UTC-8, Matt Palmer wrote:
> > [aside: this is how incident reports should be done, IMHO]
> > 
> > On Fri, Nov 22, 2019 at 07:23:27PM -0800, Apple CA via dev-security-policy 
> > wrote:
> > > We did not have an accurate understanding of how the vulnerability scanner
> > > worked.  Our understanding of its capabilities led us to believe it was
> > > scanning and detecting vulnerabilities in EJBCA.
> > 
> > There's a reasonable chance that other CAs may have a similar situation, so
> > I think it's worth digging deeper into the root causes here.  Can you expand
> > on how this misunderstanding regarding the vulnerability scanner came to
> > pass?  What was the information on which you were relying when you came to
> > the understanding of the vulnerability scanner's capabilities?  Were you
> > misled by the vendor marketing or technical documentation, or was it an
> > Apple-internal assessment that came to an inaccurate conclusion?  Or
> > "other"?
> 
> In order to identify vulnerabilities, the vulnerability scanner (1)
> attempts to identify/profile software listening on ports and (2) compares
> software versions against public CVEs and proprietary data sources.  EJBCA
> is not broadly used software, and the vulnerability scanner did not have
> custom EJBCA detection logic.  Upon our deeper investigation, we
> discovered that it (1) only scans the HTTP service and not the EJBCA
> software, which we would consider insufficient on its own and (2) is not
> as effective at flagging vulnerabilities in EJBCA because CVEs are not
> published by EJBCA.  We don’t feel we were misled by the vendor.

Thanks for clarifying how the security scanning software worked; that's
useful.  I'm not confident that we've determined any root causes for the
failure, though, especially things that other CAs in the ecosystem can learn
from.  I'll try a different phrasing, which will hopefully provide more
clarity as to what I'm trying to achieve:

What specific, actionable items would you recommend all CAs undertake to
remove or mitigate the risk of this, or a substantially similar, problem
occurring in their environment?

> CVEs are not published by EJBCA.

Does anyone else feel that this is a really, really, *really* bad idea?  The
CVE system, whilst far from perfect, seems to be the agreed upon medium for
managing these types of issues, and it disappoints me that EJBCA would
appear to be "opting out" of it.  Should discussions with EJBCA, either from
their customers or the wider community, be initiated?

- Matt



Re: Apple: Patch Management

2019-11-23 Thread Matt Palmer via dev-security-policy
[aside: this is how incident reports should be done, IMHO]

On Fri, Nov 22, 2019 at 07:23:27PM -0800, Apple CA via dev-security-policy 
wrote:
> We did not have an accurate understanding of how the vulnerability scanner
> worked.  Our understanding of its capabilities led us to believe it was
> scanning and detecting vulnerabilities in EJBCA.

There's a reasonable chance that other CAs may have a similar situation, so
I think it's worth digging deeper into the root causes here.  Can you expand
on how this misunderstanding regarding the vulnerability scanner came to
pass?  What was the information on which you were relying when you came to
the understanding of the vulnerability scanner's capabilities?  Were you
misled by the vendor marketing or technical documentation, or was it an
Apple-internal assessment that came to an inaccurate conclusion?  Or
"other"?

- Matt



Re: Firefox removes UI for site identity

2019-10-22 Thread Matt Palmer via dev-security-policy
On Tue, Oct 22, 2019 at 03:35:52PM -0700, Kirk Hall via dev-security-policy 
wrote:
> I also have a question for Mozilla on the removal of the EV UI.

This is a mischaracterisation.  The EV UI has not been removed; it has been
moved to a new location.

> So my question to Mozilla is, why did Mozilla post this as a subject on
> the mozilla.dev.security.policy list if it didn't plan to interact with
> members of the community who took the time to post responses?

What leads you to believe that Mozilla didn't plan to interact with members
of the community?  It is entirely plausible that if any useful responses
that warranted interaction were made, interaction would have occurred.

I don't believe that Mozilla is obliged to respond to people who have
nothing useful to contribute, and who don't accurately describe the change
being made.

> This issue started with a posting by Mozilla on August 12, but despite 237
> subsequent postings from many members of the Mozilla community, I don't
> think Mozilla staff ever responded to anything or anyone - not to explain
> or justify the decision, not to argue.  Just silence.

I think the decision was explained and justified in the initial
announcement.  No information that contradicted the provided justification
was presented, so I don't see what argument was required.

> In the future, if Mozilla has already made up its mind and is not
> interested in hearing back from the community, it might be better NOT to
> start a discussion on the list soliciting feedback.

Soliciting feedback and hearing back from the community does not require
response from Mozilla, merely reading.  Do you have any evidence that
Mozilla staff did not, in fact, read the feedback that was given?

- Matt



Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-08 Thread Matt Palmer via dev-security-policy
On Tue, Oct 08, 2019 at 07:16:59PM -0700, Paul Walsh via dev-security-policy 
wrote:
> Why isn’t anyone’s head blowing up over the Let’s Encrypt stats?

Because those stats don't show anything worth blowing up one's head over.  I
don't see anything in them that indicates that those 14,000 certificates --
or even one certificate, for that matter -- were issued without validating
control over the domain name(s) indicated in the certificates.

EV and DV serve different purposes, and while DV is more-or-less solving the
problem it sets out to solve, the credible evidence presented shows that EV
does not solve any problem that browsers are interested in.

> If people think “EV is broken” they must think DV is stuck in hell with
> broken legs.

Alternately, people realise that EV and DV serve different purposes through
different methods, and thus cannot be compared in the trivial and flippant
way you suggest.

- Matt



Re: Re: [FORGED] Re: Website owner survey data on identity, browser UIs, and the EV UI

2019-10-03 Thread Matt Palmer via dev-security-policy
On Thu, Oct 03, 2019 at 05:36:50PM -0700, Ronald Crane via dev-security-policy 
wrote:
> 
> On 10/3/2019 2:09 PM, Ryan Sleevi via dev-security-policy wrote:
> > [snip]
> > > I guess I wasn't specific enough. I am looking for a good study that
> > > supports the proposition that the Internet community has (1) made a
> > > concerted effort to ensure that there is only one authentic domain per
> > > entity (or, at most, per entity-service, e.g, retail brokerage
> > > services); and (2) has made a concerted effort to educate users to use
> > > only that domain; and (3) that those steps have failed to significantly
> > > reduce the successful phishing rate of the users that steps (1) and (2)
> > > targeted.
> > > 
> > > 
> > > Was it intentional to presume that (1) is correct or desirable? It’s
> > > unclear if you believe it is, but if it isn’t (and for many reasons, it
> > > isn’t), then naturally one might assume (2) and (3) don’t exist.
> 
> Yes, I do believe that (1) is desirable. It has a long history in the
> context of brand identity (e.g., "Coke" in red and white script), where
> virtually all consumers use it to identify authentic products and reject
> counterfeits.

This is a valuable analogy, but I'm not sure how it advances the argument
you appear to be making.

To take the specific example you've provided, there is more than one product
made under the general brand of "Coke", most -- but not all -- involving the
word "Coke" in one way or another.  If we take the "domain name per product"
analogy, there would be a bunch of different "domains" for these products:
coke, newcoke, cocacolaclassic, dietcoke, cokezero, vanillacoke,
caffeinefreecoke, and so on.

That's before we start considering other products produced and marketed by
the same company under different names.  There's a bunch of other carbonated
beverages, plus uncarbonated beverages, and even non-beverage foodstuffs,
that are all produced and/or marketed by the company that produces and
markets "Coke", at least in the country I'm from.

Contrariwise, neither the word "Coke", nor white writing on a red
background, nor even the specific font used, unambiguously identifies one
particular brand -- and even then, it is only in the context of beverages
(attempts by trademark-maximalists notwithstanding).  Further, as the rather
extensive examples of counterfeit goods demonstrate, the mere existence of a
trademark, or even active measures, does not stop counterfeiting, nor does
it even attempt to -- it only tries to make counterfeiting commercially
unattractive.

Where the analogy breaks down is that in the case of phishing, people don't
typically try to "counterfeit" the domain name, merely "confuse".  If I make
a product called "Matt's Coke" and sell it, I may certainly find myself in
some legal hot water, but it won't be because of "counterfeiting", but
rather a more nebulous form of trademark infringement around confusion.

> Basically, many internet-based entities appear to have brought phishing upon
> themselves by failing to extend the above to their internet presences.
> Instead, they've trained their users to accept as authentic any domain that
> has a passing resemblance to their rat's-nest of legitimate domains.

While there's a certain amount of truth to that, I think quite a lot of it
is users just not checking *anything* about the link they're clicking.  The
amount of spam I get inviting me to log in to various banking websites using
a link to yevgeniysflowershoppe.ua or the like would suggest that phishing
doesn't absolutely rely on confusion.  Your hypothesis relies on the
idea that users can be trained in any meaningful fashion, which the research
seems not to support at all.

- Matt



Re: An honest viewpoint: Move Extended Validation Information out of the URL bar

2019-09-07 Thread Matt Palmer via dev-security-policy
On Thu, Sep 05, 2019 at 03:38:24PM -0700, browserpadlock--- via 
dev-security-policy wrote:
> On Thursday, September 5, 2019 at 12:16:13 PM UTC-4, Jonathan Rudenberg wrote:
> > On Wed, Sep 4, 2019, at 14:53, browserpadlock--- via dev-security-policy 
> > wrote:
> > > It seems that the Certificate Authorities are doing their jobs quite 
> > > well in regards to EV certs and making sure that it is very difficult 
> > > for non-qualified/verified sites to get them according to a recently 
> > > concluded study by Georgia Tech CyFI Lab 
> > > (https://www.helpnetsecurity.com/2019/08/01/ev-ssl-certificate/), a 
> > > well respected technical institution, NOT funded by the CA industry.
> > 
> > This paper was paid for by Sectigo, this was clearly noted in their press 
> > release:
> > https://sectigo.com/blog/new-research-in-ev-ssl-security-from-georgia-tech-ev-domains-99-99-free-of-online-crime
> > 
> > The methodology is deeply flawed, for example these are some of the 
> > "malicious" domains from their dataset:
> > 
> > extended-validation-ssl.websecurity.symantec.com
> > hotmail.co.jp
> > math.northwestern.edu
> > downloads.comodo.com
> 
> Thanks for the update Jonathan, the article I read didn't mention the
> funding source, but the article wasn't the point of my post.

For something that wasn't the point of your post, it seems to have a
very prominent position therein.

> Bottom line, why strip out of view the only browser mechanism that
> identifies the owner of a website?

Because it doesn't provide any benefit commensurate with the costs.

> Why not force the CA's to improve the EV validation process and create a
> ubiquitous user experiences around EV across ALL browsers so that visitors
> can begin to see the commonality of EV's purpose?

Because there have been no plausible proposals made which meaningfully
improve the EV validation process to address the flaws.

> For the betterment of a safer and more trustworthy Internet, why digress
> from the concept of web identity verification instead of trying to make it
> better?

Because "web identity verification", as embodied in EV, has not been shown
to contribute to "the betterment of a safer and more trustworthy Internet".

- Matt



Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-09-04 Thread Matt Palmer via dev-security-policy
On Wed, Sep 04, 2019 at 03:50:40PM +0200, Kurt Roeckx via dev-security-policy 
wrote:
> On 2019-09-04 14:14, Matt Palmer wrote:
> > If EV information is of use in anti-phishing efforts, then it would be best
> > for the providers of anti-phishing services to team up with CAs to describe
> > the advantages of continuing to provide an EV certificate.  If site owners,
> > who are presumably smart people with significant technical skills making
> > decisions on a rational basis, don't see the benefits (after a little
> > training), perhaps you should accept their decision, even if you disagree
> > with them or have a different commercial interest.
> 
> So I think what you're saying is that sites with EV will still be more
> trusted, but the browser isn't really aware of it.

I trust the security of sites that I manage more than I trust rando sites on
the Internet, but the browser isn't really aware of it either.  Did you have
a useful point to make?

- Matt



Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-09-04 Thread Matt Palmer via dev-security-policy
On Tue, Sep 03, 2019 at 06:16:23PM -0700, Kirk Hall via dev-security-policy 
wrote:
> However, I did receive authority to post the following statement from
> someone who works for a major browser phishing filter (but without
> disclosing the person's name or company).  Here is the authorized
> statement:
> 
> “A browser phishing filter representative has confirmed that (1) their
> research teams do look at EV certificate attributes and do feel there
> is signal there for phish/malware detection, and (2) they would like
> to have continued access to this EV data.”
> 
> I think this establishes the point I made last week – that EV data is
> valuable for anti-phishing efforts and so EV should be supported by the
> browsers.

I think you're overstating the case somewhat.  The statement you quoted
establishes that EV data is *used* for anti-phishing efforts.  It certainly
says nothing in support of the assertion that EV should be supported by
browsers.  It also doesn't address the concerns that Ryan put forward
regarding the advisability of using EV data for anti-phishing.

> I’m still concerned that removing the EV UI in Firefox could cause some EV
> sites to stop using EV certificates which in turn would eliminate the
> availability of their EV website data from the security ecosystem.  This
> possible adverse outcome should be considered by Mozilla before it removes
> its EV UI.

Mozilla should do what is best for the users of Mozilla products[1].  Asking
Mozilla to carry a feature in Firefox that is of zero-to-negative value to
Firefox users, so as to provide benefits to anti-phishing systems, is as
nonsensical as asking Mozilla to do the same purely to provide revenue
benefits to CAs.

If EV information is of use in anti-phishing efforts, then it would be best
for the providers of anti-phishing services to team up with CAs to describe
the advantages of continuing to provide an EV certificate.  If site owners,
who are presumably smart people with significant technical skills making
decisions on a rational basis, don't see the benefits (after a little
training), perhaps you should accept their decision, even if you disagree
with them or have a different commercial interest.

- Matt

[1] within the context of the use of Mozilla products, at any rate.  I'm
sure it would be best for the users of Mozilla products if everyone
using Firefox got a million dollars and a pony, but I hope nobody's
going to start agitating for Mozilla to get into the equine distribution
game.



Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-29 Thread Matt Palmer via dev-security-policy
On Thu, Aug 29, 2019 at 02:14:10PM -0700, Kirk Hall via dev-security-policy 
wrote:
> For EV certificates, the appeal for website owners over the past 10 years
> has been that they get a distinctive EV UI that they believe protects
> their consumers and their brands (again, don't argue with me but argue
> with them - but these website owners include many of the top enterprises
> and brands, so presumably they are smart people too with significant
> technical skills and have made their own EV decisions on a rational basis,
> even if you disagree with them or have a different commercial interest in
> supporting or not supporting website identity).

On the contrary, I think it's perfectly reasonable to discuss this stance
with representatives of CAs, as it is, in my estimation, primarily the
marketing activities of CAs which has created this belief in website owners.

- Matt



Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-28 Thread Matt Palmer via dev-security-policy
On Wed, Aug 28, 2019 at 11:51:37AM -0700, Josef Schneider via 
dev-security-policy wrote:
> Am Dienstag, 27. August 2019 00:48:38 UTC+2 schrieb Matt Palmer:
> > On Mon, Aug 26, 2019 at 05:39:14AM -0700, Josef Schneider via 
> > dev-security-policy wrote:
> > > Sure I can register a company and get an EV certificate for that company. 
> > > But can I do this completely anonymous like getting a DV cert?
> > 
> > Yes.
> 
> Not legally probably

Someone planning to commit fraud is unlikely to be deterred by the need to
commit fraud in order to commit fraud.

> and this also depends on the jurisdiction.  Since an
> EV cert shows the jurisdiction, a user can draw conclusions from that.

You're suggesting that Relying Parties need to familiarise themselves with
the validation procedures of every jurisdiction which is listed in an EV
certificate they are presented with, in order to establish the
trustworthiness of that EV certificate?

I'm just going to leave that there.  For posterity.

> > > Nobody is arguing that EV certificates are perfect and everything is good
> > > if you use them.  But they do raise the bar for criminals.  And in my
> > > opinion, significantly.
> > 
> > Except criminals don't need them.  Raising the bar doesn't help if you don't
> > need to go over the bar.
> 
> But removing the bar is also not the correct solution.  If you find out
> that the back door to your house is not secured properly, will you remove
> the front door because it doesn't matter anyway or do you strengthen the
> back door?

The problem with your analogy is that, in the case under discussion, there
is no known way to secure the back door, and it's the broken and unfixable
back door, not the front door, that is being removed.

So yes, if my back door was insecure, and the best information available
indicated that it couldn't be secured, and it was costing me time and money
to maintain in its current, insecure, state, I would absolutely remove it. 
I expect you would, too.  Although I can certainly understand that if you
were making money by allowing people to use my broken back door, you might
want to encourage me not to remove it.

> > > What I propose is for mozilla to not say "Fuck it, it's not working, just
> > > remove it!" but instead try to focus on finding a better UX solution to
> > > the problem that end users are not aware if a site that should have an EV
> > > certificate is not presenting one.
> > 
> > Why should Mozilla do all this work?  So far, all the evidence suggests that
> > EV certs do not do what their advocates say they do, and have a significant
> > cost to browsers (code complexity, administration of EV bits, etc) and
> > relying parties (need to learn what the EV UI means, what it does and
> > doesn't claim, etc).
> 
> Why should Mozilla do work to make the situation worse?  The current EV
> validation information in the URL works and is helpful to some users
> (maybe only a small percentage of users, but still...).  Why is mozilla
> interested in spending money making the situation worse.  If mozilla
> doesn't care about the empowerment of their users, the default would be to
> not change anything, not actively making it worse.

Not being Mozilla, I wouldn't presume to speak for them, but two
possibilities leap immediately to mind:

* It costs time and money to maintain the list of trust anchors approved for
  EV treatment -- OID mappings, evaluating EV sections of CP/CPSes, chasing
  audit reports, dealing with incident reports relating to EV validation
  failures, and discussing and evaluating proposed changes to the EVGLs.

* EV-related code in Mozilla software requires maintenance as other changes
  in surrounding code are made.  Less code == fewer things to change, so
  gutting the EV support reduces maintenance costs.

> EV certificates do make more assurances about the certificate owner than
> DV certificates.  This is a fact.  This information can be very useful for
> someone that understands what it means.  Probably most users don't
> understand what it means.  But why not improve the display of this
> valuable information instead of hiding it?

Because there is no indication of what an improved EV UI would look like.

I note that you've neglected to answer the question I posed.  If CAs sat
down and did some research into what an actual, useful EV UI would involve,
then Mozilla would have something to work from.  But it would appear that
CAs -- the organisations, I'll reiterate, that benefit financially from the
continued special UI treatment of EV certificates -- are not interested in
making such a contribution.

- Matt



Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-26 Thread Matt Palmer via dev-security-policy
On Mon, Aug 26, 2019 at 05:39:14AM -0700, Josef Schneider via 
dev-security-policy wrote:
> Sure I can register a company and get an EV certificate for that company. 
> But can I do this completely anonymous like getting a DV cert?

Yes.

> Nobody is arguing that EV certificates are perfect and everything is good
> if you use them.  But they do raise the bar for criminals.  And in my
> opinion, significantly.

Except criminals don't need them.  Raising the bar doesn't help if you don't
need to go over the bar.

> What I propose is for mozilla to not say "Fuck it, it's not working, just
> remove it!" but instead try to focus on finding a better UX solution to
> the problem that end users are not aware if a site that should have an EV
> certificate is not presenting one.

Why should Mozilla do all this work?  So far, all the evidence suggests that
EV certs do not do what their advocates say they do, and have a significant
cost to browsers (code complexity, administration of EV bits, etc) and
relying parties (need to learn what the EV UI means, what it does and
doesn't claim, etc).

Instead of Mozilla continuing to take on the burden of keeping this ship
afloat, why don't the parties that benefit from selling EV certs (ie CAs) do
the hard yards to figure out what works, in a rigorous and scientific way,
and then present the results of that research to the wider community?

- Matt



Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-18 Thread Matt Palmer via dev-security-policy
On Sun, Aug 18, 2019 at 09:14:52AM +0200, Paul van Brouwershaven wrote:
> On Sun, 18 Aug 2019, 07:18 Matt Palmer via dev-security-policy, <
> dev-security-policy@lists.mozilla.org> wrote:
> > On Thu, Aug 15, 2019 at 05:58:56PM +, Doug Beattie via
> > dev-security-policy wrote:
> > > Shouldn’t the large enterprises that see a value in identity (as
> > > does GlobalSign) drive the need for ending EV certificates?
> >
> > Can you point me to the in-progress discussion in the CA/B Forum lists
> > that is proposing to end EV certificates?  From what I can see so far,
> > browser vendors aren't "ending" EV certificates, a couple of them are
> > merely
> > modifying their UIs guided by relevant research into the efficacy (or lack
> > thereof) of the current UI.
> 
> What evidence or research shows that the new location is providing better
> protection for the end users?

I don't think it requires rigorous research to show that 0 >= 0.

- Matt



Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-18 Thread Matt Palmer via dev-security-policy
On Sun, Aug 18, 2019 at 01:35:55PM -0700, Daniel Marschall via 
dev-security-policy wrote:
> Am Sonntag, 18. August 2019 07:18:56 UTC+2 schrieb Matt Palmer:
> > [...] From what I can see so far,
> > browser vendors aren't "ending" EV certificates, a couple of them are merely
> > modifying their UIs guided by relevant research into the efficacy (or lack
> > thereof) of the current UI.
> 
> Matt, I don't understand this.  Isn't removing the UI bling the same as
> "removing" EV from the browser?

Yes, but removing EV from the browser isn't the same as ending EV
certificates, which is what was claimed in the message I responded to.

> I guess that EV will eventually ended by the Customers/CAs.

We'll have to leave it to the invisible hand of the market to sort that out. 
If CAs cease issuing EV TLS/SSL certificates, it will presumably be because
customers are no longer buying them, and customers will cease buying them if
there is no perceived value in them, which is what CAs have repeatedly said
isn't the case.  So CAs ceasing to issue EV TLS/SSL certificates will be a
confirmation that, in fact, EV TLS/SSL certificates had no value beyond the
UI "bling", as you call it, which the research overwhelmingly indicates is
of trivial value.

> I just looked at Opera and noticed that they don't have any UI difference
> at all, which means I have to open the X.509 certificate to see if it is
> EV or not.

So that's one more browser vendor that sees no value in "UI bling" for EV
certificates.  It almost makes Firefox and Chrome look like the laggards in
this decision, rather than the harbingers of a new era.

- Matt



Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-17 Thread Matt Palmer via dev-security-policy
On Fri, Aug 16, 2019 at 12:42:35PM -0700, tim--- via dev-security-policy wrote:
> That’s where EV certificates can help.  Data shows that websites with EV
> certificates have a very low incidence of phishing.

[...]

> This research validates the results of an earlier study of 3,494 encrypted
> phishing sites in February 2019 [5].  In this study the distribution of
> encrypted phishing sites by certificate type was as follows:
> 
> EV0 phishing sites (0%)

If you replace "EV" in the above with "WombleSecure(TM)(PatPend) security
seal", it is equally as true, and equally irrelevant.  It's the old "tiger
repelling rock" spiel ("Do you see any tigers around?  See, it works
great!") with a splash of X.509 for flavour.

It is not the hardest problem in science to design and execute an experiment
to demonstrate EV's efficacy.  At the most basic level, it could be "here is
a site that was receiving X reports of users being phished per month, they
deployed an EV cert and their report rate went down to Y per month, here are
the confounding factors we considered and here's why they weren't the
cause".  Increase the number of sites to improve power as needed.
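
The before/after comparison described above can be sketched as a toy
calculation.  This is a minimal illustration, not any CA's actual
methodology: a crude normal approximation to a two-sample Poisson rate
comparison, with all figures invented and `rate_test` a hypothetical helper.

```python
# Toy sketch of the before/after comparison described above: X phishing
# reports per month before deploying an EV certificate, Y per month after.
# All figures are invented for illustration, and this is only a crude
# normal approximation to a two-sample Poisson rate test -- a real study
# would also have to rule out the confounding factors mentioned above.
import math

def rate_test(count_before: int, months_before: float,
              count_after: int, months_after: float) -> float:
    """Approximate z-statistic for H0: the two report rates are equal."""
    rate_before = count_before / months_before
    rate_after = count_after / months_after
    # Variance of a Poisson rate estimate is count / exposure^2.
    se = math.sqrt(count_before / months_before ** 2
                   + count_after / months_after ** 2)
    return (rate_before - rate_after) / se

# e.g. 120 reports in the 12 months before, 80 in the 12 months after
z = rate_test(120, 12, 80, 12)
print(round(z, 2))  # roughly 2.83: suggestive, but confounders untested
```

Increasing the number of sites, as suggested, feeds into the same arithmetic:
more exposure shrinks the standard error and improves power.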

Given that no EV-issuing CA has published the results of such an experiment,
despite the large revenues it would protect and the strong signalling that
browsers have been making over (at least) the last several years, the most plausible
explanation to me is that EV-issuing CAs *have* done the experiments, and
they didn't show anything, so in the finest traditions of
commercially-motivated science, they just buried it.  The other option is
that the management of EV-issuing CAs is just clueless, which is possible, but
not really any more comforting.

- Matt



Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-17 Thread Matt Palmer via dev-security-policy
On Fri, Aug 16, 2019 at 01:37:40PM +, Doug Beattie via dev-security-policy 
wrote:
> DB: Yes, that's true.  I was saying that phishing sites don't use EV, not
> that EV sites don't get phished
> 
> Surely this shows that EV is not needed to make phishing work, not that EV 
> reduces phishing?
> 
> [DB] It should show that users are safer when visiting an EV secured site.

When you have evidence of that, please feel free to share it.  Everything
that has been presented so far doesn't *actually* show that, it merely shows
something else that people then furiously hand-wave into "see, security!".

- Matt



Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-17 Thread Matt Palmer via dev-security-policy
On Thu, Aug 15, 2019 at 05:58:56PM +, Doug Beattie via dev-security-policy 
wrote:
> Shouldn’t the large enterprises that see a value in identity (as
> does GlobalSign) drive the need for ending EV certificates?

Can you point me to the in-progress discussion in the CA/B Forum lists
that is proposing to end EV certificates?  From what I can see so far,
browser vendors aren't "ending" EV certificates, a couple of them are merely
modifying their UIs guided by relevant research into the efficacy (or lack
thereof) of the current UI.

- Matt



Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-17 Thread Matt Palmer via dev-security-policy
On Fri, Aug 16, 2019 at 03:15:39PM -0700, Daniel Marschall via 
dev-security-policy wrote:
> (2) I am a pro EV person, and I do not have any financial benefit from EV
> certificates.  I do not own EV certificates, instead my own websites use
> Let's Encrypt DV certificates.  But when I visit important pages like
> Google or PayPal, I do look at the EV indicator bar, because I know that
> these pages always have an EV certificate.

This would be a stronger argument if any Google property had an EV certificate,
or if paypal.com had been displaying an EV treatment on the most commonly
used Browser/OS combo for the past year.

> different color, that would be OK for me).  We cannot say that all users
> don't care about the EV indicator.  For some users like me, it is
> important.

I'm sure a browser plugin could be developed to provide some sort of
indication that a certificate being presented was EV, and you could install
that if you were sufficiently interested.
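
For what it's worth, the certificate-side check such a plugin would need is
not complicated: EV certificates generally assert the CA/Browser Forum EV
policy OID (2.23.140.1.1), historically alongside CA-specific EV OIDs.  A
rough sketch using the Python `cryptography` library (the helper name is
mine, and real EV recognition also depends on root-store EV enablement, not
just this OID):

```python
# Rough sketch of the certificate-side EV check: look for the CA/Browser
# Forum EV policy OID (2.23.140.1.1) in the certificatePolicies extension.
# Real EV treatment also requires the issuing root to be EV-enabled in the
# root store; this only shows the certificate-side test.  The self-signed
# certificate below is a throwaway stand-in built purely for demonstration.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import ExtensionOID, NameOID

CABF_EV_OID = x509.ObjectIdentifier("2.23.140.1.1")

def asserts_ev_policy(cert: x509.Certificate) -> bool:
    """True if the certificate carries the CA/B Forum EV policy OID."""
    try:
        policies = cert.extensions.get_extension_for_oid(
            ExtensionOID.CERTIFICATE_POLICIES).value
    except x509.ExtensionNotFound:
        return False
    return any(p.policy_identifier == CABF_EV_OID for p in policies)

# Build a throwaway self-signed certificate that claims the EV policy.
key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.test")])
now = datetime.datetime.now(datetime.timezone.utc)
cert = (x509.CertificateBuilder()
        .subject_name(name).issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=1))
        .add_extension(x509.CertificatePolicies(
            [x509.PolicyInformation(CABF_EV_OID, None)]), critical=False)
        .sign(key, hashes.SHA256()))

print(asserts_ev_policy(cert))
```

As the self-signed example itself demonstrates, anyone can put that OID in a
certificate; it is only meaningful when chained to an EV-enabled root, which
is exactly why such a plugin could not rely on the OID alone.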

- Matt


