Re: DigiCert issued certificate with Let's Encrypt's public key
On May 18, 2020, at 23:58, Peter Gutmann via dev-security-policy wrote:

> This isn't snark, it's a genuine question: If the CA isn't checking that the
> entity they're certifying controls the key they're certifying, aren't they
> then not acting as CAs any more?

They are really only certifying that the requester can control the DNS for the domain name mentioned in the certificate anyway. The same function DNSSEC provides without middle men :)

Paul
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
Re: Use of Certificate/Public Key Pinning
On Mon, 12 Aug 2019, Nuno Ponte via dev-security-policy wrote:

> Recently, we (Multicert) had to rollout a general certificate replacement
> due to the serial number entropy issue. Some of the most troubled cases to
> replace the certificates were customers doing certificate pinning on mobile
> apps. Changing the certificate in these cases required configuration
> changes in the code base, rebuild app, QA testing, submission to App
> stores, call for expedited review of each App store, wait for review to be
> completed and only then the new app version is made available for
> installation by end users (which in turn are required to update the app the
> soonest). Meeting the 5-days deadline with this sort of process is
> “challenging”, at best.

The OS and/or App should look at Certificate Transparency, instead of hacks that hardcode the certificate serial number.

Paul
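For apps that must pin something, one sketch of a less brittle approach than hard-coding the serial number (the helper name and DER bytes below are made up for illustration) is to pin the SubjectPublicKeyInfo hash, which survives a reissue that keeps the same key pair:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """HPKP-style pin: base64(SHA-256(SubjectPublicKeyInfo))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

# A certificate reissued to fix serial-number entropy can reuse the same
# key pair, so an SPKI pin stays stable while the serial number changes.
spki = b"placeholder-spki-der"   # stand-in for real DER-encoded SPKI bytes
print(spki_pin(spki))
```

Key pinning still breaks on key rotation, of course, which is part of why monitoring CT for unexpected issuance scales better than baking anything into the app at all.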
Re: Intent to Ship: Move Extended Validation Information out of the URL bar
> On Aug 12, 2019, at 14:30, Wayne Thayer via dev-security-policy wrote:
>
> Mozilla has announced that we plan to relocate the EV UI in Firefox 70,
> which is expected to be released on 22-October. Details below.

Relocate seems the wrong word here. You are basically removing it. A few geeks will be able to find it at the new location, and I wonder why Firefox even bothered to leave it there.

Paul
Re: DarkMatter Concerns
On Tue, 26 Feb 2019, Rob Stradling via dev-security-policy wrote:

> Hi Scott. It seems that the m.d.s.p list server stripped the attachment,
> but (for the benefit of everyone reading this) I note that you've also
> attached it to https://bugzilla.mozilla.org/show_bug.cgi?id=1427262.
> Direct link:
> https://bug1427262.bmoattachments.org/attachment.cgi?id=9046699

Thanks for sending the link. The letter is, uhm, interesting. It both states they cannot say anything for national security reasons, says they unconditionally comply with national security (implying even if that violates any BRs), and claims transparency for using CT, which is in fact forced on them by browser vendors.

"quests to protect their nations" definitely does not exclude "issuing BR violating improper certificates to ensnare enemies of a particular nation state".

Now of course, I don't think this is very different from US based companies that are forced to do the same by their governments, which is why DNSSEC TLSA can be trusted more (and monitored better) than a collection of 500+ CAs from all main nation states that are known for offensive cyber capabilities. But you can ignore this as off-topic :)

Paul
Re: [FORGED] Re: Incident report - Misissuance of CISCO VPN server certificates by Microsec
On Thu, 6 Dec 2018, Peter Gutmann via dev-security-policy wrote:

> Paul Wouters via dev-security-policy writes:
>
>> Usually X509 is validated using standard libraries that only think of the
>> TLS usage. So most certificates for VPN usage still add EKUs like
>> serverAuth or clientAuth, or there will be interop problems.
>
> So just to make sure I've got this right, implementations are needing to
> add dummy TLS EKUs to non-TLS certs in order for them to "work"?

Your understanding is correct.

> In that case why not add a signalling EKU or policy value, a bit like
> Microsoft's systemHealthLoophole EKU (I don't know what its official name
> is, 1 3 6 1 4 1 311 47 1 3) where the normal systemHealth key usage is
> meant to indicate compliance with a system or corporate security policy and
> the systemHealthLoophole key usage is for systems that don't comply with
> the policy but that need a systemHealth certificate anyway.

I'm not sure how that is helpful for those crypto libraries which mistakenly believe a certificate is a TLS certificate and thus, if the EKU is not empty, that it should have serverAuth or clientAuth.

> Better to define a new EKU, "tlsCompatibility", telling the relying party
> that the TLS EKUs are present for compatibility purposes and can be
> ignored if it's a non-TLS use.

As I stated earlier, since these certificates are often re-used for the VPN server's TLS (openvpn, openconnect, etc.) protocols or for their webgui's provisioning API, they will most likely want serverAuth anyway.

Btw, for NSS this got fixed recently: https://bugzilla.mozilla.org/show_bug.cgi?id=1252891

Paul
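The lenient acceptance rule being discussed can be sketched roughly like this (the function and its exact policy are my own illustration, not NSS's actual code; the OIDs are the registered ones from RFC 5280 and RFC 4945):

```python
# EKU OIDs from RFC 5280 / RFC 4945
SERVER_AUTH = "1.3.6.1.5.5.7.3.1"   # id-kp-serverAuth
CLIENT_AUTH = "1.3.6.1.5.5.7.3.2"   # id-kp-clientAuth
IPSEC_IKE = "1.3.6.1.5.5.7.3.17"    # id-kp-ipsecIKE

def eku_ok_for_ipsec(ekus):
    """An absent EKU extension (None) leaves the key unrestricted; a
    non-empty EKU is accepted if it names the IPsec IKE purpose or one
    of the TLS purposes that VPN certs carry for interop reasons."""
    if ekus is None:
        return True
    return bool({IPSEC_IKE, SERVER_AUTH, CLIENT_AUTH} & set(ekus))

print(eku_ok_for_ipsec(None))                   # True
print(eku_ok_for_ipsec([IPSEC_IKE]))            # True
print(eku_ok_for_ipsec(["1.3.6.1.5.5.7.3.4"]))  # False: emailProtection only
```

The last case is the one that used to fail in practice: a non-empty EKU that lacks the TLS purposes gets rejected by TLS-centric validators.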
Re: Incident report - Misissuance of CISCO VPN server certificates by Microsec
> On Dec 5, 2018, at 16:49, Jakob Bohm via dev-security-policy wrote:
>
> Another question of relevance:
>
> Does the applicable VPN hardware and software (Cisco VPN servers and
> compatible VPN clients) work with certificates that omit all the TLS-
> related EKUs, thus allowing future VPN certificates to fall outside the
> BRs ?

Speaking as an IKE/IPsec implementer: usually X509 is validated using standard libraries that only think of the TLS usage. So most certificates for VPN usage still add EKUs like serverAuth or clientAuth, or there will be interop problems. Our implementation uses NSS, which only weeks ago implemented IPsec profiles that cause non-empty EKUs missing serverAuth and clientAuth to validate correctly for IPsec.

In other words, "no" is the answer to your question for the generic case. If Cisco VPN servers only need to talk to Cisco VPN clients, then maybe their implementation could do its own thing.

Another issue is that some provisioning webguis for IPsec use the VPN gateway's TLS server, usually with the same certificate. Especially if also supporting other VPN protocols such as openvpn or anyconnect/openconnect. So those would really need serverAuth.

Paul
Re: Mitigating DNS fragmentation attacks
On Oct 14, 2018, at 21:09, jsha--- via dev-security-policy wrote:

> There’s a paper from 2013 outlining a fragmentation attack on DNS that
> allows an off-path attacker to poison certain DNS results using IP
> fragmentation[1]. I’ve been thinking about mitigation techniques and I’m
> interested in hearing what this group thinks.

The mitigation is DNSSEC. Ensure your data is cryptographically protected.

Paul
Re: A vision of an entirely different WebPKI of the future...
On Thu, 16 Aug 2018, Matthew Hardeman via dev-security-policy wrote:

> 1. Run one or more root CAs

Why would people not in the business of being a CA do a better job than those currently in the CA business?

> I recognize it's a radical departure from what is. I'm interested in
> understanding if anything proposed here is impossible. If what's proposed
> here CAN happen, AND IF we are confident that valid certificates for a
> domain label should unambiguously align to domain control, isn't this the
> ultimate solution?

If you want a radical change that makes it simpler, start doing TLSA in DNSSEC and skip the middle man that issues certs based on DNS records.

Paul
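For the curious, the TLSA record in question is easy to generate; a sketch for a DANE-EE record (usage 3, selector 1 = SPKI, matching type 1 = SHA-256, per RFC 6698; the input bytes are placeholders):

```python
import hashlib

def tlsa_3_1_1(spki_der: bytes) -> str:
    """Build the RDATA for a '3 1 1' TLSA record: DANE-EE, matching the
    SHA-256 digest of the server's SubjectPublicKeyInfo."""
    return "3 1 1 " + hashlib.sha256(spki_der).hexdigest()

# Published in the (DNSSEC-signed) zone as, e.g.:
#   _443._tcp.www.example.com. IN TLSA 3 1 1 <hex digest>
print(tlsa_3_1_1(b"placeholder-spki-der"))
```

With usage 3 the client matches the server's key directly against the signed DNS record, so no external CA is consulted at all — that is the "skip the middle man" part.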
Re: 2018.05.18 Let's Encrypt CAA tag value case sensitivity incident
On Tue, 22 May 2018, Ryan Sleevi via dev-security-policy wrote:

> However, what does this buy us? Considering that the ZSKs are intentionally
> designed to be frequently rotated (24 - 72 hours), thus permitting weaker
> key sizes (RSA-512),

I don't know anyone who believes or uses these timings or key sizes. It might be done as an _attack_, but it would be a very questionable deployment. I know of 12,400 512-bit RSA ZSKs out of a total of about 6.5 million, and I consider those to be an operational mistake.

> However, let us not pretend that recording the bytes-on-the-wire DNS
> responses, including for DNSSEC, necessarily helps us achieve some goal
> about repudiation. Rather, it helps us identify issues such as what LE
> highlighted - a need for quick and efficient information scanning to
> discover possible impact - which is hugely valuable in its own right, and
> is an area where I am certain that a majority of CAs are woefully lagging
> in. That LE recorded this at all, beyond simply "checked DNS", is more of
> a credit than a disservice, and a mitigating factor more than malfeasance.

I see no reason not to log the entire chain to the root. The only exception being maliciously long chains, which you can easily cap and error out on after following about 50 DS records.

Paul
RE: "multiple perspective validations" - AW: Regional BGP hijack of Amazon DNS infrastructure
On Mon, 30 Apr 2018, Tim Hollebeek wrote:

> What about the cases we discussed where there is DNSSEC, but only for a
> subtree?

I don't know what that means. You mean a trust island not chained to the root? If so, then yes, that is a zone without DNSSEC, since it is missing a DS in its parent (or grandparent, etc.)

But again, using a proper validating DNS server will handle all that for you.

Paul
RE: "multiple perspective validations" - AW: Regional BGP hijack of Amazon DNS infrastructure
On Mon, 30 Apr 2018, Tim Hollebeek via dev-security-policy wrote:

>> I don't think this opinion is in conflict with the suggestion that we
>> required DNSSEC validation on CAA records when (however rarely) it is
>> deployed. I added this as https://github.com/mozilla/pkipolicy/issues/133
>
> One of the things that could help quite a bit is to only require DNSSEC
> validation when DNSSEC is deployed CORRECTLY, as opposed to some partial
> or broken deployment. It's generally broken or incomplete DNSSEC
> deployments that cause all the problems. Getting the rules for this right
> might be complicated, though.

It's also wrong. You can't soft-fail on that, and you don't want to be in the business of trying to figure out what is a sysadmin failure and what is an actual attack. The only somewhat valid soft-fail could come from recently expired RRSIGs, but validating DNS resolvers like unbound already build in a margin of a few hours, and I think you should not do anything special during CAA verification other than using a validating resolver.

Paul
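The hard-fail-with-margin idea can be sketched as follows (the 4-hour margin is an illustrative assumption, not any resolver's actual default):

```python
from datetime import datetime, timedelta, timezone

SKEW = timedelta(hours=4)  # assumed leniency window, for illustration only

def rrsig_time_ok(inception, expiration, now):
    """Accept a signature whose validity window, widened by a small skew
    margin, covers 'now'; anything outside that is a hard failure, not
    something to second-guess as a possible sysadmin mistake."""
    return inception - SKEW <= now <= expiration + SKEW

now = datetime(2018, 4, 30, 12, 0, tzinfo=timezone.utc)
sig_start = now - timedelta(days=7)
sig_end = now - timedelta(hours=2)    # expired 2h ago: inside the margin
print(rrsig_time_ok(sig_start, sig_end, now))   # True
sig_end2 = now - timedelta(days=1)    # expired a day ago: hard fail
print(rrsig_time_ok(sig_start, sig_end2, now))  # False
```

The point of keeping this inside the resolver is exactly the one made above: the CA's CAA check then needs no DNSSEC-specific logic of its own.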
Re: "multiple perspective validations" - AW: Regional BGP hijack of Amazon DNS infrastructure
On Wed, 25 Apr 2018, Ryan Hurst via dev-security-policy wrote:

> Multiple perspectives is useful when relying on any insecure third-party
> resource; for example DNS or Whois. This is different than requiring
> multiple validations of different types; an attacker that is able to
> manipulate the DNS validation at the IP layer is also likely going to be
> able to do the same for HTTP and Whois.

Which is why in the near future we can hopefully use RDAP over TLS (RFC 7481) instead of WHOIS, and of course, since the near past, DNSSEC :)

I'm not sure how useful it would be to have multiple network points for ACME testing - it will just lead to the attackers doing more than one BGP hijack at once. In the end, that's a numbers game with a bunch of race conditions. But hey, it might lead to actual BGP security getting deployed :)

Paul
Re: On the value of EV
On Mon, 11 Dec 2017, Ryan Hurst via dev-security-policy wrote:

> The issues with EV are much larger than UI. It needs to be revisited, and
> an honest and achievable set of goals need to be established, and the
> processes and procedures used pre-issuance and post-issuance need to be
> defined in support of those goals. Until that's been done I can not
> imagine any browser would invest in new UI and education of users for this
> capability.

While I agree that EV does not solve world peace, can you tell me what is wrong with the firefox approach of showing EV? That is, browsers hiding the real hostname with EV seems to be wrong behaviour, and should be fixed. This seems unrelated to other noble goals of giving users improved security. It seems you are conflating many things, and then saying it is too much work so let's just scrap it.

So far, I see reason for some browsers to fix their UI. I can see reasons for EV to improve. I see no reason to further confuse users by removing EV without a successor.

Paul
Re: On the value of EV
On Mon, 11 Dec 2017, James Burton via dev-security-policy wrote:

> EV is on borrowed time

You don't explain why. I mean, domain names can be confusing or malicious too. Are domain names on borrowed time?

If you remove EV, how will the users react when paypal or their bank is suddenly no longer "green"? Are we going to teach them again that padlocks and green security come and go and to ignore it?

Why is your cure (remove EV) better than fixing the UI parts of EV?

Paul
RE: Anomalous Certificate Issuances based on historic CAA records
On Thu, 30 Nov 2017, Tim Hollebeek via dev-security-policy wrote:

[somewhat off topic, you can safely hit delete now]

> So it turns out DNSSEC solves CAA problems for almost nobody, because
> almost nobody uses DNSSEC.

The only people who need to use CAA are the CAs. They can surely manage to fire up a validating DNS resolver. I'm sure there are more BRs that "almost nobody uses" because there is no need for anyone but CAs to use them.

If you talk about domain holders not being able to run DNSSEC, that's a pretty lame excuse too, when we have many Registrars and Hosters who run millions of DNSSEC secured zones. I feel this argument is similar to "hosting your own email service is too hard". If it is, there are excellent commercial alternatives available.

> And given the serious flaws both in DNSSEC itself and existing DNSSEC
> implementations

For one, I'm not aware of "serious flaws in DNSSEC". As for wanting something to die because of bad implementations, can I suggest starting with ASN.1 and X.509, then move to crypto primitives and TLS? :)

> The presence of DNSSEC in the BR policy for handling DNS failures, in
> hindsight, was probably a mistake, and

Trusting unauthenticated data from the network should really be a no-op, from a principle point of view. Making any security decisions based on "some blob from the network that anyone could have modified without detection" is just madness.

> Right now, the only thing it is really accomplishing is preventing
> certificate issuance to customers whose DNS infrastructure is flaky,
> misconfigured, or unreliable.

Seems like the kind of people who should be given a certificate award for excellence :P

> Longer term, DNS over HTTPS is probably a more useful path forward than
> DNSSEC for CAA, but unfortunately that is still in its infancy.

Not really, because that only offers transport security and not data integrity. A compromised nameserver should not be able to fake the lack of a CAA record for a domain that uses secure offline DNSSEC signing.

> The problem DNSSEC checks for CAA was intended to solve was the fact that
> it is certainly possible that a well-resourced attacker can manipulate the
> DNS responses that the CA sees as part of its CAA checks. A better
> mitigation, perhaps, is for multiple parties to publicly attest in a
> verifiable way as to what the state of DNS was at/near the time of
> issuance with respect to the relevant CAA records.

Then why not simply cut out the DNS middle man and give domains another way to advertise this information? What about RDAP? What about an EPP "CA lock", similar to a "Registrar lock"?

> Of course, to avoid some of the extremely interesting experiences the
> industry has had with CAA

Maybe people should use proper DNS libraries and not invent their own CNAME / DNAME handling :)

Paul
Re: Anomalous Certificate Issuances based on historic CAA records
On Thu, 30 Nov 2017, Wayne Thayer wrote:

[cut CC: list, assuming we're all on the list]

> - Subscribers already (or soon will) have CT logs and monitors available
> to detect mis-issued certs. They don't need CAA Transparency.

It's not for subscribers, but for CAs. Transparency is nice, but it does not _prevent_ misissuance. The goal of CAA is to prevent misissuance.

We don't need a CAA Transparency log, because the only thing that needs logging is the DNSSEC chain of the CAA record, or lack thereof, at the time of issuance. And only the issuing CA needs this information, in case they need to defend that there was no CAA record preventing them from issuing at the time. Of course, you could still stuff it in some transparency log if you want, but it is pretty useless for end users.

Paul
Re: Anomalous Certificate Issuances based on historic CAA records
> On Nov 29, 2017, at 17:00, Ben Laurie via dev-security-policy wrote:
>
> This whole conversation makes me wonder if CAA Transparency should be a
> thing.

That is a very hard problem, especially for non-DNSSEC signed ones.

Paul
Re: Incident Report – Certificates issued without proper domain validation
On Wed, 11 Jan 2017, Patrick Figel wrote:

> On 11/01/2017 04:08, Ryan Sleevi wrote:
>
>> Could you speak further to how GoDaddy has resolved this problem? My hope
>> is that it doesn't involve "Only look for 200 responses" =)
>
> In case anyone is wondering why this is problematic, during the Ballot 169
> review process, Peter Bowen ran a check against the top 10,000 Alexa
> domains and noted that more than 400 sites returned a HTTP 200 response
> for a request to
> http://www.$DOMAIN/.well-known/pki-validation/4c079484040e32529577b6a5aade31c5af6fe0c7
> [1]. A number of those included the URL in the response body, which would
> presumably be good enough for GoDaddy's domain validation process if they
> indeed only check for a HTTP 200 response.
>
> [1]: https://cabforum.org/pipermail/public/2016-April/007506.html

Are you saying that for an unknown amount of time (years?) someone could have faked the domain validation check, and once it was publicly pointed out so everyone could do this, it took one registrar 10 months to fix, during which 8800 domains could have been falsely obtained and been used in targeted attacks?

Have other registrars made any statement on whether they were or were not vulnerable to this attack?

Is there a way to find out if this has actually happened for any domain? I would expect this would show up as "validated" certificates that were logged in CT but that were never deployed on the real public TLS servers. Is anyone monitoring that? I assume that the "big players", who do self-monitoring, were not affected? *crosses fingers*

Paul
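The gap between the broken and the intended check is small; a hedged sketch (function names are mine, and the actual Ballot 169 rules have more detail than this):

```python
def naive_check(status: int, body: str, token: str) -> bool:
    # The broken approach: any HTTP 200 passes, even a generic error
    # page that merely echoes the requested URL (token included) back.
    return status == 200

def strict_check(status: int, body: str, token: str) -> bool:
    # Require the agreed random value as the actual payload, so a site
    # that reflects the request URL into an error page does not pass.
    return status == 200 and body.strip() == token

token = "4c079484040e32529577b6a5aade31c5af6fe0c7"
echo_page = "Not found: /.well-known/pki-validation/" + token
print(naive_check(200, echo_page, token))      # True -> would misissue
print(strict_check(200, echo_page, token))     # False
print(strict_check(200, token + "\n", token))  # True
```

The echoed-URL case is exactly the one Peter Bowen's scan found on 400+ of the top 10,000 sites.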
Re: Reuse of serial numbers
On Tue, 6 Sep 2016, Kyle Hamilton wrote:

>> That seems unlikely to me (in that browsers don't really keep a server
>> cert database). Has that changed?
>
> I talked with Dan Veditz (at Mozilla) around 5 years ago regarding the
> fact that NSS had told me of duplicate serial numbers being issued by a
> single issuer, and that as a result Firefox had refused to permit me to
> connect to a site and also refused to allow me to examine the certificate
> or identify its issuer for myself. I had to use OpenSSL to get it. His
> action item at the time was to increase reportability of those issues to
> Mozilla, because (paraphrased from his words) "a CA issuing duplicate
> serial numbers is a violation of all of the specifications and we need to
> know about it, to figure out what else they're doing wrong".

I recently ran into this when NSS rejected an IPsec client certificate after a libreswan IPsec software upgrade. The upgrade replaced openswan, which used custom X.509 code, did not use NSS, and did accept the certificate with a duplicate serial number. For IPsec, a separate non-system NSS store is used, so I don't know how browsers handle this, but the NSS code is there to reject it _if_ it encounters this.

Paul
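The rule NSS is enforcing here is RFC 5280's requirement that a serial number be unique per issuer; a quick sketch of auditing a certificate corpus for violations (the data and its shape are made up for illustration):

```python
from collections import defaultdict

def duplicate_serials(certs):
    """certs: iterable of (issuer, serial) pairs. Returns the pairs that
    appear more than once, i.e. RFC 5280 section 4.1.2.2 violations."""
    counts = defaultdict(int)
    for pair in certs:
        counts[pair] += 1
    return {pair for pair, n in counts.items() if n > 1}

corpus = [
    ("CN=Example CA", 0x01),
    ("CN=Example CA", 0x02),
    ("CN=Example CA", 0x01),   # same issuer, same serial: a violation
    ("CN=Other CA", 0x01),     # same serial under another issuer is fine
]
print(duplicate_serials(corpus))   # {('CN=Example CA', 1)}
```

Note that uniqueness is scoped to the issuer: two different CAs reusing the same serial value is unremarkable.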
Re: Should we block Blue Coat's 'test' intermediate CA?
While not stating an opinion on the question asked, let me note that having an empty CRL could be fine if all their test certificates expire within the hour.

Sent from my iPhone

> On Jun 14, 2016, at 21:02, Mat Caughron wrote:
>
> Adding fuel to the fire that Symantec's handling of the BlueCoat
> certificate warrants client-side blocking, see Milton Smith's blog post
> here:
> http://www.securitycurmudgeon.com/2016/06/blue-coat-intermediate-ca-certificate.html
>
> Mat C.
>
>> On Tuesday, May 31, 2016 at 6:56:11 AM UTC-7, awill...@mozilla.com wrote:
>> http://www.theregister.co.uk/2016/05/27/blue_coat_ca_certs/ reports that
>> Symantec made Blue Coat (who produce MITM-capable security kit) an
>> intermediate CA last year. They claim it's only been used for 'internal
>> testing'. Should we take action or trust them?
Re: Should we block Blue Coat's 'test' intermediate CA?
On Tue, 31 May 2016, doliver...@gmail.com wrote:

> Here is Bluecoat showing off their MITM appliance.
> https://www.bluecoat.com/security-blog/2014-01-02/exploring-encrypted-skype-conversations-clear-text

Note that it only shows a TLS MITM when you install their CA. Which, whether we like it or not, is a valid business model. For instance, it is needed for various financial compliance laws.

And decrypting the TLS only gave the guy encrypted data he still could not read. My guess is Skype just uses TLS as a way to get through middleware, and whatever their encryption scheme is, it is still inside of that.

Paul
Re: StartCom (false) vulnerability report
On Thu, 24 Mar 2016, Peter Kurrasch wrote:

> 3) The claim that CT leads to better security is partially specious. My
> argument here is also one I've made before wherein having an audit trail
> such as CT typically only helps after the fact--only after a problem is
> discovered. We've even had posts in this forum of the variety of "I just
> noticed in the CT logs that such-and-such has done whatever". Clearly, the
> use of CT did not detect such problems so there is a period of time where
> users were less safe. This isn't to say that CT is of no value, rather
> that it's limited.

Note: I'm the IETF co-chair for the CT group (called "trans").

CT is in the early stages. We have logging now, and we need to develop more monitoring and auditing on top of that. The IETF is working on a few items in this area, such as the gossip protocol: https://tools.ietf.org/html/draft-ietf-trans-gossip-02

While CT is not meant to guarantee preventing attacks, it should in the near future be able to shorten the useful lifespan of bogus certificates and, perhaps more importantly, guarantee that anyone using bogus certificates will be caught by the public. So a targeted attack has a hefty price.

Paul
RE: [FORGED] Re: [FORGED] Re: Nation State MITM CA's ?
On Tue, 12 Jan 2016, Peter Gutmann wrote:

>> Or we ensure that firefox and chrome refuse to see those sites at all,
>> because they refuse a downgrade attack.
>
> So users will switch to whatever browser doesn't block it, because given
> the choice between connecting to Facebook insecurely or not connecting at
> all, about, oh, 100% of users will choose to connect anyway.

And they'll grab a firefox/chrome from the free world.

> It'll work out just fine for them, because what you're giving users is a
> choice between using the Internet and not using it

Not really. But let's just leave it at that we disagree.

Paul
Re: Nation State MITM CA's ?
On Thu, 7 Jan 2016, Jakob Bohm wrote:

> It would appear from this information, that this CA (and probably others
> like it) is deliberately serving a dual role:
>
> 1. It is the legitimate trust anchor for some domains that browser users
> will need to access (in this case: Kazakh government sites under gov.kz).
>
> 2. It is the trust anchor for fake MITM certificates used to harm browser
> users, and which should thus be regarded as invalid.
>
> Thus it would be prudent to extend the trust list format (and the NSS
> code using it) to be able to specify additional restrictions beyond those
> specified in the CA root itself.

Much easier would be to not allow a CA cert to engage in such dual roles.

> - Additional path restrictions. Example 1: The Microsoft internal CA
> should only be trusted for a list of Microsoft owned domains.

I'm not sure how that would work in practice. I know it can be done using TLSA DNS records to pin each domain. Having that logic elsewhere seems more dangerous and less transparent.

> Example 2: Many nation state CAs should only be trusted for sites under
> that country's cc-tld, and sometimes only for a subdomain thereunder, such
> as gov.kz.

I have a serious problem with this, because the users are not explicitly agreeing to it, so it is facilitating a MITM no different than SSL middleware boxes. And those are only allowed to be installed manually, with the user's (or enterprise's) consent.

> - Permitted EKU (Extended Key Usage) OIDs. Example: many nation state eID
> CAs should be restricted to "e-mail signing" and "client authentication",
> even if the CA itself suggests a more wide usage.

Enforcing proper EKUs would be good, so that nation state CAs meant for official communication with citizens cannot be abused for MITMing those same citizens.

Paul