Re: Disallowed company name

2018-06-01 Thread Peter Kurrasch via dev-security-policy
Regarding the options listed, I would agree with the first two but disagree with the third.

My characterization of this situation is as follows:

1. Trust is not granted to everyone. This is true in virtual realms as well as the real world. For example, not everyone in this forum trusts me (and perhaps shouldn't).

2. Bad actors will resort to trickery, lies, and deception to get what they want, and sometimes they will be successful despite every effort to stop them.

3. The onus is on CAs to prove that "additional validation" equals "more trustworthy". Their failure to do so will lead to the demise of EV.

Security can be viewed as a series of ANDs that must be satisfied in order to conclude "you are probably secure". For example, when you browse to an important website, make sure that "https" is used AND that the domain name looks right AND that a "lock icon" appears in the UI AND, if the site uses EV certs, that the name of the organization seems correct. Failing any of those, stop immediately; if all of them hold true, you are probably fine. (A rough sketch of this check appears after the quoted message below.)

As the token bad guy in this forum, I have to make all of those steps happen if I expect to trick my victims. If I bother to get an EV cert but the name wildly mismatches my particular objective, there's an increased chance my efforts at deception will fail. If any of those steps are taken away, my job is that much easier.

From: Wayne Thayer via dev-security-policy
Sent: Thursday, May 31, 2018 5:39 PM
...
> In my opinion, this is just a rehash of the same debate we've been having
> over misleading information in certificates ever since James obtained the
> "Identity Verified" EV certificate. The options we have to address this
> seem to be:
> 1. Accept that some entities, based on somewhat arbitrary rules and
> decisions, can't get OV or EV certs
> 2. Accept that the organization information in certificates will sometimes
> be misleading or at least uninformative
> 3. Decide that organization information in certificates is irrelevant and
> ignore it, or get rid of it
> We currently have chosen "some parts of all of the above" :-)
> I am most interested in exploring the first option since that is the
> direction CAs are headed with the recent proposal to limit EV certificates
> to organizations that have existed for more than 18 months [1]. As long as
> anyone can obtain a DV certificate, are restrictions on who can obtain an
> OV or EV certificate a problem, and if so, why?
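A minimal sketch of that series-of-ANDs check, purely for illustration; the inputs stand in for judgments the user (or browser UI) makes, and none of the names come from any real browser API:

    from urllib.parse import urlparse

    def probably_secure(url, expected_domain, lock_icon, is_ev=False,
                        ev_org_name="", expected_org=""):
        # Every condition must hold; failing any one means "stop immediately".
        parsed = urlparse(url)
        return (
            parsed.scheme == "https"                        # "https" is used
            and parsed.hostname == expected_domain          # the domain name looks right
            and lock_icon                                   # a "lock icon" appears in the UI
            and (not is_ev or ev_org_name == expected_org)  # EV: org name seems correct
        )

    # Example: all four conditions hold, so "you are probably fine".
    print(probably_secure("https://www.example.com/login", "www.example.com",
                          lock_icon=True, is_ev=True,
                          ev_org_name="Example Corp", expected_org="Example Corp"))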


CABF Recommendations (was: On the value of EV)

2017-12-17 Thread Peter Kurrasch via dev-security-policy
As the token bad guy in this forum, I can promise you that I will resort to trickery, deception, lies, fraud, and even theft in order to get what I want. It should, perhaps, come as no surprise that those same tactics will surface when applying for an EV cert.

With that in mind, it is in the CAs' own best interest to improve the policies and requirements behind EV issuance. The finance industry has regulations generally known as "Know Your Customer" (KYC) that are intended to stave off such things as money laundering and terrorist financing. While not directly applicable to CAs and EV, KYC nonetheless might serve as a model whereby clients are scrutinized before certain actions are permitted by the CA.

For example, it seems indefensible to me that a CA should issue an EV cert to a company that has no prior history and offers only the thinnest of evidence of its legitimacy, as was documented in the original reports. All CAs must do better in that regard. I don't think it's unreasonable for CAs to have a documented, pre-existing relationship with an EV requester prior to the actual EV issuance.

Further, an EV requester must do more than attest to its existence; it must be able to prove its legitimacy as an organization, institution, individual, and so forth. Such a requester should already have a presence on the Internet and, ideally, be able to demonstrate a level of competency in operating a secure web server. There seems no justification in my mind for a company to go from nonexistent to EV cert holder in 24 hours' time, for instance.

I also would discourage the use of statements such as "EV will prevent phishing attacks", as such claims are misleading. A phishing attack may take many forms, and setting up a fake website is but one of them. Likewise, my reasons for setting up a fake website are many and might have nothing to do with phishing. Instead, I would recommend a more direct approach: "EV certs allow you to associate your company name with your domain names". There is value in that alone.

Again I will state that it's in the best interests of CAs to improve their EV issuing guidelines and practices. While CAs no doubt enjoy charging a premium for EV services, there is no reason for browsers or the security community to recognize any service that is based on vapor. Indeed, the community seems to be saying right now that the status quo is not acceptable. The time for action is now.

From: Tim Hollebeek via dev-security-policy
Sent: Monday, December 11, 2017 1:33 PM

Happy to share the details.

We only had about 10 minutes on the agenda, so the discussion hasn't been too detailed so far (there is still a lot of fallout from CAA that is dominating many validation discussions). There was a general consensus that companies with intentionally misleading names, and companies that are recently created shell companies solely for the purpose of obtaining a certificate, should not be able to get an EV certificate.

Exactly what additional validation or rules might help with that problem, while not unnecessarily burdening legitimate businesses, will require more time and discussion, which is why, if anyone has good ideas, I'd love to hear them.


Re: On the value of EV

2017-12-17 Thread Peter Kurrasch via dev-security-policy
I think we've finally reached the essence of this debate: if there is a chance a security feature will fail, should we abandon that security feature?

When it comes to EV certs and the UI treatments thereof, it seems that Ryan and others at Google, others in this forum, and perhaps the authors of the reports he originally cited are advocating an answer of yes: forget about EV; it is possible to spoof, therefore nobody is safe. There is certainly a seductive purity and ruthless simplicity to such an argument.

The counter-argument being made is no: there is some value to these measures and they should remain. There is value in a human-readable, positive affirmation that I am where I want to be, even if there's a possibility I'm being tricked. Viewed in the context of a layered approach to security, the UI signals are one more layer, and if this one layer should falter, the others might help prevent a bad outcome.

I would add two points. First, we should acknowledge the possibility for users to do all the right things and still end up with a bad result. Sometimes the bad guys are smarter than the good guys. This is especially the case when malware infects a system and that malware is able to assert every positive security indicator while doing its dirty work behind the scenes.

Second, the actual value in EV, as far as I can see, is in having that human-readable name in addition to the domain name. A successful plan of attack will need convincing names for both, which does raise the bar on an attacker. If EV and the UI treatments were to go away, it would simplify the task for some attackers, and that seems undesirable.

From: Ryan Sleevi via dev-security-policy
Sent: Friday, December 15, 2017 4:24 PM

If the signal can be spoofed, it does not actually help keep you safe.


Re: Possible future re-application from WoSign (now WoTrus)

2017-12-01 Thread Peter Kurrasch via dev-security-policy
While it is to the benefit of everyone that Richard Wang and other employees at WoSign/WoTrus have learned valuable lessons over the past year, it seems to me that far too much damage has been done for Mozilla to seriously consider a CA which has Richard in any sort of management position, much less as CEO. I look at the depth and breadth of his deceptive acts, the technical/policy/compliance issues that were present at WoSign and StartCom under his leadership, the defiance of any expectation that CAs should exhibit reasonable levels of transparency and forthrightness, and the amount of time and effort spent in this forum on the myriad WoSign and StartCom issues.

One is left to consider: How much tolerance remains in the community for further mistakes and transgressions that might arise from WoTrus? What incentive does Richard have to be forthcoming in the future, knowing that the community might take harsh action against his company? How much time should WoTrus be allowed to consume, knowing it might unfairly affect the inclusion requests of new CAs, or the addressing of situations that arise at other CAs, or the discussion of ideas for advancing security throughout the global PKI?

When the initial sanction against WoSign and StartCom took place, I think many in this forum would have been content to let both CAs fade away into the land of distrust and ultimate removal. That Mozilla allowed both to remain was, I think, an act of generosity, with the expectation being(?) that, with a change in leadership and a new technology infrastructure, the global PKI would be better off for keeping WoSign/StartCom as trusted CAs. It's not (yet) clear that enough improvements have been made to the infrastructure and, obviously, there has been no change in leadership.

With everything taken together, I just don't see the benefit of including WoTrus in the trusted CA program. The costs to the community have been high--and probably will continue to be high. The risks have been many--and probably will continue to be many. And the benefits would appear to be too few.

From: Danny 吴熠 via dev-security-policy
Sent: Monday, November 27, 2017 2:39 AM

Dear Gerv, Kethleen, other community friends,

First, thanks for Gerv and Kathleen's so kind consideration and so great arrangement for this pre-discussion.
Second, thanks for the community participants to help us know our problem clearly in the past year, we wish you can give us a chance to serve the Internet security.

Here is our response covered your questions that we don't reply the emails one by one.

Part One: What we have done in the past year since the sanction

(1) After we knew the distrust sanction would be started from Oct. 20, 2016, we started to talk to some CAs to deal with the Managed Sub CA solution, and we signed agreement with Certum and started to resell their SSL certificates since Nov. 21, 2016. And we set up second Managed Sub CA from DigiCert since June 30, 2017.

(2) We sent replacement notices to all charged customer and we have replaced more than 6000 certificates for customers for free.

(3) We realized our big problem is the compliance with the Standard, so we set up a department: Risk Control & Compliance Department (RCC), which have 5 persons, the manager is from the bank IT risk control department, he leads team for the risk control management and internal audit. Two English major employees, they are responsible to translate all WebTrust documents and all CAB Forum documents into Chinese to let all employees learn the Standard more clearly. And one is responsible for checking CAB Forum mailing list to produce a weekly brief in Chinese for CAB Forum activity to all department managers, one is responsible for checking Mozilla D.S.P. mailing list to produce a weekly brief in Chinese. And they produce summary report if some CA have accident report to let us learn how to prevent the same mistakes and how to response to the Community. Another two employees are security test, one from PKI/CA RD team, one is from Buy/CMS RD team, they are responsible for the system test and security test to two RD team developed system. And this department setup many internal management regulations, it is the internal auditor to check and verify every CA operation is complaint with the Standard.

(4) We started to develop new PKI/CA system including validation system, OCSP system, CT system an

Re: Possible future re-application from WoSign (now WoTrus)

2017-11-28 Thread Peter Kurrasch via dev-security-policy
Danny, can you please clarify your role? Are you a WoTrus employee and are you 
speaking on behalf of Richard Wang?

Thanks.

  Original Message  
From: Danny 吴熠 via dev-security-policy
Sent: Monday, November 27, 2017 2:39 AM

Dear Gerv, Kethleen, other community friends,

First, thanks for Gerv and Kathleen’s so kind consideration and so great 
arrangement for this pre-discussion.
Second, thanks for the community participants to help us know our problem 
clearly in the past year, we wish you can give us a chance to serve the 
Internet security.

Here is our response covered your questions that we don’t reply the emails one 
by one.

...snip...

Finally, as a CA, we fully understand that the mistakes we have made are 
significant. By the sanction, we learned the importance of maintaining trust 
and compliance, and we hope to provide excellent products and services as 
compensation for our mistakes, and to serve the Internet security to regain 
public trust.
We’d love to hear your feedback and we are trying to do better and better, 
thanks.

Best Regards,

WoTrus CA Limited


Re: Acquisition policy (was: Francisco Partners acquires Comodo certificate authority business)

2017-11-09 Thread Peter Kurrasch via dev-security-policy
There's always a risk that a CA owner will create a security nightmare when we aren't looking, probationary period or not. In theory regular audits help to prevent it, but even in cases where they don't, people are free to raise concerns as they come up. I think we've had examples of exactly that in both StartCom and Symantec.

Perhaps one way to think of it is: Do we have reason to believe that the acquiring organization, leadership, etc. will probably make good decisions in the furtherance of public trust on the Internet? For a company that is a complete unknown, I would say that no evidence exists and therefore a public review prior to the acquisition is appropriate. If we do have sufficient evidence, perhaps it's OK to let the acquisition go through and have a public discussion afterwards.

The Francisco Partners situation is more complicated, however. Francisco Partners itself does not strike me as the sort of company that should own a CA, but only because they are investors and not a public trust firm of some sort. That said, they are smart enough to bring in a leadership team that does have knowledge and experience in this space. Unfortunately, though, they are also bringing in a Deep Packet Inspection business, which is antithetical to public trust. So what is one to conclude?

The reporting that I've seen seems to indicate that Francisco Partners will not (will never?) combine PKI and DPI into a single business operation. They have to know that doing so would be ruinous to their CA investment. If we assume they know that and if we are willing to take them at their word, I suppose it's reasonable to "allow" the transfer as it relates to Mozilla policy. If we should learn later on that that trust was misplaced, I'm sure we will discuss it and take appropriate action at that time.

From: westmail24--- via dev-security-policy
Sent: Wednesday, November 8, 2017 7:50 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: westmai...@gmail.com
Subject: Acquisition policy (was: Francisco Partners acquires Comodo certificate authority business)

Hello Peter,

But what prevents Francisco Partners making security nightmare after the probationary period? This is logical, I think.

Regards,
Andrew


Acquisition policy (was: Francisco Partners acquires Comodo certificate authority business)

2017-11-08 Thread Peter Kurrasch via dev-security-policy
I could see introducing something of a probationary period of, say, six weeks for a public review and discussion, post-acquisition. As a sign of good faith, Mozilla would allow the new entity to continue to issue end-entity certificates. Also as a sign of good faith, the acquirer would agree not to make changes to staff, infrastructure, keys, and so forth, and would abstain from changing the interconnectedness of root and intermediate certs.

The idea here being that if we should encounter something that is not acceptable, we need the ability to undo any actions taken during the probationary period. I was thinking six weeks would allow enough business days for people to investigate any issues that might arise and accommodate vacation schedules and such. I also think the probationary period would be granted under only certain circumstances--that is, not every acquirer will necessarily qualify.

From: Gervase Markham via dev-security-policy
Sent: Wednesday, November 1, 2017 6:04 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Gervase Markham
Subject: Re: Francisco Partners acquires Comodo certificate authority business

On 31/10/17 13:21, Kyle Hamilton wrote:
> http://www.eweek.com/security/francisco-partners-acquires-comodo-s-certificate-authority-business

Comodo notified Mozilla of this impending acquisition privately in advance, and requested confidentiality, which we granted. Now that the acquisition is public, it is reasonable for the community to have a discussion about the implications for Mozilla's trust of Comodo, if any.

However, there is also another wrinkle to iron out. Our policy 2.5 says:

"If the receiving or acquiring company is new to the Mozilla root program, there MUST be a public discussion regarding their admittance to the root program, which Mozilla must resolve with a positive conclusion before issuance is permitted."

I personally feel that this is a bug, in that technically it says that as soon as a deal closes and is announced, the CA has to stop issuance entirely until the Mozilla community has had a discussion and given the OK. I believe that's not reasonable and would create massive business disruption if the letter of that rule were enforced strictly. I think that when we wrote the policy, we didn't anticipate the situation where the buyer would be confidential until closing. (Compare Digimantec, where it's not.)

So it would also be useful to have a discussion about what this section of the policy should actually say.

Gerv


Re: Francisco Partners acquires Comodo certificate authority business

2017-10-31 Thread Peter Kurrasch via dev-security-policy
The timing and content of any announcement is undoubtedly complicated, caused in no small part by legitimate needs for confidentiality pulling against the goals of transparency. I have every reason to trust in the good judgment of Gerv and Kathleen in navigating that path with the interests of this community in mind. If there is more they are able to say on this matter, I hope that they will; if not, I will understand.

That said, I hope someone will indeed say more about the reporting in these articles. There are two issues in particular that I think would be good to address at this time. The first is the use of the past tense (e.g. "has acquired") regarding the reported transaction. How much of the acquisition process has, in fact, transpired--if anything?

The second is the meager explanation of what has transpired or is expected to transpire--again, if anything. Based on my understanding, there is (or will be) a change of legal ownership and leadership. Accordingly, is a review of the new ownership warranted? Bringing together a CA with a Deep Packet Inspection business certainly is...uncomfortable.

It is my sincere hope that someone will come forward and provide some clarity, even if just to say this is fake news.

From: Ryan Sleevi
Sent: Tuesday, October 31, 2017 2:59 PM
To: Peter Kurrasch
Reply To: r...@sleevi.com
Cc: mozilla-dev-security-policy
Subject: Re: Francisco Partners acquires Comodo certificate authority business

On Tue, Oct 31, 2017 at 3:44 PM, Peter Kurrasch via dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:

> Both articles are long on names, short on dates. I don't fault the authors for that, but it is troubling that better information wasn't made available to them.
> When can we expect a proper announcement in this forum? I would expect any such announcement to provide details on the skills and experience that this new leadership team has in running a CA. For example, are they aware of section 8 of the Mozilla Root Store Policy?

Such announcements are not part of the Mozilla Policy expectations. Could you clarify why you expect such an announcement?



Re: Francisco Partners acquires Comodo certificate authority business

2017-10-31 Thread Peter Kurrasch via dev-security-policy
Both articles are long on names, short on dates. I don't fault the authors for that, but it is troubling that better information wasn't made available to them.

When can we expect a proper announcement in this forum? I would expect any such announcement to provide details on the skills and experience that this new leadership team has in running a CA. For example, are they aware of section 8 of the Mozilla Root Store Policy?

From: Kyle Hamilton via dev-security-policy
Sent: Tuesday, October 31, 2017 12:51 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Kyle Hamilton
Subject: Re: Francisco Partners acquires Comodo certificate authority business

Another article about this is http://www.securityweek.com/francisco-partners-acquires-comodo-ca .

Notably, I'm not seeing anything in the official news announcements pages for either Francisco Partners or Comodo. Is this an attempt at another StartCom (silent ownership transfer), or is it a case of "rumor mill reported as fact"?

-Kyle H

On 2017-10-31 06:21, Kyle Hamilton wrote:
> http://www.eweek.com/security/francisco-partners-acquires-comodo-s-certificate-authority-business


Re: DigiCert-Symantec Announcement

2017-10-11 Thread Peter Kurrasch via dev-security-policy
Clearly there has to be a way for key compromises to be remedied. If I've been following this pinning discussion correctly, it seems unavoidable that we will have cases requiring certs to be issued on the soon-to-be-old Symantec infrastructure for the foreseeable future (i.e., post-Dec 1)?

Also, it's not totally clear to me what will be delivered on Dec 1, 2017 in terms of infrastructure as well as issuance policies. (A rough sketch of the pinning mechanism at issue appears after the quoted message below.)

From: Jeremy Rowley via dev-security-policy
Sent: Sunday, October 1, 2017 2:55 PM

Is this a correct summary? There are four categories of customers that require trust in existing Symantec roots being addressed:
...
4. Those that pinned a specific intermediate's keys, resulting in a failure unless the issuing CA had the same keys as used by Symantec.
...
Category 4 is under discussion. Sounds like Google would prefer not to see a reuse of keys. Pinning times are sufficiently short that customers could migrate to the new infrastructure prior to total distrust of the roots under certain circumstances. If the cert was issued prior to June 2016, and the key expires after March 2018, anyone using the pin could be locked out until the pin expires. If pins last up to a year, customers issuing a cert after June 2016 should have time to migrate prior to root removal. One issue is that these customers won't be able to get a new cert that functions off the old intermediate post Dec 1, 2017, effectively meaning key compromises cannot be addressed.
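A minimal sketch, assuming the HPKP-style pinning Jeremy describes: the pin is the base64 of the SHA-256 hash of a certificate's SubjectPublicKeyInfo, so reissuing from a CA with different keys breaks the pin. The file name is a placeholder and this only illustrates the mechanism, not anyone's production setup (real HPKP deployments also require a backup pin):

    # Compute an HPKP-style pin-sha256 for a certificate's public key.
    # Requires the "cryptography" package; "intermediate.pem" is hypothetical.
    import base64
    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    with open("intermediate.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    spki = cert.public_key().public_bytes(Encoding.DER,
                                          PublicFormat.SubjectPublicKeyInfo)
    pin = base64.b64encode(hashlib.sha256(spki).digest()).decode("ascii")

    # A site pinning this intermediate would send a header along these lines:
    print('Public-Key-Pins: pin-sha256="%s"; max-age=5184000' % pin)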


Re: DigiCert-Symantec Announcement

2017-09-07 Thread Peter Kurrasch via dev-security-policy
I think the plan at the root level makes sense and is reasonable, at least as far as I think I understand it. (A diagram would be nice; a rough sketch of my reading of the plan follows the quoted message below.) At the intermediate level, however, I think more detail is needed. I'm especially interested in learning how resilient the cert hierarchy will be should it become necessary to alter the hierarchy in response to an adversarial act, or a management mishap, or a PKI community sanction, or perhaps even future standards work.

I also wonder about the plan to allocate the "economically critical" customer base across the various intermediates. For example, should soda companies, banks, shipping companies, and media sites all be given the same consideration? What about main web sites vs. static content servers (CDNs)? What about sites that are important to the economy but aren't necessarily the most popular?

My hope is that there will be enough fault tolerance, redundancy, resiliency--call it whatever you like--built into the system so that we know we have options available should we need them. The extent to which DigiCert can flesh out some of these details will be of benefit to the whole community.

From: Jeremy Rowley via dev-security-policy
Sent: Monday, August 21, 2017 12:48 AM
To: mozilla-dev-security-policy
Reply To: Jeremy Rowley
Subject: RE: DigiCert-Symantec Announcement

Hi everyone,

We're still progressing towards close and transition. One of the items we are heavily evaluating is the root structure and cross-signings post close. Although the plan is still being finalized, I wanted to provide a community update on the current proposal.

Right now, Mozilla has stated that they plan to deprecate Symantec roots for TLS by the end of 2018. We continue to work on a plan to transition all customers using the roots for TLS to another root, likely the DigiCert High Assurance root. We will not cross-sign any Symantec roots, however we will continue using those roots for code signing and client/email certs post close (non-TLS use cases). We also plan on using Symantec roots to cross-sign some of the DigiCert roots to provide ubiquity in non-Mozilla systems and processes. However, this sign will only provide one-way trust, with the ultimate chain in Mozilla being EE -> Issuing CA -> DigiCert root in all cases.

DigiCert currently has four operational roots in the Mozilla trust store: Baltimore, Global, Assured ID, and High Assurance. The other roots are some permutation of these four roots that were established for future use cases/rollover (ECC vs. RSA). We already separate operations by Sub CA, with TLS, email, and code signing using different issuing CAs. As mentioned in my previous post, we plan on using multiple Sub CAs chained to the DigiCert roots to further control the population trusted by each Sub CA but have not decided on exact numbers. OV and EV will be limited by Alexa distribution and/or number of customers. DV isn't readily identifiable by customer and will use a common sub CA.

Root separation proves a difficult, yet achievable, task as there are only four operational roots: Baltimore, High Assurance, Global, and Assured ID. Global and High Assurance issue mostly OV/EV certs but do include code signing and client certificates. High Assurance is our EV root and used for both EV code signing certificates and TLS certs. Baltimore is our cross-signed root and used primarily by older Verizon customers. Assured ID is used mostly for code signing and client. However, Assured ID is also our FBCA root, meaning government-issued TLS certificates chain to it.

Of course, all TLS certs are issued in accordance with the BRs regardless of root. Looking at the current customer base, our current plan is to issue EV (code and TLS) from High Assurance, OV (code and TLS) from Global. Assured ID will continue as our client certificate and government root. We plan to continue using Symantec roots for code signing and client. We're still looking into this though. We'd love to separate out the roots more than this, but that's not likely possible given the current root architecture. If there is a non-cross-signed Symantec root that the browsers are not planning to remove, we'd like to continue using the root to issue high volume DV and device certificates. If this is not possible and Mozilla is still planning on distrusting all Symantec roots, we'll likely migrate DV certs to a Sub C
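In case it helps to see the quoted plan as a picture, here is a rough sketch of how I read it, expressed as a small Python structure. This is only my paraphrase of Jeremy's note above, not DigiCert's specification, and the truncated end of the quote means some details (e.g., where high-volume DV lands) are still open:

    # My reading of the proposed DigiCert root usage, per the quoted plan.
    # Paraphrase only; subject to change as the plan is finalized.
    proposed_root_usage = {
        "High Assurance": ["EV TLS", "EV code signing"],
        "Global":         ["OV TLS", "OV code signing"],
        "Assured ID":     ["client certificates", "government (FBCA) TLS"],
        "Baltimore":      ["cross-signed root, legacy Verizon customers"],
    }

    # In all cases the chain visible to Mozilla would be:
    #   end-entity cert -> issuing (sub) CA -> DigiCert root
    for root, uses in proposed_root_usage.items():
        print(root, "->", ", ".join(uses))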

Re: BR compliance of legacy certs at root inclusion time

2017-08-23 Thread Peter Kurrasch via dev-security-policy
Yes, I think it's fair for Mozilla to stake out the position that only those certs which comply with the relevant standards, policies, etc. will be accepted. Indeed, much of the other discussion on this list of late would support such a statement. That said, I suppose situations may arise where the interests of the community and relying parties are better served by granting exceptions or waivers to that position. If a CA has a compelling argument for seeking such a waiver and would like to make their case, I suppose it doesn't hurt to hear them out.

Perhaps some guidelines would be in order?

* The non-compliant certs must not have the potential to cause harm. For example, maybe a compelling case could be made for allowing certs with faulty SAN data, but I'm not sure a compelling case could be made for allowing SHA1 certs.

* Mozilla has no obligation to create or maintain special functionality in its software to support non-compliant certs. A CA requesting a waiver would need to accept the risk that some of their non-compliant certs could fail to validate in Mozilla products at some point in the future.

* In the spirit of transparency, a CA requesting a waiver should be prepared to provide documentation as to how many certs are to be covered by the waiver. For example: "As of (date) there are (number) certificates that do not comply with (specific requirements/policies) in the Not-before date range of (month/year) to (month/year)." Such documentation should be updated "regularly".

* I think a CA should be made to explain why they are unable to bring their certs into compliance. There could be an understandable reason, so let's hear it. ("We don't want to" or "We can't afford it" are not acceptable reasons.)

I think the bottom line has to be the trust relationship between Mozilla products and relying parties (end users specifically). If Mozilla says a connection is secure, it has to mean that all elements of the connection meet the standards of technical excellence or, failing that, that Mozilla has deemed that the technical elements and special circumstances warrant the continued trust and use of that connection by the user.

From: Gervase Markham
Sent: Tuesday, August 22, 2017 11:01 AM
To: Peter Kurrasch; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: BR compliance of legacy certs at root inclusion time

On 21/08/17 06:20, Peter Kurrasch wrote:
> The CA should decide what makes the most sense for their particular
> situation, but I think they should be able to provide assurances that
> only BR-compliant certs will ever chain to any roots they submit to the
> Mozilla root inclusion program.

So you are suggesting that we should state the goal, and let the CA work out how to achieve it? That makes sense.

I agree with Nick that transparency is important.

Is there room for an assessment of risk, or do we need a blanket statement? If, say, a CA used short serials up until 2 years ago but has since ceased the practice, we might say that's not sufficiently risky for them to have to stand up and migrate to a new cross-signed root. I agree that becomes subjective.

Gerv


Re: BR compliance of legacy certs at root inclusion time

2017-08-21 Thread Peter Kurrasch via dev-security-policy
I don't think Mozilla can tolerate having certs that successfully chain to a root contained in its trust store but that are not BR compliant.

The end user trusts Mozilla products to provide a certain level of assurance in the cert chains they allow. Having a cert chain successfully validate with a (perhaps?) hidden caveat of "by the way, we don't actually trust this cert because it doesn't comply with accepted policies" doesn't make much sense.

The CA should decide what makes the most sense for their particular situation, but I think they should be able to provide assurances that only BR-compliant certs will ever chain to any roots they submit to the Mozilla root inclusion program.

From: Gervase Markham via dev-security-policy
Sent: Friday, August 18, 2017 10:03 AM

...
What should our policy be regarding BR compliance for certificates issued by a root requesting inclusion, which were issued before the date of their request? Do we:

A) Require all certs be BR-compliant going forward, but grandfather in the old ones; or
B) Require that any non-BR-compliant old certs be revoked; or
C) Require that any seriously (TBD) non-BR-compliant old certs be revoked; or
D) something else?


Re: DigiCert-Symantec Announcement

2017-08-03 Thread Peter Kurrasch via dev-security-policy
I agree with the high-level concepts, although I would probably like to add something about "being good stewards of technologies that play a critical role in the global economy." (Feel free to use your own words!)

Regarding the current Mozilla/Google plans, I don't necessarily have a problem with them, but I do think we should give ourselves permission to make adjustments (if needed) because the circumstances have changed since those plans were developed. Consider:

* Because the acquisition is now in the picture, legal issues might impede progress in certain areas. The most notable example is the fact that DigiCert will have limited authority over Symantec until the deal actually closes. For example, what will happen in the period between Dec 1 and the closing (assuming it's after the first)?

* Once the deal does close, personnel and management issues could present various challenges in meeting certain deadlines. For example, if subject matter experts decide to leave Symantec prior to the closing, how might that hinder DigiCert?

* A lot of churn is about to be introduced in the global PKI. Times of chaos create moments of opportunity for those who wish to do bad things. Should something happen, corrections may be necessary, which can impact delivery dates, and so on.

Let me be clear that these are just hypothetical situations and rhetorical questions. I don't expect answers, and my only intention is to get people to start thinking about these matters (if they haven't already begun).

Hopefully this better explains where I was coming from in my initial reply.

From: Jeremy Rowley
Sent: Thursday, August 3, 2017 8:13 PM

Hey Peter,

I think the Mozilla and Google plans both stand as-is, although they probably need an update based on this announcement. I'm hoping that the high-level concepts remain unchanged:
- Migrate to a new infrastructure
- Audit the migration and performance to ensure compliance
- Improve operational transparency so the community has assurances on what is happening.

Jeremy

> This certainly shakes things up! I've had my concerns that Symantec's plan was complicated and risky, but now I'm wondering if this new path will be somewhat simpler--yet even more risky? I'm not suggesting we shouldn't take this path but I am hoping we make smart, well-thought-out decisions along the way.
...snip...
> * I think it's appropriate to re-think some of the deadlines, given that we're talking less about a carrots-and-sticks model and more of one based on smart decision-making, good risk management, and sticks.


Re: DigiCert-Symantec Announcement

2017-08-02 Thread Peter Kurrasch via dev-security-policy
This certainly shakes things up! I've had my concerns that Symantec's plan was complicated and risky, but now I'm wondering if this new path will be somewhat simpler--yet even more risky? I'm not suggesting we shouldn't take this path, but I am hoping we make smart, well-thought-out decisions along the way.

Some thoughts:

* Will there be other players in Symantec's SubCA plan, or is DigiCert the only one?

* Is DigiCert prepared (yet?) to commit to a "first day of issuance" under the SubCA plan? That is, when is the earliest date that members of the general public may purchase certs that chain up through the new "DigiCert SubCA" to any of the Symantec roots? I hope that, for issues that may arise under the new system, there is sufficient time to identify and resolve them prior to the 2017-12-01 deadline.

* I think the idea of a smart segregation plan for the roots and intermediates is a must-have. Such a plan should factor in the clientele who are using the different roots and the environments in which they operate. Given how important the "ubiquitous roots" are, I would hope to see community involvement and "sign-off", if you will.

* I think it's appropriate to re-think some of the deadlines, given that we're talking less about a carrots-and-sticks model and more of one based on smart decision-making, good risk management, and sticks.

Finally, when I went to read the DigiCert blog post, I noticed that John Merrill's link for the agreement announcement was a dud. I don't know why, but I really don't care either. I think it serves as a reminder that mistakes are going to be made during this process, so it's best to make allowances for that in the plans going forward. That, and attention to detail is important.

Thanks.


Re: Symantec response to Google proposal

2017-06-16 Thread Peter Kurrasch via dev-security-policy
My thoughts:

2) Timeline.

I agree with Symantec that Google's original deadlines are far too aggressive, for two reasons. First, I do not think Symantec can move quickly without causing further damage. Second, I do not think Symantec's customers can move quickly at all, given that a majority of them are large corporations that have to coordinate budgets, staff, outsourcing firms, and so forth. These customers also need time to familiarize themselves with the new rules and identify a course of action that makes sense for their business environments and their user base. Even though I understand the desire to move quickly, in this situation it seems imprudent to do so.

Many will find this next bit unacceptable, but in the interest of providing an alternative, let me suggest a timeline of six different dates over the next two years (in case it really does take that long): the 21st day of February, May, and August of 2018 and 2019. Each of these dates represents a different milestone for changes in policy enforcement, certificate validation, software and systems, and whatever is identified as a deliverable in these ongoing discussions. The dates here are chosen specifically to acknowledge that many businesses operate on a quarterly system while avoiding complications that inevitably take place at the end of a quarter and the end of the year. And, yes, that would imply no action taken before February of next year.

1) Scope of Distrust

First a question: is removing the EV entitlements from the Symantec roots something that is still on the table for Mozilla products, or has that been dismissed for some reason? I ask because it hardly seems appropriate that a CA under sanction be entitled to all the benefits that are extended to CAs which are not under sanction. Removal also inflicts some pain on Symantec (without breaking the global economy) until such time as they've resolved their issues to Mozilla's satisfaction.

Regarding the expiration of certificates, I do not agree that CT logging engenders trust, so I disagree with Symantec on that. Frankly, I don't entirely agree with Google on the phased distrust and CT logging items. Those seem to increase complexity in the PKI ecosystem (which carries its own risk) without necessarily improving the ecosystem, but it's very likely I've missed some important details.

As to scope itself, my understanding is that Mozilla will eventually remove trust from all current Symantec roots (the "ubiquitous roots") and in its place use a set of "new roots", some of which will be under the purview of Symantec competitors. These new roots will be cross-signed to those ubiquitous roots so that new certs that chain up to these new roots will still validate properly on products that have not or cannot be updated to use the new roots. (If my understanding is incorrect, I hope someone will correct me.)

To put it another way, all existing certs that chain up to the ubiquitous roots will eventually stop working--many before their date of normal expiration. As such, there needs to be a ramp-up of new cert issuing capacity while at the same time a phasing out of the existing certs. Combining that with the above "alt-timeline" would look something like the following. The exact makeup of the root bundles and CA groups are TBD.

Milestone 1 (Feb 21, 2018) - EV entitlement removed from the ubiquitous roots, new root cert bundle A is ready to go, and CA group 1 is approved to begin issuing against the roots in bundle A. No new end-entity certs have been issued yet.

Milestone 2 (May 21, 2018) - New root bundle B is ready to go and CA group 2 is approved to begin issuing against bundle B, but has not yet done so.

Milestone 3 (Aug 21, 2018) - New root bundle C is ready to go and CA group 3 is approved to begin issuing against bundle C, but has not yet done so.

Milestone 4 (Feb 21, 2019) - New root bundle D is ready to go and CA group 4 is approved to begin issuing against bundle D, but has not yet done so. In addition, some analysis should be performed to evaluate the overall health of the new root solution (for example, how many total certs have been issued, any reports of major disruptions to end users, etc.).

Milestone 5 (May 21, 2019) - The fifth, and final, bundle E of new roots is ready to go and CA group 5 is approved to begin issuing against them, but has not yet done so. This would also represent the earliest date that Symantec is allowed to be included in any of the CA groups.

Milestone 6 (Aug 21, 2019) - Final removal of trust for the original Symantec roots.

I know that some of this represents a significant departure from what Google and Symantec have previously discussed, but I thought there might be some utility in having an alternate framework from which to draw.

Re: An alternate perspective on Symantec

2017-06-08 Thread Peter Kurrasch via dev-security-policy
Let's also consider some of the companies that use the ubiquitous roots: Coca-Cola, PepsiCo, Nike, the CIA, all major US banks, and probably most major US companies and consumer brands. Consider, too, that in addition to their regular business they have many marketing sites and various other consumer engagement portals--and, oftentimes, these microsites will be developed and operated by an outside firm.

So in cases like these companies and brands, the notification can get complicated and possibly counter-productive. If I'm the outside firm handling a special portal for some "super spicy cheesy puffs" marketing campaign (a hypothetical example), I might not care about Symantec or even website security, because my livelihood depends on getting the portal up in time to launch the campaign at the next major sporting event. Assuming the portal even uses a certificate, the choice of CA to issue it might not even be mine to make. (And if the site should stop working for Firefox users because of an action taken against Symantec, you can bet it will make many people very angry.)

I'm all for notifications and raising awareness, but it's not necessarily easy or straightforward to get the right message to the decision makers and the people who have to execute those decisions.

From: Gervase Markham via dev-security-policy
Sent: Thursday, June 8, 2017 4:07 AM
To: userwithuid; mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Gervase Markham
Subject: Re: An alternate perspective on Symantec

On 07/06/17 06:14, userwithuid wrote:
> 2. Having Symantec inform their subscribers, as David mentions, is a great idea.

I believe Ryan has pointed out, here or elsewhere, why "must notify customers" requirements are problematic.

Gerv


An alternate perspective on Symantec

2017-06-06 Thread Peter Kurrasch via dev-security-policy
Over the past months there has been much consternation over Symantec and the idea of "too big to fail". That is a reasonable idea, but it makes difficult the discussion of remedies for Symantec's past behavior: how does one impose a meaningful sanction without causing Symantec to fail outright, since the impact would be catastrophic?

I'd like to offer an alternate perspective on the situation in the hope that it might simplify the discussions of sanctions. The central point is this: Symantec is too big and too complicated to function properly.

Consider:

* Symantec has demonstrated an inability to exercise sufficient oversight and control over the totality of their PKI systems. Undoubtedly there are parts which have been and continue to be well-run, but there are parts for which management has been unacceptably poor.

* No cases have been identified of a breach or other compromise of Symantec's PKI technology/infrastructure, nor of the infrastructure of a subordinate PKI organization for which Symantec is responsible. The possibility does exist, however, that compromises have occurred but might never be known because of management lapses.

* Many of Symantec's customers play a critical role in the global economy and rely on the so-called "ubiquitous roots" to provide their services. Any disruption in those services can have global impacts. Symantec, therefore, plays a significant role in the global economy, but only insofar as it is the gatekeeper to the "ubiquitous roots" upon which the global economy relies.

* Symantec has demonstrated admirable commitment to its customers but appears less committed when it comes to the policies, recommendations, and openness of the global PKI community. Whether this indicates a willful disregard for the community or difficulty in incorporating these viewpoints into a large organization (or something else?) is unclear.

From this standpoint, the focus of sanctions would be on Symantec's size. Obviously Mozilla is in no position to mandate the breakup of a company, but Mozilla (and others) can mandate a reduced role as gatekeeper to the "ubiquitous roots". In fact, Symantec has already agreed to do just that.

In addition, this viewpoint would discourage increasing Symantec's size or adding to the complexity of their operations. I question Symantec's ability to do either one successfully. Symantec is certainly welcome to become bigger and more complex if that's what they should choose, but not as a result of some external mandate.

Comments and corrections are welcome.


Re: On remedies for CAs behaving badly

2017-06-05 Thread Peter Kurrasch via dev-security-policy
Consider, too, that removing trust from a CA has an economic sanction built in: loss of business. For many CAs I imagine that serves as motivation enough for good behavior, but for others...possibly not.

Either way, figuring out how to impose, fairly, an explicit financial toll on bad CAs is likely to be as difficult as figuring out any of the other remedies that are presently available. (For example, who gets to keep the money collected?)

From: Matthew Hardeman via dev-security-policy
Sent: Monday, June 5, 2017 10:52 AM

Hi all,

I thought it prudent, in light of the recent response from Symantec regarding the Google Chrome proposal for remediation, to raise the question of the possible remedies the community and the root programs have against a CA behaving badly (mis-issuances, etc.).

Symantec makes a number of credible points in their responses. It's hard to refute that the time frames required to stand up a third party managed CA environment at the scale that can handle Symantec's traffic could happen in reasonable time.

In the end, it seems inevitable that everyone will agree that a practical time frame to accomplish the plan laid out could take... maybe even a year.

As soon as everyone buys into that, Symantec will no doubt come with the "Hmm... By that time, we'll have the new roots in the browser stores, so how about we skip the third party and go straight to that?"

Even if that's not the way it goes, this Symantec case is certainly a good example of cures (mistrust) being as bad as the disease (negligence, bad acting).

Has there ever been an effort by the root programs to directly assess monetary penalties to the CAs -- never for inclusion -- but rather as part of a remediation program?

Obviously there would be limits and caveats. A shady commercial CA propped up by a clandestine government program such that the CA seems eager to pay out for gross misissuance -- even in amounts that exceed their anticipated revenue -- could not be allowed.

I am curious however to know whether anyone has done any analysis on the introduction of economic sanctions in order to remain trusted -- combined with proper remediation -- as a mechanism for incentivizing compliance with the rules?

Particularly in smaller organizations, it may be less necessary. In larger (and especially publicly traded) companies, significant economic sanctions can get the attention and involvement of the highest levels of management in a way that few other things can.

Thanks,
Matt


Re: Symantec response to Google proposal

2017-06-05 Thread Peter Kurrasch via dev-security-policy
Hi Gerv--

Is Mozilla willing to consider a simpler approach in this matter? For example, it seems that much of the complexity of the Google/Symantec proposal stems from this new PKI idea. I think Mozilla could obtain a satisfactory outcome without it.

From: Gervase Markham via dev-security-policy
Sent: Friday, June 2, 2017 9:54 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Gervase Markham
Subject: Symantec response to Google proposal

https://www.symantec.com/connect/blogs/symantec-s-response-google-s-subca-proposal

Symantec have responded to the Google proposal (which Mozilla has endorsed as the basis for further discussion) with a set of inline comments which raise some objections to what is proposed.

Google will, no doubt, be evaluating these requests for change and deciding to accept, or not, each of them. But Mozilla can make our own independent decisions on these points if we choose. If Google and Mozilla accept a change, it is accepted. If Google accepts it but we decline to accept, we can add it to our list of additional requirements for Symantec instead.

Therefore, I would appreciate the community's careful consideration of the reasonableness of Symantec's requests for change to the proposal.

Gerv


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-01 Thread Peter Kurrasch via dev-security-policy
So how about this: a proper certificate is one that...

- contains the data as provided by the requester that the requester intended to use;
- contains the data as provided by the issuer that the issuer intended to use;
- contains data that has been properly verified by the issuer, to the extent that the data is verifiable in the first place;
- uses data that is recognized as legitimate for a certificate's intended use, per the relevant standards, specifications, recommendations, and policies, as well as the software products that are likely to utilize the certificate;
- is suitably constructed in accordance with the relevant standards, specifications, recommendations, and policies, as appropriate; and
- is produced by equipment and systems whose integrity is assured by the issuer and verified by the auditors.

Thus, failing one or more of the above conditions will constitute a mis-issuance situation.

From: Matthew Hardeman via dev-security-policy
Sent: Thursday, June 1, 2017 1:35 PM

On Thursday, June 1, 2017 at 8:03:33 AM UTC-5, Gervase Markham wrote:
> My point is not that we are entirely indifferent to such problems, but
> that perhaps the category of "mis-issuance" is the wrong one for such
> errors. I guess it depends what we mean by "mis-issuance" - which is the
> entire point of this discussion!
>
> So, if mis-issuance means there is some sort of security problem, then
> my original definition still seems like a good one to me. If
> mis-issuance means any problem where the certificate is not as it should
> be, then we need a wider definition.

It was in that spirit that I raised the questions that I did. I wonder if the pedant can use these arguments to call any certificate "mis-issued" under the proposed definition. If so, I wonder if we should care if such a tortured argument might be made.

> I wonder whether we need a new word for certificates which are bogus for
> a non-security-related reason. "Mis-constructed"?


Re: Policy 2.5 Proposal: Require all CAs to have appropriate network security

2017-05-24 Thread Peter Kurrasch via dev-security-policy
Fair enough. This is absolutely the sort of stuff that needs to be part of regular auditing. I was wondering what sort of checking or enforcement you had in mind by including it in the Mozilla policy now. Perhaps you just want the CAs to be reminded that cybersecurity issues are important despite the CABF docs on the matter being too weak?

I have no qualms using "for example". I would like for more to be mentioned than just software updates, but even there I don't feel too strongly about it.

From: Gervase Markham
Sent: Wednesday, May 24, 2017 9:56 AM
To: Peter Kurrasch; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Policy 2.5 Proposal: Require all CAs to have appropriate network security

On 24/05/17 15:31, Peter Kurrasch wrote:
> It might be fair to characterize my position as "vague but
> comprehensive"...if that's even possible? There are some standard-ish
> frameworks that could be adopted:

I think we would prefer to wait for the CAB Forum to adopt something rather than attempting to define and enforce our own. If for no other reason than the CAB Forum thing is more likely to be audited and therefore to have actual teeth.

> If you'd like to keep the policy to a sentence or so, perhaps we could
> use some "including but not limited to" verbiage?

Well, the draft wording we started with used "for example"... :-)

Gerv


Re: Policy 2.5 Proposal: Require all CAs to have appropriate network security

2017-05-24 Thread Peter Kurrasch via dev-security-policy
  It might be fair to characterize my position as "vague but comprehensive"...if that's even possible? There are some standard-ish frameworks that could be adopted:

- NIST has an existing framework that is currently going through some sort of update/revisory process. http://www.nist.gov/cyberframework/
- ISO has 27032:2012, which looks to have some good stuff in it. https://www.iso.org/standard/44375.html
- Perhaps surprisingly enough, the American Institute of CPA's has a variety of information that looks to be a good starting point for anyone. http://www.aicpa.org/interestareas/frc/assuranceadvisoryservices/pages/cyber-security-resource-center.aspx

I would be interested in knowing if other people know of other frameworks and have experience using any of them. I'm certainly not advocating that any of the above be used here or that they are necessarily even good resources for folks in the CA space.

Back to laughable security: my issue is that there are many ways an organization might experience a security breakdown in ways that cause severe face damage to security folks due to excessive face palms, banging one's head against the wall, or laughing too hard. Examples include allowing weak passwords (by employees), poor password management, inadequate access controls, weak network intrusion detection, insufficient protection from well-known web application vulnerabilities (e.g. SQL injection), and the list goes on.

If you'd like to keep the policy to a sentence or so, perhaps we could use some "including but not limited to" verbiage?

From: Gervase Markham
Sent: Tuesday, May 23, 2017 5:23 AM
To: Peter Kurrasch; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Policy 2.5 Proposal: Require all CAs to have appropriate network security

On 23/05/17 04:18, Peter Kurrasch wrote:
> I think the term "industry best practices" is too nebulous. For
> example, if I patch some of my systems but not all of them I could
> still make a claim that I am following best practices even though my
> network has plenty of other holes in it.

I'm not sure that "patching half my systems" would be generally accepted as "industry best practice". But regardless, unless we are planning to write our own network security document, which we aren't, can you suggest more robust wording?

> I assume the desire is to hold CA's to account for the security of
> their networks and systems, is that correct? If so, I think we should
> have something with more meat to it. If not, the proposal as written
> is probably just fine (although, do you mean the CABF's "Network
> Security Requirements" spec or is there another guidelines doc?).

Yes, that's the doc I mean (for all its flaws).

> For consideration: Mozilla can--and perhaps should--require that all
> CA's adopt and document a cybersecurity risk management framework for
> their networks and systems (perhaps this is already mandated
> somewhere?). I would expect that the best run CA's will already have
> something like this in place (or something better) but other CA's
> might not. There are pros and cons to such frameworks but at a
> minimum it can demonstrate that a particular CA has at least
> considered the cybersecurity risks that are endemic to their
> business.

If we are playing "too nebulous", I would point out that to meet this requirement, I could just write my own (very lax) cybersecurity risk management framework and then adopt it.

Any requirement which is only a few sentences is always going to be technically gameable. I just want to write something which is not easily gameable without failing the "laugh test".

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Require all CAs to have appropriate network security

2017-05-22 Thread Peter Kurrasch via dev-security-policy
I think the term "industry best practices" is too nebulous. For example, if I 
patch some of my systems but not all of them I could still make a claim that I 
am following best practices even though my network has plenty of other holes in 
it.

I assume the desire is to hold CA's to account for the security of their 
networks and systems, is that correct? If so, I think we should have something 
with more meat to it. If not, the proposal as written is probably just fine 
(although, do you mean the CABF's "Network Security Requirements" spec or is 
there another guidelines doc?).

For consideration: ‎Mozilla can--and perhaps should--require that all CA's 
adopt and document a cybersecurity risk management framework for their networks 
and systems (perhaps this is already mandated somewhere?). I would expect that 
the best run CA's will already have something like this in place (or something 
better) but other CA's might not. There are pros and cons to such frameworks 
but at a minimum it can demonstrate that a particular CA has at least 
considered the cybersecurity risks that are endemic to their business.


  Original Message  
From: Gervase Markham via dev-security-policy
Sent: Friday, May 19, 2017 7:56 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Gervase Markham
Subject: Policy 2.5 Proposal: Require all CAs to have appropriate network 
security

At the moment, the CAB Forum's Network Security guidelines are audited
as part of an SSL BR audit. This means that CAs or sub-CAs which only do
email don't technically have to meet them. However, they also have a
number of deficiencies, and the CAB Forum is looking at replacing them
with something better, ideally maintained by another organization. So
just mandating that everyone follow them doesn't seem like the best thing.

Nevertheless, I think it's valuable to make it clear in our policy that
all CAs are expected to follow best practices for network security. I
suggest this could be done by adding a bullet to section 2.1:

"CAs whose certificates are included in Mozilla's root program MUST:

* follow industry best practice for securing their networks, for example
by conforming to the CAB Forum Network Security Guidelines or a
successor document;"

This provides flexibility in exactly what is done, while making it
reasonably clear that leaving systems unpatched for 5 years would not be
acceptable.

This is: https://github.com/mozilla/pkipolicy/issues/70

---

This is a proposed update to Mozilla's root store policy for version
2.5. Please keep discussion in this group rather than on Github. Silence
is consent.

Policy 2.4.1 (current version):
https://github.com/mozilla/pkipolicy/blob/2.4.1/rootstore/policy.md
Update process:
https://wiki.mozilla.org/CA:CertPolicyUpdates
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Remove the bullet about "fraudulent use"

2017-05-03 Thread Peter Kurrasch via dev-security-policy
  Perhaps a different way to pose the questions here is whether Mozilla wants to place any expectations on the CA's regarding fraud and the prevention thereof. Expectations beyond what the BR's address, that is. Some examples:

- Minimal expectation, meaning just satisfy whatever the BR's say, but beyond that Mozilla won't care(?)
- Passive involvement, meaning a CA is expected to do some investigation into fraudulent activity but only when prompted, and even then no action is necessarily expected
- Active involvement, meaning the CA has implemented policies and procedures that identify and act on situations that appear fraudulent

A question one might ask is "What is reasonable?" It is not reasonable for CA's to identify and prevent all cases of fraud, so I wouldn't ask that. I wouldn't call CA's the anti-fraud police, either. What about the following:

- When a CA is notified that a stolen credit card was used to purchase certs, should the CA investigate the subscriber who used it and any other certs that were purchased (perhaps using a different CC) and take appropriate action?
- Is it reasonable for any subscriber to request more than 100 certs on a given day? What about 500? 1000? (The point is not to prohibit large requests, but I would imagine there is a level which exceeds what anyone might consider a legitimate use case.)
- Is it reasonable for a single CA to issue over 150 certs containing "paypal" in the domain name? (I am referring to the analysis Vincent Lynch did back in March.) There are undoubtedly cases where including "paypal" in the name is or could be legitimate, but 150 a day, every day?
- Is it reasonable for a CA to issue a cert to the CIA for Yandex or to the Chinese government for Facebook, even if the requester does demonstrate "sufficient control" of the domain?

The point I wish to make is that situations will come up that go beyond anything in the BR's and that reasonable people might agree go beyond a reasonable level of reasonableness. The question becomes: what will Mozilla do as those situations arise? Can Mozilla envision possibly asking a CA "don't you think you should have limited ?"

From: Gervase Markham
Sent: Tuesday, May 2, 2017 5:46 AM
To: Peter Kurrasch; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Policy 2.5 Proposal: Remove the bullet about "fraudulent use"

On 02/05/17 01:55, Peter Kurrasch wrote:
> I was thinking that fraud takes many forms generally speaking and that
> the PKI space is no different. Given that Mozilla (and everyone else)
> work very hard to preserve the integrity of the global PKI and that the
> PKI itself is an important tool to fighting fraud on the Internet, it
> seems to me like it would be a missed opportunity if the policy doc made
> no mention of fraud.
>
> Some fraud scenarios that come to mind:
>
> - false representation as a requestor
> - payment for cert services using a stolen credit card number
> - malfeasance on the part of the cert issuer

Clearly, we have rules for vetting (in particular, EV) which try and avoid such things happening. It's not like we are indifferent. But stolen CC numbers, for example, are a factor for which each CA has to put in place whatever measures they feel appropriate, just as any business does. It's not really our concern.

> - requesting and obtaining certs for the furtherance of fraudulent activity
>
> Regarding that last item, I understand there is much controversy over
> the prevention and remediation of that behavior but I would hope there
> is widespread agreement that it does at least exist.

It exists, in the same way that cars are used for bank robbery getaways, but the Highway Code doesn't mention bank robberies.

Gerv
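[For readers who want to reproduce the kind of analysis referenced above (counting issued certificates whose names contain a string such as "paypal"), Certificate Transparency logs make this possible. A rough sketch follows, assuming the public crt.sh search service and its unofficial JSON output; the endpoint, parameters, and field names are not a stable API, and broad searches can be slow or time out.]

    # Rough sketch: count CT-logged certificates whose identities contain a term.
    # Assumes the public crt.sh search service and its unofficial JSON output,
    # plus the third-party "requests" package. Not a stable API; illustrative only.
    import requests

    def ct_entries_containing(term):
        resp = requests.get(
            "https://crt.sh/",
            params={"q": f"%{term}%", "output": "json"},  # % acts as a wildcard
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        entries = ct_entries_containing("paypal")
        print(f"{len(entries)} matching log entries")
        for entry in entries[:10]:
            # Field names follow crt.sh's JSON output as observed; may change.
            print(entry.get("issuer_name"), "->", entry.get("name_value"))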
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Incorporate Root Transfer Policy

2017-05-01 Thread Peter Kurrasch via dev-security-policy
  Hi Gerv,

Your updates look good! One small quibble: the bottom of the Physical Relocation section mentions the code signing trust bit, but I think that is irrelevant now?

Would you feel comfortable mandating that, whenever an organization notifies Mozilla about changes in ownership or operation, the organization must notify the public about any such changes? The idea here is transparency, and making sure that all parties (subscribers and relying parties alike) are made aware of the changes in case they wish to make changes of their own.

For whatever it's worth, I gave the Personnel Changes section a bit of thought and wondered if further articulation of "changes" might be helpful. The example that came to mind is GTS and GlobalSign--specifically, that Google would continue to use GlobalSign's infrastructure until a transition is made in the future. Presumably, a change in personnel will take place when Google switches to its own infrastructure, so should Mozilla be notified at that time? As written, I think the answer could be yes, but is that necessarily what you want?

(And, for the record, I'm not trying to rehash any past discussion of the acquisition. Rather, I thought it might be a good real-world example based on my understanding of events. If my facts are wrong, that hopefully will not nullify its value as a hypothetical example.)

If you prefer to leave the personnel section as-is, I have no issue with that.

From: Gervase Markham via dev-security-policy
Sent: Monday, May 1, 2017 4:02 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Gervase Markham
Subject: Policy 2.5 Proposal: Incorporate Root Transfer Policy

Mozilla has a Root Transfer Policy which sets out our expectations regarding how roots are transferred between organizations, or what happens when one company buys another, based on a recognition that trust is not always transferable.

https://wiki.mozilla.org/CA:RootTransferPolicy

It has been reasonably observed that it would be better if this policy were part of our official policy rather than a separate wiki page. So, I have attempted to take that wiki page, remove duplication and boil it down into a set of requirements to add to the existing policy.

Here is a diff of the proposed changes:
https://github.com/mozilla/pkipolicy/compare/issue-57

This is: https://github.com/mozilla/pkipolicy/issues/57

---

This is a proposed update to Mozilla's root store policy for version 2.5. Please keep discussion in this group rather than on Github. Silence is consent.

Policy 2.4.1 (current version):
https://github.com/mozilla/pkipolicy/blob/2.4.1/rootstore/policy.md
Update process:
https://wiki.mozilla.org/CA:CertPolicyUpdates
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Remove the bullet about "fraudulent use"

2017-05-01 Thread Peter Kurrasch via dev-security-policy
  I was thinking that fraud takes many forms generally speaking and that the PKI space is no different. Given that Mozilla (and everyone else) work very hard to preserve the integrity of the global PKI and that the PKI itself is an important tool to fighting fraud on the Internet, it seems to me like it would be a missed opportunity if the policy doc made no mention of fraud.

Some fraud scenarios that come to mind:

- false representation as a requestor
- payment for cert services using a stolen credit card number
- malfeasance on the part of the cert issuer
- requesting and obtaining certs for the furtherance of fraudulent activity

Regarding that last item, I understand there is much controversy over the prevention and remediation of that behavior, but I would hope there is widespread agreement that it does at least exist.

From: Gervase Markham
Sent: Monday, May 1, 2017 10:49 AM
To: Peter Kurrasch; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Policy 2.5 Proposal: Remove the bullet about "fraudulent use"

On 01/05/17 16:28, Peter Kurrasch wrote:
> Gerv, does this leave the Mozilla policy with no position statement regarding fraud in the global PKI?

What do you mean by "in"?

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Remove the bullet about "fraudulent use"

2017-05-01 Thread Peter Kurrasch via dev-security-policy
Gerv, does this leave the Mozilla policy with no position statement regarding 
fraud in the global PKI?


  Original Message  
From: Gervase Markham via dev-security-policy
Sent: Monday, May 1, 2017 3:36 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Gervase Markham
Subject: Re: Policy 2.5 Proposal: Remove the bullet about "fraudulent use"

On 20/04/17 14:39, Gervase Markham wrote:
> So I propose removing it, and reformatting the section accordingly.

Edit made as proposed.

Gerv

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Removing "Wildcard DV Certs" from Potentially Problematic Practices list

2017-04-28 Thread Peter Kurrasch via dev-security-policy
  "Incomplete understanding"? That's rich.There is no reliance on certs as a protection mechanism. Rather, the use of certs/encryption help to facilitate my bad acts. If I'm doing malvertising I basically must use an encrypted channel. If I'm doing other bad things, encryption frustrates the efforts of security personnel to figure out that something bad is happening.As for the weak link, it isn't necessarily weak. True, I could get additional subdomain certs by exploiting the weak validation methods that CABF has endorsed. In many cases that will work just fine but I do still have to interact with the CA which leaves a paper trail--especially if the cert gets published via CT.In the wildcard scenario, there is no need for that interaction. Less interaction, low profile, few opportunities for detection, ability to operate unimpeded...this is the dream for any bad guy. Wildcard certs make it easier for me to get there. One can definitely reach that goal using non-wildcard tactics but it might not be as easy.Getting back to Gerv's original question, should the wildcard section be removed? My answer is: no, it should not be removed. It could stand to be updated though.   From: Ryan SleeviSent: Friday, April 28, 2017 9:51 AM‎On Fri, Apr 28, 2017 at 9:48 AM, Peter Kurrasch <fhw...@gmail.com> wrote:Suppose I want to set up a system to be used for spam, malware distribution, and phishing but, naturally, I want to operate undetected. First step is to find a (legitimate) server that is already set up and is not well secured. Without getting bogged down in the details, let's just assume I can find such a server and I'm able to obtain access to the admin panel or a command line/shell that controls it. With this access, let's also just assume I'm able to obtain the certificate and private key data that the legitimate site owner is using.You can stop here. Once you've done that, it's game over for any subdomains as it stands. Wildcard certs are a red herring. If you've got file control on the server, or can demonstrate control of the base, you can get the subdomains.That's the weak link in your attack model, and for that to change, it will at least require some action on the CA/Browser Forum to restrict the file-based controls or 'practical demonstration of control'. If you just compromise the server/key, you've compromise every subdomain, as it stands today. That's not because of wildcards. That's because of the CA/Browser Forum. Granted, there is a healthy amount of hand waving in this illustration and frankly there are situations where other attack methods are more advantageous for any number of reasons. That said, the point I am hoping to make is that a wildcard certificate opens up possibilities for me as the bad guy that I might not have otherwise.Right, not really, because above :) Again, I'll be the first to admit this is perhaps not the best illustration of the risks posed by wildcard certs but hopefully it's at least good enough. I don't think the above is a major problem today but if the desire is make wildcard certs ubiquitous (?), I hope people will at least think twice.I appreciate your threat modelling of this space, but I think it's operating on incomplete understanding of what the reasonable security boundary is, but also tries to rely on certificates as a spam/phishing protection, of which they most certainly are not :)

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Removing "Wildcard DV Certs" from Potentially Problematic Practices list

2017-04-28 Thread Peter Kurrasch via dev-security-policy
ized. And, finally, I'll just say CAA sounds like a good addition to the security toolbox but does have some shortcomings. The extent to which it can limit unintended wildcard certs is probably a good thing, but clearly it will rely on a sufficiently skilled administrator for any effectiveness to be realized.

From: Ryan Sleevi
Sent: Wednesday, April 26, 2017 2:46 PM
To: Peter Kurrasch
Reply To: r...@sleevi.com
Cc: Ryan Sleevi; mozilla-dev-security-policy
Subject: Re: Removing "Wildcard DV Certs" from Potentially Problematic Practices list

On Wed, Apr 26, 2017 at 3:17 PM, Peter Kurrasch <fhw...@gmail.com> wrote:

Hi Ryan--To your first comment, I'm afraid I won't have the time to take a closer look at the discussion on 3.2.2.4. Hopefully a path from single domain to unlimited domains exists (or will). It makes sense to me (without fully considering the consequences) that a "special" validation path be used for wildcards. Perhaps it could be part of an existing validation method, I'm not sure. Bottom line though, I don't think it makes sense that anyone and everyone be allowed to obtain a wildcard cert.

Right, but there's definitely a more careful argument you'll need to make, because that is _precisely_ what is (effectively) desired, not just by CAs, but most users of more than a few TLS certificates.

I'm not even sure that I disagree with you - my history in the CA/Browser Forum hopefully demonstrates Google's deep concern with the security of the validation methods, the capabilities possible with each validation method, and the frequency with which they're performed. However, finding consensus is, at this point, difficult, despite it being plainly obvious to folks with a security background :)

For context, I've been pushing for exploring CAA methods to reduce the permissible validation methods, and CAA already has the ability to restrict wildcard issuance via issuewild to a set of CAs, for which you can then establish a defined procedure. So if your concern is helping protect a _competent_ organization from a rogue wildcard, that already exists and is (in the process of) being required.

Let's suppose we break up everyone with a server on the Internet into 3 categories based on the size of their Internet presence: substantial, intermediate, and tiny. A "substantial" presence almost certainly has a large enterprise behind it with staff and capital resources to maintain the service. If we can assume that such organizations have tighter controls over use of the domain name space, servers, certificates, and so forth, then I think some of the risks posed by wildcard certs are reduced. That being the case, I'm less concerned about this category.

Regrettably, I think this inverts the priority. "Substantial" presences are often the slowest to deploy, have the most complexity in their infrastructure, and similarly, suffer the greatest risk from a rogue wildcard.

For an "intermediate" presence, I'm thinking of places like universities that have a large number of subdomains in use but might not have a good set of controls over use of the domain name space and certificates and so forth. In this type of setting I think the use of wildcards might be appealing because it simplifies management of the network, but if the controls are not in place, there remains a certain level of risk that these certs pose. (And to be fair, this isn't to say that only academic environments face this situation or that all academic environments are not suitably managed.) This category can be of concern.

This is most organizations :)

The "tiny" category almost speaks for itself and probably applies to all individuals and small businesses--people who want some presence but might not be all that diligent about it. In other words, these are the systems that probably have the weakest security measures in place. These are the systems that are regularly compromised and used for spam campaigns, fake login screens, and such. This is also the category for whom wildcard certs pose the greatest risk to everyone else.

I disagree that you can attribute that cost to wildcards. That's the cost of the organization itself. The wildcard doesn't really contribute or change that calculus.

...and all sorts of variants of the above. Per the wildcard cert, all of the above domains are perfectly valid. Per the actual intent of the domain holder, most of t
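[For concreteness on the CAA "issuewild" mechanism mentioned in the exchange above: RFC 8659 lets a domain publish CAA records naming which CAs, if any, may issue wildcard certificates for it. Below is a small sketch that reads and reports such a policy; it assumes the third-party dnspython package, checks only the exact name (real CAs also climb the DNS tree as the RFC requires), and "example.com" is a placeholder domain.]

    # Small sketch: read a domain's CAA records and report its wildcard policy.
    # Assumes the third-party dnspython package (dns.resolver.resolve is the
    # dnspython 2.x name; older releases call it dns.resolver.query).
    import dns.resolver

    def report_caa(domain: str) -> None:
        try:
            answers = dns.resolver.resolve(domain, "CAA")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            print(f"{domain}: no CAA records; CAA places no restriction on issuance")
            return
        for rdata in answers:
            tag = rdata.tag.decode()
            value = rdata.value.decode()
            print(f'{domain}: CAA {rdata.flags} {tag} "{value}"')
            if tag == "issuewild":
                if value.strip() == ";":
                    # An issuer value of ";" forbids wildcard issuance entirely.
                    print("  wildcard issuance is forbidden for all CAs")
                else:
                    print(f"  wildcard issuance restricted to: {value}")

    report_caa("example.com")  # placeholder domain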

Re: Removing "Wildcard DV Certs" from Potentially Problematic Practices list

2017-04-26 Thread Peter Kurrasch via dev-security-policy
  Hi Ryan--

To your first comment, I'm afraid I won't have the time to take a closer look at the discussion on 3.2.2.4. Hopefully a path from single domain to unlimited domains exists (or will). It makes sense to me (without fully considering the consequences) that a "special" validation path be used for wildcards. Perhaps it could be part of an existing validation method, I'm not sure. Bottom line though, I don't think it makes sense that anyone and everyone be allowed to obtain a wildcard cert.

As for the subdomain stuff, I was hoping to avoid providing some examples because I couldn't come up with a simple one. However, I think it's necessary, so let's try this out:

Let's suppose we break up everyone with a server on the Internet into 3 categories based on the size of their Internet presence: substantial, intermediate, and tiny. A "substantial" presence almost certainly has a large enterprise behind it with staff and capital resources to maintain the service. If we can assume that such organizations have tighter controls over use of the domain name space, servers, certificates, and so forth, then I think some of the risks posed by wildcard certs are reduced. That being the case, I'm less concerned about this category.

For an "intermediate" presence, I'm thinking of places like universities that have a large number of subdomains in use but might not have a good set of controls over use of the domain name space and certificates and so forth. In this type of setting I think the use of wildcards might be appealing because it simplifies management of the network, but if the controls are not in place, there remains a certain level of risk that these certs pose. (And to be fair, this isn't to say that only academic environments face this situation or that all academic environments are not suitably managed.) This category can be of concern.

The "tiny" category almost speaks for itself and probably applies to all individuals and small businesses--people who want some presence but might not be all that diligent about it. In other words, these are the systems that probably have the weakest security measures in place. These are the systems that are regularly compromised and used for spam campaigns, fake login screens, and such. This is also the category for whom wildcard certs pose the greatest risk to everyone else.

Consider a case where someone has a custom domain--let's say easypete.ninja--and perfectly reasonable subdomains. So:

 - easypete.ninja
 - www.easypete.ninja
 - email.easypete.ninja

Clearly there is a slight advantage for a wildcard cert in this case--it's easier than listing those subdomains. However, a wildcard cert opens up the possibility for other things like:

 - com.easypete.ninja
 - paypal.com.easypete.ninja
 - google.com.easypete.ninja
 - .easypete.ninja
 - facebook.com.loginassistant.easypete.ninja

...and all sorts of variants of the above. Per the wildcard cert, all of the above domains are perfectly valid. Per the actual intent of the domain holder, most of the above are surely not valid.

So, the problem becomes how a relying party may determine if the site/domain is legitimate. If I have some indicator in the browser UI that says "secure" because the FQDN matching has been satisfied, I'll probably proceed to use the page that is presented. If the indicator is missing, there is a chance I'll think twice about proceeding any further. The trouble in this situation is that if the FQDN is nefarious but satisfies the wildcard contained in the cert, I will probably make the wrong decision and open myself up to who knows what.

From: Ryan Sleevi
Sent: Tuesday, April 25, 2017 12:44 AM
To: Peter Kurrasch
Reply To: r...@sleevi.com
Cc: mozilla-dev-security-policy
Subject: Re: Removing "Wildcard DV Certs" from Potentially Problematic Practices list

On Tue, Apr 25, 2017 at 1:31 AM, Peter Kurrasch via dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:

Wildcard certs present a level of risk that is different (higher?) than for other end-entity certs. The risk as I see it is two-fold:

1) Issuance: Setting aside the merits of the 10 Blessed Methods, there is no clear way to extrapol

Re: Criticism of Google Re: Google Trust Services roots

2017-04-25 Thread Peter Kurrasch via dev-security-policy
Sure, happy to take a look. I think Ryan H. makes some good points and I'm not 
entirely opposed to acquisitions or transfers. My concern is whether, when a 
transfer is to take place, we can be sure that the right things do happen 
instead of leaving things to chance. It's the age-old problem of encouraging 
the good and preventing the bad.


  Original Message  
From: Gervase Markham
Sent: Tuesday, April 25, 2017 4:28 AM
To: Peter Kurrasch; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Criticism of Google Re: Google Trust Services roots

Hi Peter,

On 25/04/17 02:10, Peter Kurrasch wrote:
> Fair enough. I propose the following for consideration:

As it happens, I have been working on encoding:
https://wiki.mozilla.org/CA:RootTransferPolicy
into our policy. A sneak preview first draft is here:

https://github.com/mozilla/pkipolicy/compare/issue-57

Would you be kind enough to review that and see if it addresses your
points and, if not, suggest how it might change and why?

I can see the value of a "definition of intention" and the choosing of a
category, as long as we are careful to make sure the categories do not
preclude operations that we would like to see occur. As Ryan Hurst
notes, there is potentially significant value to the ecosystem in root
transfers.

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Removing "Wildcard DV Certs" from Potentially Problematic Practices list

2017-04-24 Thread Peter Kurrasch via dev-security-policy
  Wildcard certs present a level of risk that is different (higher?) than for other end-entity certs. The risk as I see it is two-fold:

1) Issuance: Setting aside the merits of the 10 Blessed Methods, there is no clear way to extrapolate a successful validation for one domain into a plethora of FQDN's in a way that works for all scenarios. So the risk in this sense is that the issuer might allow a cert requester to over-subscribe a given domain's name space.

2) Abuse: Once the wildcard cert has been issued there is no way to check that the host (or FQDN) to which I'm connecting is a legitimate part of the broader domain or if it has been taken over for nefarious purposes. This is in contrast to the non-wildcard case wherein I know it's a legitimate part because I see the FQDN listed explicitly in the SAN field. So the risk in this sense is to the relying party, who is less able to protect himself or herself from connecting to bad servers.

The use of "problematic" to describe wildcard certs is perhaps misleading. Perhaps the wildcard certs themselves are not problematic, but trying to manage or mitigate the risk they pose is. Either way, I don't think it would be wise to remove this from the problematic practices list.

From: Gervase Markham via dev-security-policy
Sent: Monday, April 24, 2017 4:16 AM

On 21/04/17 12:09, Nick Lamb wrote:
> Of the ballot 169 methods, 3.2.2.4.7 is most obviously appropriate
> for verifying that the applicant controls the entire domain and thus
> *.example.com, whereas say 3.2.2.4.6 proves only that the applicant
> controls a web server, it seems quite likely they have neither the
> legal authority nor the practical ability to control servers with
> other names in that domain. I can see arguments either way for
> 3.2.2.4.4, depending on how well email happens to be administrated in
> a particular organisation.

So your concern is that a subset of the 10 Blessed Methods might not be suitable for verifying the level of control necessary to safely issue a wildcard cert?

If that's true, we should look at it, but I don't see how that's connected with saying or not saying on our wiki page that wildcard certs are inherently problematic.

So, to analyse: you are saying that demonstrating control over http://www.example.com/ and getting a cert for *.www.example.com is shaky? Or demonstrating control of http://example.com/ and getting a cert for *.example.com? Or something else?

Gerv
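[To make the contrast with "the FQDN listed explicitly in the SAN field" concrete, here is a simplified sketch of the name-matching step relying-party software performs, following the common single-label wildcard rule of RFC 6125. It is illustrative only; real TLS stacks do considerably more (IDNA handling, public-suffix restrictions, and so on), and the host names used are placeholders.]

    # Simplified sketch of certificate name matching (single-label wildcard
    # rule, RFC 6125). Real TLS stacks apply additional restrictions.
    def name_matches(hostname: str, cert_name: str) -> bool:
        host = hostname.lower().rstrip(".").split(".")
        cert = cert_name.lower().rstrip(".").split(".")
        if len(host) != len(cert):
            return False
        if cert[0] == "*":
            # A wildcard stands in for exactly one left-most label.
            return host[1:] == cert[1:]
        return host == cert

    # Explicit SANs enumerate every permitted host:
    sans = ["example.com", "www.example.com", "mail.example.com"]
    print(any(name_matches("mail.example.com", n) for n in sans))    # True
    print(any(name_matches("login.example.com", n) for n in sans))   # False

    # A wildcard accepts any single-label host under the domain, whether or
    # not the domain holder ever intended that name to exist:
    print(name_matches("login.example.com", "*.example.com"))        # True
    print(name_matches("com.example.com", "*.example.com"))          # True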
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Criticism of Google Re: Google Trust Services roots

2017-04-24 Thread Peter Kurrasch via dev-security-policy
  I see what you're saying, and there should be some consideration for that scenario. If the acquiring company will keep all the same infrastructure and staff, and if decision making authority will remain with that staff, then I think it's reasonable to make that accommodation.

Using a word like "all" could be going too far, but at the moment I'm not sure how to strike a softer tone and still have something that is precise and enforceable.

From: Jakob Bohm via dev-security-policy
Sent: Monday, April 24, 2017 8:42 PM

On 25/04/2017 03:10, Peter Kurrasch wrote:
> Fair enough. I propose the following for consideration:
>
> Prior to transferring ownership of a root cert contained in the trusted
> store (either on an individual root basis or as part of a company
> acquisition), a public attestation must be given as to the intended
> management of the root upon completion of the transfer. "Intention" must
> be one of the following:
>
> A) The purchaser has been in compliance with Mozilla policies for more
> than 12 months and will continue to administer (operate? manage?) the
> root in accordance with those policies.
>
> B) The purchaser has not been in compliance with Mozilla policies for
> more than 12 months but will do so before the transfer takes place. The
> purchaser will then continue to administer/operate/manage the root in
> accordance with Mozilla policies.

How about:

B2) The purchaser is not part of the Mozilla root program and has not been so in the recent past, but intends to continue the program membership held by the seller. The purchaser intends to complete approval negotiations with the Mozilla root program before the transfer takes place. The purchaser intends to retain most of the expertise, personnel, equipment etc. involved in the operation of the CA, as will be detailed during such negotiations.

This, or some other wording, would be for a complete purchase of the business rather than a merge into an existing CA, similar to what happened when Symantec purchased Verisign's original CA business years ago, or (on a much smaller scale) when Nets purchased the TDC's CA business unit and renamed it as DanID.

> C) The purchaser does not intend to operate the root in accordance with
> Mozilla policies. Mozilla should remove trust from the root upon
> completion of the transfer.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Criticism of Google Re: Google Trust Services roots

2017-04-24 Thread Peter Kurrasch via dev-security-policy
  Fair enough. I propose the following for consideration:

Prior to transferring ownership of a root cert contained in the trusted store (either on an individual root basis or as part of a company acquisition), a public attestation must be given as to the intended management of the root upon completion of the transfer. "Intention" must be one of the following:

A) The purchaser has been in compliance with Mozilla policies for more than 12 months and will continue to administer (operate? manage?) the root in accordance with those policies.

B) The purchaser has not been in compliance with Mozilla policies for more than 12 months but will do so before the transfer takes place. The purchaser will then continue to administer/operate/manage the root in accordance with Mozilla policies.

C) The purchaser does not intend to operate the root in accordance with Mozilla policies. Mozilla should remove trust from the root upon completion of the transfer.

The wording of the above needs some polish and perhaps clarification. The idea is that the purchaser must be able to demonstrate some level of competence at running a CA--perhaps by first cutting their teeth as a sub-CA? If an organization is "on probation" with Mozilla, I don't think it makes sense to let them assume more control or responsibility for cert issuance, so there should be a mechanism to limit that.

I also think we should allow for the possibility that someone may legitimately want to remove a cert from the Mozilla program. Given the disruption that such a move can cause, it is much better to learn that up front so that appropriate plans can be made.

From: Gervase Markham via dev-security-policy
Sent: Tuesday, April 11, 2017 11:36 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Gervase Markham
Subject: Re: Criticism of Google Re: Google Trust Services roots

On 11/04/17 14:05, Peter Kurrasch wrote:
> Is there room to expand Mozilla policy in regards to ownership issues?

Subject to available time (which, as you might guess by the traffic in this group, there's not a lot of right now, given that this is not my only job) there's always room to reconsider policy. But what we need is a clearly-stated and compelling case that changing the way we think about these things would have significant and realisable benefits, and that any downsides are fairly enumerated and balanced against those gains.

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Criticism of Google Re: Google Trust Services roots

2017-04-11 Thread Peter Kurrasch via dev-security-policy
  I think Jakob was merely attempting to provide a more thought-out alternative to my proposal basically requiring potential CA owners to first be "accepted" into the Mozilla trusted root program. There is some overlap, yes, but the general idea is to be more prescriptive about ownership than the current policy states.

Is there room to expand Mozilla policy in regards to ownership issues?

From: Gervase Markham via dev-security-policy
Sent: Friday, April 7, 2017 4:50 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Gervase Markham
Subject: Re: Criticism of Google Re: Google Trust Services roots

On 06/04/17 18:42, Jakob Bohm wrote:
> Here are some ideas for reasonable new/enhanced policies (rough
> sketches to be discussed and honed before insertion into a future
> Mozilla policy version):

I'm not sure what's new or enhanced about them. Our current policies are not this prescriptive so CAs have more flexibility in how they go about things, but I believe they preserve the same security invariants.

In general, I suggest that if others have policy problems they see, you let them draft the solutions? :-)

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Criticism of Google Re: Google Trust Services roots

2017-04-05 Thread Peter Kurrasch via dev-security-policy
  I have no issue with the situations you describe below. Mozilla should act to encourage the good behaviors that we would want a new, acquiring CA to exhibit while prohibiting the bad--or at least limiting the damage those bad behaviors might cause. It's in this latter category that I think the current policy falls short.

Consider a situation in which I have a business called Easy Pete's Finishing School for Nigerian Princes. As the name might suggest, the nature of my business is to train potential scammer after potential scammer and set them free on the Internet to conduct whatever naughty things they like. It's a very lucrative business, so when I see a root cert coming up for sale it's a no-brainer for me to go out and purchase it. Having access to a root will undoubtedly come in handy as I grow my business.

Once I take possession of the root cert's private key and related assets, what will limit the bad actions that I intend to take? For the sake of appearances (to look like a good-guy CA) I'll apply to join the Mozilla root program, but I'm only really going through the motions--even in a year's time I don't really expect to be any closer to completing the necessary steps to become an actual member.

And it's true that I may be prohibited from issuing certs per Mozilla policy, but that actually is a bit of a squishy statement. For example, I'll still need to reissue certs to the existing customers as their certs expire or if they need rekeying. Perhaps I'll also get those clients to provide me with their private key so I may hold it for "safe keeping". Sure, it's a violation of the BR's, but I'm not concerned with that. Besides, it will take some time until anyone even figures out what I'm doing.

The other recourse in the current policy is to distrust the root cert altogether. Even then it will take time to take full effect and, who knows, maybe I can still use the root for code signing? And then there are the existing customers who are left holding a soon-to-be worthless cert.

Leaving behind this land of hypotheticals, it seems to me the policy as written is weaker than it ought to be. My own opinion is that only a member CA should be allowed to purchase a root cert (and assets), regardless of whether it's only one cert or the whole company. If that's going too far, I think details are needed for what "regular business operations" are allowed during the period between acquisition of the root and acceptance into the Mozilla root program. And should there be a maximum time allowed to become such a member?

From: Nick Lamb via dev-security-policy
Sent: Tuesday, April 4, 2017 3:42 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Nick Lamb
Subject: Re: Criticism of Google Re: Google Trust Services roots

On Monday, 3 April 2017 23:34:44 UTC+1, Peter Kurrasch wrote:
> I must be missing something still? The implication here is that a purchaser who is not yet part of the root program is permitted to take possession of the root cert private key and possibly the physical space, key personnel, networking infrastructure, revocation systems, and responsibility for subordinates without having first demonstrated any competence at running a CA organization.

This appears to me to simply be a fact, not a policy.

Suppose Honest Achmed's used car business has got him into serious debt. Facing bankruptcy, Achmed is ordered by a court to immediately sell the CA to another company, Rich & Dick LLC, which has never historically operated a CA but has made informal offers previously.

Now, Mozilla could say, OK, if that happens we'll immediately distrust the root. But to what end? This massively inconveniences everybody; there's no upside except that in the hypothetical scenario where Rich & Dick are bad guys the end users are protected (eventually, as distrust trickles out into the wild) from bad issuances they might make. But a single such issuance would trigger that distrust already under the policy as written, and we have no reason to suppose they're bad guys.

On the other hand, if Rich & Dick are actually an honest outfit, the policy as written lets them talk to Mozilla, make representations to m.d.s.policy and get themselves trusted, leaving the existing Honest Achmed subscrib

Re: Criticism of Google Re: Google Trust Services roots

2017-04-03 Thread Peter Kurrasch via dev-security-policy
I must be missing something still? The implication here is that a purchaser who 
is not yet part of the root program is permitted to take possession of the root 
cert private key and possibly the physical space, key personnel, networking 
infrastructure, revocation systems, and responsibility for subordinates without 
having first demonstrated any competence at ‎running a CA organization.

I think we need to beef up this section of the policy but if you'd prefer to 
discuss that at a later time (separate from the Google acquisition thread), 
that will work for me.


  Original Message  
From: Gervase Markham
Sent: Saturday, April 1, 2017 6:02 AM
To: Peter Kurrasch; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Criticism of Google Re: Google Trust Services roots

On 31/03/17 20:26, Peter Kurrasch wrote:
> The revised example is not entirely what I had in mind (more on that
> in a minute) but as written now is mostly OK by me. I do have a
> question as to whether the public discussion as mentioned must take
> place before the actual transfer? In other words, will Mozilla
> require that whatever entity is trying to purchase the root must be
> fully admitted into the root program before the transfer takes
> place?

No. As currently worded, it has to take place before issuance is permitted.

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Criticism of Google Re: Google Trust Services roots

2017-03-31 Thread Peter Kurrasch via dev-security-policy
The revised example is not entirely what I had in mind (more on that in a 
minute) but as written now is mostly OK by me. I do have a question as to 
whether the public discussion as mentioned must take place before the actual 
transfer? In other words, will Mozilla require that whatever entity is trying 
to purchase the root must be fully admitted into the root program before the 
transfer takes place?

Also, let me state that I did not intend to besmirch the names of either HARICA 
or WoSign and I appreciate their indulging my use of their names in what turned 
out to be a sloppy illustration. Based on my review of HARICA's CPS some months 
ago, I was left with the impression of them as a tightly-focused organization 
that, by all appearances, is well-run. And that was the image I had in mind and 
had hoped to convey in using their name. By mentioning WoSign I was really 
thinking of only the state of their reputation at the moment--and I think it's 
fair to say it's been tarnished? The reasons for WoSign being in the position 
they are in were totally irrelevant to what I had in mind.

So what was my point? In essence, I wanted to suggest that not every company 
seeking to purchase a root from another company will necessarily have good 
intentions and even if they do, their intentions might not be in the interest 
of the public good. I think it's important to at least acknowledge that 
possibility and try to have policies in place that encourage the good and limit 
the bad.

I don't know if people are on board with this notion or if some hypothetical 
scenarios are needed at this point? For now I'll just pause and let others 
either ask or comment away.


  Original Message  
From: Gervase Markham via dev-security-policy
Sent: Friday, March 31, 2017 12:28 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Gervase Markham
Subject: Re: Criticism of Google Re: Google Trust Services roots

On 31/03/17 17:39, Peter Bowen wrote:
>>> For example, how frequently should roots
>>> be allowed to change hands? What would Mozilla's response be if
>>> GalaxyTrust (an operator not in the program)
>>> were to say that they are acquiring the HARICA root?
>>
>> From the above URL: "In addition, if the receiving company is new to the
>> Mozilla root program, there must also be a public discussion regarding
>> their admittance to the root program."
>>
>> Without completing the necessary steps, GalaxyTrust would not be admitted to
>> the root program.
> 
> I've modified the quoted text a little to try to make this example
> clearer, as I think the prior example conflated multiple things and
> used language that did not help clarify the situation.
> 
> Is the revised example accurate?

The revised example is accurate.

Gerv

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Criticism of Google Re: Google Trust Services roots

2017-03-30 Thread Peter Kurrasch via dev-security-policy
By "not new", are you referring to Google being the second(?) instance where a 
company has purchased an individual root cert from another company? It's fair 
enough to say that Google isn't the first but I'm not aware of any commentary 
or airing of opposing viewpoints as to the suitability of this practice going 
forward.

Has Mozilla received any notification that other companies ‎intend to acquire 
individual roots from another CA? I wouldn't ask Mozilla to violate any 
non-disclosures but surely it's possible to let the community know if we should 
expect more of this? Ryan H. implied as much in a previous post but I wasn't 
sure where he was coming from on that.

Also, does Mozilla have any policies (requirements?) regarding individual root 
acquisition? For example, how frequently should roots be allowed to change 
hands? What would Mozilla's response be if WoSign were to say that because of 
the tarnishing of their own brand they are acquiring the HARICA root? What if 
Vladimir Putin were to make such a purchase? Any requirements on companies 
notifying the public when the acquisition takes place?

Perhaps this is putting too much of a burden on Mozilla as something of a 
protector of the global PKI, but I'm not sure who else is in a better position 
for that role?


  Original Message  
From: Gervase Markham via dev-security-policy
Sent: Thursday, March 30, 2017 1:06 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Gervase Markham
Subject: Re: Criticism of Google Re: Google Trust Services roots

On 29/03/17 20:46, Peter Kurrasch wrote:
> It's not inconsequential for Google to say: "From now on, nobody can
> trust what you see in the root certificate, even if some of it
> appears in the browser UI. The only way you can actually establish
> trust is to do frequent, possibly complicated research." It doesn't
> seem right that Google be allowed to unilaterally impose that change
> on the global PKI without any discussion from the security
> community.

As others in this thread have pointed out, this is not a new thing. I
wouldn't say that Google is "imposing" this need.

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Criticism of Google Re: Google Trust Services roots

2017-03-29 Thread Peter Kurrasch via dev-security-policy
I'm not so sure I want to optimize the system in that way, but I am concerned 
about the (un)intended consequences of rapidly changing root ownership on the 
global PKI.

It's not inconsequential for Google to say: "From now on, nobody can trust what 
you see in the root certificate, even if some of it appears in the browser UI. 
The only way you can actually establish trust is to do frequent, possibly 
complicated research." It doesn't seem right that Google be allowed to 
unilaterally impose that change on the global PKI without any discussion from 
the security community.

But you bring up a good point: there seems to be much interest of late in 
speeding up the cycle times for various activities within the global PKI, but 
it's not entirely clear to me what's driving it. My impression is that Google was 
keen to become a CA in their own right as quickly as possible, so is this 
interest based on what Google wants? Or is there a Mozilla mandate that I 
haven't seen (or someone else's mandate?)?


  Original Message  
From: Gervase Markham via dev-security-policy
Sent: Wednesday, March 29, 2017 9:48 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Gervase Markham
Subject: Re: Criticism of Google Re: Google Trust Services roots

On 29/03/17 15:35, Peter Kurrasch wrote:
> In other words, what used to be a trust anchor is now no better at
> establishing trust than the end-entity cert one is trying to validate or
> investigate (for example, in a forensic context) in the first place. I
> hardly think this redefinition of trust anchor improves the state of the
> global PKI and I sincerely hope it does not become a trend.

The trouble is, you want to optimise the system for people who make
individual personal trust decisions about individual roots. We would
like to optimise it for ubiquitous minimum-DV encryption, which requires
mechanisms permitting new market entrants on a timescale less than 5+ years.

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Criticism of Google Re: Google Trust Services roots

2017-03-29 Thread Peter Kurrasch via dev-security-policy
(Renaming the thread to fork this particular discussion from any others on this transaction.)

You raise a fair point, Ryan: I'm not well versed in how Let's Encrypt went about establishing their roots. I thought I had a good understanding but perhaps I had missed something. When I went to the Let's Encrypt site I found a diagram that showed one of their intermediates as cross-signed by IdenTrust; no mention of a root acquisition. Again, perhaps I've missed something so please let me know if that's the case. In an ironic way, this very uncertainty demonstrates the kind of confusion I was trying to highlight with my is-it-Google-or-GlobalSign comments.

I think Kurt and I are on the same page regarding the precedent being established in this and prior acquisitions. In fact, I probably incorporated some of his points in my own commentary. I'm not sure if he'll agree with what I view as the danger here: by purchasing a single root certificate, Google (and anyone else) is redefining the very concept of "trust anchor".

Back in the early SSL days, one could look at the root cert, see the name on it, and decide right away if it's trustworthy ("oh yeah, I know VeriSign"). In other words, trust was contained entirely within the sequence of bytes that make up the root cert combined with one's own experience.

A significant change to that model happened when CA companies began to be purchased by other companies. However, since they were complete acquisitions, it isn't so hard to add that one tidbit of knowledge to the thought process ("oh yeah, Symantec and VeriSign are the same"). So during this period of time trust was contained within the root certificate itself in conjunction with one's own experience and knowledge of CA company acquisitions.

As more roots entered the scene, it became more frequent for one to encounter a root CA company that was unfamiliar. However, one could decide to ignore it because it was in another jurisdiction (country, government institution, etc.) that was not pertinent. Or one could, in fairly short order, do some Internet research on the CA company and make a decision--including possibly reading their CP/CPS. So, in essence, the anchor of trust is contained within the root cert combined with one's own knowledge, experience, and some research.

Now, however, we are faced with an entirely different situation wherein the root certificate itself contributes little to establishing trust for a certificate chain. Any information contained in the cert is merely a starting point to research that particular cert: one must first determine who currently owns that root, possibly research the CA company if it's unfamiliar ("I know Google but what is this GTS all about?"), and probably find and read the CP/CPS to find the details about the new ownership ("so who is handling revocation now?").

The kicker in all this is that one must perform these steps for every root one encounters, every time one encounters it: Did GlobalSign sell any other roots since the last time I checked? Has Google transferred ownership of the GlobalSign roots they purchased on to someone else? What about this Amazon root or Comodo or...?

In other words, what used to be a trust anchor is now no better at establishing trust than the end-entity cert one is trying to validate or investigate (for example, in a forensic context) in the first place. I hardly think this redefinition of trust anchor improves the state of the global PKI and I sincerely hope it does not become a trend.

From: Kurt Roeckx via dev-security-policy
Sent: Thursday, March 23, 2017 11:24 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Kurt Roeckx
Subject: Re: Google Trust Services roots

On 2017-03-23 16:39, Ryan Sleevi wrote:
> On Thu, Mar 23, 2017 at 8:37 AM, Peter Kurrasch via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> I would be interested in knowing why Google felt it necessary to purchase
>> an existing root instead of, for example, pursuing a "new root" path along
>> the lines of what Let's Encrypt did? All I could gather from the Google
>> security blog is that they really want to be a root CA and to do it in a
>> hurry. Why the need to d
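To make the research steps described above concrete: the only identity information a relying party gets from the root itself is the distinguished name baked into it. A minimal sketch of pulling that out, assuming the Python "cryptography" package and a locally saved "root.pem" (both of which are my own illustrative assumptions, not anything referenced in the thread):

    # Sketch only: prints the identity information a root certificate carries.
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend

    with open("root.pem", "rb") as f:
        root = x509.load_pem_x509_certificate(f.read(), default_backend())

    # For a self-signed root the subject and issuer are the same name, and that
    # name (e.g. "GlobalSign") does not change when the key changes hands.
    print("Subject:", root.subject.rfc4514_string())
    print("Issuer: ", root.issuer.rfc4514_string())
    print("Valid:  ", root.not_valid_before, "to", root.not_valid_after)

Everything beyond that output--who operates the key today, which CP/CPS applies, who handles revocation--has to come from outside the certificate, which is exactly the point being argued above.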

Re: Google Trust Services roots

2017-03-23 Thread Peter Kurrasch via dev-security-policy
‎So this is the third of my 3 sets of criticisms regarding the acquisition of 
the GlobalSign roots by Google Trust Services. I apologize for the significant 
delay between the first 2 sets and this one. Hopefully people in the forum 
still feel this discussion relevant going forward even though the matter is 
considered resolved.

Several of the comments I made regarding GlobalSign also apply to Google, 
especially the intermingling of the two brands. The implications of the 
confusion and uncertainty leading to an exploitable attack are just as 
applicable to Google as they are to GlobalSign. However, when you consider the 
stature of Google both on the Internet and in consumer products, the 
ramifications are more significant. Or, if you will, the attack surface is 
quite different than if GlobalSign were to purchase a root from, say, Symantec.

I do fault Google for what I consider to be inadequate communication of the 
acquisition. This is not to say there's been no communication, just that I 
don't think it's enough, especially if you are not a CABF participant or don't 
keep up with Internet security news generally. Why not publish a message last 
October that regular folks on the Internet can understand? Why wait 3 months? 
Why expect people to dig through a CPS to find what should be readily available 
information? 

I would be interested in knowing why Google felt it necessary to purchase an 
existing root instead of, for example, pursuing a "new root" path along the 
lines of what Let's Encrypt did? All I could gather from the Google security 
blog is that they really want to be a root CA and to do it in a hurry. Why the 
need to do it quickly, especially given the risks (attack surface)?

I also would like to know what the plan or expectation is regarding formal 
separation between the infrastructures of Google and GlobalSign. The overlap is 
an understandable necessity during the transition but nonetheless presents 
another opportunity for improper access, loss (leaking) of data, and perhaps 
other nefarious activities. And, does Google have a published policy regarding 
the information collected, stored, and analyzed when people access the CRL and 
OCSP distribution nodes?
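For anyone who wants to see which CRL and OCSP endpoints a given certificate actually directs relying parties to, the URLs are embedded in the certificate itself. A minimal sketch, again assuming the Python "cryptography" package and a local "cert.pem" (both are my own assumptions, not anything stated in the thread):

    # Sketch only: prints the CRL distribution points and OCSP responder URLs
    # named in a certificate, so you can see whose infrastructure they point at.
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.x509.oid import ExtensionOID, AuthorityInformationAccessOID

    with open("cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read(), default_backend())

    try:
        crl = cert.extensions.get_extension_for_oid(ExtensionOID.CRL_DISTRIBUTION_POINTS)
        for point in crl.value:
            for name in (point.full_name or []):
                print("CRL: ", name.value)
    except x509.ExtensionNotFound:
        print("No CRL distribution points listed")

    try:
        aia = cert.extensions.get_extension_for_oid(ExtensionOID.AUTHORITY_INFORMATION_ACCESS)
        for desc in aia.value:
            if desc.access_method == AuthorityInformationAccessOID.OCSP:
                print("OCSP:", desc.access_location.value)
    except x509.ExtensionNotFound:
        print("No authority information access extension")

Of course, as the question above implies, the URL alone says nothing about who is collecting or analyzing the access logs behind it.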

I do want to say I appreciate that someone with Ryan H.'s level of experience 
is involved in a transaction like this. There are undoubtedly many details to 
address that ensure a secure and proper transfer. I hope that someone on the 
GlobalSign side was equally experienced? The next time someone wants to 
purchase existing roots, the people involved might not have that same level of 
experience, and that should give everyone pause.


  Original Message  
From: Ryan Hurst via dev-security-policy
Sent: Wednesday, March 8, 2017 12:02 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Reply To: Ryan Hurst
Subject: Re: Google Trust Services roots

> Jakob: An open question is how revocation and OCSP status for the 
> existing intermediaries issued by the acquired roots is handled. 

Google is responsible for producing CRLs for these roots. We are also currently
relying on the OCSP responder infrastructure of GlobalSign for this root but are
in the process of migrating that in-house.

> Jakob: Does GTS sign regularly updated CRLs published at the (GlobalSign) 
> URLs 
> listed in the CRL URL extensions in the GlobalSign operated non-expired 
> intermediaries? 

At this time Google produces CRLs and works with GlobalSign to publish those 
CRLs.

> Jakob: Hopefully these things are answered somewhere in the GTS CP/CPS for 
> the 
> acquired roots. 

This level of detail is not typically included in a CPS; for example, a service
may change which internet service provider or CDN service they use and not need
to update their CP/CPS.


> Jakob: Any relying party seeing the existing root in a chain would see the 
> name GlobalSign in the Issuer DN and naturally look to GlobalSign's 
> website and CP/CPS for additional information in trying to decide if 
> the chain should be trusted. 

The GlobalSign CPS indicates that the R2 and R4 are no longer under their 
control.

Additionally, given the long-term nature of CA keys, it is common for the DN not
to accurately represent the organization that controls it. As I mentioned in an
earlier response, in the 90’s I created roots for a company called Valicert that
has changed hands several times. Additionally, Verisign (now Symantec in this
context) has a long history of acquiring CAs, and as such they have CA
certificates with many different names within them.

> Jakob: A relying party might assume, without detailed checks, that these 
> roots 
> are operated exclusively by GlobalSign in accordance with GlobalSign's 
> good reputation. 

As the former CTO of GlobalSign I love hearing about their good reputation ;)

However I would say the CP/CPS is the authoritative document here, and since the
GMO GlobalSign CP/CPS clearly states the keys are no longer in their control I
believe this should not be an issue.

Criticism of GMO GlobalSign Re: Google Trust Services roots

2017-03-10 Thread Peter Kurrasch via dev-security-policy
This is my second of three forks of this discussion on the transfer of 2 GlobalSign roots. This thread focuses on GMO GlobalSign because in my estimation they have put themselves in a precarious position that warrants public discussion.

In previous comments I've made, I've expressed disapproval at the fact that there is no mention of the transfer on GlobalSign's website. I did not find it under News and Events, nor under the SSL sales pages, nor under the Resources page. I also could find no information about the existence of different roots that GlobalSign has and their intended use cases.

The search result that Ryan H. mentions below is in fact a curious situation. A direct link to the CPS is listed, sure, but if you go to the ".../resources" page directly there is no mention of the CPS. I would assume that at some point a subscriber is required to accept the terms of the CPS and is then presented with an opportunity to obtain the actual document, but what about the relying parties? Are relying parties allowed to obtain the document but only if they use a search engine to find it? Are we to expect that search engines will always and only return the correct version for the specific root in which I'm interested?

To be fair, I don't know that any of this constitutes a violation of any BR requirement or Mozilla policy--I assume not. I also assume that GlobalSign is not the only offender in this regard. Still, I expect better than this from any root CA participant; surely a CA can give me something rather than leave me with nothing?

All that said, it is not even what causes me the most concern, which is the intermingling of the GlobalSign and Google brands. Like it or not, there will always be questions like "Is this GlobalSign or is this Google?" and this creates a risk not only to GlobalSign but also to the Internet community. (There is a risk to Google as well but I'll address that in a separate thread.)

The risk is a result of the confusion and uncertainty that are introduced by the transfer of these 2 roots. Consider that right now I could launch a phishing campaign targeting GlobalSign subscribers with a message along the lines of "Did you know that GlobalSign has sold your certificate to Google? Click here to learn what you can do to protect your website." Should the person click on a link I might put the person on a fake login screen or a malware distribution point or engage in any other nefarious act of my choosing. For that matter I might try to sell the person my own certificate service: "Leave GlobalSign and Google behind! Protect the privacy of your website visitors and buy my service instead!"

The point here is that accuracy in my message is not needed. Instead, I can exploit the confusion and uncertainty (or, if you prefer, FUD), which can lead to damage to GlobalSign's reputation and possible loss of business. Conceivably this can also impact the global PKI if I'm able to gain unauthorized access to a subscriber's account and have certificates issued for my nefarious websites.

All of this to say that it actually is important that GlobalSign put messaging on their website and generally be proactive in limiting the chances for misinformation, confusion, and so forth to propagate across the Internet.

The last thing I'll mention is that I have questions as to whether GlobalSign has violated either their own CPS or privacy policy when it comes to their subscribers. Admittedly I haven't had a chance to review either document so it's quite possible I'm misinformed and I hope someone will correct me as appropriate. But the basic reasoning goes that there are some people who don't like Google and perhaps have chosen to use GlobalSign because they are not Google. Personally I think GlobalSign has an obligation to notify their subscribers with something to the effect of "after a certain date we will be sharing your payment information, certificate history, domain ownership, login activity to our Web portal, etc. with Google." However, if there are statements in either doc that have been violated, that is a more serious issue.

The exact information being shared with (or is now available to) Google has not been publicly disclosed so I couldn't say for sure what should be communicated. Still, I imagine there are subscribers who would be surprised to learn that information they thought was constrained to just GlobalSign is no longer so. I think it's only fair that subscribers (and relying parties) be offered a chance to opt out even if it means subscribers leaving GlobalSign for some other vendor. I don't know that such an offering has been made?

I do hope that more can be publicly disclosed about what information is shared between GlobalSign and Google--including if the data sharing is related to only the 2 roots that were acquired or all GlobalSign roots.

Criticism of Mozilla Re: Google Trust Services roots

2017-03-09 Thread Peter Kurrasch via dev-security-policy
I've changed the subject of this thread because I have criticisms of all 3
parties involved in this transaction and will be bringing them up
separately.

That said, "criticism" may be too strong a word in this case because I
think I do understand and appreciate the position that Mozilla is in
regarding confidential transactions such as this.  And while Mozilla's
actions seem perfectly reasonable, I can't help but think this transaction
is a train wreck in the making.  It would be good if Mozilla could prevent
train wrecks when possible and to do so probably requires updates to the
Root Transfer Policy.

To that end, here are some thoughts to get things going:

* Types of transfers:  I don't think the situation was envisioned where a
single root would be transferred between entities in such a way that
company names and branding would become intermingled.  My own personal
opinion is that such intermingling is not in the public interest and should
be prohibited.  That very likely could be too strong a stance for
Mozilla--which is fine--but it would be good to have Mozilla's position
clearly articulated should a situation like this arise again.

* Manner of transfer:  As we learned from Ryan H., a second HSM was
introduced for the transfer of the private key meaning that for a period of
time 2 copies of the private key were in existence.  Presumably one copy
was destroyed at some point, but I'm not familiar with any relevant
standards or requirements to know when/how that takes place.  Whatever the
case may be, this situation seems to fall outside of the Root Transfer
Policy as I now read it.  Also, did GlobalSign ever confirm to Mozilla that
they are no longer in possession of or otherwise have access to the private
key for those 2 roots?

* Conduct of the transfer:  I think an expectation should be set that the
"current holder of the trust" must be the one to drive the transfer.  Trust
can be handed to someone else; reaching in and taking trust doesn't sound
very...trustworthy?  To that end, I think the policy should state that the
current root holder must do more than merely notify Mozilla about the
change in ownership; the holder (and their auditor) must provide the
audits, attestations, and answers to questions that come up.  Only after
the transfer is complete would the "new holder" step in to perform those
duties.

* Public notification:  I appreciate that confidentiality is required when
business transactions are being discussed but at some point, in the
interest of transparency, the public must be notified of the transfer.  I
think this is implied (or assumed) in the current policy, but it might be
good to state explicitly that a public announcement must be made.  I would
add that making an announcement at a CABF meeting is all well and good, but
considering that most people on the Internet are not able to attend those
meetings it would be good if an announcement could be made in other forums
as well.


I imagine there are more things we'll want to discuss but hopefully this is
enough to start the discussion.


On Wed, Mar 8, 2017 at 9:54 AM, Gervase Markham  wrote:

> On 08/03/17 03:54, Peter Kurrasch wrote:
> > - Google has acquired 2 root certificates from GMO GlobalSign but not
> > the ‎company itself.
>
> Yes.
>
> > GMO GlobalSign will continue to own other roots and
> > will use only those other roots for the various products and services
> > they choose to offer going forward.
>
> Not quite. GMO GlobalSign continues to control some subCAs of the roots
> it sold to Google, and is using those (presumably) to wind down its
> interest in those roots over time or support customer migrations to
> other roots. This happens to include issuing EV certificates.
>
> > There is no affiliation or business
> > relationship between GMO GlobalSign and Google after the completion of
> > the acquisition.
>
> We don't have information on this; the terms of the deal, and indeed any
> other deals the two companies may have made, are not public.
>
> > - No public announcement of the acquisition was made prior to January
> > 26, 2017 via the Google security blog.
>
> Depends what you mean by announcement, but they applied in a public bug
> for inclusion in the Mozilla root program in December:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1325532
> and, I think, announced their intention in a publicly-minuted meeting of
> the CAB Forum in Redmond in mid-October 2016.
>
> > - No disclosure has been made regarding what specific items were
> > acquired, including such things as: "private key material" (HSM's and
> > whatnot); computer equipment used as web servers, OCSP responders, etc.;
> > domain names, IP addresses, and other infrastructure used in the
> > operations and 

Re: Google Trust Services roots

2017-03-09 Thread Peter Kurrasch via dev-security-policy
By definition, a CPS is the authoritative document on what root
certificates a CA operates and how they go about that operation.  If the
GlobalSign CPS has been updated to reflect the loss of their 2 roots,
that's fine.  Nobody is questioning that.

What is being questioned is whether updating the GlobalSign CPS is
sufficient to address the needs, concerns, questions, or myriad other
issues that are likely to come up in the minds of GlobalSign subscribers
and relying parties--and, for that matter, Google's own subscribers and
relying parties.  To that, I think the answer must be: "no, it's not
enough".  Most people on the internet have never heard of a CPS and of
those who have, few will have ever read one and fewer still will have read
the GlobalSign CPS.

It would be good if you could elaborate more on what steps Google will be
taking to communicate to the general public that GlobalSign means
GlobalSign except when it means Google and that sometimes Google will mean
GlobalSign as well.



On Wed, Mar 8, 2017 at 12:02 PM, Ryan Hurst via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > Jakob: An open question is how revocation and OCSP status for the
> > existing intermediaries issued by the acquired roots is handled.
>
> Google is responsible for producing CRLs for these roots. We are also
> currently relying on the OCSP responder infrastructure of GlobalSign for this
> root but are in the process of migrating that in-house.
>
> > Jakob: Does GTS sign regularly updated CRLs published at the
> (GlobalSign) URLs
> > listed in the CRL URL extensions in the GlobalSign operated non-expired
> > intermediaries?
>
> At this time Google produces CRLs and works with GlobalSign to publish
> those CRLs.
>
> > Jakob: Hopefully these things are answered somewhere in the GTS CP/CPS
> for the
> > acquired roots.
>
> This level of detail is not typically included in a CPS; for example, a
> service may change which internet service provider or CDN service they use
> and not need to update their CP/CPS.
>
>
> > Jakob: Any relying party seeing the existing root in a chain would see
> the
> > name GlobalSign in the Issuer DN and naturally look to GlobalSign's
> > website and CP/CPS for additional information in trying to decide if
> > the chain should be trusted.
>
> The GlobalSign CPS indicates that the R2 and R4 are no longer under their
> control.
>
> Additionally, given the long-term nature of CA keys, it is common for the DN
> not to accurately represent the organization that controls it. As I mentioned
> in an earlier response, in the 90’s I created roots for a company called
> Valicert that has changed hands several times. Additionally, Verisign (now
> Symantec in this context) has a long history of acquiring CAs, and as such
> they have CA certificates with many different names within them.
>
> > Jakob: A relying party might assume, without detailed checks, that these
> roots
> > are operated exclusively by GlobalSign in accordance with GlobalSign's
> > good reputation.
>
> As the former CTO of GlobalSign I love hearing about their good reputation
> ;)
>
> However I would say the CP/CPS is the authoritative document here, and since
> the GMO GlobalSign CP/CPS clearly states the keys are no longer in their
> control I believe this should not be an issue.
>
> > Jakob: Thus a clear notice that these "GlobalSign roots" are no longer
> > operated by GlobalSign at any entrypoint where a casual relying party
> > might go to check who "GlobalSign R?" is would be appropriate.
>
> I would argue the CA’s CP/CPS’s are the authoritative documents here and
> would satisfy this requirement.
>
> > Jakob: If possible, making Mozilla products present these as "Google",
> not
> > "GlobalSign" in short-form UIs (such as the certificate chain tree-like
> > display element).  Similarly for other root programs (for example, the
> > Microsoft root program could change the "friendly name" of these).
>
> I agree with Jakob here, given the frequency in which roots change hands,
> it would make
> sense to have an ability to do this. Microsoft maintains this capability
> that is made available
> to the owner.
>
> There are some limitations relative to where this domain information is
> used, for example
>  in the case of an EV certificate, if Google were to request Microsoft
> use this capability the
> EV badge would say verified by Google. This is because they display the
> root name for the
> EV badge. However, it is the subordinate CA in accordance with its CP/CPS
> that is responsible
> for vetting, as such the name displayed in this case should be GlobalSign.
>
> Despite these limitations, it may make sense in the case of Firefox to
> maintain a similar capability.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-poli

Re: Google Trust Services roots

2017-03-07 Thread Peter Kurrasch via dev-security-policy
Previous attempt had a major formatting snafu. Resending.

From: Peter Kurrasch
Sent: Tuesday, March 7, 2017 9:35 PM

I'm trying to keep score here but am having difficulties. Can someone confirm if the following statements are correct:

- Google has acquired 2 root certificates from GMO GlobalSign but not the company itself. GMO GlobalSign will continue to own other roots and will use only those other roots for the various products and services they choose to offer going forward. There is no affiliation or business relationship between GMO GlobalSign and Google after the completion of the acquisition.

- No public announcement of the acquisition was made prior to January 26, 2017 via the Google security blog.

- No disclosure has been made regarding what specific items were acquired, including such things as: "private key material" (HSM's and whatnot); computer equipment used as web servers, OCSP responders, etc.; domain names, IP addresses, and other infrastructure used in the operations and maintenance of the acquired roots; data such as subscriber lists, databases, server logs, payment details and histories, certificate issuance activities and histories, etc.; any access rights to physical space such as offices, data centers, development and test facilities, and so forth; and last, but not least, any personnel, documentation, training materials, or other knowledge products.

- The scope of impact to existing GlobalSign customers is not known. Neither GMO GlobalSign nor Google have notified any existing clients of the acquisition.

- The GlobalSign web site has no mention of this acquisition for reasons which are unknown. Further, the web site does not make their CP/CPS documents readily available, limiting the ability for current subscribers and relying parties to decide if continued use of a cert chaining up to these roots is acceptable to them.

- A relying party who takes the initiative to review a certificate chain that goes up to either of the acquired roots will see that it is anchored (or "verified by") GlobalSign. No mention of Google will be made anywhere in the user interface.

- Google has acquired these roots in order to better serve their subscribers, which are organizations (not people) throughout the many Google companies. Relying parties (i.e. end users of the various Google products) are not affected positively or negatively by this acquisition.

- Mozilla granted Google's request to keep the acquisition confidential based on statements made by Google and Google's auditor E&Y. Neither GlobalSign nor their auditors offered any opinion on this matter.

Thank you.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Trust Services roots

2017-03-07 Thread Peter Kurrasch via dev-security-policy
I'm trying to keep score here but am having difficulties. Can someone confirm if the following statements are correct:

- Google has acquired 2 root certificates from GMO GlobalSign but not the ‎company itself. GMO GlobalSign will continue to own other roots and will use only those other roots for the various products and services they choose to offer.

- No public announcement of the acquisition was made prior to January 26, 2017 via the Google security blog.

- No disclosure has been made regarding what specific items were acquired, including such things as: "private key material" (HSM's and whatnot); computer equipment used as web servers, OCSP responders, etc.; domain names, IP addresses, and other infrastructure used in the operations and maintenance of the acquired roots; data such as subscriber lists, server logs, payment details and histories, certificate issuance activities and histories, etc.; and any access rights to physical space such as offices, data centers, development and test facilities, and so forth.

- The scope of impact to existing GlobalSign customers is not known. Neither GMO GlobalSign nor Google have notified any existing clients of the acquisition.

- The GlobalSign web site has no mention of this acquisition for reasons which are unknown. Further, the web site does not make their CP/CPS documents readily available limiting the ability for current subscribers and relying parties to decide if continued use of a cert chaining up to these roots is acceptable for them.

- A relying party who actually takes the initiative to review a certificate chain will see that it is anchored (or "verified by") GlobalSign. No mention of Google will be made anywhere.

- Google has acquired these roots in order to "better serve" their subscribers, which are organizations (not people) throughout the many Google companies. Relying parties are not affected positively or negatively by this acquisition. 

- Mozilla granted Google's request to keep the acquisition confidential based on statements made by Google and Google's auditor E&Y. Neither GlobalSign nor their auditors offered any opinion on this matter.


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

Re: Let's Encrypt appears to issue a certificate for a domain that doesn't exist

2017-03-01 Thread Peter Kurrasch via dev-security-policy
What you've stumbled into is the unspoken truth that CA's want to have their 
cake and eat it too.

Specifically, they market themselves and their products under the umbrella of 
security: "You want to be secure on the Internet, right? We can help you do 
that!"‎ Then, most all CA's will turn around and say: "Oh by the way, if you 
encounter a situation in which something bad happens it's not our fault because 
we couldn't possibly ‎confirm that what we say is secure is in fact secure."

Let's Encrypt is not unique as I think most CA's express this viewpoint in one 
form or another. This may be acceptable from a legal or compliance standpoint 
but not in terms of reputation. As people find themselves in bad situations 
because of a malicious site that used one of their certs, they very well might 
blame the CA.

Certainly a CA does not want the reputation of being "the preferred cert vendor 
for malicious sites everywhere" so whatever principles they try to espouse, the 
reality will be more complicated.


  Original Message  
From: wuyi via dev-security-policy
Sent: Friday, February 24, 2017 1:57 AM
To: vtlynch; dev-security-policy
Reply To: wuyi
Subject: Re: Let's Encrypt appears to issue a certificate for a domain 
thatdoesn't exist

Exactly that’s the meaning of “entitle”.
Based on that interpretation, I understand that when a firefighter is on 
vacation, even if a fire breaks out next to him, it’s his choice to go hiking, 
fly kites, or whatever, as he may only be entitled to act on weekdays rather 
than while on vacation. 
IMO, the point of Let’s Encrypt’s CP 9.6.3 is to ensure that the CA has the 
privilege to revoke a certificate when a certain issue happens, as it describes. 
The deeper meaning of it is that it ensures the CA has the ability to maintain 
online security at any time, but it’s enough. I am not going to debate here; I 
would rather go and teach my grandfather that those green lock icons from a 
certain CA mean anything but the online security they state. 
Nio 
SZU


发自我的iPhone

-- Original --
From: Vincent Lynch via dev-security-policy 

Date: Fri,Feb 24,2017 15:10
To: dev-security-policy , wuyi 
<594346...@qq.com>
Subject: Re: Let's Encrypt appears to issue a certificate for a domain 
thatdoesn't exist



As you have quoted it, Let's Encrypt's CPS says:

"the CA is *entitled* to revoke the certificate"

The key word is "entitled." Meaning that Let's Encrypt may revoke the
certificate if they chose, but are not required to. Therefore not revoking
the certificate is compatible with their CPS.

It's important to realize this is not an argument about what *should* be
done or what we think is right, but what *can* be done under the current
rules and regulations.

The fact is that the CAB/F BR's are so (intentionally) ambiguous about
"high risk" certificates that there is no real meaning to that section
and no real way for a CA to violate said section.




As others have mentioned, please can we stop writing unrelated comments on
this thread. This thread is about a specific report about a DV cert being
issued to a non-existent domain. That report turned out to be false. Unless
comments are about that report, this is the wrong thread to post in.

I would also encourage anyone interested in the topic of high risk requests
and certs being issued to phishing sites to see Eric Mill's comment in this
thread. He has provided a link to past discussion on this topic, and I can
promise you that however displeasing and shocking this practice may be, it
has already been considered and debated.



On Fri, Feb 24, 2017 at 12:36 AM wuyi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> According to what I’ve known,
>
>
> “Acknowledgment and Acceptance: An acknowledgment and acceptance that the
> CA is entitled to revoke the certificate immediately if the Applicant were
> to violate the terms of the Subscriber or Terms of Use Agreement or if the
> CA discovers that the Certificate is being used to enable criminal
> activities such as phishing attacks, fraud, or the distribution of
> malware.” (Let’s Encrypt’ CP 9.6.3)
>
>
>
>
> Now that a phishing site has been detected with a valid certificate, but
> no immediate action was taken on it. I don’t think that this is what a CA,
> who states to “Support a more secure and privacy-respecting Web” is
> supposed to do.
>
>
>
>
> Quoted from Google’s Policy, “it would be no different than a firefighter
> who was not responding to fires in which people died.”
>
>
> It may be difficult to sort what types of domains are high risk, but when
> a certificate was used in this way without being revoked, it was no
> difference from the Google CP’s metaphor. As an Internet user I was
> disappointed about that, and those in the LE’ CP above can be treated as
> nonsense.
>
>
> Nio
> SZU
>
>
> On Fri, Feb 24, 2017 at 01:12:38AM +, Richard Wang via
> dev-security-policy wrote:
>
>
> > >I am sure this site: https://www.microsoftonline.us.com/ is a ph

Re: Incapsula via GlobalSign issued[ing] a certificate for non-existing domain (testslsslfeb20.me)

2017-03-01 Thread Peter Kurrasch via dev-security-policy
Would it be possible to get a more precise answer other than "in accordance 
with"? I am left to assume that in fact no verification was performed because 
the previous verification was in the 39 month window.
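For context, the narrowest form of the question--does this name even resolve in the public DNS right now?--is easy for anyone to check. A minimal sketch using only the Python standard library (the domain below is a placeholder of mine, not one from this thread):

    # Sketch only: a quick resolution check is a rough proxy for "the domain
    # exists today"; it says nothing about registration history or about which
    # of the BRs' domain validation methods was actually used or reused.
    import socket

    def resolves(name: str) -> bool:
        try:
            socket.getaddrinfo(name, None)
            return True
        except socket.gaierror:
            return False

    print(resolves("example.com"))  # placeholder domain

The question being asked of GlobalSign above is of course broader than this: which validation method was relied upon, and when it was performed.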


  Original Message  
From: douglas.beattie--- via dev-security-policy
Sent: Tuesday, February 28, 2017 6:46 AM‎

...snip...
> I also would like to have an official reply from GlobalSign saying that "on 
> the date they issue the certificate the domain exists".

On the date that the certificate was issued it was verified in accordance with 
the Domain Verification requirements in the BRs.

Doug Beattie
GlobalSign
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Release and revoke (was: Let's Encrypt appears to issue a certificate for a domain that doesn't exist)

2017-02-23 Thread Peter Kurrasch via dev-security-policy
By and large I'd say that Matt's no's should instead be yes's. If we adopt the 
standpoint that releasing a domain is equivalent to saying "I no longer use 
that name" then a revocation is equivalent to adding "...and anyone who does 
use that name must surely be an imposter."

In other words, we should give relying parties every opportunity to determine 
legitimate-or-fraud to the greatest extent possible. Granted the real world is 
not quite so simple but I think that's (part of?) the spirit of what we're here 
to do.


  Original Message  
From: Matt Palmer via dev-security-policy
Sent: Wednesday, February 22, 2017 10:32 PM
To: dev-security-policy@lists.mozilla.org
Reply To: Matt Palmer
Subject: Re: Let's Encrypt appears to issue a certificate for a domain that 
doesn't exist

On Wed, Feb 22, 2017 at 10:00:45PM -0500, George Macon via dev-security-policy 
wrote:
> On 2/22/17 7:30 PM, Gervase Markham wrote:
> > On Hacker News, Josh Aas writes:
> > Update: Squarespace has confirmed that they did register the domain and
> > then released it after getting a certificate from us."
> 
> In this case, should Squarespace have requested that the certificate be
> revoked before releasing the domain?

No.

> Is there a way to automatically detect that the domain was released? (I
> suspect the answer to this question is "not easily".)

There have been feeds provided in the past (they may still exist, but I
haven't needed to look for them for some years) for registered domains, I
don't know if something exists for expiration, but it certainly seems like
it, given the speed with which squatters appear able to pick up expired
domains.

> Would it make sense to prohibit certificate issuance during the grace
> period?

No.

- Matt

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Cerificate Concern about Cloudflare's DNS

2016-11-02 Thread Peter Kurrasch
It depends. If a CA just hands out a cert to anyone who manipulates DNS, that's 
one thing. If a CA (such as Comodo) has a formal agreement with another party 
(such as CloudFlare) to facilitate the issuance of certs, I think that's quite 
another. The former has all sorts of problems and I'm not so interested in 
rehashing them just now.

The latter, however, has not been formally addressed. I can envision scenarios 
where certs get mis-issued, people blame the CA for having some arrangement 
with CloudFlare (or whomever), and CA's scramble for cover from the storm that 
inevitably follows. I think it would be useful to have some ideas in place in 
advance of any such scenarios. 


  Original Message  
From: Kristian Fiskerstrand
Sent: Wednesday, November 2, 2016 5:41 PM
To: Peter Kurrasch; gerhard.tin...@gmail.com; 
mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Cerificate Concern about Cloudflare's DNS

On 11/02/2016 11:38 PM, Peter Kurrasch wrote:
> This raises an interesting point and I'd be interested in any comments
> that Comodo or other CA's might have.
> 

It really seems like a matter of discussion for the terms of agreement
and interaction between the user and service provider, and not a CA matter.


-- 

Kristian Fiskerstrand
Blog: https://blog.sumptuouscapital.com
Twitter: @krifisk

Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3

Aurum est Potestas
Gold is power

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Cerificate Concern about Cloudflare's DNS

2016-11-02 Thread Peter Kurrasch
This raises an interesting point and I'd be interested in any comments that Comodo or other CA's might have.

It appears we have a situation where a cert is being issued to what is presumably an authorized party yet that party is not actually authorized by the subscriber. How does Comodo or any other CA validate that a "domain manipulator" has been and continues to be authorized by the actual domain registrant? Is any attestation provided by a party (such as CloudFlare) that they have authorization from their own clients to do whatever they are doing?

It's in the interest of CA's to have some well thought-out plans here because I think we know who is getting the blame when the system breaks down. I don't think it would sit well if a CA were to come here and say "you can't blame us for the misissuance because we will give CloudFlare any cert they want."

From: gerhard.tin...@gmail.com
Sent: Wednesday, November 2, 2016 4:16 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Cerificate Concern about Cloudflare's DNS

Hi,

> Since you delegated your DNS server to Cloudflare, you implicitly allowed
> them to perform this certificate request on your behalf.

This is where I strongly disagree! I have checked the TOS and Security policy, ... etc. There is nowhere stated that Cloudflare is allowed, without the user's knowledge, to manipulate their DNS settings. That said, there is the proxy service they offer which is changing the DNS settings. But as you actively enable it, you are aware.

By delegating the DNS server to Cloudflare, you entrust Cloudflare to distribute the user-defined DNS settings. To be able to validate for the certificate, the DNS settings are changed without the user's knowledge. No TOS or any other policy states this. Even if that might not be an issue for the CA itself (which I do not agree on), this is definitely breaking the trust of its users.

And that the CA (Comodo) was informed about it and did not at least request a statement from Cloudflare means they support this, from my point of view, wrong behavior.

As it seems the only thing that can be done is move to a different DNS provider!! Still, this is a violation of trust!!!
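Since the disagreement above turns on who actually answers DNS for a zone once it has been delegated, a registrant can check that delegation directly. A minimal sketch, assuming the third-party "dnspython" package (the package choice and the domain are my own illustrative assumptions):

    # Sketch only: shows which nameservers the public DNS currently delegates a
    # zone to, i.e. who is in a position to publish records used for validation.
    import dns.resolver  # third-party package "dnspython"

    def nameservers(zone: str):
        answer = dns.resolver.resolve(zone, "NS")
        return sorted(rr.target.to_text() for rr in answer)

    for ns in nameservers("example.com"):  # placeholder zone
        print(ns)

If those names point at a third party, that third party is in a position to satisfy DNS-based validation checks, which is precisely the authorization question raised above.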
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Guang Dong Certificate Authority (GDCA) root inclusion request

2016-10-26 Thread Peter Kurrasch
I think these are both good points and my recommendation is that Mozilla deny GDCA's request for inclusion.

We should not have to explain something as basic as document versioning and version control. If GDCA cannot demonstrate sufficient controls over their documentation, there is no reason for the Internet community to place confidence in any of the other versioning systems that are needed to operate a CA.

Question: Are auditors expected to review translations of CP or CPS docs and verify consistency between them?

From: Jakob Bohm
Sent: Saturday, October 22, 2016 9:07 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Guang Dong Certificate Authority (GDCA) root inclusion request

On 21/10/2016 10:38, Han Yuwei wrote:
>
> I think this is a major mistake and an investigation should be conducted, for the CPS is a critical document about a CA. This is not just a translation problem but a version control problem. Sometimes it can be lying.
>

Let me try to be more specific:

When publishing a document called CPS version 4.3, the document with that number must have the same contents in all languages that have a document with that name and version number.

When making any change, even just correcting a mistyped URL, the document becomes a new document version which should have a new and larger number than the number of the document before the change.

Thus when a published document refers to a broken URL on your own server, it is often cheaper to repair the server than to publish a new document version. Some of the oldest CAs have been proudly publishing their various important files at multiple URLs corresponding to whatever was mentioned in old CP and CPS documents etc., only shutting down those URLs years after the corresponding CA roots were shut down.

There can also be a "draft" document which has no number and which contains the changes that will go into the next numbered edition. Such a "draft" would have no official significance, as it has not been officially "published". For a well-planned change, the final "draft" would be translated and checked in the relevant languages (e.g. Chinese with mainland writing system, Chinese with Hong Kong and Macao Special Administrative Regions old writing system, English), before simultaneously publishing the matching documents with the same number on the same day.

There are infinitely many version numbers in the universe to choose from. There are also computer programs that can generate new version numbers every time a draft is changed, but computers cannot decide when a version is good enough in all languages to make an official publication, and the computer-generated version numbers are often impractical for publication because they count all the small steps that were not published.

Enjoy

Jakob

-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Deception (was: WoSign: updated report and discussion)

2016-10-11 Thread Peter Kurrasch
I agree that it probably is not worth dwelling on the "Andy Ligg question" in 
particular but I think there is a broader issue at play which is worth 
addressing: deception.

I think there is ample evidence that WoSign engaged in a deliberate, 
persistent, and extensive campaign of deception committed against many 
different parties within the PKI ecosystem. In some cases the deception was 
committed by Richard Wang himself while in other cases it's less clear if the 
perpetrator was Richard or someone under his supervision.

I'd like to see something included in the summary report, although I'm the 
first to admit I don't know how best to do that. It seems to me the level of 
deceptive activity here falls well outside the norm of something more innocent, 
like being coy to protect a company's proprietary information. I don't think 
we've seen anything like this from other CA representatives in this forum.

If someone reads the report without having also participated in these 
discussions it's possible that he or she will not appreciate the difficulty 
we've had at times in getting at the truth of what has transpired. In fact, I 
think we continue to struggle to understand the extent of damage committed 
precisely because of the deception.

Again, I'm not sure the best way to capture this whole idea but I think it's 
something that should not be left unsaid. 


  Original Message  
From: Gervase Markham
Sent: Monday, October 10, 2016 5:45 AM
To: i...@matthijsmelissen.nl; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: WoSign: updated report and discussion

I don't believe this aspect of things is worth spending time on. However:

On 10/10/16 09:44, i...@matthijsmelissen.nl wrote:
> On Saturday, October 8, 2016 at 8:18:09 AM UTC+2, uri...@gmail.com
> wrote:
>> Did anyone ever determine if "Andy Ligg" is in fact a real person? 
>> (As discussed here 
>> https://groups.google.com/forum/#!msg/mozilla.dev.security.policy/0pqpLJ_lCJQ/7QRQ7oqGDwAJ
>> )
> 
> I believe Andy Ligg is a pseudonym of Richard Wang.
> 
> Have a look at this Bugzilla thread:
> https://bugzilla.mozilla.org/show_bug.cgi?id=851435 At 2015-03-12
> 08:43:09, some information related to Wosign is posted on behalf of
> Andy Li. 

This Bugzilla account was created in November 2014, presumably in order
to file this bug:
https://bugzilla.mozilla.org/show_bug.cgi?id=1106390

The email address associated with it, as anyone with a Bugzilla account
can see, is wosign at outlook dot com. Therefore, the Andy Li in
Bugzilla (not the same name as Andy Ligg, of course) claims to be
connected to WoSign, and was so long before they acquired StartCom.

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Incidents involving the CA WoSign

2016-10-11 Thread Peter Kurrasch
Don't sell your namesake domain short! Sure, the Google domains are subject to 
different types of attacks than most others but any domain with a cert has 
value. For example, I'd be happy to use gerv.net as a landing page for my spam 
campaign or as a phishing site or, even better, as a host for malware in my 
malvertising activities. 

All I'm saying is that revocation is valuable for everyone in all sorts of ways.


  Original Message  
From: Gervase Markham
Sent: Friday, October 7, 2016 4:37 AM
To: Peter Gutmann; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Incidents involving the CA WoSign

On 07/10/16 04:21, Peter Gutmann wrote:
> That still doesn't necessarily answer the question, Google have their CRLSets
> but they're more ineffective than effective in dealing with revocations
> (according to GRC, they're 98% ineffective,
> https://www.grc.com/revocation/crlsets.htm). 

That statistic assumes that all revocations are equal, which is clearly
not true. A revoked cert for www.google.com is orders of magnitude more
important to Chrome users than one for www.gerv.net.

Gerv

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


New Roots? (was: WoSign and StartCom)

2016-09-29 Thread Peter Kurrasch
I think we're well past the point where a "do-over" can be considered a 
reasonable remedy. The problem is not simply one in which certs were issued 
improperly nor is it simply one in which there were mistakes in the CA 
infrastructure. Such problems, I think, could fall under a category where 
starting over with new roots, new audits, etc. would seem acceptable. 

Rather, what we have here is basically a rogue operator that has threatened the 
trust and integrity of the global PKI system. Their conduct‎ has undermined 
efforts to establish and maintain security on the Internet (e.g. backdating 
SHA-1 certs). Their conduct has flouted rules and regulations for reasons that 
are still to this day not fully understood (e.g. ownership and problems with 
the auditing). Their conduct has caused undue consternation to web site owners 
(e.g. github) due to cert mis-issuance. Their conduct has put their own 
customers in a difficult position as they must now consider obtaining new certs 
for their websites.

Starting over with new roots won't remedy these problems.

  Original Message  
From: Stephen Schrauger
Sent: Tuesday, September 27, 2016 7:32 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: WoSign and StartCom

> > Should StartCom/WoSign be permitted to re-apply using the same roots,
> > or would they need new roots?
> 
> New roots. Considering the extent to which StartCom/WoSign have
> mismanaged things, there could be further misissued certificates
> chaining to their roots that we don't know about. The only way to
> protect the ecosystem from such certificates is to require new roots -
> roots that have only ever operated under the new audits that will be
> required by Mozilla.
> 
> Regards,
> Andrew

I agree that they should need new roots. But on top of the points Andrew makes, 
it would also require StartCom and WoSign to get cross-signed if they wish to 
continue supporting older devices that lack their new roots. 

They would have to regain the trust of another root CA who would be willing to 
cross-sign their new roots. Or else StartCom and WoSign would have to accept 
that new certificates created under their new root may not work on older 
devices, since older computers and embedded devices aren't always able to 
update their root stores.

Assuming they want new certificates to work on older devices, I imagine the 
need to be cross-signed would create another point of trust, since another CA 
willing to cross-sign would do their own audit and have added requirements.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: WoSign and StartCom: next steps

2016-09-29 Thread Peter Kurrasch
So if WoSign will not be present to discuss possible sanctions against WoSign, 
what are we to infer from that? Is Qihoo 360 acting in a capacity that is more 
than just an investor in WoSign? 

I'm trying not to get too far ahead of things, but this seems to be a very 
curious turn of events.


  Original Message  
From: Gervase Markham
Sent: Thursday, September 29, 2016 10:41 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: WoSign and StartCom: next steps

Hi everyone,

Following the publication of the recent investigative report,
representatives of Qihoo 360 and StartCom have requested a face-to-face
meeting with Mozilla. We have accepted, and that meeting will take place
next Tuesday in London.

After that, we expect to see a public response and proposal for
remediation from them, which will be discussed here before Mozilla makes
a final decision on the action we will take.

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Time to distrust

2016-09-26 Thread Peter Kurrasch
The actual revocation model I had in mind was a more friendly one where the 
cert holder wants it revoked for some reason. This was based on your research 
which showed that many WoSign clients were not using their WoSign-issued 
certs. (I don't remember if you indicated they had switched to a different cert 
issuer/provider.)

My thinking was that if we put together a white list, the idea is that those on 
the whitelist would migrate to a different CA over time. As that process takes 
place, presumably the WoSign certs would be revoked. So the question is, 
assuming those clients do bother to request revocation, can we trust that 
WoSign will honor those requests?

Certificate Transparency is something I should have included in my original 
list so thanks for bringing it up. I have a question though about what happens 
if WoSign fails to publish a cert to the CT logs: can they get away with it, 
and does the answer change if the user is on one side of the Great Firewall or 
the other?
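Certificate Transparency at least gives an outside observer something to check independently of the CA. A minimal sketch of querying the crt.sh log aggregator for certificates naming a domain; note that crt.sh's JSON output is an unofficial interface and the domain is a placeholder of mine:

    # Sketch only: lists certificates the crt.sh aggregator has seen for a
    # domain, so a site owner can spot issuances they did not expect.
    import json
    import urllib.parse
    import urllib.request

    def ct_entries(domain: str):
        url = "https://crt.sh/?q=" + urllib.parse.quote(domain) + "&output=json"
        with urllib.request.urlopen(url, timeout=30) as resp:
            return json.loads(resp.read().decode("utf-8"))

    for entry in ct_entries("example.com"):  # placeholder domain
        print(entry.get("not_before"), entry.get("issuer_name"))

As the question above notes, this only shows what was actually logged; a certificate the CA chose not to submit will simply not appear, which is the gap browser-side CT enforcement is meant to close.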

Finally, I have no problem with the exploration of alternatives to outright 
distrust. That said, I did want to help flesh out how we might know when 
distrust is the right decision to make and what that might look like--beyond 
the obvious impact it has to current cert holders.


  Original Message  
From: Ryan Sleevi
Sent: Friday, September 23, 2016 10:27 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Time to distrust

On Friday, September 23, 2016 at 6:03:01 AM UTC-7, Peter Kurrasch wrote:

> * Revocation: If a particular cert has been revoked for any reason, I should 
> be able to find that out so that I will know not to use it. Ideally this is 
> handled automatically in software but for various reasons it doesn't always 
> work out that way. I'm not sure if the manual tools are all that robust (or 
> exist?), but that almost doesn't matter because I'm dependent either way on 
> the issuer of the cert to issue the proper revocation. In the case of WoSign, 
> there are documented cases where certs were issued improperly. (I'm not sure 
> if we have documented cases where revocations were made improperly?)
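On the question of whether robust manual tools exist: a relying party can ask an OCSP responder about a single certificate directly. A minimal sketch, assuming the Python "cryptography" package and local PEM files for the certificate and its issuer (all of these are my own illustrative assumptions, not tooling referenced in this thread):

    # Sketch only: builds an OCSP request for one certificate and asks the
    # responder named in that certificate's AIA extension for its status.
    import urllib.request
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp
    from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

    def load(path):
        with open(path, "rb") as f:
            return x509.load_pem_x509_certificate(f.read(), default_backend())

    cert, issuer = load("cert.pem"), load("issuer.pem")

    aia = cert.extensions.get_extension_for_oid(ExtensionOID.AUTHORITY_INFORMATION_ACCESS)
    ocsp_url = next(d.access_location.value for d in aia.value
                    if d.access_method == AuthorityInformationAccessOID.OCSP)

    req = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1()).build()
    http_req = urllib.request.Request(
        ocsp_url,
        data=req.public_bytes(serialization.Encoding.DER),
        headers={"Content-Type": "application/ocsp-request"},
    )
    with urllib.request.urlopen(http_req, timeout=30) as resp:
        answer = ocsp.load_der_ocsp_response(resp.read())

    print("Responder status:  ", answer.response_status)
    if answer.response_status == ocsp.OCSPResponseStatus.SUCCESSFUL:
        print("Certificate status:", answer.certificate_status)

This is exactly the sort of check that, as the reply below explains, an on-path attacker can block or a stapled GOOD response can pre-empt, which is why it is a weak basis for trust decisions.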

Note: In pretty much all major software, automatic revocation doesn't work 
under an adversarial threat model. That is, you can consider two types of 
misissuance: Procedural misissuance (perhaps it states Yahoo rather than 
Google) and adversarial misissuance (that is, someone attempting to 
impersonate). I'm handwaving a bit here, because it's more of a spectrum, but 
we might broadly lump it as "stupidity" that causes insecurity and "evil" that 
causes insecurity.

Stupidity is important/relevant to trust decisions, to some extent. However, in 
practice, I doubt you're examining each certificate presented to you on each 
connection *before* you send data on that connection, which is what the CPS's 
are worded to suggest, so it's unlikely you're making major trust decisions 
on the basis that the certificate says it was for "Yahoo".

So you're most likely concerned about 'evil' misissuance - those that an 
adversarial attacker is attempting to gain access to your private 
communications. And it's under this model that revocation doesn't work, due to 
the attacker's ability to perform various exploits (such as blocking 
communication to revocation servers, or stapling an OCSP GOOD response).

For this reason, I would encourage you not to think of Revocation as an 
important factor of trust. I agree that you want to know if a certificate is 
revoked in the abstract, and if we have reliable solutions that can help 
facilitate that (e.g. OneCRL or CRLSets are more reliable means of revocation), 
then great, but we're not at a place yet where revocation is an intrinsic part 
of the secure connection establishment.

> 
> So here's the point I wish to make: If I'm presented with a certificate 
> issued by WoSign and I'm told I have to decide for myself if I should trust 
> it, I really don't see how I have any choice but to refuse to trust the cert 
> even though my cert validation software might say it's OK. The above 
> mechanisms available to me as an end user seem to be hopelessly compromised 
> by WoSign's actions over the past 10 or so months.

You missed some other additional characteristics to consider, and ones that I 
think significantly affect the conclusions made.

The first is a consideration of risk. The decision to trust or distrust - and 
this isn't just among CAs or certificates - is more than just a binary yes or 
no. It's a spectrum that varies with the risk involved. For example, if you 
said I had a 1% of getting a papercut when doing some action, I'd probably be 
willing to entertain that risk, but if you were to say I had a 1% risk of 
dying, then I'

Re: Time to distrust

2016-09-23 Thread Peter Kurrasch
e rush to production of an obviously flawed system clearly points to a lack of 
testing before its deployment.)

So here's the point I wish to make: If I'm presented with a certificate issued 
by WoSign and I'm told I have to decide for myself if I should trust it, I 
really don't see how I have any choice but to refuse to trust the cert even 
though my cert validation software might say it's OK. The above mechanisms 
available to me as an end user seem to be hopelessly compromised by WoSign's 
actions over the past 10 or so months.

Admittedly this is a lot to throw out at once and I know I've probably made 
mistakes or misstatements that don't quite square with the ongoing 
investigations or "findings of fact", if you will. I've probably overlooked 
something as well. I hope people will correct me or ask for clarifications 
where needed.

I also recognize that I've still said nothing about the impacts that removing 
trust has on the cert holders (website operators, mostly). Doing so is a 
deliberate move on my part just to keep this particular message of manageable 
size and scope. I'd be happy to discuss those impacts at another time.

Finally, I fully acknowledge that any decision to remove trust from a root 
certificate creates a significant disruption to the PKI ecosystem and the 
Internet as a whole. It is not an action to be taken capriciously but only 
after very careful consideration and thorough discussion. It's good that we 
have had and continue to have these discussions here in this forum.

From: Gervase Markham <g...@mozilla.org>

On 22/09/16 03:00, Peter Kurrasch wrote:
> Well, well. Here we are again, Ryan, with you launching into a bullying,
> personal attack on me instead of seeking to understand where I'm coming
> from and why I say the things I say.

Er, no. I am entirely comfortable with saying that if you found Ryan's 
message to be a bullying, personal attack then your skin is too thin. (Which 
would surprise me, given what I know of you.)

Ryan's message, while possibly carrying a slightly exasperated tone, was a 
reasonable exposition of the trade-offs inherent in various options for 
dis-trusting a CA, trade-offs which you seem unwilling to recognise. I'm sad 
that you don't see this as a set of trade-offs, but perhaps there's little I 
or Ryan can do about it.

> If Kathleen or Gerv or Richard decide that I'm
> disruptive and not providing any value to the wider population, I'll
> happily withdraw from this forum.

I am not requesting that you withdraw, although you should know that the 
level of account taken of what you say is approximately proportional to the 
level of understanding that you show of the perspectives of all parties 
involved - including those currently using WoSign certificates for their 
sites.

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Time to distrust (was: Sanctions short of distrust)

2016-09-21 Thread Peter Kurrasch
  Well, well. Here we are again, Ryan, with you launching into a bullying, 
personal attack on me instead of seeking to understand where I'm coming from 
and why I say the things I say. You may have noticed that I do not reply to 
your messages because I generally find your tone to be disrespectful of others 
and occasionally narrow-minded. This time, however, I will reply.

You know nothing of my knowledge, my experience, or the things that I need to 
be taught. You are not the arbiter of what is reasonable or sensible in this 
forum. I hardly think you are the one person in here to determine if people 
agree or disagree with me--and, for that matter, who qualifies as an expert. 
If Kathleen or Gerv or Richard decide that I'm disruptive and not providing 
any value to the wider population, I'll happily withdraw from this forum. I 
participate here because I enjoy it, though I obviously don't enjoy being 
attacked personally.

If I am a fool, let me be a fool. If I say things that don't make sense and 
you seek to know why, ask me questions. If you see no value in what I have to 
say, so be it; others in this forum might think otherwise. That's all I ask 
of anyone.

From: Ryan Sleevi
Sent: Wednesday, September 21, 2016 2:27 PM

...snip...

It's unclear who you're referring to here. I think, judging by some of your 
replies, that some of the experts in this space don't agree with you or your 
conclusions, but this may simply be a teachable opportunity.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Time to distrust (was: Sanctions short of distrust)

2016-09-21 Thread Peter Kurrasch
  I have a hard time seeing how any sort of white list solution will actually 
mitigate any of the bad behavior exhibited by WoSign.

From my perspective, I think we can make a pretty clear case that WoSign is a 
poorly run CA and poses a threat to the secure Internet that many of us are 
trying to achieve. They have many, serious bugs in their systems. Their 
responsiveness in fixing these problems is slow. Their understanding of 
security threats is limited. Their interest in compliance seems minimal. Their 
willingness to be forthright, honest, and open in this forum can only be 
described as unacceptable.

So the problem I have with a white list is the implication that while we don't 
trust the CA to issue new certs, we do have trust in the continued operation 
of other parts of the CA. Chief among these is revocation as cert holders move 
away from WoSign to a new CA. Do we trust that WoSign will honor requests for 
certs to be revoked? Do we trust that revocation will take place in a timely 
manner? Do we trust that WoSign will not collect information on hits to any 
OCSP responders they have set up and share that info with...whomever?

I'm just having a hard time seeing how there is anything left to trust when it 
comes to WoSign. Maybe the best outcome would be a finding of irreconcilable 
differences and for us to go our separate ways? Maybe we just want different 
things in a global PKI system?

From: Peter Bowen
Sent: Wednesday, September 21, 2016 10:04 AM
To: Gervase Markham
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Sanctions short of distrust

On Wed, Sep 21, 2016 at 2:21 AM, Gervase Markham wrote:
> On 12/09/16 22:46, Ryan Sleevi wrote:
>> Consider if we start with the list of certificates issued by StartCom
>> and WoSign, assuming the two are the same party (as all reasonable
>> evidence suggests). Extract the subjectAltName from every one of
>> these certificates, and then compare against the Alexa Top 1M. This
>> yields more than 60K certificates, at 1920K in a 'naive' whitelist.
>> However, if you compare based on base domain (as it appears in
>> Alexa), you end up with 18,763 unique names, with a much better
>> compressibility. For example, when compared with Chrome's Public
>> Suffix List DAFSA implementation (as one such compressed data
>> structure implementation), this ends up occupying 126K of storage.
>
> Can you tell us how many unique base domains (PSL+1) there are across
> WoSign and StartCom's entire certificate corpus, and what that might
> look like as a DAFSA?

I'm not sure about a DAFSA, but I wrote a semi-naive implementation of a 
compressed trie and got 1592272 bytes. That is assuming each issuer has its 
own trie. It could be optimized to be smaller if it was just a single trie of 
eTLD+1 for all issuers.

Thanks,
Peter
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
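
A minimal sketch of the kind of domain-whitelist trie discussed above follows.
It is illustrative only (not Peter Bowen's implementation, and not a DAFSA),
and it ignores the Public Suffix List, which real code would need in order to
derive the eTLD+1 of each name.

    # Minimal whitelist trie keyed on domain labels, stored in reverse so that
    # shared suffixes collapse into shared prefixes. Illustrative only.
    class DomainTrie:
        def __init__(self):
            self.root = {}

        def add(self, domain):
            node = self.root
            for label in reversed(domain.lower().split(".")):
                node = node.setdefault(label, {})
            node["$"] = True  # marks the end of a whitelisted name

        def contains(self, domain):
            node = self.root
            for label in reversed(domain.lower().split(".")):
                if label not in node:
                    return False
                node = node[label]
            return "$" in node

    if __name__ == "__main__":
        trie = DomainTrie()
        trie.add("example.com")
        trie.add("example.org")
        print(trie.contains("example.com"))   # True
        print(trie.contains("attacker.com"))  # False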


Re: Cerificate Concern about Cloudflare's DNS

2016-09-20 Thread Peter Kurrasch
[removing Matt from the cc list]

OK, I thought it was established that CloudFlare used the DNS method but if 
it's unknown, that's fine.

I agree that the magic-email-address method and the modify-your-website method 
are horrible. They are entirely too easy to compromise, making it hard to reach 
a firm conclusion in all cases that the person requesting the cert should 
rightfully receive it. I think the WHOIS-based method is marginally better but 
is still susceptible to compromise by bad actors.

So the DNS-based method does come out ahead on a list of weak authentication 
methods. Of course this method can be compromised if you hand over control to 
someone else (either knowingly or unknowingly). Or if the account you use to 
control your DNS entries is easily hacked. Or if a sufficiently motivated 
attacker engages in DNS cache poisoning.

Of course all of these methods require that the CA has implemented the checks 
correctly on their side.
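
For illustration, the agreed-upon-DNS-change style of check works roughly like
the sketch below. The record label and token are placeholders rather than any
particular CA's or protocol's exact mechanism, and it assumes the dnspython
package (2.x) is installed.

    # Minimal sketch of a CA-side DNS validation check: has the applicant
    # published the agreed-upon token in a TXT record?
    import dns.exception
    import dns.resolver

    def dns_change_confirmed(domain, expected_token):
        record_name = "_ca-validation." + domain   # illustrative label
        try:
            answers = dns.resolver.resolve(record_name, "TXT")
        except dns.exception.DNSException:
            return False
        published = {b"".join(r.strings).decode("utf-8", "replace")
                     for r in answers}
        return expected_token in published

    if __name__ == "__main__":
        print(dns_change_confirmed("example.com", "token-issued-by-the-ca"))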


  Original Message  
From: Matt Palmer
Sent: Tuesday, September 20, 2016 8:01 PM
To: dev-security-policy@lists.mozilla.org
Subject: Re: Cerificate Concern about Cloudflare's DNS

[No need to Cc me; I read the list]

On Tue, Sep 20, 2016 at 05:37:03PM -0500, Peter Kurrasch wrote:
> From: Matt Palmer
...snip...‎
>
> I took Florian's comment to mean that the structure of what CloudFlare is
> doing is essentially a proxy service that is able to manipulate DNS entries
> to obtain a certificate. In the case of CloudFlare, we understand their
> business and thus presume that the action is OK. For anybody else,
> though...???
> 
> Put another way: Just because you can manipulate DNS entries does not
> necessarily mean you are the right person to receive a cert. Rather, it's
> "I hope you are the right person".‎ If Florian had a different
> meaning, though, it would be good to get him to clarify that.

There's no indication that Cloudflare used DNS, specifically, to prove
control of any of the validated names in the certificate. All of the names
were, at one time or another (and all bar one still are) resolving to a
Cloudflare IP. It's unfortunate (though understandable) that Comodo weren't
able or willing to disclose the validation method used, but since every name
in the cert is, or was at some point, provided HTTP service by Cloudflare,
it seems reasonable to believe that was the method of control validation
used in this instance.

Frankly, though, to my mind DNS is the *best* (or, if you prefer, *least
worst*) way of demonstrating control of a name -- because that's where the
name originates from. Blessed e-mail addresses and "can respond to HTTP"
are far less compelling answers to the question, "does the applicant have
effective control of the name(s) being validated?". Control over DNS
allows you to subvert any other method of control validation.

Thus, be careful who you grant control over your DNS records. End of story.

- Matt

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Cerificate Concern about Cloudflare's DNS

2016-09-20 Thread Peter Kurrasch
I took Florian's comment to mean that the structure of what CloudFlare is doing 
is essentially a proxy service that is able to manipulate DNS entries to obtain a 
certificate. In the case of CloudFlare, we understand their business and thus 
presume that the action is OK. For anybody else, though...???

Put another way: Just because you can manipulate DNS entries does not 
necessarily mean you are the right person to receive a cert. Rather, it's "I 
hope you are the right person".‎ If Florian had a different meaning, though, it 
would be good to get him to clarify that.


  Original Message  
From: Matt Palmer
Sent: Saturday, September 17, 2016 6:27 PM
To: dev-security-policy@lists.mozilla.org
Subject: Re: Cerificate Concern about Cloudflare's DNS

On Sat, Sep 17, 2016 at 04:38:50PM +0200, Florian Weimer wrote:
> * Peter Bowen:
> 
> > On Sat, Sep 10, 2016 at 10:40 PM, Han Yuwei  wrote:
> >> So when I delegated the DNS service to Cloudflare, Cloudflare have
> >> the privilege to issue the certificate by default? Can I understand
> >> like that?
> >
> > I would guess that they have a clause in their terms of service or
> > customer agreement that says they can update records in the DNS zone
> > and/or calls out that the subscriber consents to them getting a
> > certificate for any domain name hosted on CloudFlare DNS.
> 
> I find it difficult to believe that the policies permit Cloudflare's
> behavior, but are expected to prevent the issue of interception
> certificates. Aren't they rather similar, structurally?

I'm not seeing any similarity, but I don't understand your use of
"structurally", so if you could expand on your meaning, that would be
useful.

- Matt

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Cerificate Concern about Cloudflare's DNS

2016-09-15 Thread Peter Kurrasch
  I see a few takeaways here. For consideration:

1) 39 months is entirely too much time to elapse before a cert requester 
should re-demonstrate control over a domain. Too much can happen in that time, 
including the abandonment of a domain name itself.

2) To that point, CA's should probably check that a domain is still active 
before every cert issuance--that it has not expired, been deleted, or been 
re-purchased by someone else prior to deletion.

3) Many (all?) CPS documents seem to have a clause about relying parties 
saying, in essence, that it's up to the relying party to decide when to trust 
the cert. There are many reasons I think such clauses are not enforceable but 
the main reason in this case is that the relying party must not only perform 
the technical validation but must also familiarize oneself with the business 
model of the cert holder. In this case, I see multiple domains listed that 
seem unrelated, except that they happen to use the same CDN service and that 
it's because of the CDN that the domains appear together. In other cases where 
seemingly unrelated domains appear, what am I to conclude?

4) DNS proxies and, now, certificate proxies are problematic because they 
obfuscate the true owner and thus the true intentions for holding a 
certificate. In this case, the true owner of the domain has no intention of 
actually using a certificate. Yet, as a relying party, what am I to conclude 
if I connect to a server with his domain name that is offering up a 
certificate? Basically I have no choice but to accept the connection even 
though I might be putting myself in harm's way. Merely demonstrating control 
of a domain doesn't necessarily mean that issuance of a cert is appropriate, 
so how will CA's make the right decisions so that relying parties will 
continue to trust them?

From: Jakob Bohm
Sent: Monday, September 12, 2016 5:51 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Cerificate Concern about Cloudflare's DNS

On 12/09/2016 23:48, Ryan Sleevi wrote:
> On Monday, September 12, 2016 at 2:33:47 PM UTC-7, Jakob Bohm wrote:
>> I find fault in CloudFlare (presuming the story is actually as
>> reported).
>
> Why? Apologies, but I fail to see what you believe is "wrong", given how 
> multiple people have pointed to you it being well-understood and 
> permissible, under past and present guidelines.

Note that this is *entirely* outside CA/B and CA inclusion related guidelines, 
since CloudFlare is (presumably) not a CA and thus not subject to such 
guidelines.

I am saying that they are (if the story is true) morally at fault for 
requesting a certificate that the domain owner did not authorize them to 
request, abusing their job of handling some technical aspects of the domain 
owner's operation.

The common equivalent would be a network administrator requesting a 
certificate that his boss had not authorized him to request. There is no way 
an outside CA could know that such authorization had not been given if that 
employee was in a position where the only difference between a real or bad 
request would be what their boss did or did not tell the employee to do.

This common equivalent would be sufficiently common (on a worldwide 
statistical basis), that it would be useful for large CAs to have standard 
procedures and policies (be they manual or automated, public or internal) for 
handling such situations. The defining characteristic would be "this person 
claims to outrank the original certificate requestor and is requesting 
revocation of a certificate without having access to the files etc. involved 
in the original request".

>> From the story as reported, Comodo had every reason to believe that
>> CloudFlare was authorized by the domain owner to request that DV cert,
>> and had no additional preemptive tests to do (baring a future finding
>> that CloudFlare should be blacklisted from requesting DV certificates,
>> which would require a large number of cases given the huge number of
>> domains they handle without objection by domain owners).
>
> This gets further into "What you're proposing doesn't exist" territory, 
> such as the notion of blacklisting an organization from requesting a DV 
> cert, when the whole notion of DV is that the only thing validated is the 
> domain (not the organization operating the domain or requesting the cert)

I was arguing *against* adding CloudFlare to such a list, even if it existed. 
And I would presume that
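
A minimal sketch of the pre-issuance liveness check suggested in point 2 above
follows. It is DNS-only (a real CA would also consult registry RDAP/WHOIS
data), the domains are placeholders, and it assumes the dnspython package
(2.x) is installed.

    # Minimal sketch: does the registrable domain still answer with an SOA
    # record in the public DNS? Diagnostic only.
    import dns.exception
    import dns.resolver

    def domain_appears_active(domain):
        try:
            dns.resolver.resolve(domain, "SOA")
            return True
        except dns.exception.DNSException:
            return False

    if __name__ == "__main__":
        for name in ("example.com", "this-domain-should-not-exist-12345.com"):
            print(name, domain_appears_active(name))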

Re: Cerificate Concern about Cloudflare's DNS

2016-09-12 Thread Peter Kurrasch
Hi Erwann, 

I was thinking of more the server (cloud) side of things. I'm not familiar 
enough with Cloudflare's service, but I imagine that if I have a server set up 
I will also have access to my private key. If so, I now have access to the 
private key of the other domains. Perhaps there are protections set up?

Thanks for letting me know about the BR stipulation. I was hoping it would say 
something but didn't know what. 39 months seems too long though. A lot can 
happen in 3.5 years.


  Original Message  
From: Erwann Abalea
Sent: Monday, September 12, 2016 7:41 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Cerificate Concern about Cloudflare's DNS

Bonjour,

Le lundi 12 septembre 2016 14:30:56 UTC+2, Peter Kurrasch a écrit :
> I noticed there are several other domains listed on that cert besides Han's 
> (and wildcard versions for each). Unless Han is the registrar or has some 
> other affiliation with those domains, it seems to me there is a risk of some 
> private key compromise situation.

How is the risk of key compromise higher because there are several domain names 
in the certificate?

> Also, if I want to add a new domain to a cert that has several other domains 
> already on it, will I need to demonstrate control over all of the domains or 
> only the new one?

For a DV, if you demonstrated control less than 39 months ago, the CA MAY keep 
the result and issue the certificate for the previously verified domains.

Again, this is in the Baseline Requirements, not in this particular CA's CPS.
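
For illustration, the reuse-window arithmetic Erwann describes looks roughly
like the sketch below. Months are approximated as 30.44 days purely for
illustration; this is not the Baseline Requirements' exact wording.

    # Minimal sketch of the 39-month validation-reuse window.
    from datetime import date, timedelta

    REUSE_WINDOW = timedelta(days=int(39 * 30.44))  # ~39 months, approximated

    def validation_still_reusable(validated_on, today=None):
        today = today or date.today()
        return today - validated_on <= REUSE_WINDOW

    if __name__ == "__main__":
        # False: just over 39 months have elapsed between these two dates.
        print(validation_still_reusable(date(2013, 6, 1), date(2016, 9, 12)))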
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Cerificate Concern about Cloudflare's DNS

2016-09-12 Thread Peter Kurrasch
I noticed there are several other domains listed on that cert besides Han's 
(and wildcard versions for each). Unless Han is the registrar or has some 
other affiliation with those domains, it seems to me there is a risk of some 
private key compromise situation.

Also, if I want to add a new domain to a cert that has several other domains 
already on it, will I need to demonstrate control over all of the domains or 
only the new one?
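
For illustration, listing the SAN entries on such a certificate, so a relying
party can see every DNS name sharing one certificate (and one private key), is
straightforward with the Python `cryptography` package; the file name below is
a placeholder.

    # Minimal sketch: print every DNS name in a certificate's subjectAltName.
    from cryptography import x509

    def san_dns_names(pem_path):
        with open(pem_path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())
        try:
            san = cert.extensions.get_extension_for_class(
                x509.SubjectAlternativeName)
        except x509.ExtensionNotFound:
            return []
        return san.value.get_values_for_type(x509.DNSName)

    if __name__ == "__main__":
        for name in san_dns_names("cloudflare-bundle.pem"):
            print(name)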


  Original Message  
From: Rob Stradling
Sent: Monday, September 12, 2016 4:18 AM
To: Erwann Abalea; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Cerificate Concern about Cloudflare's DNS

On 10/09/16 15:43, Erwann Abalea wrote:

> In my opinion, the most plausible verification method in this case is the 
> last one: "Having the Applicant demonstrate practical control over the FQDN 
> by making an agreed-upon change to information found in the DNS containing 
> the FQDN";

Correct. That's what happened.

-- 
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: WoSign’s Ownership of StartCom

2016-09-09 Thread Peter Kurrasch
  Thank you, Gerv and the Mozilla CA Team, for researching and compiling this 
information and raising awareness with the forum. I have been thinking about 
this ownership item in the Mozilla policy but had not put anything together 
myself (and you've certainly been more thorough!).

I think it's abundantly clear that the CA we used to call simply StartCom is 
no more and that the relationship is intertwined with WoSign to the point that 
one can not identify any meaningful separation between the two. That is, aside 
from legal documents in multiple jurisdictions, I'm not sure how anyone might 
argue that the two CA's are independent in a way that is meaningful to people 
in this forum or to citizens of the Internet at large. I think Peter G's name 
of WoStartSignCom is actually quite apt.

Indeed it will be interesting to read Richard's response, fully recognizing 
that he chose not to reply to Mozilla's request within a fair amount of time. 
The fact that Eddy is not also responding is, perhaps, all the information we 
need given what his level of involvement in this forum has been in years past. 
If Richard is already exercising control over Eddy's involvement here, the 
claim of independence between the two CA's is that much more specious.

I would also ask for confirmation that "Andy Ligg" is in fact a real person 
and not a pseudonym adopted by Richard or someone else. The similarity to 
Eddy's name is...remarkable. I think it's clear that Andy does not live in 
Bristol but it's unclear what his role is within StartCom. My concern is that 
people are using pseudonyms but I'm happy for someone to prove me wrong.

This may be a distraction (though it seems relevant?): I found this in the 
Chromium tracker. It's the request for StartCom's CT log inclusion. It has 
Eddy's and Andy's names but a phone number in Los Angeles. It seems noteworthy 
that the request was modified to add the WoSign certs as well as StartCom. 
Here's the link:

https://bugs.chromium.org/p/chromium/issues/detail?id=611672

In any event, I think we're on the verge of discussing sanctions to consider 
against StartCom for persistent failure to comply with Mozilla root inclusion 
policies.

Thanks again for putting this together!

From: Gervase Markham
Sent: Friday, September 9, 2016 4:49 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: WoSign's Ownership of StartCom

Dear m.d.s.policy,

We have been actively investigating reports that WoSign and StartCom may have 
failed to comply with our policy on change of control notification. Below is a 
summary representing the best of our knowledge and belief, based on our 
findings and investigation to date.

The operations of the CA known as StartCom have historically been owned and 
controlled by an Israeli company, number 513747303, called "סטארטקומארשל בע"מ", 
or in English "Start Commercial Ltd". This company will be referred to in this 
document as "StartCom IL". It has normally been represented in public and the 
CAB Forum by its COO/CTO, Eddy Nigg.

On August 5th, 2015 a new company, "StartCom CA Ltd", was created in Hong 
Kong.[0] This company will be referred to in this document as "StartCom HK".

On August 21st, 2015 a new company, also called "StartCom CA Ltd", was created 
in the UK.[1] This company will be referred to in this document as "StartCom 
UK".

100% of the shares of "StartCom CA Ltd" in the UK are listed as being owned by 
"StartCom CA Ltd".[2] This seems circular, but our understanding is it 
actually refers to StartCom HK, which has the same name. StartCom UK is 
documented as having two directors. One is Gaohua (Richard) Wang, who will be 
known to you all as he represents WoSign in this forum and at the CAB Forum. 
The other, appointed last month, is Iñigo Barreira, formerly of the CA Izenpe 
and now of StartCom.

StartCom HK's 100% ownership appears to give it total control over StartCom 
UK, including the ability to hire and fire directors at will, due to a special 
clause (#73) in the company formation documents.[3]

StartCom HK's Company Registration Number (CRN) is 2271553, which can be 
looked up at the Cyber Search Centre of the Integrated Companies Registry 
Information System[4] in Hong Kong. There is a requirement for registration 
and a small payment, but the relevant documents have been provided by Mozilla. 
These documents show that:

* StartCom HK's documents list only one director, Gaohua (Richard) Wang.[5]
* StartCom HK's d

Re: Japan GPKI Root Renewal Request

2016-08-05 Thread Peter Kurrasch
  Kathleen--As I understand it, the request is for only CA2(Root) to be 
included in the trust store. Is that correct? The CP/CPS document submitted 
for the CA2(Root) hardly seems sufficient to satisfy anyone for one simple 
reason: there is no detail! I'm surprised the auditors (KPMG in this case) 
found this to be acceptable. If the CA2(Sub) is not to be included in the 
Mozilla trust store then I don't see how its CP/CPS can be reviewed for 
consideration here.

My recommendation is to reject this request and ask that the root's 
documentation be rewritten to reflect the policies and procedures that apply 
to all certs that chain to this root.

From: Eric Mill
Sent: Wednesday, July 20, 2016 8:15 PM

For some reason, Gmail split up this thread into two for me. In case anyone 
else is having similar issues, here's the original detail for this request:

On Wed, Apr 27, 2016 at 4:56 PM, Kathleen Wilson wrote:
> This request by the Government of Japan, Ministry of Internal Affairs and
> Communications, is to include the GPKI 'ApplicationCA2 Root' certificate
> and enable the Websites trust bit. This new root certificate has been
> created in order to comply with the Baseline Requirements, and will
> eventually replace the 'ApplicationCA - Japanese Government' root
> certificate that was included via Bugzilla Bug #474706. Note that their
> currently-included root certificate expires in 2017, and will be removed
> via Bugzilla Bug #1268219.
>
> The request is documented in the following bug:
> https://bugzilla.mozilla.org/show_bug.cgi?id=870185
>
> And in the pending certificates list:
> https://wiki.mozilla.org/CA:PendingCAs
>
> Summary of Information Gathered and Verified:
> https://bugzilla.mozilla.org/attachment.cgi?id=8673399
>
> Noteworthy points:
>
> * The primary documents are the Root and SubCA CP/CPS, provided in
> Japanese and English.
>
> Document Repository (Japanese): http://www.gpki.go.jp/apca/cpcps/index.html
> Document Repository (English): https://www2.gpki.go.jp/apca2/apca2_eng.html
> Root CP/CPS: https://www2.gpki.go.jp/apca2/cpcps/cpcps_root_eng.pdf
> SubCA CP/CPS: https://www2.gpki.go.jp/apca2/cpcps/cpcps_sub_eng.pdf
>
> * CA Hierarchy: This root certificate has one internally-operated
> subordinate CA that issues end-entity certificates for SSL and code signing.
>
> * This request is to turn on the Websites trust bit.
>
> SubCA CP/CPS section 3.2.2, Authentication of organization identity:
> As for the application procedure of a server certificate, ... the LRA
> shall verify the authenticity of the organization to which the subscriber
> belongs according to comparing with organizations which were written in the
> application by directory of government officials that the Independent
> Administrative Agency National Printing Bureau issued.
>
> SubCA CP/CPS section 3.2.3, Authentication of individual identity:
> As for the application procedure of a server certificate, ... the LRA
> shall verify the authenticity of the subscriber according to comparing with
> name, contact, etc. which were written in the application by directory of
> government officials that the Independent Administrative Agency National
> Printing Bureau issued.
> The LRA also check the intention of an application by a telephone or
> meeting.
>
> SubCA CP/CPS section 4.1.2, Enrollment process and responsibilities:
> (1) Server certificate
> The subscriber shall apply accurate information on their certificate
> applications to the LRA.
> The LRA shall confirm that the owner of the domain name written as a
> name(cn) of a server certificate in the application form belongs to
> Ministries and Agencies who have jurisdiction over LRA, or its related
> organization with the thirdparty databases and apply accurate information
> to the Application CA2(Sub).
>
> * Mozilla Applied Constraints: This CA has indicated that the CA hierarchy
> may be constrained to the *.go.jp domain.
>
> * Root Certificate Download URL:
> https://bugzilla.mozilla.org/attachment.cgi?id=8673392
> https://www.gpki.go.jp/apca2/APCA2Root.der
>
> * EV Policy OID: Not requesting EV treatment
>
> * Test Website: https://www2.gpki.go.jp/apca2/apca2_eng.html
>
> * CRL URLs:
> http://dir.gpki.go.jp/ApplicationCA.crl
> http://dir2.gpki.go.jp/ApplicationCA2Root.crl
> http://dir2.gpki.go.jp/ApplicationCA2Sub.crl
> SubCA CPS section 4.9.7: The CRL of 48-hour validity period is issued at
> intervals of 24 hours.
>
> * OCSP URL:
> http://ocsp-sub.gpk

Re: About the ACME Protocol

2016-07-19 Thread Peter Kurrasch
  Thanks, Patrick. This is helpful. A few answers/responses:

Regarding the on-going development of the spec: I was thinking more about the 
individual commits on github and less about the IETF process. I presume that 
most commits will not get much scrutiny but a periodic (holistic?) review of 
the doc is expected to find and resolve conflicts, etc. Is that a fair 
statement?

The report on the security audit was interesting to read. It's good to see 
someone even attempted it. In addition to the protocol itself it would be 
interesting to see an analysis of an ACME server (Boulder, I suppose). Maybe 
someone will do some pentesting at least?

The 200 LOC is an interesting idea. I assume such an implementation would rely 
heavily on external libraries for certain functions (e.g. key generation, 
https handling, validating the TLS certificate chain provided by the server, 
etc.)? If so, does anyone anticipate that someone will develop a standalone, 
all-in-one (or mostly-in-one) client? Is a client expected to do full cert 
chain validation including revocation checks?

In the -03 version of the draft, section 6.1 is where I felt the spec was 
getting too much into server implementation details. I think there were some 
other spots where "server must" statements felt a little over-specified.

After reading the latest draft of the spec and the audit report, I figured I 
would offer up my take on the "state of the protocol", if you will. I know 
there will be sharp disagreements and that's fine; this is but one person's 
perspective. In terms of an overly broad, overly general statement, the 
protocol strikes me as being too new, too immature. There are gaps to be 
filled, complexities to be distilled, and (unknown) problems to be fixed. I 
doubt this comes as new information to anyone but I think there's value in 
recognizing that the protocol has not had the benefit of time for it to reach 
its full potential.

The big, unaddressed (or insufficiently addressed) issue as I see it is 
compatibility. This is likely to become a bigger problem should other CA's 
deploy ACME and as interdependencies grow over time. Plus, when 
vulnerabilities are found and resolved, compatibility problems become 
inevitable (the security audit results hint at this).

The versioning strategy of having CA's provide different URL's for different 
versions of different clients might not scale well. One should not expect all 
cert applicants to have and use only the latest client software. This approach 
might work for now but it could easily become unmanageable. Picture, if you 
will, a CA that must support 20 different client versions and the headaches 
that can bring.

My recommendation is for the protocol to accommodate version information (data 
and status codes) but for a separate document to discuss deployment details. A 
deployment doc could also be used to cover the pro's and con's of using one 
server to do both ACME and other Web sites and services. The chief concern is 
if a vulnerability in the web site can lead to remote code execution which can 
then impact handling on the ACME side of the fence. Just a thought.

Thanks.

From: Patrick Figel
Sent: Friday, July 8, 2016 4:43 PM

Before getting into specifics, I should say that you're likely to get a better 
answer to most of these questions on the IETF ACME WG mailing list[1].

On 08/07/16 16:36, Peter Kurrasch wrote:
> I see on the github site for the draft that updates are frequently
> and continuously being made to the protocol spec (at least one a
> week, it appears). Is there any formalized process to review the
> updates? Is there any expectation for when a "stable" version might
> be achieved (by which I mean that further updates are unlikely)?

The IETF has a working group for ACME that's developing this protocol. The 
IETF process is hard to describe in a couple of words (you can read up on it 
on ietf.org if you're interested). Other related protocols such as TLS are 
developed in a similar fashion.

> How are compatibility issues being addressed?

Boulder (the only ACME server implementation right now, AFAIK) plans to tackle 
this by providing new endpoints (i.e. server URLs) whenever 
backwards-incompatible changes are introduced in a new ACME draft, while 
keeping the old endpoints available and backwards-compatible until the client 
ecosystem catches up. I imagine once ACME becomes an internet standard, future 
changes will be kept backwards-compa
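
For illustration, the per-version directory approach described above means a
client discovers a CA's endpoints from a version-specific directory URL rather
than hard-coding paths. The sketch below assumes the `requests` package and
uses a placeholder URL, not any real CA endpoint; the keys it prints depend on
which ACME draft the server implements.

    # Minimal sketch: fetch an ACME directory and show the resource map.
    import json
    import requests

    DIRECTORY_URL = "https://acme.example-ca.invalid/directory"  # placeholder

    def fetch_directory(url):
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        directory = fetch_directory(DIRECTORY_URL)
        # Whatever draft the server implements, it advertises its resources here.
        print(json.dumps(directory, indent=2))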

About the ACME Protocol

2016-07-08 Thread Peter Kurrasch
  At Nick's request, I've changed the subject line. Also, for my part, my 
comments are not intended to single out ACME to the exclusion of other 
protocols or implementations to which my comments might equally apply.

I see on the github site for the draft that updates are frequently and 
continuously being made to the protocol spec (at least one a week, it 
appears). Is there any formalized process to review the updates? Is there any 
expectation for when a "stable" version might be achieved (by which I mean 
that further updates are unlikely)? How are compatibility issues being 
addressed? Has any consideration been given to possible saboteurs who might 
like to introduce backdoors?

I personally don't see the wisdom in having the server implementation details 
in what is ostensibly a protocol specification. Will there be any sort of 
audit to establish compliance between a particular server implementation and 
this Internet-Draft? Will the client software be able to determine the version 
of the specification under which the server is operating? (I apologize if it 
is in the spec; I didn't do a detailed reading of it.)

On the client side, is there a document describing the details of an ideal 
implementation? Does the client inform the server to which version of the 
protocol it is adhering--for example, in a user-agent string (again, I didn't 
notice one)? Is there any test to validate the compliance of a client with a 
particular version of the Internet-Draft?

One thought for consideration is the idea of a saboteur who seeks to 
compromise the client software. This is of particular concern if the client 
software can also generate the key pair since there are the obvious benefits 
to bad actors if certain sites are using a weaker key. Just as Firefox is a 
target for malware, the developers of client-side software should be cognizant 
of bad actors who might seek to compromise their software.

From: Nick Lamb
Sent: Wednesday, July 6, 2016 8:16 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: StartEncrypt considered harmful today

On Wednesday, 6 July 2016 09:50:46 UTC+1, Peter Gutmann wrote:
> Oh dear God, another one? We've already got CMP, CMC, SCEP, EST, and a whole
> slew of other ones that failed to get as far as RFCs, which all do what ACME
> is trying to do. What's the selling point for ACME? That it blows up in your
> face at the worse possible time?

In the examples I've reviewed the decision seems to have been made (either 
explicitly or tacitly) to leave the really difficult problem - specifically 
achieving confidence in the identity of the subject - completely unaddressed. 
ACME went out of its way to address it for the domain we care about around 
here.

Your work on SCEP is probably appreciated by people who aren't interested in 
that problem, but this forum is concerned with the Web PKI, where that problem 
is pre-eminent, and this thread is about another provider, StartCom, trying 
and failing to solve that problem.

So the answer to your question is that ACME's selling point is that it solves 
the problem lots of people actually have, a problem which was traditionally 
solved by various ad hoc methods whose security (or more often otherwise) was 
only inspected after the fact rather than being considered in advance.

I presume the "blows up in your face" comment was purely because of ACME's 
hilarious choice of name, but if not please elaborate in a thread about ACME.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: StartEncrypt considered harmful today

2016-07-01 Thread Peter Kurrasch
Only reason I'm focusing on Let's Encrypt and ACME is because they are 
currently under review for inclusion.‎ As far as I'm concerned all CA's with 
similar interfaces warrant this extra scrutiny.

I am somewhat curious if any of this has come up before in other forums--that 
these interfaces can ‎be abused and lead to certificate mis-issuance? 


  Original Message  
From: Matt Palmer
Sent: Friday, July 1, 2016 12:16 AM
To: dev-security-policy@lists.mozilla.org
Subject: Re: StartEncrypt considered harmful today

On Thu, Jun 30, 2016 at 11:10:45AM -0500, Peter Kurrasch wrote:
> Very interesting. This is exactly the sort of thing I'm concerned about
> with respect to Let's Encrypt and ACME.

Why? StartCom isn't the first CA to have had quite glaring holes in its
automated DCV interface and code, and I'm sure it won't be the last. What
is so special about Let's Encrypt and ACME that you feel the need to
constantly refer to it as though it's some sort of new and special threat to
the PKI ecosystem?

- Matt

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: ISRG Root Inclusion Request

2016-07-01 Thread Peter Kurrasch
I'm not sure I follow. Why should the inclusion process proceed before the 
updates are complete?


  Original Message  
From: j...@letsencrypt.org
Sent: Thursday, June 30, 2016 10:04 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: ISRG Root Inclusion Request

On Wednesday, June 8, 2016 at 7:56:44 AM UTC-5, Peter Kurrasch wrote:
> I have comments as well. I made it to only Section 3.2.1 before I decided I 
> have too many concerns about Let's Encrypt to include in a review of the CPS. 
> I'll raise them in a separate thread, so here are my comments on just this 
> CPS document.

Hi Peter,

We've reviewed your feedback, much of it is helpful. Thanks. We'll work on 
incorporating improvements based on it into the next revision of our CPS. We 
don't believe any of these items need to block inclusion, however.

--
Josh Aas
Executive Director
Internet Security Research Group
Let's Encrypt: A Free, Automated, and Open CA
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: StartEncrypt considered harmful today

2016-06-30 Thread Peter Kurrasch
All good points. ‎I wonder if we need to start with something more basic: 
setting expectations.

Maybe we need to communicate to all participating CA's that we expect them to 
perform a security scan of all Internet-facing interfaces. That we expect each 
interface to be able to pass the OWASP Top Ten. That we expect a scan to be 
performed at least once per year.

To be sure, that's a pretty low bar but I don't know that all CA's could pass 
even that minimal benchmark today. If so, that's a big problem.


  Original Message  
From: Tom Ritter
Sent: Thursday, June 30, 2016 11:57 AM‎

On 30 June 2016 at 11:10, Peter Kurrasch  wrote:
> Very interesting. This is exactly the sort of thing I'm concerned about with 
> respect to Let's Encrypt and ACME.
>
> I would think that all CA's should issue some sort of statement regarding the 
> security testing of any similar, Internet-facing API interface they might be 
> using. I would actually like to see a statement regarding any interface, 
> including browser-based, but one step at a time. Let's at least know that all 
> the other interfaces undergo regular security scans--or when a CA might start 
> doing them.
>
> Anyone proposing updates in CABF?

In theory I would support this, in practice it has no teeth. There is
no (real) accreditation for security reviews, and the accreditations
that exist do not, in practice, ensure one with the accreditation is
skilled. You can say "APIs must have a security review" or an
"adversarial security scan" or a "vulnerability scan", or "manual
penetration test", or a "red team assessment" - but the definition of
the terms and the skillsets of people performing them vary so widely
that it would not guarantee very much in practice.

I believe that the CAs who want to be a leader in this niche already
are, and the CAs who cannot afford to do so (because I assume every CA
wants to take security seriously, but is confined in practice) will
wind up meeting the requirement in a way that does not significantly
improve their security. (And various shades in between)

But I'm biased, being a security consultant and all.

-tom
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: StartEncrypt considered harmful today

2016-06-30 Thread Peter Kurrasch
Let's be even more pointed: How do we know that *any* of the certs issued 
through this ‎interface were issued to the right person for the right domain? 
How can StartCom make that determination?

  Original Message  
From: Daniel Veditz
Sent: Thursday, June 30, 2016 12:04 PM‎

...

How many mis-issued certs were obtained by the researchers? Has there
been an investigation to see if there were similarly mis-issued certs
prior to this report?

Have those certs been revoked?

-Dan Veditz
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: StartEncrypt considered harmful today

2016-06-30 Thread Peter Kurrasch
Very interesting. This is exactly the sort of thing I'm concerned about with 
respect to Let's Encrypt and ACME.

I would think that all CA's should issue some sort of statement regarding the 
security testing of any similar, Internet-facing API interface they might be 
using. I would actually like to see a statement regarding any interface, 
including browser-based, but one step at a time. Let's at least know that all 
the other interfaces undergo regular security scans--or when a CA might start 
doing them.

Anyone proposing updates in CABF?


  Original Message  
From: Rob Stradling
Sent: Thursday, June 30, 2016 10:31 AM
To: mozilla-dev-security-pol...@lists.mozilla.org; 'Eddy Nigg (StartCom Ltd.)'
Subject: StartEncrypt considered harmful today

https://www.computest.nl/blog/startencrypt-considered-harmful-today/

Eddy, is this report correct? Are you planning to post a public 
incident report?

Thanks.

-- 
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: ISRG Root Inclusion Request

2016-06-29 Thread Peter Kurrasch
  I agree that ACME alone is not the issue, so many of the criticisms I would 
levy against it apply equally well to other, more proprietary interfaces that 
the other CA's use (and I do commend ISRG for taking the open and transparent 
route). For that matter, are existing CA's required to undergo frequent 
security scans/reviews of their interfaces, both browser-based and otherwise? 
Do we know that their websites pass even the most basic pen-tests? As the bad 
guy, I hope the answer is no.

Personally I see ease and ubiquity as more appealing than the cost. It's easy 
and cheap to get a stolen credit card number so cost isn't much of a deterrent 
to the bad actor. But you'd still have to get one, so a free cert is an easy 
cert and, thus, more appealing.

(You might want to dust off your popcorn machine in case we need it.)

From: Matt Palmer
Sent: Tuesday, June 28, 2016 10:13 PM

On Tue, Jun 28, 2016 at 09:52:59PM -0500, Peter Kurrasch wrote:
> At issue is the degree to which automation is featured in the operating
> model of the Let's Encrypt CA. Fast, easy, cheap, and with little
> chance for human intervention or oversight...that's a recipe for abuse.

Every CA that I've ever used automates DV issuance. My DV certs always seem to 
turn up within a minute or so of validating them -- no way a human is 
consistently doing meaningful checks in there. ACME isn't the issue either; 
most other CAs have (hideous) APIs to make requests automatic.

The only difference here between LE and every other CA is that issuance from 
LE is free. While it's not a meaningful speedbump for the modern criminal, it 
does at least mean they've got to find a stolen CC. Personally, I'd love to 
have the popcorn concession on a discussion about whether to require payment 
for DV certs; I could retire on the proceeds.

- Matt
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: ISRG Root Inclusion Request

2016-06-28 Thread Peter Kurrasch
  Apologies for the delay in responding...it got lost amongst the other things 
on my plate.

I'll answer the easier question first: admin certs. I wasn't so much confused 
by their presence in the CPS as I was discontent with the similar-but-different 
manner in which they were discussed. I think keeping them separate from the 
DV-SSL cert discussions will limit ambiguity and confusion.

As for my "different machine" comment, I wasn't very clear so I'll try to fix 
that. I'm not actually concerned with the number of machines involved in 
obtaining a cert, but I do wonder what might happen when one of them becomes 
compromised. In an ACME setting I think the consequences could be worse than 
the status quo/traditional setting, but it's more accurate to say I just can't 
tell. I would need to review a more complete spec than I was able to find 
before I could draw any real conclusions.

At issue is the degree to which automation is featured in the operating model 
of the Let's Encrypt CA. Fast, easy, cheap, and with little chance for human 
intervention or oversight...that's a recipe for abuse. If ACME means that I, 
as the bad guy, can compromise one machine and start getting certs issued to 
me for my nefarious purposes, then I'm a huge fan and Let's Encrypt will be my 
go-to source for certs. If instead ACME makes it harder for me to do that, 
then I'll be sad but it's probably a boon to Internet security.

I know there are other comments I've made that probably deserve scrutiny and 
fleshing out so I hope people will feel free to speak up and let me know.

From: Nick Lamb
Sent: Wednesday, June 8, 2016 10:03 AM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: ISRG Root Inclusion Request

On Wednesday, 8 June 2016 13:56:44 UTC+1, Peter Kurrasch wrote:
> Section 3.2.1 is where this whole thing falls apart for me. I am not 
> familiar with the ACME protocol nor the ACME client so this is the first 
> time I'm seeing that the client might be running on a different machine 
> than the one getting the cert. This raises all sorts of security concerns.

How so? As you correctly observed the subscriber is not a machine but a 
person, and the ACME process obtains reasonable confidence this person 
(whether through the operation of one machine, two machines as is common or 
dozens of machines in some complicated scenarios) is only issued with 
certificates for names they control.

Consider the mundane situation of a customer of a traditional commercial CA. 
Perhaps Dave Cameron. Dave wishes to obtain a certificate for the name 
www.example.com, and so he uses a laptop, machine #1, to view pages from the 
CA's web server, machine #2; he types www.example.com into a web form, picks 
a few items from lists, types in his credit card details. After a while, 
email is sent to the address d...@example.org, which is the WHOIS admin 
contact for the example.com 2LD. The email asks to confirm that Dave wants 
the certificate issued. This email is delivered somehow to the example.org 
mail server (machine #3) and Dave views it on his iPhone (machine #4). Dave 
clicks "Yes, confirm" and duly receives the certificate.

This everyday process for issuing DV certificates involves four machines 
outside the Certificate Authority systems themselves, none of which is 
www.example.com, and the process achieves its confidence as to Dave's 
legitimate control of www.example.com by a rather circuitous route, but it 
has been accepted as "good enough" and goes unremarked upon in applications 
to these trust stores. ACME is, if anything, simpler and more robust in its 
assurance, your lack of familiarity with it aside. If you object already to 
the status quo then an inclusion request thread is the wrong place to do 
that.

You also seem confused by the existence of Administrative Certificates, which 
are needed for infrastructure. The document already lists examples of 
Administrative Certificates such as OCSP responders, and the intermediates 
for issuing leaf certificates to end users. They aren't some sort of 
alternative to ACME for end users.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: ISRG Root Inclusion Request

2016-06-08 Thread Peter Kurrasch
  I have comments as well. I made it to only Section 3.2.1 before I decided I 
have too many concerns about Let's Encrypt to include in a review of the CPS. 
I'll raise them in a separate thread, so here are my comments on just this 
CPS document.

General comments:

* I'd like to echo what Andrew brought up about separating out the DV-SSL 
certs from other certs to be covered by this CPS. Doing so throughout the doc 
will make it cleaner, but Section 3.2 would definitely benefit.

* I would like to see more of a distinction made between human actors and 
machines. Machines and such can be compromised more readily than humans so 
the extent to which machines are performing some of these functions will 
greatly dictate what the attack surface looks like.

* I'd like to see an updated CPS submitted for review before we decide to 
proceed with inclusion of the Let's Encrypt root.

Specific comments:

Section 1.1 needs to mention ACME since that is a key feature (benefit?) of 
Let's Encrypt. This is essential because the success (or failure) of Let's 
Encrypt depends on the security (or vulnerability) of ACME. A reference should 
be made to an RFC or other fixed specification (i.e. no Internet-Drafts or doc 
on github that is still being updated).

Section 1.2.2 has a lot of words to say, basically, there is one ID and a 
bunch of "other" stuff that is left unspecified and unidentified. Will ISRG 
create new cert types without updating the CPS? I think it would be clearer 
to instead state exactly what is being done now and the respective OIDs.

Section 1.3.1, para 8, should say certs are issued *to* people *for a* machine 
because it is people who have the responsibilities, liabilities, and claims 
concerning this whole process. Besides, it is a person who must demonstrate 
control over a FQDN, not the machine (or is it?). Also, can the CA revoke a 
cert for any reason it wants, or only those spelled out by the BR?

... para 12, does not explain how exactly the CA will enter into a "relying 
party agreement" with each relying party on the Internet, and how the CA will 
be able to document that the terms were actually agreed upon...with every 
relying party. (See also my comments for Section 1.3.5.)

Section 1.3.3: the word "combination" requires the use of and, not or. Is an 
implication to be made that ISRG might contract out RA work to an organization 
that is composed exclusively of intelligent computational entities?

Section 1.3.4 would be better to separate out the applicant from subscriber, 
since they are defined as having separate responsibilities a la Section 1.6.1. 
For example, an applicant can not ask the CA to revoke a cert.

Section 1.3.4, Item 1, has poor grammar; plus, applicant should verify and 
accept/reject all certs, not only those with the applicant named in them.

... Item 2 has poor grammar and needs to explain what an unaffiliated 
subscriber is. What is the distinction to be made between applicants and 
subscribers and representatives? Are all of them human?

... Item 5 shouldn't say a RA is responsible for revocation if that's not 
cited as a responsibility in 1.3.3. In addition, an "it" can not notify a CA 
that a cert should be revoked. An "it" is unable to make claims and thus can 
not forfeit them for failure to comply with anything. Also, this section is 
an introduction so why all the fine details?

... The second Item 4 hardly seems necessary ("the subscriber is obligated to 
not screw up"). Plus this seems relevant to only the DV-SSL certs, not the 
others that ISRG might want to issue.

... The second Item 5 and Item 6 shouldn't say that revoking a cert 
necessitates throwing away your key pair. Also the list of situations is 
hardly complete.

... And Item 8 has an interesting list of criminal activities. So ISRG cares 
about these uses of certs that chain to their root? What criteria do they have 
in mind to identify those activities? This item should at least agree with 
what's stated in Section 1.4 if not just referencing that section outright.

Section 1.3.5 says a relying party might be an organization but that doesn't 
make sense to me. What use case does ISRG envision wherein they might enter 
into an agreement with an organization? Quite apart from that question, 
however, is the problem that this whole section, as written, is wildly 
impractical and unenforceable. What is the intention for this section?

Section 1.3.6 doesn't make sense: each participant in the PKI should be 
identified explicitly and not lumped together under a category of "other".

Section 1.3.6.1 should explain what is meant by manufacture in this context, 
why it merits identification as a distinct entity, and why it would make any 
sense for ISRG to contract that work out to someone else.

Section 1.3.6.2 doesn't make sense. A repository is a thing, not a 
participant. It's something that a CA does, except for when it doesn't do it. 
I'm not sure I'm altogether comfortable with ISRG saying they w

Re: When good certs do bad things

2016-06-03 Thread Peter Kurrasch
I wasn't intending to get into a broader discussion about the merits of 
encryption. My initial point was two-fold: First, that there are a lot of 
different scenarios to consider--too many, in fact. Second, that a "good" cert 
could be used for any of those bad things, although the use of certs is not 
necessary in all cases. 

Regarding use of the term "bad", what does anyone think about this as an 
alternative: "furtherance of criminal activity"

Granted the term criminal might be a bit subjective, but I can't think of good 
uses for trojans or botnets or ransomware. And I would hope that CA's would 
agree that furtherance of criminal activity is an inappropriate use of the PKI 
system?

Thoughts? 


  Original Message  
From: Ryan Sleevi
Sent: Thursday, May 26, 2016 11:44 PM‎

On Thu, May 26, 2016 at 1:58 PM, Phillip Hallam-Baker
 wrote:
> What has encryption got to do with it?

The "bad" raised was unrelated to certificates, publicly trusted or
otherwise. As Nick also pointed out, a number of the "bad" things can just
as easily be accomplished through other means independent of certificates - whether
using raw public keys, DANE, etc. That is, the concerns raised were
about TLS, not about certificates.

...snip...

> Now the problem here is that there are also folk who just want to turn on
> encryption and that is all and they don't care about doing online commerce
> or banking. They just want to keep their email secret. And that is fine. But
> that does not mean that people who only want to do confidentiality should
> rip up the infrastructure that is designed to serve a different purpose.

It would seem you're suggesting that CAs aren't the right
infrastructure to enable the Internet's growth and user's security,
which may be true, but would be a surprising statement to make.
Otherwise, the choice of the term "rip up" to suggest that, regardless
of original intent, the infrastructure may better serve users and
security by doing something more broadly scoped seems...
unnecessarily simplistic.

Put differently, even if it were true that the goal of the Web PKI was
to "prevent bad," it still suffers from the same problem - first, the
definition of "bad" posited on this thread is largely related to
encryption (first and foremost), and thus orthogonal to certificates,
but in several of the remaining cases, the definition of bad is a
statement that users have unrealistic expectations about what
certificates can/do provide. Ironically, those unrealistic
expectations may have been caused by CAs themselves and by their
marketing teams.

So to address these "bad" uses of certificates, it's necessary as the
community to decide whether encryption is bad, whether the
'undesirable' uses of encryption and the desire to prevent such uses
is worth the risk to the 'good' uses of encryption and the desire to
promote them, and to decide on what the reasonable and realistic
expectations of certificates should be.
‎
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Job: Is it OK to post a job listing in this forum?

2016-06-03 Thread Peter Kurrasch
These are worthy goals, for sure. I'm not necessarily persuaded that posting 
jobs here is the best way to do that.

With the exception of a few people, we usually only hear from CA's when their 
stuff is up for consideration or when they've done something wrong. Perhaps if 
CA's were more engaged here generally, I'd feel better about posting jobs? If 
CA's were more active in sharing their good works and seeking input or feedback 
from the broader community, perhaps we'd see better, more robust security as 
well? Just a thought.

Also, what will we do if just anyone starts posting jobs?


  Original Message  
From: Ryan Sleevi
Sent: Saturday, May 28, 2016 11:29 PM
To: Peter Kurrasch
Reply To: r...@sleevi.com
Cc: Kathleen Wilson; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Job: Is it OK to post a job listing in this forum?

Reposting from the right email address...

On Fri, May 27, 2016 at 5:36 AM, Peter Kurrasch  wrote:
> I'm opposed to allowing job postings in this forum. The focus should be 
> policy as that is the reason we have gathered here.
>
> Job postings generally are intended for people in a particular country ‎with 
> a particular level of experience who are actively seeking or receptive to a 
> new job. Sending out off-topic messages that are intended for a subset of a 
> subset of a subset of people here sounds like spam to me.

Policy and engineering are often intertwined - especially in the CA
space. Our ability to enact meaningful policies that protect users is
often directly correlated to CAs (and site operators) abilities to
enact changes. Things like CAA, name constraints, and short-lived
certificates are all prime examples of this - they relate to policies
but require engineering.

I would think that if we want to improve the state of the ecosystem,
we also need to improve the state of the engineering. And it's clear
that this forum, of perhaps all those out there, has the right
confluence of people passionate about policy and interested in the
engineering side.

While I don't know to what extent the broader (lurking) community is
able and receptive to such postings, the active participants are at
least interested in developing a robust approach to user security -
that is why we're here and care about the policies. If they could get
paid for that (as many presently are volunteers), isn't that a win?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: MITM detection in the browser

2016-06-01 Thread Peter Kurrasch
  It's an interesting idea but not without some issues. Essentially you are proposing a mechanism for in-band-data-over-TLS to determine if the end-to-end encryption has been compromised, correct? (I'm deliberately avoiding the term MITM as I think it carries extra baggage that is distracting for now.)

I think having a "preamble code" (for lack of a better term?) as mentioned could be difficult for sites that are heavy on the dynamic content--they'd have to buffer up the final page then hash it anyway. If the server can do it, so could any MITM appliance. I think a "postamble code" is the way to go.

Also, any detection method that relies on timing would have to be a non-starter almost out of necessity. Propagation of data throughout the internet is wacky enough that it would be extremely difficult to get down a timing model that works in all cases.

I also took a look at the ref [1] you provided. An interesting idea as well, though wildly impractical. I can't imagine that any of the top 1000 sites on the internet could even implement such a thing because they tend to have pages that pull in data from far flung corners of the world. I don't think any but the most trivial sites would ever work. I also have a problem with the salted password that is mentioned. I don't think it's as secure as would be wanted.

So the key to making something like this work is to figure out the algorithm for producing "the code" to use. Obviously it has to incorporate knowledge of the TLS data but then what else can be used in a secure manner? The password idea from [1] is clever, but if we realize it won't work what is a good alternative?

I hope you don't feel I'm trying to discourage more thought on this. My intention is only to offer a way to look at this that might help focus additional work and conversation.

From: John Nagle
Sent: Monday, May 30, 2016 2:44 PM
To: dev-security-policy@lists.mozilla.org
Reply To: na...@animats.com
Subject: MITM detection in the browser

We need general, automatic MITM detection in HTTP. It's quite possible. An MITM attack has a basic quality that makes it detectable - each end is seeing different crypto bits for the same plaintext. All they have to do is compare notes.

There are out-of-band ways to do this, such as certificate pinning and certificate repositories. But these haven't achieved much traction.

Doing it in-band is difficult, but possible. An early system, for one of the Secure Telephone Units (STU), displayed a 2-digit number to the user at each end, based on the crypto bits. The users were supposed to compare these numbers by voice, and if they matched, they were probably not having a MITM attack. An MITM attacker would need to fake the voices of the participants to break that.

This is the insight that makes MITM detection possible. You can force the MITM to have to tell a lie to convince the endpoints. More than that, if you work at it, you can force the MITM to have to tell an *arbitrarily complex* lie. You can even force the MITM to have to tell a lie about the future traffic on the connection. That means they have to take over the entire conversation and fake the other end.

As an example, suppose a server sending a page sends, at the beginning of the page, a hash value which is based on the contents of the page about to be sent, and also based on the first 64 bytes of the crypto bits of the connection. The browser checks this. The MITM attacker now has a problem. If the attacker didn't know about this, the MITM attack immediately sounds an alarm at the browser.

If the attacker does know about this, they can compute their own hash. But they haven't seen the content the hash covers, because the page hasn't been transmitted yet. So the attacker either has to buffer up the entire page before they can send any of it, or fake the page based on some source like a cache. Buffering up the entire page adds delay. The server can add to that delay by deliberately stalling for some seconds before sending the last few bytes of the page. If the MITM attack adds 10 seconds before every page begins to load, it's obvious what's happening. The browser could even check this; if the first byte of the page doesn't appear within N seconds, don't display it.

Faking the page is a lot of work, especially if it's customized. A cache won't be enough. Users will notice if they get a generic page instead
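To make the "postamble code" idea concrete, here is a rough sketch of how a server might bind a page to the connection it was sent over. It is only an illustration of the scheme discussed above, using an HMAC keyed with per-connection secret material (something like a TLS exporter value would be the natural source); none of the names refer to an actual implementation.

    import hashlib
    import hmac

    def postamble_code(page_body: bytes, connection_bits: bytes) -> str:
        # connection_bits stands in for the first 64 bytes of secret keying
        # material unique to this TLS connection; a MITM box that terminates
        # TLS separately sees different bits on each leg.
        return hmac.new(connection_bits, page_body, hashlib.sha256).hexdigest()

    # Server: stream the page, then append the code computed over the bytes
    # actually sent. Browser: recompute the code over the bytes it received
    # using its own connection's bits; a mismatch means the two ends are not
    # sharing a single end-to-end session.

Because the code is sent after the content, a dynamic site never has to buffer its own output, while an interceptor must either replay the stream faithfully or produce a code it cannot compute without the endpoint's connection secrets.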

Re: Job: Is it OK to post a job listing in this forum?

2016-05-27 Thread Peter Kurrasch
I'm opposed to allowing job postings in this forum. The focus should be policy 
as that is the reason we have gathered here.

Job postings generally are intended for people in a particular country ‎with a 
particular level of experience who are actively seeking or receptive to a new 
job. Sending out off-topic messages that are intended for a subset of a subset 
of a subset of people here sounds like spam to me.



  Original Message  
From: Kathleen Wilson
Sent: Thursday, May 26, 2016 5:17 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Job: Is it OK to post a job listing in this forum?

Hi All,

I have been asked if it is OK to post job listings in 
mozilla.dev.security.policy. Surprisingly, I don't recall ever being asked that 
question before, and I am not aware of a written policy about the content of 
postings to mozilla.dev.security.policy.

So, here is a proposal:
~~
Jobs may be posted if they meet the following criteria:
* The company/organization name is clearly listed
* The person posting the job information actually works for that 
company/organization and is not a contracted recruiter
* A single posting only (for each job opportunity)
* The person posting the job info is actively engaged in this 
mozilla.dev.security.policy forum
* The job opportunity is a role relevant to the forum's audience
* The posting consists of a paragraph outline and a "read more" URL
* The Subject of the posting begins with "Job: " 
~~

Does that sound reasonable?

As always, I will appreciate thoughtful and constructive input.

Thanks,
Kathleen


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: When good certs do bad things

2016-05-26 Thread Peter Kurrasch
You are right to point out that many of those scenarios could be accomplished 
with a self-signed cert or indeed no cert at all. The decision to use a good 
cert or the likelihood of a good cert being used in any given scenario is not 
necessarily that important. What matters is that once we find a good cert has 
been used, what should we do about that cert?

I don't think we should absolve CA's of any responsibility or involvement when 
something "bad" comes along but neither do I think it falls entirely to them to 
figure out what to do. Getting the right balance will be tricky but I think 
it's worth fleshing out if people are interested.


  Original Message  
From: Nick Lamb
Sent: Thursday, May 26, 2016 1:25 PM‎

On Thursday, 26 May 2016 15:40:35 UTC+1, Peter Kurrasch wrote:
> I might use a perfectly good cert in a "bad" way:

Maybe it's worthwhile to consider what happens instead if we live under a 
regime (whether legally enforced or just de facto because of choices made by 
browser vendors) where you can't get a "perfectly good cert" for these 
scenarios. But in some cases I might not be clear what you propose.

> * ‎Create a phishing site to harvest login credentials from unsuspecting 
> people. For this I might create my own bogus domain or piggyback off an 
> existing, legitimate domain. Either way, I can use the cert to help create 
> the illusion of legitimacy to a victim while I steal his or her info

You don't need your own "perfectly good" cert, the legitimate domain has one, 
which you retain. To stop this we must prevent legitimate domains obtaining 
certificates. There is precedent for this as an anti-crime strategy - don't try 
to arrest criminals, instead go after victims, with their prey thinning the 
criminals will starve. It's not terribly... nice, though is it?

> * Use a server to distribute malware via online adverts (malvertising). 
> Having a cert helps make it look more legit and is required by some 
> advertising services.

Again, it is much cheaper to use somebody else's servers. Since you're a 
criminal it hardly matters that this is illegal.

> * Set up a spam email server and use the cert for my login page to the 
> control panel.‎ The cert wouldn't be used on the email side of things but 
> controlling access to a server that lets me do bad stuff.

A self-signed certificate works for this purpose too. To prevent this we could 
perhaps outlaw encryption? Or email?

> * Use a server to distribute malware via downloads. When I launch a spam 
> campaign I'll attach an infected document with some dropper malware in it. 
> The dropper malware then contacts my server to get the real malware, be it 
> ransomware or a banking trojan or remote desktop control or general zombie 
> code. Whatever the case may be, I can use certs to encrypt the malware 
> download making it harder for people to figure out what's really going on.  

Again, self-signed certificates work fine. Indeed they are already widely used 
for this purpose.

> ‎* Sign my malware code so that Windows or MacOS will happily‎ install it.

As with web browsers the Dancing Pigs problem applies. Users will happily click 
past the "not signed, danger of death" dialog so long as they think they're 
getting nude pictures of a celebrity rather than having their bank account 
emptied.

We know the CA role isn't relevant here because in the Android ecosystem where 
there is no third party proof of identity users instead click past the (OS 
enforced) capabilities warnings, authorising an app to spend their money or 
spam their friends in the hope that this way they get to see the Dancing Pigs.

> ‎* Set up a command and control server and use certs to send encrypted 
> messages between the malware on the devices I've pwnd and my server.

Self-signed certs work really well here.

> * Set up a media server so that people can download some great movies that I 
> pilfered from someone else.

Self-signed certs are very popular in this role as far as I understand

> * Create a forum so people can talk about things their government does that 
> really bugs them and how to evade the different law enforcement agencies.‎ 
> Obviously I'm using certs to make it harder for those agencies to snoop on 
> the forum participants. 

If you want to attract people who really care which type of foil is best then 
judging from openbsd-misc you should dismiss TLS altogether, they don't trust 
it at all. They will want a web-of-trust type setup, although (due to dancing 
pigs) they won't actually check any of the signatures so plaintext HTTP would 
also work (www.openbsd.org didn't do TLS until this month).

> * Set up an online marketplace to swap/buy/trade any compromised keys and the 
> certs that go with the

When good certs do bad things

2016-05-26 Thread Peter Kurrasch
 It strikes me that some people might not have a good idea how people use certs to do bad things. As the token bad guy in this forum I'll take it upon myself to share some examples of how I might use a perfectly good cert in a "bad" way:

* Create a phishing site to harvest login credentials from unsuspecting people. For this I might create my own bogus domain or piggyback off an existing, legitimate domain. Either way, I can use the cert to help create the illusion of legitimacy to a victim while I steal his or her info.
* Use a server to distribute malware via online adverts (malvertising). Having a cert helps make it look more legit and is required by some advertising services.
* Set up a spam email server and use the cert for my login page to the control panel. The cert wouldn't be used on the email side of things but controlling access to a server that lets me do bad stuff.
* Use a server to distribute malware via downloads. When I launch a spam campaign I'll attach an infected document with some dropper malware in it. The dropper malware then contacts my server to get the real malware, be it ransomware or a banking trojan or remote desktop control or general zombie code. Whatever the case may be, I can use certs to encrypt the malware download making it harder for people to figure out what's really going on.
* Sign my malware code so that Windows or MacOS will happily install it.
* Set up a command and control server and use certs to send encrypted messages between the malware on the devices I've pwnd and my server.
* Set up a media server so that people can download some great movies that I pilfered from someone else.
* Create a forum so people can talk about things their government does that really bugs them and how to evade the different law enforcement agencies. Obviously I'm using certs to make it harder for those agencies to snoop on the forum participants.
* Set up an online marketplace to swap/buy/trade any compromised keys and the certs that go with them. Naturally I'd have a place to discuss which CA's have the easiest security measures to bypass.
* And sometimes it's just fun to park outside a hotel and setup a free WiFi network to do some MITM. People do some crazy things when they think no one is watching, and certs keep people from getting suspicious that anything is amiss.
* Oh, and Lenovo and Dell demonstrated some out of the box thinking with all the Superfish stuff.

The point here is that while "bad" can be a subjective term there are some behaviors that ought to be discouraged. There is a role for CA's to play in that effort but not in any sort of absolute, all or nothing sense.

My suggestion is to frame the issue as: What is reasonable to expect of a CA if somebody sees bad stuff going on? How should CA's be notified? What sort of a response is warranted and in what timeframe? What guidelines should CA's use when determining what their response should be?

All of this is worthy of discussion, but it's gonna get complicated. 
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Request to enable EV for VeriSign Class 3 G4 ECC root

2016-05-19 Thread Peter Kurrasch
Hi Kathleen, 

My recommendation is for Mozilla to reject this request from Symantec on the 
grounds that it is unnecessary. As others have pointed out recently, the chief 
function of a CA is to certify identity. That certification should be ably met 
with the regular cert issuance procedures rendering the EV procedures 
superfluous. That, or perhaps the CA knows of certain weaknesses in the regular 
identification process that have been remedied for the EV process? Perhaps EV 
is a way of saying, "No, seriously you guys, this time we really, really 
identified the cert applicant."

Whatever the case may be, I don't see how turning on EV ‎benefits Mozilla 
users. If basic identity is sufficiently established for regular certs, and 
that being the chief function of CA's, what improvement will EV enablement 
possibly produce?

Thanks.

  Original Message  
From: Kathleen Wilson
Sent: Wednesday, May 18, 2016 4:58 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Request to enable EV for VeriSign Class 3 G4 ECC root

Here is a summary of this discussion so far about Symantec's request to enable 
EV treatment for the "VeriSign Class 3 Public Primary Certification Authority - 
G4" root certificate that was included via bug #409235, and has all three trust 
bits enabled. 

1) The "Symantec AATL ECC Intermediate CA" needs to be revoked and added to 
OneCRL. The intermediate cert has been added to Salesforce. 
I'm assuming we may proceed with this request, as long as the cert is added to 
OneCRL before EV treatment is actually enabled in a Firefox release.

2) Questions were raised about wildcard certs in regards to the BRs. But it 
sounds like for now Symantec's use of wildcard certs is not breaking any BRs.
Question for Symantec: Are any of the issued wildcard certs EV?

3) Question raised: What technical controls are in place to ensure that systems 
which issue S/MIME certs "in this CA hierarchy" are not capable of issuing an 
SSL server certificate?
Answer from Symantec: We have a technical control in place for systems that 
issue S/MIME certs in this CA hierarchy. Our systems use static cert templates 
from which end-entity certs are issued. Those templates include an EKU value, 
but do not use the serverAuth or anyExtendedKeyUsage values.

4) Intermediate certificates for this root have been loaded into Salesforce, 
and are available at the following links:
https://wiki.mozilla.org/CA:SubordinateCAcerts
https://mozillacaprogram.secure.force.com/CA/PublicIntermediateCerts?CAOwnerName=Symantec%20/%20VeriSign
Symantec’s revoked intermediate certs have not yet been loaded into Salesforce. 
As per https://wiki.mozilla.org/CA:Communications#March_2016_Responses Symantec 
plans to enter this data by June 30, 2016.

This request is still under discussion, so please continue to provide your 
input.

Thanks,
Kathleen


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


CABF discussion Re: SSL Certs for Malicious Websites

2016-05-19 Thread Peter Kurrasch
  Hi Gerv,

I would recommend against asking CA's to necessarily judge "malicious intent" on their own. Many CA's have enough to worry about just making sure they issue certs properly that I would question their ability to take on this extra work and perform it sufficiently well.

That said, it may be worthwhile bringing up for discussion the idea of malicious intent and what roles or responsibilities are reasonable for a CA. There are undoubtedly parallels here to how anti-spam and IP address blacklisting services go about their functions as well as Microsoft's response to malicious code that is signed with a cert that is recognized by their code signing program. Perhaps there are lessons learned there or good practices that could be adopted.

Possible talking points:

* In simplest terms I imagine most all CA's would agree that "we have the right to refuse service to anyone for any reason, up to and including revocation of already-issued certs". Certainly, something that can be seen as harmful to the reputation of a CA would qualify as one such reason?
* If a CA is notified that a cert under their purview is being used for malicious intent, what will their response be and what should it be? Can a CA individually specify what they might consider malicious, how a judgment might be reached, any notifications made? Certainly each case and situation will have its own considerations but maybe some general guidelines could still be documented.

Even if the best a CA can do is revoke the cert, that still has value. At least we might have some assurance that future encounters with the malicious activity will have limited effect.

Thanks.

From: Gervase Markham
Sent: Wednesday, May 18, 2016 9:17 AM
To: Kathleen Wilson; mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: SSL Certs for Malicious Websites

On 17/05/16 22:41, Kathleen Wilson wrote:
> On Monday, May 16, 2016 at 9:20:56 AM UTC-7, Kathleen Wilson wrote:
>> I am wondering if the BRs need to be updated to:
>> + Define what is meant by "Certificate misuse, or other types of fraud". (e.g. being used for a purpose outside of that contained in the cert, or applicant provided false information.)
>> + Add text similar to what is in the EV Guidelines stating that TLS/SSL certificates focus only on the ownership of the domain name(s) included in the certificate, and not on the behavior of the website. Note that the BRs already have section 9.6.1 about certificate warranties.
>
> Would someone please volunteer to take this up with the CA/Browser Forum?

To be clear: you would like the CA/Browser Forum to define more explicitly what is meant by "Certificate misuse, or other types of fraud" in the definition of "Certificate Problem Report"? And your initial proposal for a definition is "being used for a purpose outside of that contained in the cert, or applicant provided false information"?

If we can be clear by the end of the week what we are asking, I can bring this up in the CAB Forum face-to-face meeting next week.

> https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/inclusion/
> "4. We reserve the right to not include a particular CA certificate in our software products. This includes (but is not limited to) cases where we believe that including a CA certificate (or setting its "trust bits" in a particular way) would cause undue risks to users' security, for example, with CAs that
> - knowingly issue certificates without the knowledge of the entities whose information is referenced in the certificates; or
> - knowingly issue certificates that appear to be intended for fraudulent use."
>
> What is meant by "fraudulent use"?

I think the bullet as a whole could mean that we reserve the right to not include CAs who happily issue certs to "www.paypalpayments.com" to just anyone without any checks or High Risk string list or anything. Such a cert, unless issued to Paypal, Inc., is clearly to be used for fraud, IMO, and a CA is negligent in issuing it given that it's not hard to flag for manual review any cert containing the names of major banks and payment companies.

Gerv

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
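Gerv's point about flagging the names of major banks is easy to prototype. The sketch below shows the sort of pre-issuance check a CA could run, with a hypothetical (and deliberately tiny) high-risk list standing in for whatever a real CA would curate under the BRs' notion of a high risk certificate request.

    # Hypothetical list; a real CA would maintain and review this continuously.
    HIGH_RISK_STRINGS = {"paypal", "bankofamerica", "hsbc", "visa", "mastercard"}

    def needs_manual_review(fqdns):
        """Return the requested names that contain a high-risk string and
        should therefore be routed to a human before issuance."""
        flagged = []
        for name in fqdns:
            lowered = name.lower()
            if any(s in lowered for s in HIGH_RISK_STRINGS):
                flagged.append(name)
        return flagged

    print(needs_manual_review(["www.paypalpayments.com", "example.org"]))
    # -> ['www.paypalpayments.com']

The hard part is not the string match but deciding what goes on the list and what a reviewer does with a hit, which is exactly the policy question being debated here.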

Re: HARICA Root Renewal Request

2016-02-22 Thread Peter Kurrasch
  Hi Dimitris,

You certainly echo the sentiment of others in this forum by directing me to the CABF but my concerns are particular to HARICA at this point. Simply put, the CABF BR has security gaps in section 3.2.2.4 which can result in certificate mis-issuance. There is no reason HARICA must tolerate such gaps in its own policies.

So the question I guess comes down to this: Is HARICA able to tighten its own controls regarding section 3.2.2.4 and go beyond what the BR has outlined?

Thanks.

From: Dimitris Zacharopoulos
Sent: Monday, February 22, 2016 2:23 AM

>> PK
>> The one change I would ask for, before
inclusion of the HARICA root in the Mozilla trust store, is that
the CPS be updated in regards to the use of these options. Where
possible, I would just say that an option will not be used. If
not possible, I would like to see details on how the
vulnerabilities in each case will be mitigated.‎


All these methods are approved and published in the CA/B Forum
Baseline Requirements. Perhaps it would be best to raise these
concerns in the CA/B Forum's public mailing list (pub...@cabforum.org).
In any case, if the CA/B Forum changes these methods, all CAs
(including HARICA) will have to adjust their practices (and practice
documents) to remove the verification methods you mentioned.‎
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: HARICA Root Renewal Request

2016-02-01 Thread Peter Kurrasch
  Thank you, Dimitris, for your helpful response! I appreciate the clarifications you provided. I do like that there are fairly tight controls in place as I think it will serve everyone (both HARICA CA subscribers and the wider Internet population) well.

I did review version 3.3 (which is much better than the previous version!) and the clarifications you mention below all sound reasonable to me. I have no further comments on them if you will be updating the CPS accordingly. For some of the more technical points, I will provide some commentary but in a separate email. I'll try to get my comments to you soon as I'm sure you want to move forward in this process without too much delay.

Thanks again.

From: Dimitris Zacharopoulos
Sent: Tuesday, January 26, 2016 5:58 AM

Hello Peter and thank you for reviewing this request. I hope you
have reviewed the DRAFT CP/CPS available from the bug 1201423
since we have done some changes after the original bug report.

On 25/1/2016 6:16 μμ, Peter Kurrasch wrote:


  I've reviewed the CPS/CP and in general I
like it but I do have some concerns. My frame of reference is
two-fold: First, how large is the attack surface through which I
as a bad guy might obtain a cert to use for nefarious purposes?
I would rate that as "moderate". Second, ho‎w much damage can I
cause with a fraudulently obtained cert and private key? I rate
this as "significant" based on my understanding and
interpretation of this doc. As my understanding improves I'll
probably change my mind, though.
  
  
  One general problem I had was trying to
figure out the right context, roles, and such for some of the
policies stated in the doc. For example, the terms HARICA,
HARICA PKI, HARI PKI, HARICA member of organization, HARICA
root, subCAs and such appeared in ways that seemed confusing but
maybe I am the one who's confused. In particular it wasn't
always clear to me which roles would be performed by a "member
organization" vs "the main" CA--and under which circumstances
and how many there are likely to be. Knowing this helps me
better judge the attack surface and damage potential.
  
  


We will try to make these terms clearer in a future revision. For
this review, please consider the following which might make things
more clear:

 "HARICA" is the "organization" that runs, administers, manages,
oversees the "HARICA PKI". HARICA Root and all subCAs are centrally
managed. We searched for the term "HARI PKI" in our CP/CPS but did
not get a hit. HARICA members are Greek Academic and Research
Institutions signing a certain MoU, which is
available at http://www.harica.gr/procedures.
You may consider this as an "affiliation", as defined in section
1.6.1. HARICA members (as Institutions) have physical persons
(students, faculty, staff, researchers and so on) under their
"supervision".

We did not find the term "the main" referring to a CA. We do have a
"Central RA" that verifies identity, email ownership and control
over domains.

...snip...‎
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: ComSign Root Renewal Request

2016-01-29 Thread Peter Kurrasch
  Thanks for the update on the code signing situation within CABF. Last I knew about it, it was on the path towards adoption so it's good to know that's no longer the case.

Regarding the processes to verify ownership and control, I hope you're not suggesting we should continue to allow provably insecure procedures because the BR says it's OK to use them?

From: Peter Bowen
Sent: Friday, January 29, 2016 8:08 PM

Peter,

I obviously do not represent ComSign, but several of the items in your list are not really specific to the CPS and instead are more comments on the Mozilla policies.

On Fri, Jan 29, 2016 at 4:24 PM, Peter Kurrasch <fhw...@gmail.com> wrote:

> * There is a BR from CABF that covers code signing. I must admit I don't know the status of it but this CPS should at least acknowledge it and say if ComSign will adhere to it.

There is not a BR from the CA/Browser Forum. A subset of the members of the CABF drafted a BR, but it failed to be adopted as a Forum Guideline when brought to a vote of the whole Forum. Concerns were raised on several fronts, including some specific requirements. Therefore I don't think it is necessary or appropriate for a CA to commit to adhere (or not adhere) to a document that is still under development.

Additionally, Mozilla has determined that Code Signing is out of scope for the Mozilla CA program. Therefore, as I understand it, whether a CA issues certificates for code signing or not, and the terms under which it does so, should not be in scope for review of their CPS in this forum.

> * Section 3.2.8.1.1. is provably insecure and should not be used to verify ownership or control of a domain. A WHOIS record might contain an email address of a proxy and is, therefore, unreliable. The "magic" email address names might be directed to an unauthorized person and, therefore, also unreliable.

The process described in 3.2.8.1.1 is the process that was included in the Mozilla CA policy (https://wiki.mozilla.org/CA:CertInclusionPolicyV2.0) and is now included in the CABF BRs. It is an approved process to verify ownership or control of a domain.

> * Section 3.2.8.1.3. is also provably insecure and should not be used. Changing a website proves nothing and if I'm trying to exploit an existing domain for nefarious purposes I probably have control over the website anyway.

The process described in 3.2.8.1.3 is an implementation of section 3.2.2.4 (6) of the CABF BRs. It appears to be an approved process to verify ownership or control.

Thanks,
Peter

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: ComSign Root Renewal Request

2016-01-29 Thread Peter Kurrasch
  I've reviewed the ComSign CPS and while it has a lot of legal language in it, it lacks a certain legal and technical precision that is needed in this case. For example, there is frequent use of the term "electronic certificate" by which I think this document means "certificate used for electronic signatures of individual people" even though one might naturally include SSL/TLS and code signing certs as such. Similarly, the use of "SSL" in this doc might also mean "code signing" but that is less clear. I was actually surprised to find code signing in section 3.2.9 given the lack of its even being mentioned elsewhere in the doc.

I can appreciate ComSign wanting to take a shortcut by adding on the SSL and code signing stuff to their already detailed doc but my recommendation is for ComSign to rework the document anyway. It's difficult for me to assess the security risk that this root might introduce to the wider Internet population. Based on my current understanding of this document I'd say the attack surface is "large", meaning there seem to be many gaps through which I might fraudulently obtain a ComSign certificate. Further, I would say the potential for harm that I can cause with that ill-gotten cert is probably "unlimited".

My hope is that by reworking the CPS document ComSign can more clearly articulate what is and is not possible when it comes to issuing and using their certs and, consequently, my concerns about risk and damage are lessened. That said, I offer some specific comments:

* Section 1 should stipulate that The Law was enacted by the State of Israel and has jurisdiction in the relevant Israeli territories. I'm assuming that to be the case, anyway.
* I assume The Law does not apply to SSL/TLS certs. What about code signing?
* There is a BR from CABF that covers code signing. I must admit I don't know the status of it but this CPS should at least acknowledge it and say if ComSign will adhere to it.
* In Section 1 where it says "some procedures dealing with SSL...are not part of the Hebrew version", this statement is insufficient. For example what about code signing? Which SSL procedures are actually in the Hebrew version? This has an important implication when it comes to statements like "only the Hebrew version is binding".
* I wasn't sure if a Registration Agent (Section 1.3.2) represents a business entity which is separate and apart from the ComSign company? Will ComSign bear any responsibility for the hiring of people at an RA, for example? This has a significant impact on my evaluation of the attack surface. The greater the "space" between ComSign and the RA, the greater the chance for mistakes or bad acts to happen.
* Section 1.4., what are the usage terms for SSL/TLS and code signing certs? Is key sharing allowed for any of the certs issued by ComSign? If key sharing is not allowed, how is that enforced (and who does the enforcing)?
* Section 3.2.8.1.1. is provably insecure and should not be used to verify ownership or control of a domain. A WHOIS record might contain an email address of a proxy and is, therefore, unreliable. The "magic" email address names might be directed to an unauthorized person and, therefore, also unreliable.
* Section 3.2.8.1.3. is also provably insecure and should not be used. Changing a website proves nothing and if I'm trying to exploit an existing domain for nefarious purposes I probably have control over the website anyway.
* Throughout the CPS I found some uses of the term "signature" to be ambiguous. Where possible it would be good to distinguish between a pen-and-paper signature and one that's in some electronic form. I do like that the issuance process is not completely electronic.

I do have concerns with other sections in this CPS but I would like to give ComSign a chance to respond or do rework before I get overly detailed. As I stated above, my hope is that a revised CPS will demonstrate that the attack surface is more "medium" and that the potential for damage is less than "unlimited"--that it is constrained in some way.

From: Kathleen Wilson
Sent: Wednesday, January 20, 2016 5:46 PM

On 12/10/15 12:01 PM, Kathleen Wilson wrote:
> This request is to include the "ComSign Global Root CA" root
> certificate, and enable the Websites and Email trust bits. This root
> will eventually replace the "ComSign CA" root certificate that is
> currently included in NSS, and was approved in bug #420705.
>
> ComSign is owned by Comda, Ltd., and was appointed by the Justice
> Ministry as a CA in Israel in 

Re: HARICA Root Renewal Request

2016-01-25 Thread Peter Kurrasch
  I've reviewed the CPS/CP and in general I like it but I do have some concerns. My frame of reference is two-fold: First, how large is the attack surface through which I as a bad guy might obtain a cert to use for nefarious purposes? I would rate that as "moderate". Second, how much damage can I cause with a fraudulently obtained cert and private key? I rate this as "significant" based on my understanding and interpretation of this doc. As my understanding improves I'll probably change my mind, though.

One general problem I had was trying to figure out the right context, roles, and such for some of the policies stated in the doc. For example, the terms HARICA, HARICA PKI, HARI PKI, HARICA member of organization, HARICA root, subCAs and such appeared in ways that seemed confusing but maybe I am the one who's confused. In particular it wasn't always clear to me which roles would be performed by a "member organization" vs "the main" CA--and under which circumstances and how many there are likely to be. Knowing this helps me better judge the attack surface and damage potential.

Some specific comments:

* Will any cert issuer chaining to the HARICA root be issuing a cert to any entity outside of the HARICA organization? Frequently, universities will partner with other universities, government agencies, businesses, etc. Would a situation arise where any such org might receive a HARICA cert?
* Section 1.4.1 mentions code signing by these certs and this is actually my biggest concern in terms of "how much damage can I cause?". I assume that code signing is intended for use on the Microsoft platforms? Is any subscriber able to obtain a user cert with code signing enabled? There should be limits imposed by the subCAs.
* The repo mentioned in section 2.2 introduces some perhaps unintentional risk into the system because it essentially makes every name and email address public knowledge. Granted there are many other ways to obtain that info (for example, a researcher publishes a paper) so this isn't a situation unique to the repo. But the issue comes when user passwords become compromised. If I have a password and a name or partial name, I'm guaranteed access into the PKI system (as I understand this doc).
* In section 3.1, will all certs issued under this root have C=GR?
* Section 3.2.2.4 seems to contradict the whole of this CPS. I hope that most of these options will not be used to assess domain ownership because this document seems to illustrate what option 7 had in mind?
* Section 3.2.3.1, the first sentence in paragraph 2 makes no sense. Also, in paragraph 3, use of email is sufficient...unless the account has been compromised. I'm not saying a change is needed here but rather pointing out a fallacy in the logic being used.
* For section 3.3.2, I think you mean "initial cert".
* Section 6.1.2 uses the phrase "confirm the validity of the identity of the user" which made me wonder if anything more than an email address + password is required. Can this be clarified? How will the confirmation be implemented uniformly across the HARICA PKI organization?
* Reading section 6.1.2 (and others) made me wonder if there is any checking within the HARICA system to see if any 2 certs have the same public key. Such a check should be done prior to issuance I would think. I didn't find any prohibitions on key sharing, so is key sharing acceptable and, if not, how is that enforced?
* The EKU list in section 7.1.2 caught me by surprise. Some of them (e.g. file system, time stamping, IPSEC) I would have expected to be mentioned earlier in the doc. The size of this list has a direct impact on the "how much harm can I do" question so I was hoping this would be a very short list to limit the potential harm.
* Section 7.1.5 is an outright mess. The stuff that's a copy/paste of the Mozilla requirements should be removed. I'd like to see name constraints included in the HARICA root, and it seems .gr and .com are doable. I'd like to see tighter controls on what justifications are needed to create the .com sub-root (for example, can any student request a cert with .com and suddenly the .com sub-root will appear?). Also, I wasn't sure if a full sub-CA is to be created or just the intermediate cert. I think there is good potential here but need more info before I could really say for sure.

As a final comment, let me say that I have nothing against HARICA and my intention is not for them to be treated unfairly. Rather, I'd been thinking about many of the ideas mentioned here and wanted to compare those ideas against a real CPS document. HARICA just happened to be the next one to come along.
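The question above about two certs sharing a public key is straightforward to check mechanically. Below is a sketch using the pyca/cryptography library, purely to illustrate what such a pre-issuance or audit check could look like; nothing here describes HARICA's actual systems.

    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    def spki_fingerprint(cert_pem: bytes) -> str:
        # Hash of the SubjectPublicKeyInfo; two certificates issued for the
        # same key pair share this fingerprint even if nothing else matches.
        cert = x509.load_pem_x509_certificate(cert_pem)
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        return hashlib.sha256(spki).hexdigest()

    def find_shared_keys(pem_certs):
        # Group the corpus by key fingerprint and report any collisions.
        seen = {}
        for pem in pem_certs:
            seen.setdefault(spki_fingerprint(pem), []).append(pem)
        return {fp: group for fp, group in seen.items() if len(group) > 1}

Running something like this over the issued-certificate corpus (or against each incoming CSR) is cheap, which is why the absence of any stated prohibition on key sharing stands out.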

Validating a Domain Registrant

2015-12-09 Thread Peter Kurrasch
From: Kathleen Wilson
Sent: Wednesday, December 2, 2015 2:18 PM

On 12/2/15 11:13 AM, Peter Kurrasch wrote:
> I don't so much have a problem with the change but I would like to know if this is fairly common across other cert issuers?
>
> Personally I'm of the opinion that email is inherently insecure which makes it a bad mechanism to use in the course of trying to establish trust. However, my concern at the moment is the use of privacy services to obscure the actual owner/registrar of the domain. I see no reason to believe such services are any more trustworthy than the email channel. In fact it seems to me that those services are the weakest link in the chain.
>
> The implication is that only method 1, below, should be employed. However, if everyone else is also employing method 2 I don't want to single out SECOM unfairly.
>

Copied from the Baseline Requirements (note #2 and #4)...
~
3.2.2.4. Authorization by Domain Name Registrant

For each Fully-Qualified Domain Name listed in a Certificate, the CA SHALL confirm that, as of the date the Certificate was issued, the Applicant (or the Applicant's Parent Company, Subsidiary Company, or Affiliate, collectively referred to as "Applicant" for the purposes of this section) either is the Domain Name Registrant or has control over the FQDN by:

1. Confirming the Applicant as the Domain Name Registrant directly with the Domain Name Registrar;
2. Communicating directly with the Domain Name Registrant using an address, email, or telephone number provided by the Domain Name Registrar;
3. Communicating directly with the Domain Name Registrant using the contact information listed in the WHOIS record's "registrant", "technical", or "administrative" field;
4. Communicating with the Domain's administrator using an email address created by pre-pending 'admin', 'administrator', 'webmaster', 'hostmaster', or 'postmaster' in the local part, followed by the at-sign ("@"), followed by the Domain Name, which may be formed by pruning zero or more components from the requested FQDN;
5. Relying upon a Domain Authorization Document;
6. Having the Applicant demonstrate practical control over the FQDN by making an agreed-upon change to information found on an online Web page identified by a uniform resource identifier containing the FQDN; or
7. Using any other method of confirmation, provided that the CA maintains documented evidence that the method of confirmation establishes that the Applicant is the Domain Name Registrant or has control over the FQDN to at least the same level of assurance as those methods previously described.

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
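For readers skimming the BR text quoted above, method 4 is the one most commonly automated. A small sketch of how the candidate "constructed" addresses are generated follows; it stops pruning one label short of the TLD for simplicity, where a real implementation would consult the public suffix list, and the function itself is illustrative only.

    LOCAL_PARTS = ["admin", "administrator", "webmaster", "hostmaster", "postmaster"]

    def constructed_admin_addresses(fqdn: str):
        """Candidate addresses per BR 3.2.2.4 item 4: a fixed local part, '@',
        then the domain formed by pruning zero or more labels from the FQDN."""
        labels = fqdn.lower().rstrip(".").split(".")
        addresses = []
        # Prune zero or more leading labels; stop before the bare TLD.
        for i in range(len(labels) - 1):
            domain = ".".join(labels[i:])
            addresses.extend(f"{lp}@{domain}" for lp in LOCAL_PARTS)
        return addresses

    print(constructed_admin_addresses("www.shop.example.com"))
    # admin@www.shop.example.com ... postmaster@example.com (15 addresses)

The security argument in this thread is precisely that any one of these mailboxes being in the wrong hands, or hidden behind a WHOIS privacy proxy, is enough to satisfy the check.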


Re: Let's Encrypt Incident Report: Broken CAA Record Checking

2015-12-09 Thread Peter Kurrasch
FYI, the RFC URL in BR 1.3.1 section 1.6.1 for CAA is malformed.


  Original Message  
From: Phillip Hallam-Baker
Sent: Tuesday, December 8, 2015 11:37 AM‎

People are using CAA.

Cool!‎
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
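For anyone curious what the CAA check being discussed actually involves, here is a rough sketch of the tree-climbing lookup along the lines of RFC 6844, written with the dnspython package; error handling is minimal and the domain is just an example.

    import dns.resolver  # dnspython

    def caa_set(domain: str):
        """Walk from the FQDN toward the root and return the first non-empty
        CAA RRset found, plus the name it was found at."""
        labels = domain.rstrip(".").split(".")
        for i in range(len(labels)):
            name = ".".join(labels[i:])
            try:
                answer = dns.resolver.resolve(name, "CAA")
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                continue
            return name, [(r.flags, r.tag.decode(), r.value.decode()) for r in answer]
        return None, []

    # A CA would look for "issue"/"issuewild" records naming it; finding no
    # CAA records anywhere on the path means issuance is not restricted.
    print(caa_set("www.example.com"))

The subtlety that bites implementations is exactly the climbing behavior: checking only the leaf name, or stopping the walk too early, silently skips a parent zone's restrictions.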


Re: SECOM Request for EV Treatment

2015-12-02 Thread Peter Kurrasch
I don't so much have a problem with the change but I would like to know if this 
is fairly common across other cert issuers?

‎Personally I'm of the opinion that email is inherently insecure which makes it 
a bad mechanism to use in the course of trying to establish trust. However, my 
concern at the moment is the use of privacy services to obscure the actual 
owner/registrar of the domain. I see no reason to believe such services are any 
more trustworthy than the email channel. In fact it seems to me that those 
services are the weakest link in the chain.

The implication is that only method 1, below, should be employed. However, if 
everyone else is also employing method 2 I don't want to single out SECOM 
unfairly.


  Original Message  
From: Kathleen Wilson
Sent: Tuesday, December 1, 2015 11:34 AM‎

> Here is the text that was added to the CP:
> ~~
> The authentication method is as follows:
> 1. Using the WHOIS registry service, SECOM Trust System verifies that
> the relevant subscriber owns the domain to which the Certificate pertains.
> 2. Should the owner of the domain be different from the subscriber,
> SECOM Trust Systems authenticates the domain by having the domain owner
> submit to SECOM Trust Systems a document granting subscriber the
> permission to use the domain or by sending a verification e-mail to the
> e-mail address of the domain owner registered in the WHOIS registry
> service.
> ~~
>
> If everyone is OK with this, then I will proceed with recommending
> approval of this request to enable EV treatment for the "Security
> Communication RootCA2" root certificate.
>
> I will also track an action item to ensure that SECOM adds the updates
> in the translated version of their CP back to the original CP.
>
> Kathleen
>


Thanks again to everyone who reviewed and commented on this request from 
SECOM to enable EV treatment for the "Security Communication RootCA2" 
root certificate.

I am now re-closing this discussion and will recommend approval in the 
bug. In parallel, I will also track the action item for SECOM to update 
their original CP according to the changes they drafted in the English 
version.

https://bugzilla.mozilla.org/show_bug.cgi?id=1096205

Any further follow-up on this request should be added directly to the bug.

Thanks,
Kathleen

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECOM Request for EV Treatment

2015-11-13 Thread Peter Kurrasch
Kathleen, is SECOM getting special treatment? I was wondering if there was some 
reason to move forward before a CA has everything in order? Will we be seeing 
more of this going forward?

  Original Message  
From: Kathleen Wilson
Sent: Wednesday, November 11, 2015 4:26 PM‎

On 11/9/15 3:54 PM, Kathleen Wilson wrote:‎
>
> I propose that we move forward with approving and implementing SECOM's
> request to enable EV treatment for the the "Security Communication
> RootCA2" root certificate that was included in NSS via Bugzilla Bug
> #527419.
>
> In parallel, I plan to continue to track the action item for SECOM to
> update their CP/CPS documentation to address the concerns that have been
> raised. I believe that Ryan Sleevi is also planning to review the full
> translated CP, but I am confident that SECOM will be prompt to address
> any further concerns that are raised.
>
> I plan to track SECOM's status on updating their CP in the bug.
> https://bugzilla.mozilla.org/show_bug.cgi?id=1096205
>
> Does anyone have objections or concerns about this?
>
> Thanks,
> Kathleen
>


Thanks again to everyone who reviewed and commented on this request from 
SECOM to enable EV treatment for the "Security Communication RootCA2" 
root certificate.

I am now closing this discussion and will recommend approval in the bug. 
In parallel, I will also track the action item to finish the review of 
SECOM's translated CP/CPS and for SECOM to finish updating their CP/CPS 
(based on that review).

https://bugzilla.mozilla.org/show_bug.cgi?id=1096205

Any further follow-up on this request should be added directly to the bug.

Thanks,
Kathleen



___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Symantec Test Cert Misissuance Incident

2015-11-13 Thread Peter Kurrasch
As the perennial bad guy, I look forward to Symantec (and others) publishing 
certs with CT. I can data-mine the logs to get a list of their customers and 
target them for any number of purposes. One possibility I'm considering:

"Dear Symantec Customer: Did you hear about Symantec's recent trouble with 
certificate mis-issuance? We care deeply about your security, so please click 
on this link to verify that your own certificates have not been compromised"

The linked site, of course, will infect the person's PC with malware. Or maybe 
it will be a site that tries to steal their business from Symantec. I haven't 
decided yet.


  Original Message  
From: Matt Palmer
Sent: Thursday, October 29, 2015 3:49 PM‎

On Thu, Oct 29, 2015 at 02:17:35PM +0100, Kurt Roeckx wrote:
> On 2015-10-28 22:30, Kathleen Wilson wrote:
> >According to the article, here is what Google is requiring of Symantec:
> >
> >1) as of June 1st, 2016, all certificates issued by Symantec itself will
> >be required to support Certificate Transparency
> 
> I know this is directly copied from their blog about this, but I wonder what
> it means for a certificate to support CT. Is the requirement really that
> all certificates need to published in CT?

Yes, I'd say that's the intention. Further, I'll wager that Chromium will
refuse to trust a certificate issued after the cutoff date which chains to a
Symantec root, unless it is presented with sufficient SCTs to qualify under
Chromium's CT policy. If Google's *really* playing hardball, they may
require all existing Symantec certs to be enumerated for a whitelist, and
will refuse to trust the notBefore date, similar to how existing EV certs
were grandfathered.‎
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
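Whether for Peter's mischievous purposes or for legitimate monitoring, pulling entries out of a CT log uses the plain HTTP interface from RFC 6962. A minimal sketch follows, with the log URL deliberately fictitious since any compliant log exposes the same paths.

    import base64
    import requests

    LOG_URL = "https://ct.example.net"  # placeholder; real logs use the same API

    def tree_size() -> int:
        # RFC 6962 section 4.3: the signed tree head reports the current size.
        resp = requests.get(f"{LOG_URL}/ct/v1/get-sth", timeout=30)
        resp.raise_for_status()
        return resp.json()["tree_size"]

    def fetch_entries(start: int, end: int):
        # RFC 6962 section 4.6: retrieve log entries in the range [start, end].
        resp = requests.get(f"{LOG_URL}/ct/v1/get-entries",
                            params={"start": start, "end": end}, timeout=30)
        resp.raise_for_status()
        return resp.json()["entries"]

    # Each entry's leaf_input is a base64-encoded MerkleTreeLeaf wrapping the
    # certificate; a monitor decodes it and indexes the names inside.
    entries = fetch_entries(0, 31)
    print(tree_size(), len(base64.b64decode(entries[0]["leaf_input"])))

This is the flip side of transparency: the same public interface that lets domain owners watch for mis-issuance also hands an attacker a tidy list of a CA's customers.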


Re: Let's Encrypt Root

2015-10-26 Thread Peter Kurrasch
I couldn't tell from the bug report if it means that a discussion will take 
place once all the information is collected or if Mozilla is already moving 
forward with incorporation of the root? I'd like to ask a question about 
technical constraints on the Let's Encrypt root but will wait until the 
appropriate time. 

Thanks.

  Original Message  
From: Richard Barnes
Sent: Monday, October 26, 2015 6:46 PM
To: s...@gmx.ch
Cc: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Let's Encrypt Root

https://bugzilla.mozilla.org/show_bug.cgi?id=1204656

On Mon, Oct 26, 2015 at 7:20 PM,  wrote:

> Hi
>
> AFAIK we didn't have any Root Inclusion Request from Let's Encrypt yet.
> Did I miss something?
>
> Regards,
> Jonas
>
>
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Certum Root Renewal Request

2015-10-21 Thread Peter Kurrasch
Hi Kathleen, 

I recommend we not allow the code signing bit to be enabled for this root. Even 
though removing code signing is not yet official policy I don't think it makes 
much sense to activate it for this root if only to remove it a year later (or 
whatever the timeframe). It might be good to at least let Unizeto Certum know 
that the change is in the works. 

Speaking for myself, I'd be interested in knowing why they wanted code signing 
trust in the first place. Do they have specific customers or use-cases in mind 
or??? This could be a good learning opportunity, if there's anything that 
Unizeto Certum would like to share with the community.

  Original Message  
From: Kathleen Wilson
Sent: Wednesday, October 21, 2015 2:28 PM‎

On 10/1/15 3:44 PM, Kathleen Wilson wrote:
> Unizeto Certum has applied to include the “Certum Trusted Network CA 2”
> root certificate, turn on all three trust bits, and enable EV treatment.
> This is the next generation of the “Certum Trusted Network CA” root cert
> that was included via bug #532377.
>

Does anyone have any comments, questions, or concerns about this request 
from Unizeto Certum?

Kathleen


___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy Update Proposal: Remove Code Signing Trust Bit

2015-10-13 Thread Peter Kurrasch
  I can't think of a case either. What I'm advocating would be an expansion of Mozilla's role in the security space--something that may or may not be appropriate for me to do, with pros and cons either way.

From: Gervase Markham
Sent: Monday, October 12, 2015 10:56 AM

On 08/10/15 14:27, Peter Kurrasch wrote:
> 2. Loss of visibility/consistency/input: If Mozilla decides to exit the
> code signing world, the security community loses a place to share
> experiences, establish policies, discuss and evaluate bad acts and bad
> actors, and so forth

I've never seen this go on in a Mozilla context with regard to
code-signing - have you?

Gerv
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


  1   2   >