Re: OISTE WISeKey Global Root GC CA Root Inclusion Request
I hope you realize that these discussions were happening well after we
started the inclusion request in Bugzilla, and I can't see how what we did
was non-compliant with BR 8.1, even under the current wording.

Nevertheless, can we at least agree that our plan to advance the start of
the annual audit period to the 9th of May will satisfy both the previous
and the current criteria?

Thanks,
Pedro

On Tuesday, June 26, 2018 at 0:00:29 (UTC+2), Wayne Thayer wrote:
> On Mon, Jun 25, 2018 at 2:45 PM Ryan Sleevi via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> > On Mon, Jun 25, 2018 at 5:12 PM, Pedro Fuentes via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> > > 7. In my humble opinion, I think that these requirements must be
> > > formalized in audit criteria or explicitly in the BR, and not raised
> > > "ad hoc". Any CA embarking on an inclusion process should know all
> > > requirements beforehand.
> >
> > But they're already arguably part of the BRs, as I showed, and it's up
> > to the relevant groups (WebTrust, ETSI) to ensure that the criteria
> > they adopt reflect what browsers expect. As we see with ETSI and
> > ACAB-c, if the auditor fails to meet those requirements, it's the
> > auditor that's at fault.
>
> 8.1 is the relevant section of the BRs, and the issue was recently
> discussed on this list:
> https://groups.google.com/d/msg/mozilla.dev.security.policy/rR9g5BJ6R8E/Gwzqquv6BgAJ

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
Re: OISTE WISeKey Global Root GC CA Root Inclusion Request
On Mon, Jun 25, 2018 at 2:45 PM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
> On Mon, Jun 25, 2018 at 5:12 PM, Pedro Fuentes via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> > 7. In my humble opinion, I think that these requirements must be
> > formalized in audit criteria or explicitly in the BR, and not raised
> > "ad hoc". Any CA embarking on an inclusion process should know all
> > requirements beforehand.
>
> But they're already arguably part of the BRs, as I showed, and it's up to
> the relevant groups (WebTrust, ETSI) to ensure that the criteria they
> adopt reflect what browsers expect. As we see with ETSI and ACAB-c, if
> the auditor fails to meet those requirements, it's the auditor that's at
> fault.

8.1 is the relevant section of the BRs, and the issue was recently
discussed on this list:
https://groups.google.com/d/msg/mozilla.dev.security.policy/rR9g5BJ6R8E/Gwzqquv6BgAJ
Re: OISTE WISeKey Global Root GC CA Root Inclusion Request
On Mon, Jun 25, 2018 at 5:12 PM, Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
> Hi Ryan,
> thanks for your time reviewing this. I really appreciate your comments.
>
> As I have the auditors in the office this week, I prefer to check with
> them before issuing a more formal answer, because you're expressing
> concerns related to audit practices that I'm not qualified enough to
> respond to.
>
> In the meantime, please let me advance the following initial comments:
> 1. I can't really understand how a CA can be expected to issue a
> point-in-time audit, including the BR criteria, dated the same day as the
> issuance of a root, because that seems impossible. Any CA needs a minimum
> amount of time to prepare an issuing CA, set up OCSP responders, and run
> SSL certificate tests, and AFAIK this elapsed period is not regulated by
> the BRs or WebTrust.

I agree - but WebTrust at least provides a reporting mechanism for this, by
indicating the scope of the audit and the (verified) non-performance of
certain activities. For comparison, you can look at how the latest
illustrative reports formalize what many were already doing (or were
specifically requested to do), by calling out things like the explicit (and
verified) non-existence of RAs or key escrow services.

For a new root being spun up, you need to verify that, at the moment the
key was created, the policies and procedures were in place to safeguard
that key, and then, going forward, that those policies and procedures have
been examined consistently. This is part of the requirement for an
"unbroken series of audits". How it's reported on is an issue - and that's
why browsers have been working to communicate directly with the WebTrust
TF about these concerns, so that they can make sure that their practitioner
guidance and illustrative reports call this out for practitioners working
for CAs that wish to be trusted by browsers.
I realize that, as a CA, you can be caught unawares if the auditor is not
following these discussions or best practices, and we're always keen to
make sure there's better understanding. That said, I think the concerns
around root key generation and its ongoing proof of continued compliance
are ones that browsers have well represented to auditors, so when there are
breakdowns, they're either between the Task Force and the individual
practitioners, or between practitioners and their customers.

> 7. In my humble opinion, I think that these requirements must be
> formalized in audit criteria or explicitly in the BR, and not raised
> "ad hoc". Any CA embarking on an inclusion process should know all
> requirements beforehand.

But they're already arguably part of the BRs, as I showed, and it's up to
the relevant groups (WebTrust, ETSI) to ensure that the criteria they adopt
reflect what browsers expect. As we see with ETSI and ACAB-c, if the
auditor fails to meet those requirements, it's the auditor that's at fault.
Re: OISTE WISeKey Global Root GC CA Root Inclusion Request
Hi Ryan,
thanks for your time reviewing this. I really appreciate your comments.

As I have the auditors in the office this week, I prefer to check with them
before issuing a more formal answer, because you're expressing concerns
related to audit practices that I'm not qualified enough to respond to.

In the meantime, please let me advance the following initial comments:

1. I can't really understand how a CA can be expected to issue a
point-in-time audit, including the BR criteria, dated the same day as the
issuance of a root, because that seems impossible. Any CA needs a minimum
amount of time to prepare an issuing CA, set up OCSP responders, and run
SSL certificate tests, and AFAIK this elapsed period is not regulated by
the BRs or WebTrust.

2. In our particular case, we had some issues that delayed the readiness of
proper BR compliance for GC, mainly for two reasons: one was the summer
holidays, and we also had to fight a bug in Microsoft Certificate Services
that caused the CA certificate to include a '\0' character after the policy
qualifier URL, which delayed the process (you can find a reference here:
https://pkisolutions.com/2012r2hotfixes/ - check for "Bug 5298357 - Bad
ASN.1 encoding of certificate issuance policy extensions"). The auditors
detected this issue and only accepted the issuing CA for the point-in-time
audit once this problem was solved.

3. The key ceremony of this root was witnessed by the same auditors. I
would say that the mere fact that an auditor issues a point-in-time
WebTrust BR report undoubtedly implies full compliance with this
requirement, as with any other one set by the BR. Therefore, the fact that
the PiT exists means that the key ceremony was executed according to the
rules.

4. Please check this link (https://filevault.wisekey.com/d/412f61ab26/) for
the redlined intermediate versions. It must be noted that not all versions
are formally adopted and go public (i.e. version 2.7 was a working
version).
These are mostly changes to include the GC hierarchy, properly reflect the
latest BR (i.e. validity periods, the contact point for incident reporting,
etc.), and also to correct minor glitches.

5. On 25 July we published a new version of the CPS, including some changes
recommended by the auditors. You can see the differences in the PDF file
and judge for yourself the relevance of the changes. Any further comment
will be welcome.

6. As a result of these discussions and open concerns, and based on the
auditors' recommendation to advance this inclusion process, we already
proposed here to change the audit period so that it starts on the 9th of
May 2017 instead of at the planned annual renewal. Fortunately it was only
one month's difference, but I must say that I'd have preferred to take this
decision based on a formal compliance issue that I could understand,
because if the overlap had been several months it would have had a much
bigger impact.

7. In my humble opinion, I think that these requirements must be formalized
in audit criteria or explicitly in the BR, and not raised "ad hoc". Any CA
embarking on an inclusion process should know all requirements beforehand.

I'll provide further comments after checking with the auditors.
Thanks again and best regards,
Pedro

On Monday, June 25, 2018 at 19:25:34 (UTC+2), Ryan Sleevi wrote:
> Hi Pedro,
>
> I followed up with folks to better understand the circumstances of your
> audits and the existing practitioner guidance. From these conversations,
> my understanding is that WebTrust is working to provide better
> practitioner clarity around these scenarios.
> To recap, the particular scenario of concern is:
> - A new root key is generated (May 2017 - presumably, May 9, 2017, as
>   expressed in the cert)
> - Under BRs 6.1.1.1, this should be witnessed by the auditor (or a video
>   recorded), and the auditor should issue a report opining on it
> - Under WebTrust, using ISAE 3000 reporting (
>   http://www.webtrust.org/practitioner-qualifications/docs/item85806.pdf ),
>   that illustrative report is IN5.1
> - The first audit, on September 15, 2017, is a Point in Time assessment
> - The next audit provided is for the period of September 16, 2017 to
>   December 4, 2017
> - The report is based on the CPS dated July 25, 2017
> - Thus, we lack any reporting or opining on the set of controls or
>   processes, minimally for the period of May 2017 to July 25, 2017 - but
>   potentially from May 2017 to September 2017.
> - As a consequence, we cannot have reasonable assurance that BRs 6.1.1.1,
>   p3, (5) was upheld - that is, for the period of May to July/September,
>   that OISTE maintained "effective controls to provide reasonable
>   assurance that the Private Key was generated and protected in
>   conformance with the procedures described in its Certificate Policy
>   and/or Certification Practice Statement and (if applicable) its Key
>   Generation Script"
>
> In an "ideal" world, for a new CA (since this is not being paired with
> your Gen A/Gen B
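Pedro's point 2 above refers to a Microsoft Certificate Services bug that
appended a '\0' character after the policy qualifier URL. As a minimal
illustrative sketch (the helper name and URL are hypothetical, and a real
lint would inspect the DER encoding rather than a decoded string), such a
stray NUL can be screened for like this:

```python
# Hypothetical helper: flag a decoded policy qualifier CPS URI that carries
# a stray NUL byte, as produced by the Microsoft Certificate Services bug
# ("Bad ASN.1 encoding of certificate issuance policy extensions").
def has_stray_nul(qualifier_uri: str) -> bool:
    return "\x00" in qualifier_uri

print(has_stray_nul("http://example.com/cps\x00"))  # True for the buggy encoding
print(has_stray_nul("http://example.com/cps"))      # False for a clean one
```

A production check would operate on the raw IA5String contents of the
certificatePolicies extension, since decoding layers may silently drop or
preserve the NUL.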
Re: Certificates with improperly normalized IDNs
On 6/25/18 1:35 PM, swchang10--- via dev-security-policy wrote:
> On Friday, August 11, 2017 at 6:54:22 AM UTC-7, Peter Bowen wrote:
>> On Thu, Aug 10, 2017 at 1:22 PM, Jonathan Rudenberg via
>> dev-security-policy wrote:
>>> RFC 5280 section 7.2 and the associated IDNA RFC require that
>>> Internationalized Domain Names are normalized before encoding to
>>> punycode.
>>>
>>> Let's Encrypt appears to have issued at least three certificates that
>>> have at least one dnsName without the proper Unicode normalization
>>> applied.
>>>
>>> It's also worth noting that RFC 3491 (referenced by RFC 5280 via RFC
>>> 3490) requires normalization form KC, but RFC 5891, which replaces RFC
>>> 3491, requires normalization form C. I believe that the BRs and/or RFC
>>> 5280 should be updated to reference RFC 5890 and, by extension, RFC
>>> 5891 instead.
>>
>> I did some reading on Unicode normalization today, and it strongly
>> appears that any string that has been normalized to normalization form
>> KC is by definition also in normalization form C. Normalization is
>> idempotent, so doing toNFKC(toNFKC()) will result in the same string as
>> just doing toNFKC(), and toNFC(toNFC()) is the same as toNFC().
>> Additionally, toNFKC is the same as toNFC(toK()).
>>
>> This means that checking that a string matches the result of
>> toNFC(string) is a valid check regardless of whether using the 349* or
>> 589* RFCs. It does mean that Certlint will not catch strings that are
>> in NFC but not in NFKC.
>>
>> Thanks,
>> Peter
>>
>> P.S. I've yet to find a registered domain name not in NFC, and that
>> includes checking every name in the zone files for all ICANN gTLDs and
>> a few ccTLDs
>
> Hi,
> I have an example international domain that is NFC but not NFKC,
> "xn--ttt-8fa.pumesa.com" (this is a fake domain and my focus is on the
> general pattern).
> The pattern that will cause a domain to be NFC but not NFKC in Golang is:
> "xn--" followed by any same three letters, followed by a single "-",
> followed by any single-digit number, followed by "fa". Now, I know this
> pattern doesn't describe real Unicode; however, the behavior in the
> programming language is curious (below).
>
> The pattern described above causes strings to be NFC-positive but not
> NFKC in Golang. Furthermore, I ran a few tests using Golang (version
> go1.10.3 darwin) and Java (version "1.8.0_60"), and here are the key
> parts of the code I used:
>
> 1) Golang (used "ToUnicode" to mimic how ZLint tests):
>
> package main
>
> import (
>     "fmt"
>     "golang.org/x/net/idna"
>     "golang.org/x/text/unicode/norm"
> )
>
> func main() {
>     str := "xn--xxx-7fa.pumesa.com"
>     punycode, err := idna.ToUnicode(str)
>     if err != nil {
>         fmt.Println(err)
>     }
>     fmt.Println("Is NFC ", norm.NFC.IsNormalString(punycode))
>     fmt.Println("Is NFKC ", norm.NFKC.IsNormalString(punycode))
> }
>
> The last NFKC check is what causes ZLint to throw an error, stating that
> the Unicode is not in compliance. It seems that ZLint needs to be updated
> to follow the latest BR (RFC 5891), meaning it should check whether the
> Unicode in question is NFC-compliant rather than NFKC?
>
> Below is something even more interesting.
>
> 2) Java:
>
> import java.net.IDN;
> import java.text.Normalizer;
>
> public class Main {
>     public static void main(String args[]) {
>         String cn = "xn--www-0xx.pumesa.com";
>         String punycode = IDN.toASCII(cn);
>         //punycode = IDN.toUnicode(punycode);
>         System.out.println("is NFC " + Normalizer.isNormalized(punycode,
>             Normalizer.Form.NFC));
>         System.out.println("is NFKC " + Normalizer.isNormalized(punycode,
>             Normalizer.Form.NFKC));
>     }
> }
>
> Per the Oracle docs, java.net.IDN.toASCII conforms to RFC 3490, and it
> throws no error. This can be double-checked within the language by
> converting the punycode back to Unicode; both print statements return
> true.
> So to reiterate, the two main questions are:
> 1) Should there be a discussion about why Oracle Java and Golang don't
> agree on whether this pattern causes Unicode to be NFKC-compliant?
> The potential impact is that results obtained from a Java system may not
> be ZLint-compliant.
> 2) Should ZLint be updated to the latest BR (RFC 5891), regardless of
> question #1?

Probably. However, please be aware that the change from RFC 3490 (IDNA) to
RFC 5891 (IDNA2008) involved more than just a change from Unicode
normalization form KC to Unicode normalization form C.

Also relevant: https://tools.ietf.org/html/rfc8399

Peter
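A concrete instance of those additional differences: IDNA2003's nameprep
step maps some characters that IDNA2008 treats as valid code points in
their own right, the German sharp s being the classic example. A quick
sketch using Python's standard-library codec, which implements RFC 3490:

```python
# Python's built-in "idna" codec implements RFC 3490 (IDNA2003), whose
# nameprep step maps the sharp s (U+00DF) to "ss" before any Punycode step.
# Under RFC 5891 (IDNA2008), U+00DF is instead a valid code point and the
# label is Punycode-encoded with the character preserved (this requires a
# third-party IDNA2008 library).
label = "straße"
print(label.encode("idna"))  # b'strasse'
```

So a validator that only swaps NFKC for NFC, without also updating the
mapping and code point rules, would still disagree with IDNA2008 on labels
like this one.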
Re: Certificates with improperly normalized IDNs
On Friday, August 11, 2017 at 6:54:22 AM UTC-7, Peter Bowen wrote:
> On Thu, Aug 10, 2017 at 1:22 PM, Jonathan Rudenberg via
> dev-security-policy wrote:
> > RFC 5280 section 7.2 and the associated IDNA RFC require that
> > Internationalized Domain Names are normalized before encoding to
> > punycode.
> >
> > Let's Encrypt appears to have issued at least three certificates that
> > have at least one dnsName without the proper Unicode normalization
> > applied.
> >
> > It's also worth noting that RFC 3491 (referenced by RFC 5280 via RFC
> > 3490) requires normalization form KC, but RFC 5891, which replaces RFC
> > 3491, requires normalization form C. I believe that the BRs and/or RFC
> > 5280 should be updated to reference RFC 5890 and, by extension, RFC
> > 5891 instead.
>
> I did some reading on Unicode normalization today, and it strongly
> appears that any string that has been normalized to normalization form KC
> is by definition also in normalization form C. Normalization is
> idempotent, so doing toNFKC(toNFKC()) will result in the same string as
> just doing toNFKC(), and toNFC(toNFC()) is the same as toNFC().
> Additionally, toNFKC is the same as toNFC(toK()).
>
> This means that checking that a string matches the result of
> toNFC(string) is a valid check regardless of whether using the 349* or
> 589* RFCs. It does mean that Certlint will not catch strings that are in
> NFC but not in NFKC.
>
> Thanks,
> Peter
>
> P.S. I've yet to find a registered domain name not in NFC, and that
> includes checking every name in the zone files for all ICANN gTLDs and a
> few ccTLDs

Hi,
I have an example international domain that is NFC but not NFKC,
"xn--ttt-8fa.pumesa.com" (this is a fake domain and my focus is on the
general pattern).
The pattern that will cause a domain to be NFC but not NFKC in Golang is:
"xn--" followed by any same three letters, followed by a single "-",
followed by any single-digit number, followed by "fa". Now, I know this
pattern doesn't describe real Unicode; however, the behavior in the
programming language is curious (below).

The pattern described above causes strings to be NFC-positive but not NFKC
in Golang. Furthermore, I ran a few tests using Golang (version go1.10.3
darwin) and Java (version "1.8.0_60"), and here are the key parts of the
code I used:

1) Golang (used "ToUnicode" to mimic how ZLint tests):

package main

import (
    "fmt"
    "golang.org/x/net/idna"
    "golang.org/x/text/unicode/norm"
)

func main() {
    str := "xn--xxx-7fa.pumesa.com"
    punycode, err := idna.ToUnicode(str)
    if err != nil {
        fmt.Println(err)
    }
    fmt.Println("Is NFC ", norm.NFC.IsNormalString(punycode))
    fmt.Println("Is NFKC ", norm.NFKC.IsNormalString(punycode))
}

The last NFKC check is what causes ZLint to throw an error, stating that
the Unicode is not in compliance. It seems that ZLint needs to be updated
to follow the latest BR (RFC 5891), meaning it should check whether the
Unicode in question is NFC-compliant rather than NFKC?

Below is something even more interesting.

2) Java:

import java.net.IDN;
import java.text.Normalizer;

public class Main {
    public static void main(String args[]) {
        String cn = "xn--www-0xx.pumesa.com";
        String punycode = IDN.toASCII(cn);
        //punycode = IDN.toUnicode(punycode);
        System.out.println("is NFC " + Normalizer.isNormalized(punycode,
            Normalizer.Form.NFC));
        System.out.println("is NFKC " + Normalizer.isNormalized(punycode,
            Normalizer.Form.NFKC));
    }
}

Per the Oracle docs, java.net.IDN.toASCII conforms to RFC 3490, and it
throws no error. This can be double-checked within the language by
converting the punycode back to Unicode; both print statements return true.
So to reiterate, the two main questions are:
1) Should there be a discussion about why Oracle Java and Golang don't
agree on whether this pattern causes Unicode to be NFKC-compliant?
The potential impact is that results obtained from a Java system may not be
ZLint-compliant.
2) Should ZLint be updated to the latest BR (RFC 5891), regardless of
question #1?
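Peter's earlier claims in this thread - that NFKC output is always also in
NFC, and that normalization is idempotent - along with an example of a
string that is NFC but not NFKC, can be checked directly with Python's
standard unicodedata module (a sketch; the domain below is made up):

```python
import unicodedata

# The "fi" ligature U+FB01 has only a *compatibility* decomposition, so NFC
# leaves it alone while NFKC folds it to "fi": NFC-normal, not NFKC-normal.
s = "\ufb01le.example.com"  # "file.example.com" with a ligature
print(unicodedata.is_normalized("NFC", s))   # True
print(unicodedata.is_normalized("NFKC", s))  # False
print(unicodedata.normalize("NFKC", s))      # file.example.com

# Idempotence: applying a form twice changes nothing further,
# and any NFKC string is also in NFC.
once = unicodedata.normalize("NFKC", s)
assert unicodedata.normalize("NFKC", once) == once
assert unicodedata.is_normalized("NFC", once)
```

This is exactly the class of string that an NFKC-based lint rejects even
though it is acceptable under the NFC rule of RFC 5891.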
Re: OISTE WISeKey Global Root GC CA Root Inclusion Request
Hi Pedro,

I followed up with folks to better understand the circumstances of your
audits and the existing practitioner guidance. From these conversations, my
understanding is that WebTrust is working to provide better practitioner
clarity around these scenarios.

To recap, the particular scenario of concern is:
- A new root key is generated (May 2017 - presumably, May 9, 2017, as
  expressed in the cert)
- Under BRs 6.1.1.1, this should be witnessed by the auditor (or a video
  recorded), and the auditor should issue a report opining on it
- Under WebTrust, using ISAE 3000 reporting (
  http://www.webtrust.org/practitioner-qualifications/docs/item85806.pdf ),
  that illustrative report is IN5.1
- The first audit, on September 15, 2017, is a Point in Time assessment
- The next audit provided is for the period of September 16, 2017 to
  December 4, 2017
- The report is based on the CPS dated July 25, 2017
- Thus, we lack any reporting or opining on the set of controls or
  processes, minimally for the period of May 2017 to July 25, 2017 - but
  potentially from May 2017 to September 2017.
- As a consequence, we cannot have reasonable assurance that BRs 6.1.1.1,
  p3, (5) was upheld - that is, for the period of May to July/September,
  that OISTE maintained "effective controls to provide reasonable assurance
  that the Private Key was generated and protected in conformance with the
  procedures described in its Certificate Policy and/or Certification
  Practice Statement and (if applicable) its Key Generation Script"

In an "ideal" world, for a new CA (since this is not being paired with your
Gen A/Gen B CAs), we would have:
- Root Key report issued on Day X
- Point in Time assessment issued on Day X
- Period of Time assessment issued from Day X to Day Y
- If the CA was not issuing certificates / not all controls could be
  reported on, then the scope of the audit would indicate as such, until
  such a time as the CA does.
- Y should not be greater than 90 days after the first publicly trusted
  certificate was issued.

Unfortunately, not all WebTrust practitioners have been given this
guidance, and as a result, they have not passed it on to the CAs that they
are auditing. While some auditors do practice this chain of evidence/audits
from the birth of the certificate, not all auditors do.

At this point, it's a question of how the community feels about the set of
changes between the following CP/CPS versions: 2.7, 2.8, 2.9, and 2.10. In
particular, the set of changes in 2.9 calls out "Minor changes after
WebTrust assessment" - which suggests that, prior to the September 15, 2017
PITRA, there were issues or non-conformities that required addressing
before the full engagement.

- Can you speak more to what happened on July 25, 2017?
- Can you provide diffs for 2.7 to 2.10?

Basically: what are the things that can give the community confidence in
the management and scope of the root certificate between May 9, 2017 and
September 16, 2017? Examples of considerations are the adoption of the same
CP/CPS, the inclusion in the scope of a previous audit (for example, was
this included in the scope of the Gen A/Gen B CAs' audit for the period
ending September 15, 2017?), or other documentary evidence.

On Sat, Jun 16, 2018 at 11:45 AM, Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:
> Hello,
> Sorry for my insistence, but our audit is scheduled in less than two
> weeks. I'd appreciate some feedback in case there's any deviation from
> BR 8.1 that would prevent keeping the planned audit scope.
> Thanks!
> Pedro
>
> On Tuesday, June 5, 2018 at 9:02:42 (UTC+2), Ryan Sleevi wrote:
> > Hi Pedro,
> >
> > I think the previous replies tried to indicate that I will not be
> > available to review your feedback at all this week.
> >
> > On Mon, Jun 4, 2018 at 9:18 AM, Pedro Fuentes via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> > > Kind reminder.
> > > Thanks!
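Ryan's "unbroken series of audits" point can be made concrete with a small
date check. This is an illustrative sketch under assumed semantics (a
period-of-time audit starting the day after the previous one ends counts as
contiguous; a point-in-time assessment is modeled as a zero-length period)
- not any formal WebTrust rule:

```python
from datetime import date

# Check that audit periods form an unbroken chain from root key generation.
def unbroken(key_generated: date, periods: list[tuple[date, date]]) -> bool:
    covered_until = key_generated
    for start, end in sorted(periods):
        if (start - covered_until).days > 1:  # gap before this period begins
            return False
        covered_until = max(covered_until, end)
    return True

key_gen = date(2017, 5, 9)                         # root key generation
audits = [(date(2017, 9, 15), date(2017, 9, 15)),  # point-in-time
          (date(2017, 9, 16), date(2017, 12, 4))]  # period-of-time
print(unbroken(key_gen, audits))  # False: May 9 to Sep 15 is uncovered
```

Moving the start of the period under audit back to May 9, 2017, as proposed
in the thread, is precisely what closes that gap.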