Re: FW: StartCom inclusion request: next steps

2017-09-19 Thread userwithuid via dev-security-policy
On Tue, Sep 19, 2017 at 3:09 PM, Nick Lamb via dev-security-policy
 wrote:
> I have no doubt that this was obvious to people who have worked for a public 
> CA, but it wasn't obvious to me, so thank you for answering. I think these 
> answers give us good reason to be confident that a cross-signed certificate 
> in this situation would not be available to either end subscribers or 
> StartCom unless/ until the CA which cross-signed it wanted that to happen.
>
> It might still make sense for Mozilla to clarify that this isn't a good idea, 
> or even outright forbid it anyway, but I agree with your perspective that 
> this seemed permissible under the rules as you understood them and wasn't 
> obviously unreasonable.

I'm pretty sure it's already forbidden, since policy version 2.5
anyway (has effective date after the Certinomis shenanigans though):

"The CA with a certificate included in Mozilla’s root program MUST
disclose this information within a week of certificate creation, and
before any such subordinate CA is allowed to issue certificates."

2.5 added the "within a week of certificate creation" [1] . "Creation"
vs "My Safe", "Creation" wins.  :-)

[1] 
https://github.com/mozilla/pkipolicy/commit/b7d1b6c04458114fbe73fa3f146ad401235c2a1b


Re: Old roots to new roots best practice?

2017-09-19 Thread Ryan Sleevi via dev-security-policy
On Tue, Sep 19, 2017 at 10:49 PM, userwithuid via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Either way, in the specific case, StartCom, this criticism seems to be
> inapplicable, as the revoked one was never deployed in the first place.


I don't think that's a fair conclusion. As noted in the StartCom
discussions, the mere existence of such a certificate is enough to make it
possible for clients (hostile or misinformed) to serve this chain and
trigger the behaviour, and thus, as Gerv noted: "While this is probably not
a policy violation, it's not good practice." - which is absolutely true.

This is not the first time StartCom has done something 'like' this - that
is, not a policy violation, but not good practice. During the SHA-1/SHA-2
migration, StartCom (under its old management) re-signed an existing SHA-1
intermediate - same name, key, and other attributes - with SHA-2. Because
these were otherwise equivalent, clients were non-deterministic in their
selection of SHA-1 versus SHA-2, and might block such certificates, warn on
them, or fail to work.

The "good practice" in this case is to issue new certificates off a new
(SHA-2) intermediate with a new name and new key - not trying to continue
to use 'both' intermediates. Many other CAs did this successfully.

Things like this reflect on a CA's execution of best practice and its
avoidance of issues - both those explicitly prohibited and those that are
simply 'bad ideas'.
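
To make the ambiguity concrete, here is a toy sketch in Python (hypothetical
data model, not any real client's code) of a naive chain builder that simply
takes the first cached intermediate whose subject matches the leaf's issuer:

    from collections import namedtuple

    # Minimal stand-in for a parsed certificate: only the fields that matter here.
    Cert = namedtuple("Cert", "subject issuer sig_alg")

    def build_chain(leaf, cache):
        # Take the first cached intermediate whose subject matches the leaf's
        # issuer - roughly what several legacy path builders do.
        for candidate in cache:
            if candidate.subject == leaf.issuer:
                return [leaf, candidate]
        return None

    leaf = Cert("CN=example.com", "CN=Some Intermediate", "sha256WithRSA")
    sha1_int = Cert("CN=Some Intermediate", "CN=Root", "sha1WithRSA")
    sha2_int = Cert("CN=Some Intermediate", "CN=Root", "sha256WithRSA")

    # Same leaf, same two intermediates - only the cache order differs:
    print(build_chain(leaf, [sha1_int, sha2_int])[1].sig_alg)  # sha1WithRSA
    print(build_chain(leaf, [sha2_int, sha1_int])[1].sig_alg)  # sha256WithRSA

Which variant gets selected - and therefore whether a SHA-1-intolerant client
warns, blocks, or succeeds - flips with nothing more than cache fill order.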


Re: Old roots to new roots best practice?

2017-09-19 Thread userwithuid via dev-security-policy
On Monday, September 18, 2017 at 1:58:03 AM UTC, Ryan Sleevi wrote:
> I agree, Gerv's remarks are a bit confusing with respect to the concern.
> You are correct that the process of establishing a new root generally
> involves the creation of a self-signed certificate, and then any
> cross-signing that happens conceptually creates an 'intermediate' - so you
> have a key shared by a root and an intermediate.
> 
> This is not forbidden; indeed, you can see in my recent suggestions to
> Symantec/DigiCert, it can and often is the best way for both compatibility
> and interoperability. Method #2 that you mentioned, while valid, can bring
> much greater compatibility challenges, and thus requires far more careful
> planning and execution (and collaboration both with servers and in
> configuring AIA endpoints)

Great, from that I gather method 1 is indeed not discouraged and maybe even 
preferred over 2, which is certainly good to hear, thanks. :-)

With regard to the StartCom bullet point, I guess this was a mistake on Mozilla's 
part then and should probably be acknowledged as such, @Gerv.

> However, there is a criticism to be landed here - and that's using the same
> name/keypair for multiple intermediates and revoking one/some of them. This
> creates all sorts of compatibility problems in the ecosystem, and is thus
> unwise practice.
> 
> As an example of a compatibility problem it creates, note that RFC5280
> states how to verify a constructed path, but doesn't necessarily specify
> how to discover that path (RFC 4158 covers many of the strategies that
> might be used, but note, it's Informational). Some clients (such as macOS
> and iOS, up to I believe 10.11) construct a path first, and then perform
> revocation checking. If any certificate in the path is rejected, the leaf
> is rejected - regardless of other paths existing. This is similar to the
> behaviour of a number of OpenSSL and other (embedded) PKI stacks.
> Similarly, applications which process their own revocation checks may only
> be able to apply it to the constructed path (Chrome's CRLSets are somewhat
> like this, particularly on macOS platforms). Add in caching of
> intermediates (like mentioned in 4158), and it quickly becomes complicated.
> 
> For this reason - if you have a same name/key pair, it should generally be
> expected that revoking a single one of those is akin to revoking all
> variations of that certificate (including the root!)

Hmmm, I think I see the point you are making here, in general. Like, if both a 
revoked and a non-revoked version of an intermediate are sent by a server, it might 
potentially break a client that does "naive" revocation checking like you 
described: path first, then check (e.g. OpenSSL?). Worse, a client that has an 
intermediate cache _and_ does "naive" revocation checking might break long 
after the revoked one has been last sent (maybe the revoked one even came from 
a different server altogether). That would suck and the only server-side fix 
would be to switch to a cert signed by a new intermediate. Do you know a 
real-world client example for the latter though? Old Apple as you mentioned? Or 
was this just theoretical?
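
For what it's worth, the failure mode is easy to sketch (hypothetical,
standalone Python - not modelled on any particular client): build one path
from the cache first, then apply revocation only to that path, so a cached
revoked twin poisons validation even though an unrevoked one exists.

    def verify(leaf_issuer, cache, revoked_serials):
        # Path discovery: first cached intermediate whose subject matches.
        chosen = next((c for c in cache if c["subject"] == leaf_issuer), None)
        if chosen is None:
            return "no path found"
        # Revocation is applied only to the single constructed path;
        # there is no fallback to the other, unrevoked variant.
        if chosen["serial"] in revoked_serials:
            return "rejected: revoked intermediate"
        return "ok"

    cache = [
        {"subject": "CN=Intermediate", "serial": "01"},  # revoked twin, cached earlier
        {"subject": "CN=Intermediate", "serial": "02"},  # the one actually served today
    ]
    print(verify("CN=Intermediate", cache, revoked_serials={"01"}))  # rejected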

Either way, in the specific case, StartCom, this criticism seems to be 
inapplicable, as the revoked one was never deployed in the first place.


DigiCert mis-issuance report: rekeyed certificates

2017-09-19 Thread Jeremy Rowley via dev-security-policy
Hi all, 

 

On Friday, Sep 15, we discovered that 1090 certificates were rekeyed using
expired domain validation documents. In each case, the original
certificate's domain was properly verified at time of issuance using an
approved method.  Organization verification was properly completed for each
rekey prior to issuance.  This means that, to get a rekeyed certificate with
expired validation data, all of the following had to be true:

A.  Unexpired organization validation information,
B.  A previously issued, unexpired, and unrevoked certificate for the
same domain as the rekeyed certificate,  
C.  Valid domain documentation for the original certificate at time of
issuance, and
D.  All necessary access and permissions on the account issuing the
original certificate.

 

In parallel to the ongoing investigation, we are reverifying the affected
domains using one of the approved domain validation methods.  So far, we have
reverified 1021 of the 1090 affected certificates, leaving 69 domains
outstanding.  We are working with the domain owners to re-establish domain
approval, but plan to revoke any certs that cannot be revalidated by end of
this week.  

1.  How your CA first became aware of the problem (e.g. via a problem
report submitted to your Problem Reporting Mechanism, via a discussion in
mozilla.dev.security.policy, or via a Bugzilla bug), and the date. 

[DC] We first became aware that there was a potential issue on Sep 14th
during a routine internal review.  We noticed some of the domain
control validation documents were older than permitted under both the
Baseline Requirements and EV Guidelines. On Sep 14th, we confirmed that
rekeys in a validation system did not follow the proper path for
verification of the timestamp associated with domain documents, instead
using the timestamp of the domain documents on the original certificate.  We
patched our system early Sep 15th, preventing the system from further
re-using expired domain documentation to rekey a certificate.  Over the
weekend, we worked on this report while revalidating domains associated with
the misissued rekeys.
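
To illustrate the logic only (a minimal Python sketch, not our production
code), the rekey path must test the age of the domain validation evidence
against the time of the rekey, not against the original certificate's
issuance date:

    from datetime import datetime, timedelta

    MAX_REUSE = timedelta(days=825)  # placeholder - use whatever reuse period the BRs permit

    def may_rekey(domain_validated_at: datetime, now: datetime) -> bool:
        # Correct check: the evidence must still be inside the reuse window today.
        return now - domain_validated_at <= MAX_REUSE

    def buggy_may_rekey(domain_validated_at: datetime, original_issued_at: datetime) -> bool:
        # The broken workflow, in effect: compare against the original issuance
        # date instead of "now", so stale evidence never looks expired.
        return original_issued_at - domain_validated_at <= MAX_REUSE

    # Documents validated in mid-2014, original cert issued shortly afterwards:
    validated = datetime(2014, 6, 1)
    issued = datetime(2014, 7, 1)
    print(may_rekey(validated, datetime(2017, 9, 15)))   # False - must revalidate
    print(buggy_may_rekey(validated, issued))            # True - the bug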

2.  A timeline of the actions your CA took in response. 

[DC]  The same day the staff reported the potential issue, we began
investigating the root cause. On Sep 15th, we confirmed that older
validation code improperly reused domain validation when rekeying a
certificate. We thought we had patched this after the CAB Forum's previous
discussion on what constitutes a new certificate. If you recall those
discussions, DigiCert mistakenly believed rekeyed certificates were
considered the same certificate, not requiring domain reverification.  The
original system treated duplicate certificates and rekeyed certificates as
identical to the original cert (despite a new serial number) and relied
on a previously completed validation even when it was outside the permitted
reuse period.
Ultimately, we decided to amend our issuance process because of the CAB
Forum discussions but failed to properly communicate this requirement
internally. Although most workflows were patched to follow our "new
issuance" process, one workflow path was not.  Validation documents properly
expired in the system, preventing issuance of "new" certificate orders using
expired documentation. All organization documents expired in the system
according to the proper guidelines, blocking both new issuance and issuance
of rekeyed certificates.

3.  Confirmation that your CA has stopped issuing TLS/SSL certificates
with the problem. 

[DC] Confirmed.  The system was patched on Sep 15th to require valid domain
validation prior to rekey.  This was within 24 hours of receiving an
internal message about the issue and within two hours of confirming the
issue's existence. 

4.  A summary of the problematic certificates. For each problem: number
of certs, and the date the first and last certs with that problem were
issued. 

[DC] 1090 certificate rekeys are known. Impacted certificates will be
provided in the bug list and will also be available through CT (we're still
working on this part).  

5.  The complete certificate data for the problematic certificates. The
recommended way to provide this is to ensure each certificate is logged to
CT and then list the fingerprints or crt.sh IDs, either in the report or as
an attached spreadsheet, with one list per distinct problem. 

[DC] Provided as an Excel file on the bug list (once compiled).

6.  Explanation about how and why the mistakes were made or bugs
introduced, and how they avoided detection until now. 

[DC] The issue was introduced intentionally during DigiCert's earlier days
because of a misunderstanding by the original DigiCert team about whether
rekeys constituted new issuance rather than duplication of a previous
certificate.  We thought we had fixed the issue back when the CAB Forum
discussed this topic, but apparently missed a workflow path. One mitigating
factor is that all rekeys were initiated at the request of a 

Re: Public trust of VISA's CA

2017-09-19 Thread Matthew Hardeman via dev-security-policy
On Tuesday, September 19, 2017 at 10:13:26 AM UTC-5, Gervase Markham wrote:

> >From the above, we see that Visa only issues certificates to their own
> customers/clients, and not to the public. They believe that this permits
> them to keep confidential details of the certificates which they wish to
> have public trust.

The overall question of whether they should be issuing special-use certificates 
from a publicly trusted CA is worthwhile, but I wonder whether the point about 
disclosure / confidentiality of certificate details isn't practically mooted by 
the anticipated requirement that, from April of next year, leaf certificates from 
included public CAs will require SCT proofs in order to be trusted by the 
(currently) largest-market-share browser.  (And presumably the other browsers 
will likely follow suit?)

The other matters in discussion regarding this root hierarchy almost certainly 
do merit some attention.

It sounds like VISA generally uses these certificates between software and 
hardware elements of an intranet / extranet connecting the VISA organization, 
partner banking institutions, and their service providers.  What interest of 
theirs is served by being included in public trust stores?


Re: CAA Certificate Problem Report

2017-09-19 Thread Matthew Hardeman via dev-security-policy
On Tuesday, September 19, 2017 at 10:37:20 AM UTC-5, Gervase Markham wrote:
> On 19/09/17 14:58, Nick Lamb wrote:
> > An attacker only has to _prefer_ one particular CA for any reason,
> 
> 
> Yep, fair.
> 
> Gerv

Quite true.  In the example scenario that I have just posted, such preference 
might well take the form of "Particular CA X is preferred as they don't perform 
DNSSEC validation of their CAA queries."


Re: CAA Certificate Problem Report

2017-09-19 Thread Matthew Hardeman via dev-security-policy
On Tuesday, September 19, 2017 at 8:02:36 AM UTC-5, Gervase Markham wrote:

> I'd be interested in your engagement on my brief threat modelling; it
> seems to me that DNSSEC only adds value in the scenario where an
> attacker has some control of CA Foo's issuance process, but not enough
> to override the CAA check internally, but it also has enough control of
> the network (either at the target, or at the CA) to spoof DNS responses
> to defeat CAA. That seems on the surface like a rare scenario.

The situation involves somewhat more, and different, risks than you've described.

A properly positioned prospective attacker (positioned as described further below) 
need only identify a particular CA that would issue a domain-validated certificate 
upon his/her request -- a CA chosen because it supports an automated domain 
validation mechanism that relies exclusively upon DNS queries and fails to enforce 
full DNSSEC validation even though the target FQDN is configured for DNSSEC.

In fairness, the risk that I outline below is principally one that has not been 
mitigated historically in the general issuance of DV certificates.  As such, the 
risk illuminated here might best be described as a "reduction in the functional 
security value, actual or potential, derived from CAA checking".  What I mean to 
illustrate is that, for the first time, CAA checking in combination with strict 
enforcement of DNSSEC validation -- including strict enforcement of securely 
discovering whether or not a particular zone should be DNSSEC signed -- offers 
the opportunity to introduce a strong cryptographic assurance that a Domain 
Validated certificate issued for a properly DNSSEC-signed and CAA-implementing 
domain was, in fact, validated against the proper name servers without data 
tampering, regardless of any IP address hijack or well-positioned 
man-in-the-middle between the CA's validation infrastructure and the 
certificate requestor.

Some significant academic work has recently been performed in the area of the 
danger of specific BGP hijacks of IP space for purposes of improperly acquiring 
DV certificates for chosen target domains. See, for example: 
https://www.princeton.edu/~pmittal/publications/bgp-tls-hotpets17

Additionally, on the basis of this and other work, significant progress toward 
certain field mitigations of these threats is being made.  
See: 
https://community.letsencrypt.org/t/validating-challenges-from-multiple-network-vantage-points/40955

These field mitigations -- such as redundant checks of the domain validation from 
multiple geographically distinct network vantage points -- certainly reduce the 
risk, particularly the risk of quiet, narrowly scoped, geographically specific 
network-space hijacks which might go largely unnoticed.

Those techniques will be significantly more limited, and thus significantly less 
effective, against quite brief (seconds to minutes) but global, far less subtle 
IP space hijacks.
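
To sketch the idea (a hypothetical quorum scheme in Python; real deployments
differ): repeat the validation lookup from several vantage points and only
accept a value that a quorum of them agree on, so a hijack visible near just
one vantage point is not sufficient.

    from collections import Counter

    def quorum_validate(observations, quorum=3):
        # `observations` holds the value each vantage point saw (None on failure).
        counts = Counter(v for v in observations if v is not None)
        if not counts:
            return None
        value, seen = counts.most_common(1)[0]
        return value if seen >= quorum else None

    # A hijack visible to only one of four vantage points does not reach quorum:
    print(quorum_validate(["token-abc", "token-abc", "token-abc", "spoofed"]))  # token-abc
    print(quorum_validate(["spoofed", "token-abc", None, "token-abc"]))         # None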

Risk illustration by positing a hypothetical scenario in which:

My evil reverse twin (see various strange ramblings of Chuck Tingle for 
reference) seeks to improperly acquire a certificate for ipifony.net and/or 
subdomains thereof.

Said evil party is aware that ipifony.net is DNSSEC signed but knows of a CA 
that doesn't perform DNSSEC validation and that connects to the broader internet 
via one or more upstreams he can persuade, even if only briefly, that his systems 
are the paths to the IP space of the zone's authoritative DNS servers.

In less than a minute total, the certificate can be had.

A party well positioned and skilled in the art (the well-positioned part, at 
least, is probably far less expensive than many would suspect) advertises the 
two /24s which contain the ipifony.net authoritative DNS servers.  It just needs 
to be an entity well peered at one of the nation's several significant 
IXP-hosting carrier hotels.  Again, this costs less than one would imagine.

Request for automated fulfillment of DV cert is submitted.

The CA performs DNS queries, speaking with the fake / substitute DNS servers.  It 
confirms the apparent lack of CAA and finds the proper TXT or CNAME records to 
authenticate DV issuance.

CA issues certificate.

Hijack advertisement withdrawn.

Alternatively, if issuance could happen only after CAA checking with DNSSEC 
validation enforced, this IP space hijack would have been thwarted (assuming I 
have managed my domain configuration at the registrar securely and that I have 
maintained proper custody and control over my DNSSEC signing keys).  With 
DNSSEC validation on, there is no way to successfully persuade the CA that the 
CAA record does not exist by way of a fraudulent authoritative DNS server.
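
The decision rule being argued for is small enough to write down.  A hedged
sketch of the logic only (names like ca.example are placeholders; a real CA
needs a validating resolver behind this):

    def caa_permits_issuance(ca_domain, caa_issue_values, zone_is_signed, dnssec_valid):
        # If the zone is supposed to be signed, an answer that does not validate
        # must be treated as failure - never as "no CAA record, anyone may issue".
        if zone_is_signed and not dnssec_valid:
            return False
        if caa_issue_values is None:           # genuinely no CAA record set published
            return True
        return ca_domain in caa_issue_values   # only the listed CA(s) may issue

    # The hijack above: a signed zone, but the substitute servers return an
    # unvalidatable "no CAA" answer - issuance is refused instead of proceeding.
    print(caa_permits_issuance("ca.example", None, zone_is_signed=True, dnssec_valid=False))  # False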

More significantly, ongoing enhancement of work in the CAA sphere will allow 
for CAs to get even more selective about issuance rules specified in CAA 
records.  Let's Encrypt, for example, is cur

Re: Audit Reminder Email Summary

2017-09-19 Thread Kathleen Wilson via dev-security-policy
 Forwarded Message 
Subject: Summary of September 2017 Audit Reminder Emails
Date: Tue, 19 Sep 2017 19:00:08 + (GMT)

Mozilla: Overdue Audit Statements
Root Certificates:
   Autoridad de Certificacion Firmaprofesional CIF A62634068
Standard Audit: https://cert.webtrust.org/SealFile?seal=2032&file=pdf
Audit Statement Date: 2016-04-11
BR Audit: https://bug521439.bmoattachments.org/attachment.cgi?id=8809981
BR Audit Statement Date: 2016-08-05
EV Audit: https://bug521439.bmoattachments.org/attachment.cgi?id=8809982
EV Audit Statement Date: 2016-08-05
CA Comments: BR and EV audits have happened, but there are action plans being 
presented to the auditors. Primary issues are use of UTF8 instead of 
PrintableString in jurisdictionOfIncorporation, and a recently repealed Spanish 
law that required privat



Mozilla: Audit Reminder
Root Certificates:
   Chambers of Commerce Root
   Chambers of Commerce Root - 2008
   Global Chambersign Root
   Global Chambersign Root - 2008
Standard Audit: https://bug986854.bmoattachments.org/attachment.cgi?id=8775118
Audit Statement Date: 2016-06-17
BR Audit: https://bugzilla.mozilla.org/attachment.cgi?id=8800807
BR Audit Statement Date: 2016-08-05
EV Audit: https://bugzilla.mozilla.org/attachment.cgi?id=8800811
EV Audit Statement Date: 2016-08-05
CA Comments: null



Mozilla: Audit Reminder
Root Certificates:
   AC Raíz Certicámara S.A.
Standard Audit: https://cert.webtrust.org/SealFile?seal=2120&file=pdf
Audit Statement Date: 2016-09-15
CA Comments: null



Mozilla: Audit Reminder
Root Certificates:
   Certinomis - Root CA**

** Audit Case in the Common CA Database is under review for this root 
certificate.

Standard Audit: https://bug937589.bmoattachments.org/attachment.cgi?id=8784555
Audit Statement Date: 2016-08-23
BR Audit: https://bug937589.bmoattachments.org/attachment.cgi?id=8784555
BR Audit Statement Date: 2016-08-23
CA Comments: null



Mozilla: Audit Reminder
Root Certificates:
   E-Tugra Certification Authority
Standard Audit: https://bug877744.bmoattachments.org/attachment.cgi?id=8792625
Audit Statement Date: 2016-09-09
BR Audit: https://bug877744.bmoattachments.org/attachment.cgi?id=8792625
BR Audit Statement Date: 2016-09-09
EV Audit: https://bug877744.bmoattachments.org/attachment.cgi?id=8792625
EV Audit Statement Date: 2016-09-09
CA Comments: null



Mozilla: Audit Reminder
Root Certificates:
   GlobalSign ECC Root CA - R5
Standard Audit: https://cert.webtrust.org/SealFile?seal=2287&file=pdf
Audit Statement Date: 2017-07-26
BR Audit: https://bug1388488.bmoattachments.org/attachment.cgi?id=8895040
BR Audit Statement Date: 2017-07-26
EV Audit: https://cert.webtrust.org/SealFile?seal=2055&file=pdf
EV Audit Statement Date: 2016-06-10
CA Comments: null



Mozilla: Audit Reminder
Root Certificates:
   Go Daddy Class 2 CA**
   Go Daddy Root Certificate Authority - G2**
   Starfield Class 2 CA**
   Starfield Root Certificate Authority - G2**

** Audit Case in the Common CA Database is under review for this root 
certificate.

Standard Audit: https://bugzilla.mozilla.org/attachment.cgi?id=8815072
Audit Statement Date: 2016-08-31
BR Audit: https://bugzilla.mozilla.org/attachment.cgi?id=8815073
BR Audit Statement Date: 2016-08-31
EV Audit: https://bugzilla.mozilla.org/attachment.cgi?id=8815074
EV Audit Statement Date: 2016-08-31
CA Comments: null



Mozilla: Audit Reminder
Root Certificates:
   IdenTrust Commercial Root CA 1
   IdenTrust Public Sector Root CA 1
   DST ACES CA X6
   DST Root CA X3
Standard Audit: https://cert.webtrust.org/SealFile?seal=2107&file=pdf
Audit Statement Date: 2016-08-19
BR Audit: https://cert.webtrust.org/SealFile?seal=2106&file=pdf
BR Audit Statement Date: 2016-08-19
CA Comments: null



Mozilla: Audit Reminder
Root Certificates:
   OpenTrust Root CA G1
   OpenTrust Root CA G2
   Certplus Root CA G1
   OpenTrust Root CA G3
   Certplus Root CA G2
Standard Audit: https://bug1297034.bmoattachments.org/attachment.cgi?id=8783476
Audit Statement Date: 2016-08-19
BR Audit: https://bug1297034.bmoattachments.org/attachment.cgi?id=8783476
BR Audit Statement Date: 2016-08-19
EV Audit: https://bug1297034.bmoattachments.org/attachment.cgi?id=8783476
EV Audit Statement Date: 2016-08-19
CA Comments: null



Mozilla: Audit Reminder
Root Certificates:
   PSCProcert
Standard Audit: https://bug593805.bmoattachments.org/attachment.cgi?id=8795484
Audit Statement Date: 2016-08-02
BR Audit: https://bug593805.bmoattachments.org/attachment.cgi?id=8795484
BR Audit Statement Date: 2016-08-02
CA Comments: null



Mozilla: Audit Reminder
Root Certificates:
   Security Communication EV RootCA1
   SECOM Trust.net - Security Communication RootCA1
   Security Communication RootCA2
Standard Audit: https://bugzilla.mozilla.org/attachment.cgi?id=8805235
Audit Statement Date: 2016-08-22
BR Audit: https://bugzilla.mozilla.org/attachment.cgi?id=8805235
BR Audit Statement Date: 2016-08-22
EV Audit: https://bugzilla.mozilla.o

Re: CAA Certificate Problem Report

2017-09-19 Thread Gervase Markham via dev-security-policy
On 19/09/17 14:58, Nick Lamb wrote:
> An attacker only has to _prefer_ one particular CA for any reason,


Yep, fair.

Gerv


Re: Public trust of VISA's CA

2017-09-19 Thread Gervase Markham via dev-security-policy
On 19/09/17 16:27, Peter Bowen wrote:
> I think your statement is a little broad.  Every CA only issues
> certificates to themselves and their own customers (or as the BRs call
> them "Subscribers").  

Yes, you are right. "Customers" was the wrong word. Perhaps I rather
meant they only issue to "organizations with whom they have a business
relationship which does not centre around certificates"? Or something
like that.

> The included CAs list

i.e.
https://ccadb-public.secure.force.com/mozilla/IncludedCACertificateReport

> indicates it never followed the current Mozilla
> inclusion process, but is one of four "legacy" CAs.  

(See column "Approval Bug")

Gerv


Re: Public trust of VISA's CA

2017-09-19 Thread Peter Bowen via dev-security-policy
On Tue, Sep 19, 2017 at 8:12 AM, Gervase Markham via
dev-security-policy  wrote:
> In https://bugzilla.mozilla.org/show_bug.cgi?id=1391087 , as part of
> their comments on a report of BR-non-compliant certificate issuance, a
> representative of VISA said the following:
>
> "Visa has been operating a closed PKI
> system prior to the inception of the Baseline Requirements, which we had
> a number of legacy processes for the issuance and fulfillment of our
> certificates to our clients.  Certificates that are issued by Visa
> public CA’s are issued only to our clients for interconnectivity
> purposes."
>
> From the above, we see that Visa only issues certificates to their own
> customers/clients, and not to the public. They believe that this permits
> them to keep confidential details of the certificates which they wish to
> have public trust.
>
> The Mozilla Root Store Policy, section 2.1, states:
>
> "2.1 CA Operations. CAs whose certificates are included in Mozilla's
> root program MUST:
> 1) provide some service relevant to typical users of our software
> products; ..."
>
> My memory suggests to me that this clause is normally understood to
> preclude the inclusion of companies who wish to only issue certificates
> to themselves and their customers.

Gerv,

I think your statement is a little broad.  Every CA only issues
certificates to themselves and their own customers (or as the BRs call
them "Subscribers").  What is key here is that Mozilla requires the
certificates have relevance to "typical users of [Firefox]" while
while Visa explicitly says they only use the certificates for
"interconnectivity" purposes.  I take it to mean connections between
their clients and Visa, not between clients and the broad Internet
user base.

> We also see that they are unable to provide a timeline for full BR
> compliance. This is despite various assurances of current compliance to
> Mozilla policies (and thereby the BRs) in various CA communications,
> such as April 2017 and March 2016.
>
> In the light of this, I believe it is reasonable to discuss the question
> of whether Visa's PKI (and, specifically, the VISA eCommerce Root,
> https://crt.sh/?id=896972 , which is the one included in our store)
> meets the criteria for inclusion in Mozilla's Root Store Policy, and
> whether it is appropriate for them to continue to hold public trust.
> Your comments are welcome.

That crt.sh link shows that it only has one unexpired subordinate CA,
the "Visa eCommerce Issuing CA".  CT logs only show 27 unexpired
certificates issued by this subordinate CA:
https://crt.sh/?Identity=%25&iCAID=1414&exclude=expired, of which two
are revoked.

The included CAs list indicates it never followed the current Mozilla
inclusion process, but is one of four "legacy" CAs.  The other three
are operated by CAs that have followed the inclusion process for other
roots, so this means Visa is the only CA operator in the Mozilla
program who is a purely "legacy" CA.

I take legacy to mean that they were never assessed against the
Mozilla inclusion standards, so it does seem reasonable to review
whether they continue to meet the inclusion policy.

Thanks,
Peter


Re: Public trust of VISA's CA

2017-09-19 Thread Alex Gaynor via dev-security-policy
https://crt.sh/mozilla-certvalidations?group=version&id=896972 is a very
informative graph for me -- this is the number of validations performed by
Firefox for certs under this CA. It looks like at the absolute peak, there
were 1000 validations in a day. That's very little value for our users, in
return for an awful lot of risk.

Alex

On Tue, Sep 19, 2017 at 11:12 AM, Gervase Markham via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> In https://bugzilla.mozilla.org/show_bug.cgi?id=1391087 , as part of
> their comments on a report of BR-non-compliant certificate issuance, a
> representative of VISA said the following:
>
> "I would like to share with you some details regarding our PKI System
> and our position within the CA/Browser Forum.  Visa is one of the oldest
> operating Certificate Authorities and is currently a non-voting member
> within the CA/Browser Forum.  Visa has been operating a closed PKI
> system prior to the inception of the Baseline Requirements, which we had
> a number of legacy processes for the issuance and fulfillment of our
> certificates to our clients.  Certificates that are issued by Visa
> public CA’s are issued only to our clients for interconnectivity
> purposes.  Unlike other CA’s and particularly those that have undergone
> your Blink Process, our core business is not PKI.  The certificates that
> were impacted with the noted issues were not issued erroneously to a bad
> actor(s) nor do we issue certificates to the open public. Due to our
> unique PKI system, we are not at liberty to divulge with the public our
> list of impacted clients and their certificates without our Legals'
> consent.
>
> Regarding BR compliance, we completed our initial BR audit in September
> of 2016.  Since that time, we have been addressing the observations
> noted by our external auditors.  This also would encompass any
> certificate issues that have been publically reported.  Understanding
> that such changes in adopting a new process will have business impact,
> it is difficult to provide an accurate timeline of complete compliance
> as we are required to assess the impact to our client and payment
> systems to avoid any operational impact.  We are committed to aligning
> with BR and Mozilla requirements as we have continuously move forward in
> making the necessary changes."
>
> From the above, we see that Visa only issues certificates to their own
> customers/clients, and not to the public. They believe that this permits
> them to keep confidential details of the certificates which they wish to
> have public trust.
>
> The Mozilla Root Store Policy, section 2.1, states:
>
> "2.1 CA Operations. CAs whose certificates are included in Mozilla's
> root program MUST:
> 1) provide some service relevant to typical users of our software
> products; ..."
>
> My memory suggests to me that this clause is normally understood to
> preclude the inclusion of companies who wish to only issue certificates
> to themselves and their customers.
>
> We also see that they are unable to provide a timeline for full BR
> compliance. This is despite various assurances of current compliance to
> Mozilla policies (and thereby the BRs) in various CA communications,
> such as April 2017 and March 2016.
>
> In the light of this, I believe it is reasonable to discuss the question
> of whether Visa's PKI (and, specifically, the VISA eCommerce Root,
> https://crt.sh/?id=896972 , which is the one included in our store)
> meets the criteria for inclusion in Mozilla's Root Store Policy, and
> whether it is appropriate for them to continue to hold public trust.
> Your comments are welcome.
>
> Gerv
>


Public trust of VISA's CA

2017-09-19 Thread Gervase Markham via dev-security-policy
In https://bugzilla.mozilla.org/show_bug.cgi?id=1391087 , as part of
their comments on a report of BR-non-compliant certificate issuance, a
representative of VISA said the following:

"I would like to share with you some details regarding our PKI System
and our position within the CA/Browser Forum.  Visa is one of the oldest
operating Certificate Authorities and is currently a non-voting member
within the CA/Browser Forum.  Visa has been operating a closed PKI
system prior to the inception of the Baseline Requirements, which we had
a number of legacy processes for the issuance and fulfillment of our
certificates to our clients.  Certificates that are issued by Visa
public CA’s are issued only to our clients for interconnectivity
purposes.  Unlike other CA’s and particularly those that have undergone
your Blink Process, our core business is not PKI.  The certificates that
were impacted with the noted issues were not issued erroneously to a bad
actor(s) nor do we issue certificates to the open public. Due to our
unique PKI system, we are not at liberty to divulge with the public our
list of impacted clients and their certificates without our Legals' consent.

Regarding BR compliance, we completed our initial BR audit in September
of 2016.  Since that time, we have been addressing the observations
noted by our external auditors.  This also would encompass any
certificate issues that have been publically reported.  Understanding
that such changes in adopting a new process will have business impact,
it is difficult to provide an accurate timeline of complete compliance
as we are required to assess the impact to our client and payment
systems to avoid any operational impact.  We are committed to aligning
with BR and Mozilla requirements as we have continuously move forward in
making the necessary changes."

From the above, we see that Visa only issues certificates to their own
customers/clients, and not to the public. They believe that this permits
them to keep confidential details of the certificates which they wish to
have public trust.

The Mozilla Root Store Policy, section 2.1, states:

"2.1 CA Operations. CAs whose certificates are included in Mozilla's
root program MUST:
1) provide some service relevant to typical users of our software
products; ..."

My memory suggests to me that this clause is normally understood to
preclude the inclusion of companies who wish to only issue certificates
to themselves and their customers.

We also see that they are unable to provide a timeline for full BR
compliance. This is despite various assurances of current compliance to
Mozilla policies (and thereby the BRs) in various CA communications,
such as April 2017 and March 2016.

In the light of this, I believe it is reasonable to discuss the question
of whether Visa's PKI (and, specifically, the VISA eCommerce Root,
https://crt.sh/?id=896972 , which is the one included in our store)
meets the criteria for inclusion in Mozilla's Root Store Policy, and
whether it is appropriate for them to continue to hold public trust.
Your comments are welcome.

Gerv



Re: FW: StartCom inclusion request: next steps

2017-09-19 Thread Nick Lamb via dev-security-policy
On Tuesday, 19 September 2017 15:46:09 UTC+1, Franck Leroy  wrote:
> 1/ When we use our root, we produce a key ceremony report.
> 2/ The signature value doesn’t appear in the report so it is not possible to 
> reproduce the certificate.
> 3/ My safe is in a closet to which I don’t have the key, so I have to ask my 
> manager to open it, then I can open my safe with my key.

Thanks Franck,

I have no doubt that this was obvious to people who have worked for a public 
CA, but it wasn't obvious to me, so thank you for answering. I think these 
answers give us good reason to be confident that a cross-signed certificate in 
this situation would not be available to either end subscribers or StartCom 
unless/ until the CA which cross-signed it wanted that to happen.

It might still make sense for Mozilla to clarify that this isn't a good idea, 
or even outright forbid it anyway, but I agree with your perspective that this 
seemed permissible under the rules as you understood them and wasn't obviously 
unreasonable.


Re: FW: StartCom inclusion request: next steps

2017-09-19 Thread James Burton via dev-security-policy
On Tuesday, September 19, 2017 at 3:46:09 PM UTC+1, Franck Leroy wrote:
> Le lundi 18 septembre 2017 17:28:44 UTC+2, Nick Lamb a écrit :
> > On Monday, 18 September 2017 15:50:16 UTC+1, Franck Leroy  wrote:
> > > This control that StartCom was not allowed to use our path was technically 
> > > in place by the fact that I was the only one to have the intermediate 
> > > cross-signed certificates, stored (retained) in my personal safe.
> > 
> > I see. Three (groups of) questions as someone who does not operate a public 
> > CA:
> > 
> > When the cross signature certificate was signed did this result in some 
> > sort of auditable record of the signing? A paper trail, or its electronic 
> > equivalent - so that any audit team would be aware that the certificate 
> > existed, regardless of whether they were present when it was created ?
> > 
> > (If so) Was this record inadequate to reproduce the certificate itself, for 
> > example just consisting of a serial number and other facts ?
> > 
> > Many important functions of a CA are protected by "no lone zone" type 
> > practices, but would it be possible for you to retrieve the certificate 
> > from this safe on your own, without oversight by other employees ?
> > 
> > I suspect all the above questions have answers that would be obvious to me 
> > if I had worked for a public CA but I hope you will humour me with answers 
> > anyway.
> 
> Hello
> 
> You are right, the answers are quite obvious, but I don’t understand the purpose 
> of your questions (maybe I'm lost in translation...)
> 
> 1/ When we use our root, we produce a key ceremony report.
> 2/ The signature value doesn’t appear in the report so it is not possible to 
> reproduce the certificate.
> 3/ My safe is in a closet to which I don’t have the key, so I have to ask my 
> manager to open it, then I can open my safe with my key.
> 
> These are standard practices, and it changes nothing about the fact that we 
> cross-signed in April and sent the certificate to StartCom in August -- and that 
> cannot be mathematically proven, it is just my declaration.
> 
> 
> Franck

Hi Franck,

Could you provide us with a copy of this key ceremony report after removing 
confidential security information which is not relevant?

James


Re: FW: StartCom inclusion request: next steps

2017-09-19 Thread Franck Leroy via dev-security-policy
Le lundi 18 septembre 2017 17:28:44 UTC+2, Nick Lamb a écrit :
> On Monday, 18 September 2017 15:50:16 UTC+1, Franck Leroy  wrote:
> > This control that StartCom was not allowed to use our path was technically in 
> > place by the fact that I was the only one to have the intermediate cross 
> > signed certificates, stored (retained) in my personal safe.
> 
> I see. Three (groups of) questions as someone who does not operate a public 
> CA:
> 
> When the cross signature certificate was signed did this result in some sort 
> of auditable record of the signing? A paper trail, or its electronic 
> equivalent - so that any audit team would be aware that the certificate 
> existed, regardless of whether they were present when it was created ?
> 
> (If so) Was this record inadequate to reproduce the certificate itself, for 
> example just consisting of a serial number and other facts ?
> 
> Many important functions of a CA are protected by "no lone zone" type 
> practices, but would it be possible for you to retrieve the certificate from 
> this safe on your own, without oversight by other employees ?
> 
> I suspect all the above questions have answers that would be obvious to me if 
> I had worked for a public CA but I hope you will humour me with answers 
> anyway.

Hello

You are right, the answers are quite obvious, but I don’t understand the purpose of 
your questions (maybe I'm lost in translation...)

1/ When we use our root, we produce a key ceremony report.
2/ The signature value doesn’t appear in the report so it is not possible to 
reproduce the certificate.
3/ My safe is in a closet to which I don’t have the key, so I have to ask my 
manager to open it, then I can open my safe with my key.

These are standard practices, and it changes nothing about the fact that we 
cross-signed in April and sent the certificate to StartCom in August -- and that 
cannot be mathematically proven, it is just my declaration.


Franck


RE: FW: StartCom inclusion request: next steps

2017-09-19 Thread Inigo Barreira via dev-security-policy
Hi Gerv

> 
> But once the cross-signed cert is publicly available (and it is; it's in CT,
> however it got there), all of those certificates become trusted (or 
> potentially
> trusted, if the owner reconfigures their webserver to serve the intermediate,
> or if Firefox has already encountered it in the current browsing session).
> 

True

> > This is something I don't understand. Why do you say the audits I
> > presented don't meet the BRs? Because of the findings? The auditors
> > indicate those were fixed
> 
> I don't believe there's a formal way for an auditor to bindingly say "by the
> way, the problems we found have since been fixed" in an audit report.

Not bindingly, but we provided a CAP (Corrective Action Plan) for all those 
findings, indicating what we did to fix them, providing the evidence, and stating 
what we were going to do for those that couldn't be fixed before receiving the 
report.  

 
> But to help me understand: exactly what statement on what page of your audit
> report(s) are you referring to here?

It's in a section called "other questions" in which they say "Startcom has 
developed a plan of corrective actions with the objective of solving the 
identified exceptions, having been implemented the majority of these actions".

> 
> > About the remediation steps, well, I answered the bug about it providing all
> the info and yes, you haven't answered yet, neither to approve nor to deny.
> 
> Right. So why are you proceeding?
> 
> You might reasonably complain it's taken us a while to respond to that
> comment about the steps. Yes, it has. The Mozilla inclusion process is slow. 
> :-(

Well, because I wanted to speed up the process if possible. We did everything 
that was requested and replied to the bug. We also applied for the inclusion and 
nobody said anything about it. Kathleen told me that it was going to be slow 
because the queue was long, so I was waiting -- no problem -- but I didn't know 
that I needed to ask permission to apply.

> 
> >>> In fact, recently, I asked for permission to use the Certinomis
> >>> cross-signed
> >> certificates and have no response. I don't know if this is an
> >> administrative silence which may allow me to use it but until having
> >> a clear direction we haven't used it.
> >>
> >> Can you remind me how you asked and when?
> >
> > It was in an email of sept 4th, titled "StartCom communication" in
> > which at the end of the long email I asked for feedback to use the
> > cross-signed certificates and give additional explanations
> 
> I have no record of any email with that title, or any email from you between
> 15th August ("Re: Problem Reporting Mechanism") and 11th September ("Re:
> Remove old Startcom roots from NSS"). Where did you send it?

I sent it to the m.d.s.p list and got a reply from Andrew Ayer almost 
immediately. 

> 
> Gerv




Re: CAA Certificate Problem Report

2017-09-19 Thread Nick Lamb via dev-security-policy
On Tuesday, 19 September 2017 14:02:36 UTC+1, Gervase Markham  wrote:
> I'd be interested in your engagement on my brief threat modelling; it
> seems to me that DNSSEC only adds value in the scenario where an
> attacker has some control of CA Foo's issuance process, but not enough
> to override the CAA check internally, but it also has enough control of
> the network (either at the target, or at the CA) to spoof DNS responses
> to defeat CAA. That seems on the surface like a rare scenario.

The latter part sounds correct. The first part is definitely inaccurate.

An attacker only has to _prefer_ one particular CA for any reason, it needn't 
be that they can control issuance. Off the top of my head reasons to prefer a 
particular CA for an attack _despite_ no control over issuance might include:

* This CA offers a particular validation method we can exploit. For example 
they accept Domain Authorization Documents and we are expert forgers, we 
anticipate our documents will be entirely convincing but of course we can't 
submit them to a CA which doesn't offer this method. Or, we have a trick where 
we can intercept email for a domain unnoticed, but we cannot use this to 
validate with a CA which doesn't offer email validation.

* This CA is known not to treat our target as "High Value" while others do. If 
we try another CA they will flag the application for scrutiny and a human may 
spot that it is suspicious and notify the target.

* This CA doesn't CT log, or does so only belatedly, buying us more time before 
an alert target will realise what we're doing compared to a CA which logs all 
issuances immediately.

* This CA is notoriously slow to react to problem reports, again buying us more 
time to use the certificate before they revoke it and admit what happened 
compared to a CA which is proactive and would engage immediately.

* This CA's audit logging is poor, so that when our attack succeeds the police 
and other investigators will find the trail of evidence is inadequate and it's 
difficult to track the attackers down and prosecute them. A better CA might 
give investigators a hot trail and lead to us being caught.

* This CA is located in a jurisdiction that's especially inconvenient for the 
target. The target's usual CA probably speaks their language, operates in a 
similar timezone, maybe has extradition treaties and mutual co-operation rules 
that will help if there's a crime, but we can choose one that's as difficult as 
possible.

* This CA is trusted by a particular application or device beyond the scope of 
the Web PKI so that we can leverage an attack on Web PKI assets to actually 
break something far more important, payment gateways, life support, whatever. 
Other CAs are useless for this type of attack because there is no reason for 
them to be trusted in these other applications.

CAA isn't a magic "fix" for any of these, but it follows the principle of least 
privilege. Why let every CA in the world issue for example.com if we, as 
example.com, are confident we only want to use Honest Achmed? If we change our 
minds (perhaps after Achmed's dubious personnel structures are revealed) we can 
just update the CAA record, no trouble.


Re: FW: StartCom inclusion request: next steps

2017-09-19 Thread Gervase Markham via dev-security-policy
Hi Inigo,

On 15/09/17 17:30, Inigo Barreira wrote:
> There wasn't a lack of integrity and monitoring, of course not. All PKI logs 
> were and are signed; it's just that the auditors wanted to add integrity 
> protection to other systems where it's not so clear this should be enabled. For 
> example, if you want to archive database information to avoid managing a big 
> database, the integrity of the logs could be a problem when trying to "move" to 
> an archive system. I had some discussions about the "scope" of the integrity. 
> Regarding the monitoring, well, we monitor many things, in both data centers, 
> 24x7, etc. For this specific issue, it's true that we didn't have it 
> automated but manual, but we implemented a solution, and this 
> is not a lack of monitoring. I think the audits are to correct and improve 
> the systems, and I don't think any CA had everything correct the first time. 
> So, for example, I thought this finding was good because it made us improve.
> 
>> Repairing them afterwards does not remove the uncertainty.
> 
> Well, then any issue that you could find, even repaired or fixed, does not 
> provide you any security and hence you should not trust anyone. 

Not so. There is particular concern about issues with auditing and
monitoring. For other issues, you can check the logs to see whether a
particular bug was abused. If the auditing or monitoring is broken or
inadequate, you can't tell what happened.

>> I may have made a mistake here. I was under the impression that you had
>> told me that your new hierarchy, cross-signed by Certinomis, had issued
>> 50,000 certificates. Did I misunderstand? If so, my apologies.
> 
> No, or not totally. We have issued those certs, but not cross-signed by 
> Certinomis, because we didn't have the cross-signed certificates; so all of 
> them were issued under the new StartCom hierarchy

But once the cross-signed cert is publicly available (and it is; it's in
CT, however it got there), all of those certificates become trusted (or
potentially trusted, if the owner reconfigures their webserver to serve
the intermediate, or if Firefox has already encountered it in the
current browsing session).

> This is something I don't understand. Why do you say the audits I presented 
> don't meet the BRs? Because of the findings? The auditors indicate those were 
> fixed 

I don't believe there's a formal way for an auditor to bindingly say "by
the way, the problems we found have since been fixed" in an audit
report. But to help me understand: exactly what statement on what page
of your audit report(s) are you referring to here?

> About the remediation steps, well, I answered the bug about it providing all 
> the info and yes, you haven't answered yet, neither to approve nor to deny.

Right. So why are you proceeding?

You might reasonably complain it's taken us a while to respond to that
comment about the steps. Yes, it has. The Mozilla inclusion process is
slow. :-(

>>> In fact, recently, I asked for permission to use the Certinomis cross-signed
>> certificates and have no response. I don't know if this is an administrative
>> silence which may allow me to use it but until having a clear direction we
>> haven't used it.
>>
>> Can you remind me how you asked and when?
> 
> It was in an email of sept 4th, titled "StartCom communication" in which at 
> the end of the long email I asked for feedback to use the cross-signed 
> certificates and give additional explanations

I have no record of any email with that title, or any email from you
between 15th August ("Re: Problem Reporting Mechanism") and 11th
September ("Re: Remove old Startcom roots from NSS"). Where did you send it?

Gerv


Re: CAA Certificate Problem Report

2017-09-19 Thread Patrick Figel via dev-security-policy
On 19/09/2017 14:59, Gervase Markham via dev-security-policy wrote:
> It might also be worth thinking about the value that DNSSEC adds, over
> and above a non-secure CAA check, in various attack scenarios. At the
> moment, I'm thinking that DNSSEC doesn't necessarily add much. Here are
> 3 quick scenarios, for a domain which is CAA locked so only CA Bar can
> issue:
> 
> * Misguided employee tries to get CA Foo to issue for your domain - in
> which case, non-DNSSEC-signed checking will do.
> 
> * Attacker has some control of CA Foo but can't override CAA check - in
> which case, non-DNSSEC-signed checking will do.
> 
> * Attacker has control of CA Foo but can override CAA check - in which
> case, it doesn't matter what your DNS says.

An important consideration is that CAA with DNSSEC gives domain owners
the ability to more or less fully mitigate BGP hijacking attempts
against unauthorized CAs.

Right now, this requires domain owners to only permit issuance from CAs
with sufficient mitigations against BGP hijacking on their end, or
special agreements regarding the approval process, so this is probably
not seeing wide use yet, but with the upcoming CAA Record Extensions for
Account URI and ACME Method Binding (which is in WG last call), this
option will (hopefully soon after) become available to the general
public, so this would definitely be an area where DNSSEC improves
things, for a change.
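
For the curious, a rough sketch of what that binding buys (Python; the
accounturi / validationmethods parameter names come from the draft, while the
record contents below are made up): even a CA that is authorized by the issuer
name can then only issue for the one ACME account and validation method the
domain owner pinned.

    def parse_issue(value):
        # "ca.example; accounturi=...; validationmethods=dns-01" -> (issuer, params)
        parts = [p.strip() for p in value.split(";")]
        params = dict(p.split("=", 1) for p in parts[1:] if "=" in p)
        return parts[0], params

    def issuance_allowed(issue_value, ca_domain, account_uri, method):
        issuer, params = parse_issue(issue_value)
        if issuer != ca_domain:
            return False
        if "accounturi" in params and params["accounturi"] != account_uri:
            return False
        allowed = params.get("validationmethods")
        if allowed is not None and method not in allowed.split(","):
            return False
        return True

    record = "ca.example; accounturi=https://ca.example/acme/acct/12345; validationmethods=dns-01"
    print(issuance_allowed(record, "ca.example", "https://ca.example/acme/acct/12345", "dns-01"))  # True
    print(issuance_allowed(record, "ca.example", "https://ca.example/acme/acct/99999", "dns-01"))  # False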


Re: CAs not compliant with CAA CP/CPS requirement

2017-09-19 Thread Gervase Markham via dev-security-policy
On 15/09/17 09:38, richmoor...@gmail.com wrote:
> I suspect many smaller CAs are non-compliant too, for example gandi's CPS 
> hasn't changed since 2009 according to its changelog.
> 
> https://www.gandi.net/static/docs/en/gandi-certification-practice-statement.pdf

Thank you for bringing this to my attention; I have emailed Comodo to
ask them what the situation is here.

Gerv


Re: CAA Certificate Problem Report

2017-09-19 Thread Gervase Markham via dev-security-policy
On 13/09/17 23:57, Matthew Hardeman wrote:
> This is especially the case for CAA records, which have an explicit security 
> function: controlling, at a minimum, who may issue publicly trusted 
> certificates for a given FQDN.

I'd be interested in your engagement on my brief threat modelling; it
seems to me that DNSSEC only adds value in the scenario where an
attacker has some control of CA Foo's issuance process, but not enough
to override the CAA check internally, but it also has enough control of
the network (either at the target, or at the CA) to spoof DNS responses
to defeat CAA. That seems on the surface like a rare scenario.

Gerv


Re: CAA Certificate Problem Report

2017-09-19 Thread Gervase Markham via dev-security-policy
Hi Nick,

On 13/09/17 20:39, Nick Lamb wrote:
> Gerv, rather than start by digging into the specific technical details, let 
> me ask a high level question.
> 
> Suppose I have deployed DNSSEC for my domain tlrmx.org and I have a CAA 
> record saying to only permit the non-existent Gotham Certificates 
> gotham.example to issue.
> 
> You say you don't want CAs to need to implement DNSSEC. But you also don't 
> want them issuing for my domain. How did you imagine this circle would be 
> squared?

There seems to have been some progress made on the CAB Forum list in
terms of defining exactly what it means for a domain to have or not have
DNSSEC, and how a CA can determine that.

It might also be worth thinking about the value that DNSSEC adds, over
and above a non-secure CAA check, in various attack scenarios. At the
moment, I'm thinking that DNSSEC doesn't necessarily add much. Here are
3 quick scenarios, for a domain which is CAA locked so only CA Bar can
issue:

* Misguided employee tries to get CA Foo to issue for your domain - in
which case, non-DNSSEC-signed checking will do.

* Attacker has some control of CA Foo but can't override CAA check - in
which case, non-DNSSEC-signed checking will do.

* Attacker has control of CA Foo but can override CAA check - in which
case, it doesn't matter what your DNS says.

Gerv



Re: FW: StartCom inclusion request: next steps

2017-09-19 Thread Gervase Markham via dev-security-policy
Hi Franck,

On 18/09/17 15:49, Franck Leroy wrote:
> Our understanding in April was that as long as StartCom is not
> allowed by Certinomis to issue EE certs, the disclosure was not
> mandated immediately.

I think that we need to establish a timeline of the exact events
involved here.

But I would say that it seems to me that Startcom _were_ issuing EE
certs at that time, from the part of their hierarchy that you had
cross-signed. In what way was Certinomis forbidding them from doing so?
My understanding is that your answer to this question is...

> This control that StartCom was not allowed to use our path was
> technically in place by the fact that I was the only one to have the
> intermediate cross-signed certificates, stored (retained) in my
> personal safe.

that you had not given Startcom a copy of the cross-sign. However,
leaving aside for the moment the reasonable question about how such an
assertion can be audited, the point is that once the certificate _does_
become public, all of the existing certificates immediately become
publicly trusted. Wouldn't you agree?

Gerv


Re: FW: StartCom inclusion request: next steps

2017-09-19 Thread Gervase Markham via dev-security-policy
On 15/09/17 15:35, Inigo Barreira wrote:
> No, those weren't tests. We allowed the use of curves permitted by the BRs, 
> but this issue came up in the Mozilla policy (I think Arkadiusz posted) and I 
> also asked about it at the last CABF F2F (I asked Ryan about it); with that 
> outcome, and as the browsers didn't accept them, we revoked them and then 
> disallowed issuance. I think the discussion is still active (i.e. the use 
> of P-521).

Unless Mozilla says otherwise (which we do occasionally), you should not
be basing your practice on what we are discussing, or even on draft
versions of the policy, but on the published version - which clearly
prohibits P-521.

Gerv