Re: A vision of an entirely different WebPKI of the future...

2018-08-16 Thread Jakob Bohm via dev-security-policy

On 16/08/2018 21:51, Matthew Hardeman wrote:

Of late, there seems to be an ever increasing number of misissuances of various 
forms arising.

Despite certificate transparency, increased use of linters, etc, it's virtually 
impossible to find any CA issuing in volume that hasn't committed some issuance 
sin.



The main cause of this seems to be that CT has allowed much more
vigorous prosecution of even the smallest mistake.  Your argument
is a sensationalist attack on a thoroughly honest industry.


Simultaneously, there seems to be an increasing level of buy-in that the only 
useful identifying element(s) in a WebPKI certificate today are the domain 
labels covered by the certificate and that these certificates should be issued 
only upon demonstrated control of the included domain labels.



That is a viewpoint promoted almost exclusively by a company that has
way too much power and is the subject of some serious public
prosecution.  Kowtowing to that mastodon is not buy-in or agreement,
merely fear.

The rest of your proposal follows from your bad premises and must be
rejected.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DEFCON Talk - Lost and Found Certificates

2018-08-16 Thread Jakob Bohm via dev-security-policy

On 16/08/2018 16:24, Eric Mill wrote:

On Wed, Aug 15, 2018 at 6:36 AM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


I'd like to call this presentation to everyone's attention:

Title: Lost and Found Certificates: dealing with residual certificates for
pre-owned domains

Slide deck:

https://media.defcon.org/DEF%20CON%2026/DEF%20CON%2026%20presentations/DEFCON-26-Foster-and-Ayrey-Lost-and-Found-Certs-residual-certs-for-pre-owned-domains.pdf

(NOTE: this PDF loads in Firefox, but not in Safari and not, I'm told, in
Chrome's native PDF viewer).

Demo website: https://insecure.design/

The basic idea here is that domain names regularly change owners, creating
"residual certificates" controlled by the previous owner that can be used
for MITM. When a bunch of unrelated websites are thrown into the same
certificate by a service provider (e.g. CDN), then this also creates the
opportunity to DoS the sites by asking the CA to revoke the certificate.

The deck includes some recommendations for CAs.

What, if anything, should we do about this issue?



I think this paper provides a good impetus to look at further shortening
certificate lifetimes down to 13 months. That would better match the annual
cadence of domain registration so that there's a smaller window of time
beyond domain expiration for which a certificate would be valid, and would
continue the momentum Mozilla and the CA/B Forum have been building around
reducing certificate lifetimes and encouraging automation.

The presentation suggests having certificates only be valid through the
expiration date of the relevant registered domain, but I think that's
unrealistic. Most of the time, domains are set to autorenew so that people
never have to think about them, and their renewal cadence is totally
disconnected from certificate renewal cadence. If a domain is 6 days from
autorenew, a CA offering a 6-day-long cert and forcing someone to come back
a week later for another one would be very unreasonable.
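To make the two policies concrete, here is a small sketch (illustrative only: 397 days stands in for a roughly 13-month cap, and the function names are not from any ballot):

```python
from datetime import date, timedelta

def capped_not_after(issued: date, cap_days: int = 397) -> date:
    # Fixed-lifetime policy: validity ends a set number of days after issuance.
    return issued + timedelta(days=cap_days)

def expiry_bound_not_after(issued: date, domain_expiry: date,
                           cap_days: int = 397) -> date:
    # Expiry-bound policy from the presentation: the certificate never
    # outlives the domain registration.
    return min(issued + timedelta(days=cap_days), domain_expiry)

# The 6-days-from-autorenew case that makes the expiry-bound policy awkward:
issued = date(2018, 8, 16)
print(expiry_bound_not_after(issued, issued + timedelta(days=6)))  # → 2018-08-22
```

The second function shows why the fixed cap is the more workable rule: its output depends only on the issuance date, not on a renewal cadence the CA cannot see.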

I don't think the presentation points to building in stronger support for
revocation. If anything, it points to revocation being a threat vector for
DoS-ing sites that have nothing to do with the problem at hand, due to the
long-standing (and reasonable) practice of multi-SAN certs that combine
clumps of customers into individual certificates. Ryan points out that SNI
is becoming something that can be relied on more universally, which would
reduce the need for multi-SAN certificates, but multi-SAN certificates also
provide useful operational benefits to organizations who are using CAs with
rate limits, or simply for whom the ability to use 100x fewer certificates
relieves an operational scaling burden.

It may still be useful to deprecate multi-SAN certificates over time, but I
think the single biggest thing to take away from the presentation is that
long-lived certs create invisible risks during domain transfers, and that
the risk is more than just theoretical when looking at the whole of the
web. It's been a year and a half now since the last discussion and vote
that went from a 39-month max to a 27-month max, so I think it's a great
time to start talking about a 13-month maximum.




It seems that my response to this presentation has brought out the crowd
of people who are constantly looking to reduce the usefulness of
certificates to anyone but the largest mega-corporations.

To summarize my problem with this:

 - While some large IT operations (and a minority of small ones) run
   fully automated setups that can trivially handle replacing
   certificates many times per year, many other certificate holders treat
   certificate replacement as a rare event that involves a lot of manual
   labor.  Shortening the maximum duration of certificates down to Let's
   Encrypt levels would be a massive burden in terms of wasted man-hours
   accumulated over millions (billions?) of organizations having to do 4
   times a year what they used to do every two or five years.

 - In terms of end user security, having certificates and domains expire
  at different times significantly decreases the risk of legal domain
  hijacks by "domain parking" providers, domain squatters and the like,
  as it reduces the role of domain renewal as a single point of failure.
   If certificate pinning protocols weren't broken by design (such as
  preventing private key replacement), they would further increase this
  end user protection against trusting an unexpected site that has no
  legitimate relationship with a dubiously obtained domain.
   Thus the paper's suggestion that certificates should be limited to the
   domain lifetime is a security loss in the common case where the domain
   owner has some trust that cannot be forwarded to any unrelated new
   domain owner.  It also does nothing for the cases where domain control
   is consensually transferred, such as when the domain owner changes
   hosting providers or sells the domain.

 - While infinitely 

Re: A vision of an entirely different WebPKI of the future...

2018-08-16 Thread Matthew Hardeman via dev-security-policy
On Thursday, August 16, 2018 at 3:34:01 PM UTC-5, Paul Wouters wrote:
> Why would people not in the business of being a CA do a better job than
> those currently in the CA business?

I certainly do not assert that there would be no learning curve.  However, 
these same registries for the generic TLDs are already implementing 
cryptographic signatures and delegations at scale for DNSSEC, including 
signatures over the delegation records to authoritative DNS as well as 
cryptographic assurances that DNSSEC is not enabled for a given domain.  Many 
of the operational concerns of a CA are thus already being undertaken today as 
part of the job routinely performed by the registries.

> If you want a radical change that makes it simpler, start doing TLSA in
> DNSSEC and skip the middle man that issues certs based on DNS records.

The trouble that I see with that scheme is the typical location of DNSSEC 
validation.  DNSSEC + a third party CA witnessing point-in-time correctness of 
the validation challenge as DNSSEC signed allows for DNSSEC to provide some 
improvement to issuance-time DNS validation.  However, as soon as you take a 
third party CA out of the picture, you no longer have a "witness" with 
controlled environment (independent network vantage point, proper ensuring of 
DNSSEC signature validity, etc).  Desktop clients today don't generally perform 
DNSSEC validation themselves, relying upon the resolver that they reference to 
perform that task.  This opens a door for a man in the middle between the 
desktop and the recursive resolver.
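Concretely, a non-validating stub resolver typically just honors the AD ("authenticated data") bit set by its upstream resolver, and anyone on the path between them can set or clear that bit. A minimal sketch of the flag check (an illustrative fragment, not from any implementation; bit position per RFC 1035/4035):

```python
# DNS header flags (16 bits): QR, Opcode, AA, TC, RD, RA, Z, AD, CD, RCODE.
AD = 0x0020  # "authenticated data": resolver claims DNSSEC validation succeeded

def claims_dnssec_validated(flags: int) -> bool:
    # A non-validating stub can only trust this claim, not verify it --
    # which is exactly the man-in-the-middle gap described above.
    return bool(flags & AD)

print(claims_dnssec_validated(0x8180))  # typical response without AD → False
```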



Re: A vision of an entirely different WebPKI of the future...

2018-08-16 Thread Matthew Hardeman via dev-security-policy
On Thursday, August 16, 2018 at 3:18:38 PM UTC-5, Wayne Thayer wrote:
> What problem(s) are you trying to solve with this concept? If it's
> misissuance as broadly defined, then I'm highly skeptical that Registry
> Operators - the number of which is on the same order of magnitude as CAs
> [1] - would perform better than existing CAs in this regard. You also need
> to consider the fact that ICANN has little authority over ccTLDs.

One issue that would be solved in such a scheme as I've proposed is that only a 
single administrative hierarchy may issue certificates for a given TLD, and 
further that that hierarchy is the same one that has TLD-level responsibility 
over domains within that TLD.

Pedantic as it may be, there's virtually no such thing as a misissuance by a 
registry, if only because literally whatever they say about a domain at any 
given moment is "correct" and is the authoritative answer.

A scheme such as I've proposed also eliminates all the other layers of failure 
which may occur that can yield undesirable issuances today: concerns over BGP 
hijacks of authoritative DNS server IP space are eliminated, concerns over 
other forms of authoritative DNS server compromise are eliminated, and concern 
over compromise of a target web server is eliminated.

In the scheme I propose, the registry is signing only upon orders from the 
registrar responsible for the given domain within the TLD and the registrar 
gives such orders only upon authenticated requests that are authenticated at 
least to the same level of assurance as would be required to alter the 
authoritative DNS delegations for the domain.  (Consequently, that level of 
access today is certainly sufficient to achieve issuance from any CA that 
issues automatically upon validation against DNS records.)

I concede that ICANN would have no means to impose this upon the ccTLDs, 
leaving a gap to be figured out.

I recognize that this is a maverick idea, nearly completely divorced from the 
current WebPKI's structure.  Having said that, I do think it aligns the 
capability to issue a certificate to the administrative structures which 
already determine the very definition of what is meant by a given dnsName.  In 
addition, it reduces many diverse attack surface areas down to a single one 
(account takeover / infrastructure takeover of registrar/registry) that is 
already in the overall threat model.



Re: Misissuance and BR Audit Statements

2018-08-16 Thread Wayne Thayer via dev-security-policy
Thank you for responding on behalf of ETSI ESI and ACABc! I believe that
this is an important topic and I hope that ETSI ESI and ACABc members will
continue to participate in the discussion.

On Thu, Aug 16, 2018 at 11:11 AM clemens.wanko--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Dear all,
> this is a joint response from ETSI ESI and ACABc:
>
> ETSI has published a supplement to its audit requirements specifically to
> address the requirements of Mozilla and other CA/Browser Forum
> members for auditing Trust Service Providers that issue Publicly-Trusted
> Certificates, TS 119 403-2.  This is available for download at:
> https://www.etsi.org/standards-search#search=TS119403-2

This is extremely helpful in normalizing ETSI Audit Attestations and
ensuring that they contain the basic information that Mozilla requires, but
it provides no guidance on documenting non-conformities.

>
> With regard to the treatment of non-conformities it says in PTA-4.3-08:
> The Audit Attestation shall be issued only if no critical non-conformities
> are identified.
>
Yes, as I mentioned in my original email, this is the fundamental concern I
have with ETSI audits: qualified Audit Attestation reports cannot be
issued. I do not even understand how it is possible for ETSI auditors to
provide an unbroken sequence of period-of-time Audit Attestations if a
"critical non-conformity" occurs. Apparently once the non-conformity is
remediated it is no longer "critical"? This process gives relying parties
no visibility into the problems identified by ETSI audits.

> ETSI audits do cover the CA incident management. That includes the whole
> process, including the timely treatment of incidents as well as how to
> guarantee proper and comprehensive responses to incidents. In ETSI EN 319
> 401 the corresponding requirements are not only provided directly by section
> 7.9 Incident Management but also through the requirement for an ISMS as
> stated in section 5. Assessing that, the ETSI auditor will look at the CA
> incident treatment during the Stage 1 as well as during the Stage 2 onsite
> part of the audit. This includes the treatment of Bugzilla and crt.sh
> listed issues. Incidents closed by the CA may have resulted in a change in
> the CA operations. In such cases the auditor checks that the changes are
> functioning correctly as defined by the CA. In that way the auditor is
> assessing the incident management as such, including possible measures taken
> to avoid such incidents in the future, and the implemented measures
> themselves.
>
Good. But what if, for example, a CA were to repeatedly and systematically fail
to respond to incident reports within 24 hours? Would that non-conformity
appear on the Audit Attestation? This has appeared as a qualification on a
number of recently completed WebTrust audit statements.

> Another matter is the question of how to handle security related incidents
> and the counter measures taken by a CA in audit reports. In order to keep
> the security issue confidential, as well as the details of the measures
> taken by the CA, the accredited CABs (ETSI auditors) decided to document
> such findings in their detailed audit reports. These detailed reports list
> all relevant non-conformities and the counter measures taken by the CA. It
> is handed over to the CA in addition to the audit attestation. Based upon
> that detailed report, the ETSI auditor will compile the Audit Attestation
> that the browsers have in hand. The contents of the Audit
> Attestation as a summary document were agreed upon between ACAB'c and the
> CA/Browser Forum. If you consider it helpful to add information about the
> audit results gained in the area of CA incident treatment, we can certainly
> discuss that. We should then reach a common agreement on what exactly we
> add.
>
Yes, this would be helpful information, and I agree that there should be a
discussion resulting in auditor guidance on what to report to ensure that
we get consistent information from all auditors.

>
> We certainly believe, however, that it is not advisable to publish detected
> weak points. For example, there might be findings that the CA has NOT
> correctly treated an incident. In that case the ETSI auditor will document
> such findings and will issue neither a positive report nor an Audit
> Attestation. The CA is then obliged to immediately implement appropriate
> counter measures, which again will be judged by the auditor. Only if the
> counter measures are rated sufficient in coverage and suitability by the
> auditor will he issue a positive report and an Audit Attestation.
>
Any sufficiently serious vulnerability should of course be fixed
immediately. Once it has been fixed, there is value in disclosing the
problem. It provides relying parties with information on the risks they are
taking in trusting the CA and can result in the development of better
practices across the industry. This is 

Re: A vision of an entirely different WebPKI of the future...

2018-08-16 Thread Paul Wouters via dev-security-policy

On Thu, 16 Aug 2018, Matthew Hardeman via dev-security-policy wrote:


1.  Run one or more root CAs


Why would people not in the business of being a CA do a better job than
those currently in the CA business?


I recognize it's a radical departure from what is.  I'm interested in 
understanding if anything proposed here is impossible.  If what's proposed here 
CAN happen, AND IF we are confident that valid certificates for a domain label 
should unambiguously align to domain control, isn't this the ultimate solution?


If you want a radical change that makes it simpler, start doing TLSA in
DNSSEC and skip the middle man that issues certs based on DNS records.
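(For concreteness: a TLSA record binds a TLS endpoint to certificate data published in DNS. A sketch of computing the record data for a DANE-EE / full-certificate / SHA-256 association per RFC 6698, with a placeholder byte string standing in for a real DER-encoded certificate:)

```python
import hashlib

def tlsa_rdata(cert_der: bytes, usage: int = 3, selector: int = 0,
               mtype: int = 1) -> str:
    # RFC 6698: usage 3 = DANE-EE, selector 0 = full certificate,
    # matching type 1 = SHA-256 of the selected content.
    return f"{usage} {selector} {mtype} {hashlib.sha256(cert_der).hexdigest()}"

# Would be published at e.g.  _443._tcp.www.example.com. IN TLSA <rdata>
rdata = tlsa_rdata(b"...DER bytes of the server certificate...")
```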

Paul


Re: A vision of an entirely different WebPKI of the future...

2018-08-16 Thread Wayne Thayer via dev-security-policy
What problem(s) are you trying to solve with this concept? If it's
misissuance as broadly defined, then I'm highly skeptical that Registry
Operators - the number of which is on the same order of magnitude as CAs
[1] - would perform better than existing CAs in this regard. You also need
to consider the fact that ICANN has little authority over ccTLDs.

- Wayne

[1] https://icannwiki.org/Category:Registries

On Thu, Aug 16, 2018 at 12:51 PM Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Of late, there seems to be an ever increasing number of misissuances of
> various forms arising.
>
> Despite certificate transparency, increased use of linters, etc, it's
> virtually impossible to find any CA issuing in volume that hasn't committed
> some issuance sin.
>
> Simultaneously, there seems to be an increasing level of buy-in that the
> only useful identifying element(s) in a WebPKI certificate today are the
> domain labels covered by the certificate and that these certificates should
> be issued only upon demonstrated control of the included domain labels.
>
> DNS is already authoritatively hierarchical.
>
> ICANN has pretty broad authority to impose requirements upon the gTLDs...
>
> What if the various user agents' root programs all lobbied ICANN to impose
> a new technical requirement upon TLD REGISTRY operators?
>
> Specifically, that all registry operators would:
>
> 1.  Run one or more root CAs (presumably a Root CA per TLD under their
> management), to be included in the user agents' root program trust stores,
> such that each of these certificates is technically constrained to the
> included gTLD(s) contractually managed by that registry operator.
>
> - and further -
>
> 2.  That all such registries be required to make available to the
> registrars an automated interface for request of certificates signed by
> said CA (or a registry held and controlled issuing CA descendant of said
> root) over domain labels within that TLD on behalf of the customer holding
> a domain.  (For example, Google Domains has an interface by which it can
> request a certificate to be created and signed over a customer provided
> public key for requested labels within that registrar's customer's account.)
>
> - and further -
>
> 3.  The registrars be required to provide appropriate interfaces to their
> customers (just as they do for DNSSEC DS records today) to have the
> registry issue certificates over those domains they hold.
>
> If you wanted to spice it up, you could even require that the domain
> holder be able to request a signature over a technically constrained
> SubCA.  Then the domain holders can do whatever they like with their
> domains' certificates.
>
> Build validation of technical requirements (like no 5-year EE certs)
> into the product and enforce them at the product level.
>
> If the WebPKI is truly to be reduced to identifying specific domain labels
> in certificates issued only for those demonstrating control over those
> labels, why do we really need a marketplace where multiple entities can
> provide those certificates?
>
> The combination of registrar and registry already have complete trust in
> these matters because those actors can hijack control of their domains in
> an instant and properly ask any CA to issue.  That can happen today.
>
> What this would improve, however, is that there's one and only one place
> to get a certificate for your example.com.  From the registrar for that
> domain, with the signing request authenticated by the registrar as being
> for the customer of the registrar who holds that domain and then further
> delegated for signature by the registry itself.
>
> Such a mechanism could even be incrementally rolled out in parallel to the
> current scheme.  Over time, modern TLS endpoints meant to be accessed to
> browsers would migrate to certificates issued descending from these
> registry held and managed roots.
>
> From a practicality perspective, I don't see why this couldn't happen,
> should enough lobbying of ICANN be provided.  Today, ICANN already imposes
> certain technical requirements upon both Registries and Registrars as well
> as constraints upon their interactions with each other.  As a not entirely
> unrelated example -- this one involving cryptography and key management --
> today registries of generic TLDs are required to implement DNSSEC.
>
> I recognize it's a radical departure from what is.  I'm interested in
> understanding if anything proposed here is impossible.  If what's proposed
> here CAN happen, AND IF we are confident that valid certificates for a
> domain label should unambiguously align to domain control, isn't this the
> ultimate solution?
>
> Thanks,
>
> Matt Hardeman
>

A vision of an entirely different WebPKI of the future...

2018-08-16 Thread Matthew Hardeman via dev-security-policy
Of late, there seems to be an ever increasing number of misissuances of various 
forms arising.

Despite certificate transparency, increased use of linters, etc, it's virtually 
impossible to find any CA issuing in volume that hasn't committed some issuance 
sin.

Simultaneously, there seems to be an increasing level of buy-in that the only 
useful identifying element(s) in a WebPKI certificate today are the domain 
labels covered by the certificate and that these certificates should be issued 
only upon demonstrated control of the included domain labels.

DNS is already authoritatively hierarchical.

ICANN has pretty broad authority to impose requirements upon the gTLDs...

What if the various user agents' root programs all lobbied ICANN to impose a 
new technical requirement upon TLD REGISTRY operators?

Specifically, that all registry operators would:

1.  Run one or more root CAs (presumably a Root CA per TLD under their 
management), to be included in the user agents' root program trust stores, such 
that each of these certificates is technically constrained to the included 
gTLD(s) contractually managed by that registry operator.

- and further -

2.  That all such registries be required to make available to the registrars an 
automated interface for request of certificates signed by said CA (or a 
registry held and controlled issuing CA descendant of said root) over domain 
labels within that TLD on behalf of the customer holding a domain.  (For 
example, Google Domains has an interface by which it can request a certificate 
to be created and signed over a customer provided public key for requested 
labels within that registrar's customer's account.)

- and further -

3.  The registrars be required to provide appropriate interfaces to their 
customers (just as they do for DNSSEC DS records today) to have the registry 
issue certificates over those domains they hold.
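The "technically constrained" requirement in item 1 above boils down to an RFC 5280 dNSName name-constraints check, which browsers already implement. A simplified sketch (ignoring IP-address and email constraints) of the subtree test such a per-TLD root would rely on:

```python
def dns_name_within(name: str, constraint: str) -> bool:
    # RFC 5280 section 4.2.1.10: a dNSName satisfies a constraint if it
    # equals the constraint or is a subdomain of it, matching on label
    # boundaries (so "exampleorg" is NOT within "org").
    name = name.lower().rstrip(".")
    constraint = constraint.lower().strip(".")
    return name == constraint or name.endswith("." + constraint)

# A root technically constrained to one gTLD only validates names under it:
assert dns_name_within("www.example.org", "org")
assert not dns_name_within("www.example.com", "org")
```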

If you wanted to spice it up, you could even require that the domain holder be 
able to request a signature over a technically constrained SubCA.  Then the 
domain holders can do whatever they like with their domains' certificates.

Build validation of technical requirements (like no 5-year EE certs) into the 
product and enforce them at the product level.

If the WebPKI is truly to be reduced to identifying specific domain labels in 
certificates issued only for those demonstrating control over those labels, why 
do we really need a marketplace where multiple entities can provide those 
certificates?

The combination of registrar and registry already have complete trust in these 
matters because those actors can hijack control of their domains in an instant 
and properly ask any CA to issue.  That can happen today.

What this would improve, however, is that there's one and only one place to get 
a certificate for your example.com.  From the registrar for that domain, with 
the signing request authenticated by the registrar as being for the customer of 
the registrar who holds that domain and then further delegated for signature by 
the registry itself.

Such a mechanism could even be incrementally rolled out in parallel to the 
current scheme.  Over time, modern TLS endpoints meant to be accessed to 
browsers would migrate to certificates issued descending from these registry 
held and managed roots.

From a practicality perspective, I don't see why this couldn't happen, should 
enough lobbying of ICANN be provided.  Today, ICANN already imposes certain 
technical requirements upon both Registries and Registrars as well as 
constraints upon their interactions with each other.  As a not entirely 
unrelated example -- this one involving cryptography and key management -- 
today registries of generic TLDs are required to implement DNSSEC.

I recognize it's a radical departure from what is.  I'm interested in 
understanding if anything proposed here is impossible.  If what's proposed here 
CAN happen, AND IF we are confident that valid certificates for a domain label 
should unambiguously align to domain control, isn't this the ultimate solution?

Thanks,

Matt Hardeman



Issuance with improper domain validation

2018-08-16 Thread Jeremy Rowley via dev-security-policy
I posted this to Bugzilla last night. Basically, we had an issue with
validation that resulted in some certs issuing without proper (post-Aug 1)
domain verification. Still working out how many. The major reason was lack
of training by the validation staff combined with a lack of strict document
controls in the early part of the Symantec-DigiCert transition. 

 

1. How your CA first became aware of the problem (e.g. via a problem report
submitted to your Problem Reporting Mechanism, a discussion in
mozilla.dev.security.policy, a Bugzilla bug, or internal self-audit), and
the time and date.

 

On 2018/08/07 at 17:00 UTC, a customer submitted a request for information
about our validation process for the verification of four of their domains.
Upon investigation, we found that the four domains were not properly
validated using a post-Aug 1 domain validation method.  When attempting to
revalidate the domains prior to August 1, the random value was sent to an
address other than the WHOIS contact. This launched a broader investigation
into our overall revalidation efforts. This investigation is ongoing.  

 

2. A timeline of the actions your CA took in response. A timeline is a
date-and-time-stamped sequence of all relevant events. This may include
events before the incident was reported, such as when a particular
requirement became applicable, or a document changed, or a bug was
introduced, or an audit was done.

From approximately February through April 2018, DigiCert permitted some
legacy Symantec customers to use Method 1 to validate their domains. Use of
the method was subject to manager approval and reserved only for those
companies that had urgent replacement deadlines that could not be met with
an alternative validation method. Under this process, prior to approval, the
validation staff was required to match the WHOIS company information and
obtain approval using the WHOIS email address. 

 

Around April, this process was modified to include a BR-compliant Random
Value that the validation staff sent using the WHOIS contact information.
Use of the random value indicated acceptance.  Adding the random value
effectively transformed the validation from Method 1 to Method 2. The email
could include multiple domains with the understanding that the WHOIS contact
information had to match each domain listed. 

 

We believe that in some cases either the validation staff failed to match
the WHOIS contact information for each domain listed, approving the
certificate solely based on the existing verified registrant info, or the
system did not check whether the WHOIS contact information matched the email
address used in the original confirmation. 
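The per-domain check described above (and apparently skipped) can be sketched as follows; `whois_contact` is a hypothetical lookup, shown only to illustrate the rule that a Random Value confirms control of exactly those domains whose WHOIS contact received it:

```python
def domains_confirmed(domains, confirmed_email, whois_contact):
    # Only domains whose WHOIS contact matches the address that received
    # (and used) the Random Value are actually validated; approving the
    # rest on the strength of previously verified registrant info was
    # the failure described in this report.
    return [d for d in domains if whois_contact(d) == confirmed_email]
```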

 

On Aug 1, 2018, Ballot 218 took effect, deprecating Method 1.

 

On, August 7, 2018, a customer requested the audit trail of a certificate
issued using our new process. Upon review, validation management discovered
the validation was improper because the previously verified email contact
information did not match the WHOIS contact information.  This discovery
created an escalation up to management.

 

On August 13, 2018, we stopped all issuance based on the process that
converted Method 1 validations to Method 2 validations. 

 

We're currently investigating and will post an update when we know the
number of certificates and more about what went wrong. For now, we know the
number of impacted certificates is just under 2,500. We should have a
clearer picture shortly, after we have conducted a manual review of all
2,500 certificates.

 

3. Whether your CA has stopped, or has not yet stopped, issuing certificates
with the problem. A statement that you have will be considered a pledge to
the community; a statement that you have not requires an explanation.

 

On August 13, although most of these validations were likely properly
completed, we stopped issuance using information converted from Method 1
until completing a more thorough investigation.

 

4. A summary of the problematic certificates. For each problem: number of
certs, and the date the first and last certs with that problem were issued.

 

Approximately 2,500 certificates are under review for validation issues. We
wanted to get the incident report out quickly as an FYI while our
investigation continues. We'll update this section in a final report.

 

5. The complete certificate data for the problematic certificates. The
recommended way to provide this is to ensure each certificate is logged to
CT and then list the fingerprints or crt.sh IDs, either in the report or as
an attached spreadsheet, with one list per distinct problem.

 

Still under review. We will upload them to the bug once we have a complete
list. Because the error was human, we are reviewing each validation to
determine whether Method 2 was correctly used.  Once we complete our review,
we'll post a Bugzilla attachment with the links for revoked certificates.

 

6. Explanation about how and why the mistakes were made or bugs 

RE: Misissuance and BR Audit Statements

2018-08-16 Thread Ben Wilson via dev-security-policy
What about all of the other audit firms?  

 

From: Wayne Thayer  
Sent: Wednesday, August 15, 2018 1:09 PM
To: Ben Wilson 
Cc: Ryan Sleevi ; mozilla-dev-security-policy 

Subject: Re: Misissuance and BR Audit Statements

 

I went ahead and noted these DigiCert audits as a concern on the CCADB record 
for Scott S. Perry CPA, PLLC.

 

I do think it's important for CAs to disclose these issues to their auditors, 
but I also expect auditors to discover them.

 

- Wayne

 

On Wed, Aug 15, 2018 at 8:21 AM Ben Wilson <ben.wil...@digicert.com> wrote:

Re-sending

-Original Message-
From: Ben Wilson 
Sent: Wednesday, August 15, 2018 8:34 AM
To: 'r...@sleevi.com' <r...@sleevi.com>; Wayne Thayer <wtha...@mozilla.com>
Cc: mozilla-dev-security-policy <mozilla-dev-security-pol...@lists.mozilla.org>
Subject: RE: Misissuance and BR Audit Statements

Thanks, Ryan and Wayne,

Going forward we'll work to improve our management letter disclosures to 
include reported mis-issuances during the audit period.

Sincerely yours,

Ben 

-Original Message-
From: dev-security-policy <dev-security-policy-boun...@lists.mozilla.org> On Behalf Of Ryan 
Sleevi via dev-security-policy
Sent: Monday, August 13, 2018 3:57 PM
To: Wayne Thayer <wtha...@mozilla.com>
Cc: mozilla-dev-security-policy <mozilla-dev-security-pol...@lists.mozilla.org>
Subject: Re: Misissuance and BR Audit Statements

Wayne,

Thanks for raising this. I definitely find it surprising to see nothing noted 
on Comodo's report, as you call out.

As another datapoint, consider this recent audit that is reported to be from 
DigiCert, by way of Amazon Trust Services' providing the audits for their 
externally operated sub-CAs in [A]. The scope of the WebTrust BR audit report 
in [B] contains in its scope "DigiCert ECC Extended Validation Server CA" of 
hash FDC8986CFAC4F35F1ACD517E0F61B879882AE076E2BA80B77BD3F0FE5CEF8862,
which [C]. During that time, this CA issued a cert [D] as part of their 
improperly configured Onion issuance in [E], which was remediated in early 
March, within the audit period for [B]. I couldn't find it listed in the report.

Looking over that period, there were two other (resolved) DigiCert issues, [F] 
and [G], which affect the CAs listed in scope of [B].

I was a bit surprised by this, as like you, I would have expected these to be 
called out by both Management's Assertion and the auditor.
http://www.webtrust.org/practitioner-qualifications/docs/item85808.pdf
provides some of the illustrative reports, but it appears to only provide 
templates for management on the result of obtaining a qualified report.

[A] https://bugzilla.mozilla.org/show_bug.cgi?id=1482930
[B] https://bug1482930.bmoattachments.org/attachment.cgi?id=8999669
[C] https://crt.sh/?id=23432431
[D] https://crt.sh/?id=351449246
[E] https://bugzilla.mozilla.org/show_bug.cgi?id=1447192
[F] https://bugzilla.mozilla.org/show_bug.cgi?id=1465600
[G] https://bugzilla.mozilla.org/show_bug.cgi?id=1398269#c29

On Tue, Aug 7, 2018 at 1:32 PM, Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Given the number of incidents documented over the past year [1][2] for 
> misissuance and other nonconformities, I would expect many of the 2018 
> period-of-time WebTrust audit statements being submitted by CAs to 
> include qualifications describing these matters. In some cases, that 
> is exactly what we’re seeing. One of many positive examples is 
> Deloitte’s report on Entrust [3] that includes 2 of the 3 issues documented 
> in Bugzilla.
>
> Unfortunately, we are also beginning to see some reports that don’t 
> meet my expectations. I was surprised by GlobalSign’s clean reports 
> [4] from Ernst & Young, but after examining their incident bugs, it 
> appears that the only documented misissuance that occurred during 
> their audit period was placing metadata in Subject fields. I can 
> understand how this could be regarded as a minor nonconformity rather 
> than a qualification, but I would have liked to at least see the issue noted 
> in the reports.
>
> Ernst & Young’s clean reports on Comodo CA [5] is the example that 
> prompted this message. We have documented the following issues that 
> occurred during Comodo’s last audit period:
> * Misissuance using "CNAME CSR Hash 2" method of domain control 
> validation (bug 1461391)
> * Assorted misissuances and failure to respond to an incident report 
> within
> 24 hours (bug 1390981)
> * CAA misissuance (bugs 1398545, 1410834, 1420858, and 1423624)
>
> I would like to know if Comodo reported these issues to EY. I asked 
> Comodo this question four weeks ago [6] but have not received a response.
>
> I will acknowledge that ETSI audits are an even bigger problem 
> (Actalis and SwissSign are recent examples [7][8][9]). Due to the 
> structure of those audits, there is no 

Re: Misissuance and BR Audit Statements

2018-08-16 Thread clemens.wanko--- via dev-security-policy
Dear all,
this is a joint response from ETSI ESI and ACABc:

ETSI have published a supplement to its audit requirements specifically to 
address specific requirements of Mozilla, and other CA/Browser Forum members, 
for auditing Trust Service Providers that issue Publicly-Trusted Certificates 
TS 119 403-2.  This is available for download at:
https://www.etsi.org/standards-search#search=TS119403-2 
With regard to the treatment of non-conformities it says in PTA-4.3-08: The 
Audit Attestation shall be issued only if no critical non-conformities are 
identified.

ETSI audits do cover the CA's incident management. That includes the whole 
process, from the timely treatment of incidents to guaranteeing proper and 
comprehensive responses to them. In ETSI EN 319 401 the corresponding 
requirements are provided not only directly by section 7.9, Incident 
Management, but also through the requirement for an ISMS as stated in 
section 5. Assessing that, the ETSI auditor will look at the CA's incident 
treatment during the Stage 1 as well as the Stage 2 onsite part of the 
audit. This includes the treatment of Bugzilla- and crt.sh-listed issues. 
Incidents closed by the CA may have resulted in a change in the CA's 
operations. In such cases the auditor checks that the changes are functioning 
correctly as defined by the CA. In that way the auditor assesses the incident 
management as such, including any measures taken to avoid such incidents in 
the future, as well as the implemented measures themselves.
 
Another matter is the question of how to handle security-related incidents, 
and the countermeasures taken by a CA, in audit reports. In order to keep the 
security issue confidential, as well as the details of the measures taken by 
the CA, the accredited CABs (ETSI auditors) decided to document such findings 
in their detailed audit reports. These detailed reports list all relevant 
non-conformities and the countermeasures taken by the CA, and are handed over 
to the CA in addition to the audit attestation. Based upon that detailed 
report, the ETSI auditor compiles the Audit Attestation that the browsers 
receive. The contents of the Audit Attestation as a summary document were 
agreed upon between ACAB’c and the CA/Browser Forum. If you regard it as 
helpful to add information about the audit results in the area of CA 
incident treatment, we can certainly discuss that. We should then reach a 
common agreement on what exactly to add. 
 
We certainly believe, however, that it is not advisable to publish detected 
weak points. For example, there might be findings showing that the CA has NOT 
correctly treated an incident. In that case the ETSI auditor will document 
those findings and will issue neither a positive report nor an Audit 
Attestation. The CA is then obliged to immediately install appropriate 
countermeasures, which will again be judged by the auditor. Only if the 
auditor rates the countermeasures as sufficient in coverage and suitability 
will he issue a positive report and an Audit Attestation.

Regards
Clemens
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DEFCON Talk - Lost and Found Certificates

2018-08-16 Thread Wayne Thayer via dev-security-policy
On Thu, Aug 16, 2018 at 7:25 AM Eric Mill  wrote:

>
> I think this paper provides a good impetus to look at further shortening
> certificate lifetimes down to 13 months. That would better match the annual
> cadence of domain registration so that there's a smaller window of time
> beyond domain expiration for which a certificate would be valid, and would
> continue the momentum Mozilla and the CA/B Forum have been building around
> reducing certificate lifetimes and encouraging automation.
>
> The presentation suggests having certificates only be valid through the
> expiration date of the relevant registered domain, but I think that's
> unrealistic. Most of the time, domains are set to autorenew so that people
> never have to think about them, and their renewal cadence is totally
> disconnected from certificate renewal cadence. If a domain is 6 days from
> autorenew, a CA offering a 6-day-long cert and forcing someone to come back
> a week later for another one would be very unreasonable.
>
> I don't think the presentation points to building in stronger support for
> revocation. If anything, it points to revocation being a threat vector for
> DoS-ing sites that have nothing to do with the problem at hand, due to the
> long-standing (and reasonable) practice of multi-SAN certs that combine
> clumps of customers into individual certificates. Ryan points out that SNI
> is becoming something that can be relied on more universally, which would
> reduce the need for multi-SAN certificates, but multi-SAN certificates also
> provide useful operational benefits to organizations who are using CAs with
> rate limits, or simply for whom the ability to use 100x fewer certificates
> relieves an operational scaling burden.
>
> It may still be useful to deprecate multi-SAN certificates over time, but
> I think the single biggest thing to take away from the presentation is that
> long-lived certs create invisible risks during domain transfers, and that
> the risk is more than just theoretical when looking at the whole of the
> web. It's been a year and a half now since the last discussion and vote
> that went from a 39-month max to a 27-month max, so I think it's a great
> time to start talking about a 13-month maximum.

I have to agree that the most practical improvement here is the reduction
of max validity to 13 months. As pointed out by Ryan, a step in that
direction would be to reduce the max data reuse period to 13 months or less.

I've also proposed a CAB Forum ballot [1] that should make it a bit easier
for domain owners to get residual certificates revoked. It includes a more
specific revocation requirement covering this scenario and clearer
disclosure of the CA's problem reporting mechanism.

- Wayne

[1] https://cabforum.org/pipermail/servercert-wg/2018-August/93.html


Re: DEFCON Talk - Lost and Found Certificates

2018-08-16 Thread ianfoster--- via dev-security-policy
Hey Everyone,

Author here, happy to answer any questions. Wayne did a good job summarizing 
the two problems, MitM and DoS. Basically, there should be extra caution 
whenever a certificate is shared between different users/organizations. We'd 
like to suggest that CAs not issue certificates that live beyond their 
domain's current registration lifetime.


I'm not sure what happened with the DEFCON slides, but I've uploaded a newer 
version here that seems to work better (at least in Chrome) here: 
https://insecure.design/BygoneSSL_DEFCON.pdf
The recording of the talk should be up in a few weeks.
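On the discovery side, Certificate Transparency makes residual certificates findable: anyone who acquires a domain can enumerate still-valid certificates covering it. Here is a minimal sketch against crt.sh's JSON interface — the filtering logic runs on a hypothetical sample response rather than a live query, and the cutoff date is illustrative only:

```python
import json
from urllib.parse import urlencode

CRT_SH = "https://crt.sh/"

def crtsh_query_url(domain: str) -> str:
    # crt.sh supports identity searches with JSON output; the "%." prefix
    # matches the domain itself plus its subdomains.
    return CRT_SH + "?" + urlencode({"q": "%." + domain, "output": "json"})

def still_valid_after(entries, cutoff_iso: str):
    # crt.sh JSON entries carry ISO-8601 "not_after" timestamps, so plain
    # string comparison works for a fixed format: these are the residual
    # certificates a new domain owner would want reviewed or revoked.
    return [e for e in entries if e["not_after"] > cutoff_iso]

# Hypothetical excerpt shaped like crt.sh's JSON output.
sample = json.loads(
    '[{"id": 1, "not_after": "2020-01-01T00:00:00"},'
    ' {"id": 2, "not_after": "2018-01-01T00:00:00"}]'
)
print(crtsh_query_url("insecure.design"))
print([e["id"] for e in still_valid_after(sample, "2018-08-16T00:00:00")])  # [1]
```

In practice one would fetch the URL, parse the real response, and hand the surviving entries to the CA's problem-reporting address.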

On Wednesday, August 15, 2018 at 3:36:14 AM UTC-7, Wayne Thayer wrote:
> I'd like to call this presentation to everyone's attention:
> 
> Title: Lost and Found Certificates: dealing with residual certificates for
> pre-owned domains
> 
> Slide deck:
> https://media.defcon.org/DEF%20CON%2026/DEF%20CON%2026%20presentations/DEFCON-26-Foster-and-Ayrey-Lost-and-Found-Certs-residual-certs-for-pre-owned-domains.pdf
> 
> (NOTE: this PDF loads in Firefox, but not in Safari and not, I'm told, in
> Chrome's native PDF viewer).
> 
> Demo website: https://insecure.design/
> 
> The basic idea here is that domain names regularly change owners, creating
> "residual certificates" controlled by the previous owner that can be used
> for MITM. When a bunch of unrelated websites are thrown into the same
> certificate by a service provider (e.g. CDN), then this also creates the
> opportunity to DoS the sites by asking the CA to revoke the certificate.
> 
> The deck includes some recommendations for CAs.
> 
> What, if anything, should we do about this issue?
> 
> - Wayne



Re: DEFCON Talk - Lost and Found Certificates

2018-08-16 Thread Ryan Sleevi via dev-security-policy
On Wed, Aug 15, 2018 at 11:41 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On 14/08/2018 02:10, Wayne Thayer wrote:
> > I'd like to call this presentation to everyone's attention:
> >
> > Title: Lost and Found Certificates: dealing with residual certificates
> for
> > pre-owned domains
> >
> > Slide deck:
> >
> https://media.defcon.org/DEF%20CON%2026/DEF%20CON%2026%20presentations/DEFCON-26-Foster-and-Ayrey-Lost-and-Found-Certs-residual-certs-for-pre-owned-domains.pdf
> >
> > (NOTE: this PDF loads in Firefox, but not in Safari and not, I'm told, in
> > Chrome's native PDF viewer).
> >
> > Demo website: https://insecure.design/
> >
> > The basic idea here is that domain names regularly change owners,
> creating
> > "residual certificates" controlled by the previous owner that can be used
> > for MITM. When a bunch of unrelated websites are thrown into the same
> > certificate by a service provider (e.g. CDN), then this also creates the
> > opportunity to DoS the sites by asking the CA to revoke the certificate.
> >
> > The deck includes some recommendations for CAs.
> >
> > What, if anything, should we do about this issue?
> >
> > - Wayne
> >
>
> Suggested corrective processes that may be added to BRs, Mozilla
> policies or similar, and which the relevant parties (CAs and browsers)
> can begin to implement before they are standardized, as none contradict
> current policies, and several require coding and testing.  Backend
> suppliers (such as EJBCA and NSS) will probably be doing most of the work
> for the smaller players.
>
> 1. Browser members of CAB/F MUST do revocation checking, revocation
>being semi- or completely disabled in browsers is a glaring security
>hole that also affects these scenarios.  Browsers MUST support OCSP,
>CRL and other (future?) revocation protocols, in order to work
>securely with a heterogeneous mix of public CAs (that currently must
>run OCSP) and non-public offline organizational CAs.  Certificate
>client libraries made for/by major players should do the same, so they
>can be used in minor clients such as server side https clients and
>SMTP sending implementations.


The profound harm this would cause to the ecosystem is not worth
considering further. I am not sure why there is such a strong and
supportive view of CA-led censorship, but given these issues have been
repeatedly pointed out - that such a system gives a few dozen entities
complete and total censorship control of the Internet - it is likely not
viable, and does not bear further discussion, without substantially larger
improvements.

> 2. When updating a CDN-style multi-SAN certificate and the replacement
>omits at least one of the previous SANs, CAs must revoke the old cert
>versions after a short administrative delay that allows the CDN to
>deploy the replacement cert.  Because this is not hard evidence of
>certificate invalid/misissued (this is a voluntary retraction due to
>non-compromise), something like 72 hours would be fine unless a
>faster revocation is explicitly requested by the previous cert holder
>(the CDN), the domain owner or any other relevant entity.


This is already required by the BRs, at 24 hours, and should remain so.

> 3. When updating a normal multi-SAN certificate (at most 3 different
>directly-below public-suffix DNS labels) always ask the certificate
>holder if and how quickly they want the old certificate voluntarily
>revoked (again no presumption of misissuance or compromise, domain
>owner may simply be regrouping his servers, rotating SANs between
>certificates from multiple CAs).  Also, with some CAs, the updating
>process is identical to the process for getting duplicate certs
>corresponding to different server end HSMs/TLS accelerators with an
>explicit intent to keep both certs valid for years.
> Unless of course a faster revocation is explicitly requested by the
>previous cert holder, the domain owner or any other relevant entity.
>
> For example a certificate with the following SANs would fall under
>this more permissive rule:
>   example.com
>   www.example.com
>   static.example.com
>   mail.example.com
>   example.org
>   www.example.org
>   example.net
>   www.example.net
>   example.co.uk
>   web.example.co.uk
>   example.blogblog.com
>   beispiel.de
>   www.beispiel.de
>   eksempel.no
>   www.eksempel.no
> The labels directly below public suffix in this cert are "example",
> "beispiel" and "eksempel" totaling the maximum 3.  In a real case
> these would typically be names associated with a single real world
> entity that has registered its domains under a bunch of available
> suffixes, however the counting to 3 rule is easier to explain and
> enforce than subjective rules about companies and trademarks.  (Hint:
> In this example, the 3 
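The "count to 3" rule quoted above is mechanical enough to sketch. This is a minimal illustration only: it uses a hypothetical hard-coded suffix set in place of the real Public Suffix List, and omits the example.blogblog.com SAN, whose suffix status is exactly the subtle case a real list resolves:

```python
# Count the distinct DNS labels directly below the public suffix across a
# certificate's SANs. SUFFIXES is a hypothetical stand-in covering only
# this example; a real implementation would load the full Public Suffix
# List (e.g. via the publicsuffix2 package).
SUFFIXES = {"com", "org", "net", "de", "no", "co.uk"}

def registrable_label(san: str) -> str:
    """Return the DNS label directly below the public suffix."""
    labels = san.lower().split(".")
    # Try the longest matching suffix first ("co.uk" before "uk").
    for n in (2, 1):
        if len(labels) > n and ".".join(labels[-n:]) in SUFFIXES:
            return labels[-(n + 1)]
    raise ValueError("no known public suffix in " + san)

def count_registrable_names(sans) -> int:
    return len({registrable_label(s) for s in sans})

sans = [
    "example.com", "www.example.com", "static.example.com",
    "mail.example.com", "example.org", "www.example.org",
    "example.net", "www.example.net", "example.co.uk",
    "web.example.co.uk", "beispiel.de", "www.beispiel.de",
    "eksempel.no", "www.eksempel.no",
]
print(count_registrable_names(sans))  # 3: example, beispiel, eksempel
```

A CA applying the rule would issue under the permissive path only when this count is 3 or less.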

Re: DEFCON Talk - Lost and Found Certificates

2018-08-16 Thread Eric Mill via dev-security-policy
On Wed, Aug 15, 2018 at 6:36 AM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I'd like to call this presentation to everyone's attention:
>
> Title: Lost and Found Certificates: dealing with residual certificates for
> pre-owned domains
>
> Slide deck:
>
> https://media.defcon.org/DEF%20CON%2026/DEF%20CON%2026%20presentations/DEFCON-26-Foster-and-Ayrey-Lost-and-Found-Certs-residual-certs-for-pre-owned-domains.pdf
>
> (NOTE: this PDF loads in Firefox, but not in Safari and not, I'm told, in
> Chrome's native PDF viewer).
>
> Demo website: https://insecure.design/
>
> The basic idea here is that domain names regularly change owners, creating
> "residual certificates" controlled by the previous owner that can be used
> for MITM. When a bunch of unrelated websites are thrown into the same
> certificate by a service provider (e.g. CDN), then this also creates the
> opportunity to DoS the sites by asking the CA to revoke the certificate.
>
> The deck includes some recommendations for CAs.
>
> What, if anything, should we do about this issue?
>

I think this paper provides a good impetus to look at further shortening
certificate lifetimes down to 13 months. That would better match the annual
cadence of domain registration so that there's a smaller window of time
beyond domain expiration for which a certificate would be valid, and would
continue the momentum Mozilla and the CA/B Forum have been building around
reducing certificate lifetimes and encouraging automation.

The presentation suggests having certificates only be valid through the
expiration date of the relevant registered domain, but I think that's
unrealistic. Most of the time, domains are set to autorenew so that people
never have to think about them, and their renewal cadence is totally
disconnected from certificate renewal cadence. If a domain is 6 days from
autorenew, a CA offering a 6-day-long cert and forcing someone to come back
a week later for another one would be very unreasonable.
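The size of that exposure window is simple arithmetic. A minimal sketch, with hypothetical dates — 825 days is the 27-month BR maximum then in force, 397 days an assumed value for a ~13-month maximum:

```python
from datetime import date, timedelta

# Hypothetical worst case: the previous owner obtains a certificate the
# day before the domain's registration lapses unrenewed.
domain_expiry = date(2018, 9, 1)
issued = domain_expiry - timedelta(days=1)

for max_days in (825, 397):
    valid_until = issued + timedelta(days=max_days)
    exposure = (valid_until - domain_expiry).days
    # Days the residual certificate outlives the domain registration.
    print(max_days, exposure)  # prints "825 824" then "397 396"
```

Shortening the maximum lifetime cuts the worst-case residual validity roughly in half again, without requiring CAs to track registration expiry at all.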

I don't think the presentation points to building in stronger support for
revocation. If anything, it points to revocation being a threat vector for
DoS-ing sites that have nothing to do with the problem at hand, due to the
long-standing (and reasonable) practice of multi-SAN certs that combine
clumps of customers into individual certificates. Ryan points out that SNI
is becoming something that can be relied on more universally, which would
reduce the need for multi-SAN certificates, but multi-SAN certificates also
provide useful operational benefits to organizations who are using CAs with
rate limits, or simply for whom the ability to use 100x fewer certificates
relieves an operational scaling burden.

It may still be useful to deprecate multi-SAN certificates over time, but I
think the single biggest thing to take away from the presentation is that
long-lived certs create invisible risks during domain transfers, and that
the risk is more than just theoretical when looking at the whole of the
web. It's been a year and a half now since the last discussion and vote
that went from a 39-month max to a 27-month max, so I think it's a great
time to start talking about a 13-month maximum.

-- Eric



> - Wayne


-- 
konklone.com | @konklone 