Re: DigiCert-Symantec Announcement

2017-08-03 Thread Jakob Bohm via dev-security-policy

On 02/08/2017 23:12, Jeremy Rowley wrote:

Hi everyone,

Today, DigiCert and Symantec announced that DigiCert is acquiring the
Symantec CA assets, including the infrastructure, personnel, roots, and
platforms.  At the same time, DigiCert signed a Sub CA agreement wherein we
will validate and issue all Symantec certs as of Dec 1, 2017.  We are
committed to meeting the Mozilla and Google plans in transitioning away from
the Symantec infrastructure. The deal is expected to close near the end of
the year, after which we will be solely responsible for operation of the CA.

From there, we will migrate customers and systems as necessary to
consolidate platforms and operations while continuing to run all
issuance and validation through DigiCert.  We will post updates and
plans to the community as things change and progress.

Wasn't the whole outsourced SubCA plan predicated on the outsourcing
being done to someone else?  With this deal, DigiCert (as successor to
Symantec) would effectively be outsourcing to itself.



I wanted to post to the Mozilla dev list to:

1.  Inform the public,
2.  Get community feedback about the transition and concerns, and
3.  Get an update from the browsers on what this means for the plan,
noting that we fully commit to the stated deadlines. We're hoping that any
changes

Two things I can say we plan on doing (following closing) to address
concerns are:

a.  We plan to segregate certs by type on each root. Going forward, we
will issue all SSL certs from one set of roots, while client and e-mail
certs come from different roots. We also plan on limiting the number of
organizations on each issuing CA.  We hope this will help address the
"too big to fail" issue seen with Symantec.  By segregating end entities
into separate roots and sub CAs, the browsers can add affected Sub CAs
to their revocation lists quickly and without impacting the entire
ecosystem.  This plan is very much in flux, and we'd love to hear
additional recommendations.
b.  Another thing we are doing is adding a validation OID to all of our
certificates that identifies which of the BR methods was used to issue
the cert. This way the entire community can readily identify which
method was used when issuing a cert and take action if a method is
deemed weak or insufficient.  We think this is a huge improvement over
the existing landscape, and I'm very excited to see that OID rolled out.
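The OID idea above can be sketched as a simple lookup. Note that the OID
values below are made-up placeholders under the ASN.1 example arc (the
post does not say which arc DigiCert will actually use), and the mapping
to BR 3.2.2.4 method names is illustrative only:

```python
# Hypothetical sketch only: placeholder OIDs, not DigiCert's real arc.
# Maps an OID found in a certificate's policy extension to the
# BR section 3.2.2.4 validation method it records.
BR_METHOD_OIDS = {
    "2.999.1.3.2.2.4.2": "3.2.2.4.2 Email to Domain Contact",
    "2.999.1.3.2.2.4.6": "3.2.2.4.6 Agreed-Upon Change to Website",
    "2.999.1.3.2.2.4.7": "3.2.2.4.7 DNS Change",
}

def validation_method(cert_policy_oids):
    """Return the BR validation method recorded in a cert's policy
    OIDs, or None if no known method OID is present."""
    for oid in cert_policy_oids:
        if oid in BR_METHOD_OIDS:
            return BR_METHOD_OIDS[oid]
    return None
```

A relying party scanning CT logs could then flag every certificate
issued under a method later deemed weak simply by matching one OID.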



If you take over the roots, why continue to keep separate roots for the
former Symantec brands (except as an "old hierarchy used mostly for
their own CRLs and historic relying parties such as certain Microsoft
products")?


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Found something I can't understand in these certificates.

2017-08-02 Thread Jakob Bohm via dev-security-policy

On 02/08/2017 04:28, Han Yuwei wrote:

On Tuesday, 1 August 2017 at 20:47:57 UTC+8, Nick Lamb wrote:

On Tuesday, 1 August 2017 08:39:28 UTC+1, Han Yuwei  wrote:

1. The CNs of the two certificates are the same, so it should not have been 
necessary to issue two certificates just 2 minutes apart.


I think the most likely explanation is the difference in signature algorithm, 
but it is also not uncommon for subscribers to have more than one certificate 
for the same name for operational reasons. This is not prohibited, although it 
can be useful to watch the rate at which this happens at an issuing system 
as a possible sign of trouble.


2. The second one used SHA-1, which was still consistent with the BRs, but 
the first one used SHA-256.


It is possible that a customer ordered a certificate and then, very quickly but 
alas after issuance, realised they had more specific needs: the SHA-256 
algorithm and the longer expiry date. Or maybe they simply asked for the 
longer expiry, and WoSign correctly pointed out that it would be silly to use 
SHA-1 with the longer expiry, as it was to be (and has been) distrusted by that date.


3. The first one has a 39-month validity period, which is very rare.


Although rare, this is permissible, and, if the subscriber had a previous 
certificate for roughly the same name, even a common business practice in 
order to secure customer loyalty.


4. Since they were issued so close together, they should have been logged to 
CT at the same time, but the second one was logged much later.


CT logging was not mandatory at the time, and WoSign subsequently volunteered 
to upload all the extant certificates in mid-2016 during Mozilla's 
investigation of other (serious) problems.

I think these certificates are, though perhaps not entirely regular, not a sign 
of any problem at WoSign.


Thanks for your explanation. So maybe some devices required a SHA-1 
certificate to operate normally?



This certificate was issued during the SHA-1 transition, and many
website operators probably wanted to have an SHA-1 certificate in case
some clients (web browsers etc.) were not yet ready.  This often
involved getting matching SHA-256 and SHA-1 certificates before the BR
deadline, then keeping the SHA-1 certificate "in stock" in case it was
needed during 2016 (when new SHA-1 web certs could no longer be issued,
but could still be valid if requested in 2015).

There is little or no way of knowing whether they ever deployed that
SHA-1 certificate, either generally (for all visitors) or behind some
special logic to use it only for known-broken client types.
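The "special logic" mentioned above could, in principle, inspect the
signature algorithms a client advertises in its ClientHello. A toy
sketch of such a selection rule (hypothetical function names, not any
real server's API):

```python
# Illustrative only: serve the SHA-256 cert to modern clients and the
# stockpiled SHA-1 fallback cert to legacy clients that cannot verify
# SHA-256 signatures.
def pick_certificate(client_sig_algs, sha256_cert, sha1_cert):
    """client_sig_algs: set of hash names the client supports."""
    if "sha256" in client_sig_algs:
        return sha256_cert
    # Known-broken legacy client: fall back to the SHA-1 certificate.
    return sha1_cert
```

In practice this would hook into the server's certificate-selection
callback during the TLS handshake.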


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Final Decision by Google on Symantec

2017-07-31 Thread Jakob Bohm via dev-security-policy

On 31/07/2017 16:06, Gervase Markham wrote:

On 31/07/17 15:00, Jakob Bohm wrote:

It was previously stated in this newsgroup that non-SSLServer trust
would not be terminated, at least for now.


It was? Reference, please?



That was my general impression; I don't have a good way to search the
list for this.  I was just trying to be helpful.


- Due to current Mozilla implementation bugs,


Reference, please?



I am referring to the fact that EV trust is currently assigned to roots,
not to SubCAs, at least as far as the visible root store descriptions go.

Since I know of no standard way for a SubCA certificate to state whether
it is intended for EV certs or not, this causes EV trust to percolate
into SubCAs that were never intended for this purpose by the root CA.
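A toy model of the problem described above (my own illustration of the
root-flag approach, not NSS's actual code): if EV treatment hangs off
the root alone, any SubCA under an EV-enabled root can mint leaf
certificates that receive the EV indicator, because the intermediate's
intended purpose is never consulted.

```python
# Assumed, simplified model of root-level EV flags.
EV_ENABLED_ROOTS = {"Example EV Root"}

def gets_ev_treatment(root_name, leaf_has_ev_policy_oid):
    """EV is granted whenever the chain terminates at an EV-enabled
    root and the leaf asserts an EV policy OID. Nothing here lets an
    intermediate opt out -- which is exactly the bug discussed."""
    return root_name in EV_ENABLED_ROOTS and leaf_has_ev_policy_oid
```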


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Final Decision by Google on Symantec

2017-07-31 Thread Jakob Bohm via dev-security-policy

On 30/07/2017 00:45, Peter Bowen wrote:

On Thu, Jul 27, 2017 at 11:14 PM, Gervase Markham via
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:

Google have made a final decision on the various dates they plan to
implement as part of the consensus plan in the Symantec matter. The
message from blink-dev is included below.


[...]


We now have two choices. We can accept the Google date for ourselves, or
we can decide to implement something earlier. Implementing something
earlier would involve us leading on compatibility risk, and so would
need to get wider sign-off from within Mozilla, but nevertheless I would
like to get the opinions of the m.d.s.p community.

I would like to make a decision on this matter on or before July 31st,
as Symantec have asked for dates to be nailed down by then in order for
them to be on track with their Managed CA implementation timetable. If
no alternative decision is taken and communicated here and to Symantec,
the default will be that we will accept Google's final proposal as a
consensus date.


Gerv,

I think there are three more things that Mozilla needs to decide.

First, when the server authentication trust bits will be removed from
the existing roots.  This is of notable importance for non-Firefox
users of NSS.  Based on the Chrome email, it looks like they will
remove trust bits in their git repo around August 23, 2018.  When will
NSS remove the trust bits?

Second, how the dates apply to email protection certificates, if at
all.  Chrome only deals with server authentication certificates, so
their decision does not cover other types of certificates.  Will the
email protection trust bits be turned off at some point?



It was previously stated in this newsgroup that non-SSLServer trust
would not be terminated, at least for now.


Third, what the requirements are for Symantec to submit new roots,
including any limit to how many may be submitted.
https://ccadb-public.secure.force.com/mozilla/IncludedCACertificateReport
shows that there are currently 20 Symantec roots included.  Would it
be reasonable for them to submit replacements on a 1:1 basis -- that
is 20 new roots?



Those 20 old roots were the result of decades of mergers and
acquisitions, causing lots of "duplicates".

That said, it is better for overall security to have meaningful splits
of root CAs for any new CAs.  Most notably:

- Issuing a new (cross-signed) root for each new signature algorithm.  A
 full service CA should currently have roots for RSA-SHA256, RSA-SHA384,
 RSA-SHA512, and the ECDSA curves above 256 bits, each paired with the
 matching SHA size.  Others would be added 6 to 24 months after becoming
 BR-permitted.

- Due to current Mozilla implementation bugs, having separate roots for
 EV certificates is currently the only way to only accept EV certs from
 EV SubCAs.  This is specific to Mozilla.

- If Symantec wants to again set up a trust tree dedicated to cross-
 signing lots of outside parties (similar to the old "GeoRoot" program),
 it would be best to put that under separate roots.
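As a purely illustrative sketch (root names, key types, and signature
algorithms here are my own examples, not Symantec's actual plan), the
per-algorithm, per-purpose split suggested above might be described as:

```python
# Hypothetical root layout illustrating the suggested split:
# one root per signature algorithm, plus dedicated EV and
# cross-signing roots.
NEW_ROOT_LAYOUT = [
    {"name": "Example Root R1", "key": "RSA-3072",   "sig": "RSA-SHA256",   "purpose": "TLS"},
    {"name": "Example Root R2", "key": "RSA-4096",   "sig": "RSA-SHA384",   "purpose": "TLS"},
    {"name": "Example Root E1", "key": "ECDSA-P384", "sig": "ECDSA-SHA384", "purpose": "TLS"},
    {"name": "Example Root V1", "key": "RSA-3072",   "sig": "RSA-SHA256",   "purpose": "TLS-EV"},
    {"name": "Example Root X1", "key": "RSA-3072",   "sig": "RSA-SHA256",   "purpose": "cross-signing"},
]

def roots_for(purpose):
    """List the root names serving a given purpose."""
    return [r["name"] for r in NEW_ROOT_LAYOUT if r["purpose"] == purpose]
```

The point of the split is containment: distrusting one algorithm or the
cross-signing program removes only the matching roots.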




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Final Decision by Google on Symantec

2017-07-31 Thread Jakob Bohm via dev-security-policy

On 28/07/2017 18:36, David E. Ross wrote:

On 7/28/2017 6:34 AM, Alex Gaynor wrote:

Frankly I was surprised to see Chromium reverse course on this -- they have
a history of aggressive leadership in their handling of CA failures, it's a
little disappointing to see them abandon that.

I'd strongly advocate for us pursuing an earlier date -- December 1st at
the latest. Reasons:

1) Chromium's stated reason for avoiding dates around the holidays makes no
sense -- organizations with change freezes they need to adhere to have a
simple solution: obtain and deploy a new certificate at an earlier date!
They have 4 months between now and December 1st, if you can't deploy a cert
in 4 months, I submit you have larger problems.

2) It is important that CAs not be rewarded for the length of time this
process takes. CAs should be encouraged and rewarded for active
participation and engagement in this list.

3) Mandatory CT (well, mandatory for trust in Chromium) is a significant
win for security and transparency. At the moment, even discussing the
parameters of the distrust is complicated by the fact that we have limited
visibility into the iceberg of their PKI before June 1st, 2016 (see the
other thread where I attempt to discuss the count they provide of
outstanding certs that would be impacted). Given the challenges we know
exist in their legacy PKI, I think it's fair to say that continuing to
trust these certs represents real risk for our users' security.

Alex


I strongly agree.  The focus must be on protecting end-users, not on
Symantec or on Symantec's customers.

Symantec must know who has subscriber certificates that chain to
Symantec's roots.  Those customers could all be notified very quickly
that their certificates are about to be distrusted.  Those customers
would then have ample time to obtain and install replacement subscriber
certificates chaining to alternative roots of other certification
authorities.

As for any disruption of secure transactions, consider the abrupt
termination of DigiNotar when that certification authority was found to
have serious lapses in its operations.  The world did not end.



Note that DigiNotar was a country-local CA, not a global CA.  The risk
profile (for distrust, not for mis-issuance) was much lower.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Final Decision by Google on Symantec

2017-07-28 Thread Jakob Bohm via dev-security-policy
le on October 23, 2018, which is approximately 5 months
after Chrome 66’s corresponding dates.


By these dates, affected site operators will need to have fully replaced
any TLS server certificates issued from Symantec’s old infrastructure,
using any trusted CA including the new Managed Partner Infrastructure.
Failure to migrate a site to one of these two options will result in
breakage when Chrome 70 is released.


Reference Timeline:

In order to distill Chrome’s final plan into an actionable set of
information for site operators, we’ve drawn up a timeline of relevant
dates associated with this plan. As always, Chrome release dates can
vary by a number of days, but upcoming release dates can be tracked here
<https://www.chromium.org/developers/calendar>.


Date: July 27, 2017 through ~March 15, 2018
Event: Site Operators using Symantec-issued TLS server certificates
issued before June 1, 2016 should replace these certificates. These
certificates can be replaced by any currently trusted CA, including
Symantec.

Date: ~October 24, 2017
Event: Chrome 62 released to Stable, which will add alerting in
DevTools when evaluating certificates that will be affected by the
Chrome 66 distrust.

Date: December 1, 2017
Event: According to Symantec, the new Managed Partner Infrastructure
will at this point be capable of full issuance. Any certificates issued
by Symantec’s old infrastructure after this point will cease working in
a future Chrome update.

From this date forward, Site Operators can obtain TLS server
certificates from the new Managed Partner Infrastructure that will
continue to be trusted after Chrome 70 (~October 23, 2018).

December 1, 2017 does not mandate any certificate changes, but
represents an opportunity for site operators to obtain TLS server
certificates that will not be affected by Chrome 70’s distrust of the
old infrastructure.

Date: ~March 15, 2018
Event: Chrome 66 released to Beta, which will remove trust in
Symantec-issued certificates with a not-before date before June 1,
2016. As of this date, in order to ensure continuity of operations,
Site Operators must be using either a Symantec-issued TLS server
certificate issued on or after June 1, 2016 or a currently valid
certificate issued from any other trusted CA as of Chrome 66.

Site Operators that obtained a certificate from Symantec’s old
infrastructure after June 1, 2016 are unaffected by Chrome 66 but will
need to obtain a new certificate by the Chrome 70 dates described
below.

Date: ~April 17, 2018
Event: Chrome 66 released to Stable.

Date: ~September 13, 2018
Event: Chrome 70 released to Beta, which will remove trust in the old
Symantec-rooted Infrastructure. This will not affect any certificate
chaining to the new Managed Partner Infrastructure, which Symantec has
said will be operational by December 1, 2017.

Only TLS server certificates issued by Symantec’s old infrastructure
will be affected by this distrust, regardless of issuance date.

Date: ~October 23, 2018
Event: Chrome 70 released to Stable.



A note on the Blink process and this Intent:

As mentioned at the start of this discussion, the Google Chrome team
<https://groups.google.com/a/chromium.org/d/msg/blink-dev/eUAKwjihhBs/rpxMXjZHCQAJ>
decided to use the Blink Process
<http://www.chromium.org/blink#new-features> in discussing this change,
as a way to gather feedback from site operators, the Chromium community,
other browsers, and the broader ecosystem about how to balance the
interoperability risk and compatibility risk. A goal of this process is
to balance risk by aligning on interoperable solutions, minimizing
ambiguity, and providing transparency into the decision making process.
This process was designed around balancing changes to the Web Platform
APIs, and we recognize there are further opportunities to improve this
for Certificate Authority decisions. As those improvements are not yet
in place, we will be forgoing the Blink API owner LGTM process for
approval, and treating this more as a product-level decision instead.


Thanks to everyone who put in so much time and energy to arrive at this
point.


(snip earlier e-mails)



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Symantec Update on SubCA Proposal

2017-07-26 Thread Jakob Bohm via dev-security-policy

On 25/07/2017 22:28, Rick Andrews wrote:

...

You are correct in that most customers are indeed not prepared to
deal with potential crises in the SSL system. We have all witnessed
this first hand with Heartbleed, the replacement of SHA-1
certificates, etc. A four-month replacement window for a forced
replacement of this magnitude is unprecedented, and we know that
things will break. In the recent CA survey, most major CAs reported
that replacing certificates annually is something that many
organizations are not prepared for – a conclusion that is reinforced
by the recent CA/Browser Forum vote rejecting ballot 185, which
proposed to limit the maximum validity of SSL/TLS certificates
issued by all CAs to 13 months. Do you have data leading you to
believe that this replacement can be executed with limited Internet
ecosystem disruption, particularly amongst the largest enterprises
globally whose certificates would be impacted? If so, we would welcome
seeing that data/rationale. The issues that we have all witnessed
with other forced replacement events on much longer timelines indicate
that the community is not yet at a place of automation to deal with
such a transition, especially in a short timeframe. In this case,
forcing a distrust date of December 1, 2017 (vs. our May 1, 2018
distrust date recommendation) for certificates issued prior to
June 1, 2016 increases the total number of premature replacement
certificates that would need to be issued by approximately 50%
and gives website operators substantially less time (4 months vs.
9 months) in which to plan and execute such a replacement. A
December 1, 2017 distrust date for certificates issued prior to
June 1, 2016 would introduce a known, actual, material risk to the
Internet ecosystem given the industry’s prior experience with forced
mass replacement episodes. We do not think the perceived benefit of
accelerating distrust for Symantec certificates issued before
June 1, 2016 from May 1, 2018 to December 1, 2017 (5 months of
validity) can possibly justify the significant ecosystem disruption
that is likely to result from not accepting our proposed May 1, 2018
distrust date for certificates issued before June 1, 2016. We agree
with your public comments on June 19, 2017 that it is not
constructive to get into a date-based "negotiation" over the SubCA
proposal. We have worked backwards from our best estimate for how
long it would take us and our Managed CA partner(s) to implement the
SubCA proposal in a manner that allows for an orderly transition of
Symantec’s existing PKI infrastructure for SSL/TLS certificates to
a Managed CA(s) while minimizing disruption to websites and web
end-users, and have proposed aggressive, yet achievable deadlines
accordingly. As such, while we are willing to go down the SubCA path
overall, we strongly believe that this must be done in a way that
aims to minimize website disruption.



Where exactly was it suggested to distrust certificates issued before
Jun 1, 2016 on December 1, 2017?

So far most of the discussion seems to have been about distrusting
Symantec certs issued after December 1, 2017, at least as I read it.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded





Re: Private key corresponding to public key in trusted Cisco certificate embedded in executable

2017-07-26 Thread Jakob Bohm via dev-security-policy

On 25/07/2017 14:58, simon.wat...@surevine.com wrote:

On Tuesday, 20 June 2017 10:43:37 UTC+1, Nick Lamb  wrote:

On Tuesday, 20 June 2017 05:50:06 UTC+1, Matthew Hardeman  wrote:

The right balance is probably revoking when misuse is shown.


Plus education. Robin has stated that there _are_ suitable CA products for this 
use case in existence today, but if I didn't know it stands to reason that at 
least some of the engineers at Cisco didn't know either.

Knowing what the Right Thing is makes it easier to push back when somebody 
proposes (as they clearly did here) the Wrong Thing. If, at the end of the day, 
Cisco management signs off on the additional risk from doing the Wrong Thing 
because it's cheaper, or faster, or whatever, that's on them. But if nobody in 
their engineering teams is even aware of the alternative it becomes a certainty.


I'm aware of another instance of this pattern in widely deployed software, 
discovered through security testing 3rd party applications (Thanks to Hanno 
discussing related issue on twitter).

The pattern "Using a certificate with DNS resolving to 127.0.0.1 to allow a browser 
page to communicate with a local web server".

Although it was done in an insecure fashion (not yet fully resolved, so 
disclosure would be unhelpful), I don't see a good alternative model for this 
vendor's use case.

I also looked when I first found the encrypted P12 file (and password hardcoded 
into the application), and found nothing useful online discussing this pattern, 
or alternatives.

I only found that you could no longer register certificates for 127.0.0.1 (the 
world only needs one such).

The mass issuance of certificates described above doesn't seem any better, 
since there still needs to be a trust relationship with an unprotected 
appliance or application. Or is there some aspect of the certificates' 
issuance that binds them to a particular implementation? (I thought all such 
bindings were basically broken in most TLS implementations.)

What is the recommended practice here?



In general: Do not conflate an outside address (such as the domain of
the vendors online service) with anything local (like 127.0.0.1 or the
actual IP of some local IoT device).  That is a gross violation of one
of the most fundamental security barriers (mine vs. yours).

The fundamental question is: What do they want to access locally, and
why should a remote web domain get any access to it?

From there, logical solutions can be built that don't compromise the
browser security model and don't involve sharing secret keys with
strangers.

One common solution discussed by others in this thread is to serve
everything within the trust barrier from a localhost http server,
relying on browsers trusting explicit localhost http connections more
than remote http connections.

Another similar solution is to use a self-signed, random,
per-installation localhost certificate generated on first run, then
serve everything from a local https server.

For some cases, even simpler solutions such as file:// URLs or file
upload form fields are appropriate.

Whatever is done, within the browser any data sharing between local and
remote needs to be treated as cross-site data sharing with all
associated security restrictions and concerns.
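The per-installation self-signed certificate option can be sketched with
the openssl CLI (requires OpenSSL 1.1.1 or newer for -addext; file names
are illustrative). Run once at first install, keeping the key local:

```shell
# Generate a fresh, per-installation self-signed cert for localhost.
# The private key never leaves this machine, unlike the shared-key
# pattern criticized above.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout localhost.key -out localhost.crt \
  -days 825 -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,IP:127.0.0.1"
```

The application would then trust exactly this one certificate (or add it
to the local trust store), so no two installations share a key.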


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-07-24 Thread Jakob Bohm via dev-security-policy

On 22/07/2017 02:38, birge...@princeton.edu wrote:

On Friday, July 21, 2017 at 5:06:42 PM UTC-5, Matthew Hardeman wrote:

It seems that a group of Princeton researchers just presented a live 
theoretical* misissuance by Let's Encrypt.

They did a sub-prefix hijack via a technique other than those I described here 
and achieved issuance while passing-through traffic for other destination 
within the IP space of the hijacked scope.

They've got a paper at: 
https://petsymposium.org/2017/papers/hotpets/bgp-bogus-tls.pdf

I say "theoretical" because they hijacked a /24 of their own /23 under a 
different ASN, but I am given to believe that the "adversarial" ASN is also 
under their control or that they had permission to use it. Insofar as this is 
the case, this technically isn't a misissuance, because hijacking one's own IP 
space is technically just a different routing configuration, diverting the 
traffic for a destination they properly control to another point of 
interconnection they properly controlled.


Hi,

I am Henry Birge-Lee, one of the researchers at Princeton leading that effort. 
I just performed that live demo a couple hours ago. You are correct about how 
we performed that attack. One minor detail is that we were forced to use the 
same ASN twice (for both the adversary and the victim). The adversary and the 
victim were two different routers peering with completely different ASes, but 
we were forced to reuse the AS because we were performing the announcements 
with the PEERING testbed (https://peering.usc.edu/) and are not allowed to 
announce from another ASN. Thus from a policy point of view this was not a 
misissuance and our BGP announcements would likely not have triggered an alarm 
from a BGP monitoring system. Even if we had the ability to hijack another ASes 
prefix and trigger such alarms we would be hesitant to because of ethical 
considerations. Our goal was to demonstrate the effectiveness and ease of 
interception of the technique we used, not freak out network operators because 
of potential hijacks.

I know some may argue that, had we triggered alarms from the major BGP 
monitoring frameworks, CAs might not have issued us the certificates they did. 
We find that this is unlikely because 1) the DV certificate signing process is 
automated, but the type of BGP announcements we made would likely require 
manual review before they could be definitively flagged as an attack, and 
2) there is no evidence CAs are doing this (we know Let's Encrypt does not use 
BGP data because of their transparency and conversations with their executive 
director Josh Aas as well as their engineering team).



Another testing option would have been to use another AS legitimately
operated by someone associated with your research team.  Unless
Princeton has historically obtained 2 AS numbers (which is not
uncommon), cooperating with a researcher at another research facility
could have provided the other AS number without any actual breach or
hijack.


As for further work, implementation of countermeasures in the CA/Browser 
Forum Baseline Requirements is our eventual goal, and we see engaging with 
this ongoing discussion as a step in the right direction.

Over the next couple days I will look over these conversations in more detail 
and look for ways that we can integrate these ideas into the research we are 
doing.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-07-20 Thread Jakob Bohm via dev-security-policy
nd not always using all of them can also help hide
the complete list of probe locations from attackers (who might otherwise
just log the accesses to one of their own servers during a legitimate 
request).


A public many-location relay network such as Tor could be used as a
supplemental check, but it cannot be audited to CA network security
standards and thus could only serve as an additional check.
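The multi-vantage-point idea above can be sketched as a quorum rule:
issuance proceeds only if enough independent probe locations observe the
same challenge response, and only a random subset of probes is used per
request so the full probe list stays hidden. (Illustrative only, not any
CA's actual implementation.)

```python
import random

def quorum_validate(probe_results, quorum=3):
    """probe_results: mapping of vantage point -> challenge token
    observed (or None on failure). Require at least `quorum`
    successful observations, all agreeing on the same token."""
    tokens = [t for t in probe_results.values() if t is not None]
    return len(tokens) >= quorum and len(set(tokens)) == 1

def choose_probes(all_probes, k, rng=random):
    """Pick a random subset of vantage points for this validation,
    so attackers cannot enumerate the probe list from server logs."""
    return rng.sample(sorted(all_probes), k)
```

A BGP sub-prefix hijack localized near one vantage point would then fail
the quorum, since the other probes would see a different (or no) token.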

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Validation of Domains for secure email certificates

2017-07-20 Thread Jakob Bohm via dev-security-policy
is paying for that e-mail account by an
  identity tied method (e.g. credit card) and has marked that account as
  being their own (i.e. not a gift/sponsorship payment, e.g. as a gift
  to a friend or as part of a payment/salary package).  This would of
  course be combined with strong identity verification, e.g. government
  photo ID.
   Example: If mail is hosted by a major telecoms operation such as BT
  in the UK, then BT may provide a BT associated CA with confirmation
  (not actual data sharing) that an e-mail account is associated with
  a telephone bill which is authenticated to a specific individual at a
  specific street address, allowing the CA to do final confirmation via
  a robocall/fax to that phone or by snail mail to that street address.

C. Evidence that this is not an e-mail hoster, combined with domain
  validation at the first level below public suffixes (e.g. domain
  validation for "example.com", not "smtp.example.com", which may be
  outsourced to a spam filtering service or similar).

Note, if a mail service provider wants certificates in the names of its
customers, it will have to ask them individually, and not as part of
"terms of service", "signup" etc.  And such certificates should be
required (by policy / CA subscriber contract) to include an "on behalf
of" text visible to any relying party in most existing S/MIME e-mail
clients.  For example "Microsoft on behalf of Exam Ple
<exam...@outlook.com>", not "Exam Ple <exam...@outlook.com>".


You may also want to look at the BRs for EV code signing certificates
(which may include e-mail addresses) for inspiration.


Enjoy

Jakob


Re: [EXT] Symantec Update on SubCA Proposal

2017-07-19 Thread Jakob Bohm via dev-security-policy

On 19/07/2017 17:31, Steve Medin wrote:

-Original Message-
From: dev-security-policy [mailto:dev-security-policy-
bounces+steve_medin=symantec@lists.mozilla.org] On Behalf Of
Jakob Bohm via dev-security-policy
Sent: Tuesday, July 18, 2017 4:39 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: [EXT] Symantec Update on SubCA Proposal


Just for clarity:

(Note: Using ISO date format instead of ambiguous local date format)

How many Symantec certs issued prior to 2015-06-01 expire after 2018-
06-01, and how does that mesh with the alternative date proposed
below:

On 18/07/2017 21:37, Steve Medin wrote:

Correction: Summary item #3 should read:

3. May 1, 2018
 a. Single date of distrust of certificates issued prior to 6/1/2016.

(changed from August 31, 2017 for certificates issued prior to 6/1/2015 and
from January 18, 2018 for certificates issued prior to 6/1/2016).




Over 34,000 certificates were issued prior to 2015-06-01 and expire after 
2018-06-01. This is in addition to almost 200,000 certificates that would also 
need to be replaced under the current SubCA proposal assuming a May 1, 2018 
distrust date. We believe that nine months (from August 1, 2017 to May 1, 2018) 
is aggressive but achievable for this transition — a period minimally necessary 
to allow for site operators to plan and execute an orderly transition and to 
reduce the potential risk of widespread ecosystem disruption. Nevertheless, we 
urge the community to consider moving the proposed May 1, 2018 distrust date 
out even further to February 1, 2019 in order to minimize the risk of end user 
disruption by ensuring that website operators have a reasonable timeframe to 
plan and deploy replacement certificates.



So when and why did Symantec issue 34,000 WebPKI certificates valid for
longer than 3 years, such that they would expire after 2018-06-01?

Are these certificates issued before 2015-04-01 with validity periods
longer than 39 months?

Are they certificates issued under "special circumstances"?

Are they certificates with validity periods between 36 and 39 months?




Enjoy

Jakob


Re: [EXT] Symantec Update on SubCA Proposal

2017-07-18 Thread Jakob Bohm via dev-security-policy
other technical implementations that may be incompatible
with, and may delay, individual organizations' transitions to the new SubCA
model. In addition, although we believe the Managed CA(s) and Symantec
could be ready in December 2017, this time frame falls squarely within
most organizations' blackout periods.  To accommodate the need for
customer transitions as well as any required reissuances of existing
certificates by the Managed CA, we request that any distrust dates be
delayed until May 1, 2018.  During that time, we will work with customers
and partners to update their certificates and will continue
   to improve and test the Managed CA implementation. Additionally, while
Symantec and its Managed CA partner(s) will be ready to issue properly
validated certificates in December, the actual deployment of these
certificates by customers who are replacing unexpired certificates that were
issued before June 1, 2016 will take time, as described earlier. We would
also like to develop a clear process to evaluate exception requests that
includes consultations with the browsers to handle corner cases of system
incompatibility for situations with complex interdependencies that pose
material ecosystem risks.



*Summary*



Implementing this proposal is a major effort for Symantec as well as the
prospective Managed CAs, but it is an effort that we believe will minimize
customer, browser user, and ecosystem disruption.



The implementation and distrust dates we have recommended here are
based on our understanding of the constraints of the consensus proposal
and of our potential Managed CA partners:



1. December 1, 2017

a. Initial implementation date (changed from August 8, 2017) of
operational Managed CA.

b. Domain validation for all new certificates is performed by Managed
CA(s) (changed from November 1, 2017).



2. February 1, 2018

a. Full validation for all certificates is performed by Managed CA(s). Prior
to this date, reuse of Symantec authenticated organization information
would be allowable for certificates of <13 months in validity.



3. May 1, 2018

a. Single date of distrust of certificates issued prior to 6/1/2016. 
(changed
from August 31, 2017 for certificates issued prior to 6/1/2015 and from
January 18, 2018 for certificates issued prior to 6/1/2018)



We expect to conclude our final selection of a Managed CA partner(s) within
the next 2 weeks.  During this time, we welcome any final clarifications from
Google, Mozilla and the rest of the community regarding the key activities
and other assumptions we have outlined in this post to inform the final
dates that we can incorporate into the contracting and implementation
phase of this Managed CA plan.




Enjoy

Jakob


Re: Certificates with invalid "double dot" dnsNames issued from Comodo intermediates

2017-07-18 Thread Jakob Bohm via dev-security-policy

On 18/07/2017 16:44, Rob Stradling wrote:

On 18/07/17 15:31, Jakob Bohm via dev-security-policy wrote:

On 18/07/2017 16:19, Rob Stradling wrote:

On 17/07/17 16:14, Jonathan Rudenberg via dev-security-policy wrote:
This certificate, issued by “Intesa Sanpaolo CA Servizi Esterni 
Enhanced” which chains up to a Baltimore CyberTrust root, contains 
an invalid dnsName of “www.intesasanpaolovita..biz” (note the two 
dots):


https://crt.sh/?q=2B95B474A2646CA28DC244F1AE829C850EA41CF64C75E11A94FE8D228735977B&opt=cablint,x509lint



This raises some questions about the technical controls in place for 
issuance from this CA.


Yesterday evening Jonathan privately made me aware of a leaf 
certificate (https://crt.sh/?id=73190674) with two SAN:dNSNames that 
contain consecutive dots, which was issued by a Comodo intermediate. 
He found this cert using the crt.sh DB's lint records.


This morning Robin and I have investigated this bug in our code and 
we've taken the following actions:
   - We've deployed a hotfix to our CA system to prevent any further 
"double dot" mis-issuances.


   - We've confirmed that the bug only affected labels to the left of 
the registrable domain.  (e.g., dNSNames of the form www..domain.com 
were not always rejected, but those of the form www.domain..com were 
always rejected).


This doesn't match the one reported by Ben Wilson, which also exhibits 
various Microsoft related oddities:


https://crt.sh/?id=172218371&opt=cablint,x509lint


Hi Jakob.  Why would you expect it to?

Jonathan found certs containing "double dots" in dNSNames in leaf certs 
that chain to both DigiCert roots and Comodo roots.


Note that DigiCert != Comodo.



Sorry, I was misled by the fact that you replied to a thread that only 
discussed the Baltimore certificates.


P.S.

I am subscribed to the newsgroup, no need to CC me on replies.

Enjoy

Jakob


Re: Certificates with invalid "double dot" dnsNames issued from Comodo intermediates

2017-07-18 Thread Jakob Bohm via dev-security-policy

On 18/07/2017 16:19, Rob Stradling wrote:

On 17/07/17 16:14, Jonathan Rudenberg via dev-security-policy wrote:
This certificate, issued by “Intesa Sanpaolo CA Servizi Esterni 
Enhanced” which chains up to a Baltimore CyberTrust root, contains an 
invalid dnsName of “www.intesasanpaolovita..biz” (note the two dots):


https://crt.sh/?q=2B95B474A2646CA28DC244F1AE829C850EA41CF64C75E11A94FE8D228735977B&opt=cablint,x509lint



This raises some questions about the technical controls in place for 
issuance from this CA.


Yesterday evening Jonathan privately made me aware of a leaf certificate 
(https://crt.sh/?id=73190674) with two SAN:dNSNames that contain 
consecutive dots, which was issued by a Comodo intermediate.  He found 
this cert using the crt.sh DB's lint records.


This morning Robin and I have investigated this bug in our code and 
we've taken the following actions:
   - We've deployed a hotfix to our CA system to prevent any further 
"double dot" mis-issuances.


   - We've confirmed that the bug only affected labels to the left of 
the registrable domain.  (e.g., dNSNames of the form www..domain.com 
were not always rejected, but those of the form www.domain..com were 
always rejected).


This doesn't match the one reported by Ben Wilson, which also exhibits 
various Microsoft related oddities:


https://crt.sh/?id=172218371&opt=cablint,x509lint

   - We've performed an exhaustive search of our certificate database 
and found 2 further unexpired leaf certificates that exhibit this 
"double dot" problem.  I've submitted both of them to some CT logs:

https://crt.sh/?id=174668364
https://crt.sh/?id=174668366

We will revoke all 3 of these leaf certificates ASAP.
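A pre-issuance guard against such empty labels can be tiny. The following is an illustrative sketch only (not Comodo's actual hotfix, whose details were not published):

```python
def has_empty_label(dns_name: str) -> bool:
    """Return True if a dNSName contains an empty label,
    e.g. "www..domain.com" or "www.intesasanpaolovita..biz".

    A single trailing dot (FQDN form) is tolerated; consecutive
    dots or a leading dot are flagged for rejection.
    """
    # Drop one trailing dot so "example.com." is not flagged.
    name = dns_name[:-1] if dns_name.endswith(".") else dns_name
    # Splitting on "." yields an empty string wherever two dots
    # are adjacent (or the name starts with a dot).
    return "" in name.split(".")
```

A CA system would run a check like this (among many others) on every SAN:dNSName before signing, regardless of which label position the bug happened to affect.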




Enjoy

Jakob


Re: Regarding CA requirements as to technical infrastructure utilized in automated domain validations, etc. (if any)

2017-07-18 Thread Jakob Bohm via dev-security-policy
at the route seen to 
certain infrastructure that a CA wishes to test control of cannot be hijacked 
at all, there are definitely ways to greatly reduce the risk and significantly 
curb the number of organizations well positioned to execute such an attack.

Questions I pose:

1.  Should specific procedures as to how one _correctly_ and via best practice 
performs the validation of effective control of a file served up on the web 
server or correctly validates a DNS challenge be part of the baseline 
requirements and/or root program requirements?

2.  What specific elements would strike the right balance of need for security 
vs cost of implementation?

3.  Are CAs today already contemplating this?  I note that code commits in 
Let's Encrypt's Boulder CA recently include the notion of remotely reached 
validation agents and coordinating the responses that the validation agents got 
and establishing rules for quorate interpretation of the results of dispersed 
validators.  I cannot imagine that said work occurred in a vacuum or without 
some thought as to the kinds of risks I am speaking of.
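The quorum logic referenced here could look roughly like the following sketch (hypothetical code, not Boulder's actual implementation; all names are invented):

```python
from collections import Counter


def quorum_reached(results: dict, required: int):
    """Decide a multi-perspective validation outcome.

    `results` maps a vantage-point name to the challenge response it
    observed.  Validation passes only if at least `required` vantage
    points agree on the same response; that response is returned,
    otherwise None (fail).  A localized BGP/DNS hijack near one
    vantage point then cannot single-handedly satisfy the check.
    """
    if not results:
        return None
    value, seen = Counter(results.values()).most_common(1)[0]
    return value if seen >= required else None
```

The interesting policy questions (how many perspectives, how dispersed, what threshold) sit above code like this, which is part of what makes them candidates for baseline-requirement language rather than per-CA improvisation.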

Even if we stop short of specifying the kinds of disparate networks and 
locations that CA validation infrastructure should measure validations from, 
there are other questions I think that are appropriate to discuss:

For example, 3.2.2.4.6 mentioned validation via HTTP or HTTPS access to an FQDN for a 
given blessed path.  It never says how to fetch that or how to look up where to fetch it 
from.  It may be tempting to say "In the absence of other guidance, behave like a 
browser."  I believe, however, that this would be an error.  A browser would accept 
non-standard ports.  We should probably only allow 80 or 443.  A browser wouldn't load 
over HTTPS if the current certificate were untrusted.  This is presumably irrelevant to 
the validation check and should probably ignore the certificate.  An HSTS preload might 
well be incorporated into a browser, but should probably be ignored by the validator.
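Those constraints (scheme restricted to HTTP/HTTPS, standard ports only) are easy to encode. A sketch of such a pre-fetch filter, with all names invented — ignoring the served certificate and any HSTS state would be handled in the fetcher itself, not here:

```python
from urllib.parse import urlsplit

# Unlike a browser, a validator should only follow the default port,
# 80, or 443 -- never an arbitrary port an attacker might control.
ALLOWED_PORTS = {None, 80, 443}


def acceptable_validation_url(url: str) -> bool:
    """Return True if a validation fetch may be attempted at `url`."""
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https"):
        return False
    try:
        port = parts.port  # raises ValueError on a malformed port
    except ValueError:
        return False
    return port in ALLOWED_PORTS
```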

At the network and interconnection layer, I think there are significant 
opportunities for a bad actor to compromise domain (and email, etc, etc) 
validation in ways that parties not intimately familiar with how service 
providers interconnect and route between themselves could fail to even 
minimally mitigate.

If I am correctly reading between the lines in commit messages and capabilities 
being built into Let's Encrypt's Boulder CA software, it would appear that 
there are others concerned about the limitations inherent in single point of 
origination DNS queries being relied upon for validation purposes.  If that is 
the case, what is the appropriate forum to discuss the risks and potential 
mitigations?  If there is reasonable consensus as to those, what is the proper 
place to lobby for adoption of standards?

Thanks,

Matt Hardeman




Enjoy

Jakob


Re: Leaking private keys through web servers

2017-07-14 Thread Jakob Bohm via dev-security-policy

On 14/07/2017 21:04, Ryan Sleevi wrote:

On Fri, Jul 14, 2017 at 2:07 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


That's my point.  The current situation is distinct from weak keys, and
we shouldn't sacrifice the weak keys BR to make room for a compromised
keys BR.



But a weak key is always suspected of having suffered a Key Compromise - is
it not?




That is, changing to from "weak keys" to "suspected or known to have
suffered Key Compromise" in 6.1.1.3 would fully include weak keys (which
are already in scope) as well as include those excluded (compromised,
strong). This applies in addition to the requirements already present in
6.1.5/6.1.6 regarding key sizes and strengths (which already counter your
hypothetical), and 4.9.1.1/4.9.1.2 address the situation if a strong key,
post issuance, becomes either weak or compromised.



I would say a key whose strength against attack is significantly below
that of normally accepted keys (say 256 times less work factor with the
best known attack) would be a weak key, even though that might still
require a year or more on the best known hardware at Fort Meade.

A suspected key compromise is a higher degree of badness (say > 0.01%
chance that attackers will have the key now or in the next 72 hours).

Heartbleed, Debian weak keys or leaving the private key in a public
place would be suspected or actual key compromise, because attackers may
have already gotten the private key using a known attack and almost
trivial effort, and there is no way to tell.

A 2048 bit RSA key which somehow has only the cryptographic strength
normally associated with a 1536 bit RSA key would be a weak key, but
still unlikely to be cracked within a few weeks of revealing the public
key.  And since an actual 1536 bit RSA key would be too weak to allow
issuance under BR 6.1.5, issuing a new certificate for such a
"special form" 2048 bit RSA key would be reasonably considered against
6.1.1.3.

Now we obviously don't know the nature of yet-to-be-published 
cryptanalytical attacks, but if a new attack applies to only some keys, 
and those keys can be identified from the public keys, similar to how 
there is currently a formula for detecting "weak SHA-1" hashes, then CAs 
should apply that formula in a pre-issuance test under 6.1.1.3, but 
would not require instant revocation of all such keys under 4.9.1.2 #3, 
though CA subscribers would be encouraged to voluntarily reissue and 
revoke such keys as they are spotted by e.g. Qualys SSL checks or 
database scans by the CA.


P.S.

The classical example of "weak keys" is of course the special forms of
DES symmetric keys that were known 30 years ago, at a time when cracking
regular full DES keys was still beyond most attackers.

Enjoy

Jakob


Re: Leaking private keys through web servers

2017-07-14 Thread Jakob Bohm via dev-security-policy

On 14/07/2017 18:19, Ryan Sleevi wrote:

On Fri, Jul 14, 2017 at 11:11 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On 14/07/2017 15:53, Ryan Sleevi wrote:


On Fri, Jul 14, 2017 at 1:29 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:



But that doesn't clearly include keys that are weak for other reasons,
such as a 512 bit RSA key with an exponent of 4 (as an extreme example).



Yes. Because that's clearly not necessary - because it's already covered
by
4.9.1.1 #3 and 6.1.5/6.1.6. So I don't think this serves as a valid
criticism to the proposed update.



That's why I called it an "extreme example".  Point was that the current
wording requires CAs to reject public keys that fail any reasonable test
for weakness not just the explicit cases listed in the BRs (such as too
short RSA keys or small composite public exponents).

For example if it is published that the RSA requirements in 6.1.6 are
insufficient (for example that moduli with more than 80% 1-bits are
weak), then the current wording of 6.1.1.3 would require CAs to
instigate such a test without waiting for a BR update.



Sure, but that's unrelated to the discussion at hand, at least from what
you've described. However, if I've misunderstood you, it might help if you
rephrase the argument from what was originally being discussed - which is
CAs issuing certificates for compromised keys - which are arguably distinct
from weak keys (which was the point I was making).



That's my point.  The current situation is distinct from weak keys, and
we shouldn't sacrifice the weak keys BR to make room for a compromised
keys BR.



Enjoy

Jakob


Re: Leaking private keys through web servers

2017-07-14 Thread Jakob Bohm via dev-security-policy

On 14/07/2017 16:07, Alex Gaynor wrote:

On Fri, Jul 14, 2017 at 10:03 AM, Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On Fri, Jul 14, 2017 at 9:44 AM, Hanno Böck via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

...

>> ...

Ultimately I'm inclined to say that there really shouldn't be any good
reason at all to ever reuse a key. (Except... HPKP)



I see. I think I'd strongly disagree with that assertion. There are lots of
good reasons to reuse keys. The most obvious example being for
shorter-lived certificates (e.g. 90 days), which allow you to rotate the
key in case of compromise, but otherwise don't require you to do so.
Considering revocation information is no longer required to be provided
once a certificate expires, it _also_ means that in the CA Foo case, with
Key X compromised, the subscriber could get another cert for it once the
original cert has expired (and thus revocation information no longer able
to be provided)



What you described is a case where it's not harmful to reuse a key, not a
case in which it's a good reason to. Indeed defaulting to rotating your key
on every new certificate is probably the safest choice, as it ensures that
"key compromise" is no different from any other rotation, and keeps that
hinge well oiled.



His scenario did include "Key X compromised" .


Enjoy

Jakob


Re: Leaking private keys through web servers

2017-07-14 Thread Jakob Bohm via dev-security-policy

On 14/07/2017 15:53, Ryan Sleevi wrote:

On Fri, Jul 14, 2017 at 1:29 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


But that doesn't clearly include keys that are weak for other reasons,
such as a 512 bit RSA key with an exponent of 4 (as an extreme example).



Yes. Because that's clearly not necessary - because it's already covered by
4.9.1.1 #3 and 6.1.5/6.1.6. So I don't think this serves as a valid
criticism to the proposed update.



That's why I called it an "extreme example".  Point was that the current
wording requires CAs to reject public keys that fail any reasonable test
for weakness not just the explicit cases listed in the BRs (such as too
short RSA keys or small composite public exponents).

For example if it is published that the RSA requirements in 6.1.6 are
insufficient (for example that moduli with more than 80% 1-bits are
weak), then the current wording of 6.1.1.3 would require CAs to
instigate such a test without waiting for a BR update.




Maybe it would be better stylistically to add this to one of the other
BR clauses.



Considering that the goal is to make it clearer, I'm not sure this
suggestion furthers that goal.



It could be in a new clause 6.1.1.3.1 (not applicable to SubCAs) or a
new clause 6.1.1.4 (applicable to all public keys, not just subscribers)
or a new clause 6.1.6.1 (ditto), or it could be added as an additional
textual paragraph in 6.1.1.3 or 6.1.6 .




Anyway, I think this is covered by BR 4.9.1.1 #3, although it might not
be obvious to the CA that they should have set up checks for this, since
most key compromise reports come from the subscriber, who would be a lot
less likely to make this mistake after revoking the key themselves,
except when the revocation was mistaken (this happens, and in that case,
reusing the key is not a big problem).



I'm afraid you may have misunderstood the point. Certainly, 4.9.1.1 #3
covers revocation. However, my suggestion was about preventing issuance,
which is why I was talking about 6.1.1.3. That is, unquestionably, if a CA
revokes a certificate for key compromise, then issues a new one for that
same key, they're obligated under 4.9.1.1 #3 to revoke within 24 hours. My
point was responding to Hanno's suggestion of preventing them from issuing
the second certificate at all.



But at least 4.9.1.1 #3 requires them to revoke without waiting for a
new report.  And it would be obviously and patently bad faith to revoke
the same key every 24 hours and claim all is well (once or twice may be
an understandable oversight, since this is not such a common scenario,
but after that they should start automating the rejection/revocation).
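Such automation could be as simple as checking each new public key against the SPKI hashes the CA has itself revoked for key compromise. A sketch under that assumption (all names invented; a real CA would persist this set in its database):

```python
import hashlib

# Hypothetical store of SHA-256 hashes of DER-encoded
# SubjectPublicKeyInfo values previously revoked for key compromise.
compromised_spki_hashes: set = set()


def spki_hash(spki_der: bytes) -> str:
    return hashlib.sha256(spki_der).hexdigest()


def record_compromise(spki_der: bytes) -> None:
    """Called when a certificate is revoked with reason keyCompromise."""
    compromised_spki_hashes.add(spki_hash(spki_der))


def must_reject(spki_der: bytes) -> bool:
    """Pre-issuance gate: refuse any request reusing a key the CA has
    already revoked for compromise, rather than issuing and re-revoking
    every 24 hours."""
    return spki_hash(spki_der) in compromised_spki_hashes
```

This only covers compromises the CA itself has seen; as discussed elsewhere in the thread, relying on other CAs' revocation reasons is a separate and harder problem.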


Enjoy

Jakob


Re: Leaking private keys through web servers

2017-07-13 Thread Jakob Bohm via dev-security-policy

On 12/07/2017 16:47, Ryan Sleevi wrote:

On Wed, Jul 12, 2017 at 10:19 AM, Hanno Böck via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


* Comodo re-issued certs with the same key. I wonder if there should be
   a rule that once a key compromise event is known to the CA it must
   make sure this key is blacklisted. (Or maybe one of the existing
   rules already apply, I don't know.)



BRs (v1.4.5) 6.1.1.3 only requires the CA to reject a certificate if it
doesn't meet 6.1.5/6.1.6 or is a known weak private key.

While the example is given (e.g. Debian weak keys), one could argue that
'weak' includes 'disclosed'. Of course, given that the specific term "Key
Compromise" is also provided in the BRs, that seems a stretch.

One could also argue 6.1.2 is applicable - that is, revocation was
immediately obligated because of the awareness - but that also seems
tortured.

Probably the easiest thing to do is update the BRs in 6.1.1.3 to replace
"known weak private key" to just say "If the private key is suspected or
known to have suffered Key Compromise" - which includes known weak private
keys, as well as the broader sense.



But that doesn't clearly include keys that are weak for other reasons,
such as a 512 bit RSA key with an exponent of 4 (as an extreme example).
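For reference, the MUST-level RSA checks in BR 6.1.5/6.1.6 that already reject that extreme example can be sketched as follows (simplified; the SHOULD-level checks on the modulus and the recommended exponent range are omitted):

```python
def rsa_key_passes_brs(modulus_bits: int, public_exponent: int) -> bool:
    """Rough encoding of the mandatory BR RSA key checks:
    modulus of at least 2048 bits (6.1.5) and a public exponent
    that is an odd number >= 3 (6.1.6)."""
    if modulus_bits < 2048:
        return False  # BR 6.1.5 minimum RSA key size
    if public_exponent < 3 or public_exponent % 2 == 0:
        return False  # BR 6.1.6: exponent must be odd and >= 3
    return True
```

The 512-bit key with exponent 4 fails both tests, which is why the dispute above is really about keys that pass these checks yet are weak or compromised for other reasons.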


One challenge to consider is how this is quantified. Obviously, if you
reported to Comodo the issue with the key, and then they issued another
certificate with that key, arguably that's something Comodo should have
caught. However, if you reported the compromise to, say, ACME CA, and then
Comodo issued an equivalent cert, that's questionable. I'm loath to make
CAs rely on each other's keyCompromise revocation reasons, simply because we
have no normative guidance in the BRs (yet) that require CAs be honest or
competent with their revocation reasons (... yet). Further, we explicitly
don't want to have a registry (of compromised keys, untrustworthy orgs,
etc), for various non-technical reasons.

I'm curious if you have thoughts there - particularly, how you reported the
private key was compromised (did you provide evidence - for example, a
signed message, or simply a link to "Here's the URL, go see for yourself"?)
- and how you see it working cross-CA boundaries.



Maybe it would be better stylistically to add this to one of the other
BR clauses.

Anyway, I think this is covered by BR 4.9.1.1 #3, although it might not
be obvious to the CA that they should have set up checks for this, since
most key compromise reports come from the subscriber, who would be a lot
less likely to make this mistake after revoking the key themselves,
except when the revocation was mistaken (this happens, and in that case,
reusing the key is not a big problem).


Enjoy

Jakob


Re: Machine- and human-readable format for root store information?

2017-06-27 Thread Jakob Bohm via dev-security-policy

On 26/06/2017 23:53, Moudrick M. Dadashov wrote:

Hi Gerv,

FYI: ETSI TS 119 612 V2.2.1 (2016-04), Electronic Signatures and 
Infrastructures (ESI); Trusted Lists
http://www.etsi.org/deliver/etsi_ts/119600_119699/119612/02.02.01_60/ts_119612v020201p.pdf 



Having skimmed through this document, I find that particular format
unsuited for general use, due to the following issues:

- Excessive inclusion of information duplicated from the certificates
 themselves.
- Complete repetition of all information for any root that is trusted
 for multiple purposes.
- The use of long ETSI/EU-specific URIs to specify simple things such as
 "trusted"/"not trusted".
- Apparent lack of syntax for specifying scopes that are global but do
 not represent a global authority (such as the UN).

- A notable lack of fields to represent the trust data that real world
 commercial root programs typically need to specify for trusted CA
 certs.
- The apparent need to go through ETSI-specific registration procedures
 to add "extensions" and/or "identifiers" for anything missing.
- Mandatory provision of snail-mail technical support.

- EU-specific oddities, such as alternative identifiers for some EU
 member states.

That said, it could provide some inspiration.



Enjoy

Jakob


Re: Machine- and human-readable format for root store information?

2017-06-27 Thread Jakob Bohm via dev-security-policy

On 27/06/2017 19:49, Gervase Markham wrote:

On 27/06/17 10:35, Ryan Sleevi wrote:

> ...

Further, one could
reasonably argue that an authroot.stl approach would trouble Apple, much as
other non-SDO driven efforts have, due to IP concerns in the space.
Presumably, such collaboration would need to occur somewhere with
appropriate IP protections.


Like, really? Developing a set of JSON name-value pairs to encode some
fairly simple structured data has potential IP issues? What kind of mad
world do we live in?



I think he was referring to possible IP concerns with reusing
Microsoft's ASN.1 based format.

P.S.

Note that Microsoft has two variants of their "Certificate Trust List"
format:

One that actually includes all the trusted certificates, which is more
useful as inspiration for this effort.

Another that contains only metadata, but not the actual certs.  This is
the one loosely described at http://unmitigatedrisk.com/?p=259
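For illustration, a root-store record in the simple name-value style under discussion might look like the following Python literal. Every field name here is invented for the sketch, not any actual program's schema:

```python
# Hypothetical root-store entry; field names are illustrative only.
root_store_entry = {
    "subject_cn": "Example Root CA",
    "sha256_fingerprint": "ab" * 32,        # placeholder hex value
    "trust_bits": ["server_auth", "email"],  # purposes trusted for
    "ev_enabled": False,                     # EV treatment, if any
    "distrust_after": None,                  # ISO date string, or None
}
```

Even this trivial shape already captures most of what Jakob notes the ETSI format lacks for commercial root programs: per-purpose trust bits and program-specific constraints, without duplicating the certificate contents.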



Enjoy

Jakob


Re: Auditor Qualifications

2017-06-27 Thread Jakob Bohm via dev-security-policy

On 27/06/2017 06:47, Kathleen Wilson wrote:

All,

We've added new Auditor objects to the Common CA Database. Previously auditor 
information was just in text fields, and the same auditor could be represented 
different ways. Now we will have a master list of auditors that CAs can select 
from when entering their Audit Cases to provide their annual updates. The root 
store operator members of the CCADB will update this data as we encounter audit 
statements from new auditor/locations that we are able to verify.

I have started the master list based on auditors encountered in the CCADB for 
root certificates.

https://ccadb-public.secure.force.com/mozilla/AuditorQualificationsReport

I will greatly appreciate it if you will review the list and let me know if 
I've made any mistakes in the data. Also, I will greatly appreciate good links 
to the qualifications to the ETSI auditors (I'm not sure if the 
links/qualifications I've used are the best).

Thanks,
Kathleen




Maybe add an extra column indicating if this auditor is currently
distrusted by Mozilla (and thus not accepted for new audits on Mozilla
trusted roots).  Alternatively, add a list of CCADB-based root programs
distrusting an auditor.

This could become the canonical place for Mozilla and other CCADB-based
root programs to indicate when they override the audit schemes'
(WebTrust, ETSI, ...) decisions on this.

Enjoy

Jakob


Re: Unknown Intermediates

2017-06-23 Thread Jakob Bohm via dev-security-policy

On 23/06/2017 14:59, Rob Stradling wrote:

On 22/06/17 10:51, Rob Stradling via dev-security-policy wrote:

On 19/06/17 20:41, Tavis Ormandy via dev-security-policy wrote:



Is this useful? if not, what key usage is interesting?

https://lock.cmpxchg8b.com/ServerOrAny.zip


Thanks for this, Tavis.  I pointed my certscraper 
(https://github.com/robstradling/certscraper) at this URL a couple of 
days ago.  This submitted many of the certs to the Dodo and Rocketeer 
logs.


However, it didn't manage to build chains for all of them.  I haven't 
yet had a chance to investigate why.


There are ~130 CA certificates in 
https://lock.cmpxchg8b.com/ServerOrAny.zip that I've not yet been able 
to submit to any CT logs.


Reasons:
   - Some are only trusted by the old Adobe CDS program.
   - Some are only trusted for Microsoft Kernel Mode Code Signing.
   - Some are very old roots that are no longer trusted.
   - Some are corrupted.
   - Some seem to be from private PKIs.



The SubCAs for Windows 5.1 (XP) to 6.3 (8.1) kernel mode
signing are all 10 year cross-certs from a dedicated single-purpose
Microsoft root CA to well known roots from companies like Symantec and
GlobalSign.

They can (or could) be downloaded from a Microsoft support page.  I know
of 6 that expired in 2016, 19 that will expire in 2021, and 4 that will
expire in 2023.

The issuing 20-year root is:

http://www.microsoft.com/pki/certs/MicrosoftCodeVerifRoot.crt

CN=Microsoft Code Verification Root, O=Microsoft Corporation, L=Redmond, 
ST=Washington, C=US


SHA1 Fingerprint=8F:BE:4D:07:0E:F8:AB:1B:CC:AF:2A:9D:5C:CA:E7:28:2A:2C:66:B3

The relevant root store contains *only* this root, so the issuing (and
possible revocation) of the SubCA/crosscerts acts as a dedicated root
program more restrictive than the normal Microsoft root program.  Chain
validation is often done during boot before TCP/IP is up and running
(even the network drivers are signed with this), so there is no AIA or
OCSP available.  Pre-downloaded CRLs could be checked, but I don't know
if they do that.
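Pinning a single root like this is easy to check programmatically. A small sketch (the pinned fingerprint is the one quoted above; a real check would feed in the DER contents of MicrosoftCodeVerifRoot.crt, and the helper names are mine):

```python
import hashlib

# SHA-1 fingerprint quoted above for the Microsoft Code Verification Root.
PINNED_SHA1 = "8F:BE:4D:07:0E:F8:AB:1B:CC:AF:2A:9D:5C:CA:E7:28:2A:2C:66:B3"

def sha1_fingerprint(der_bytes: bytes) -> str:
    """Return the colon-separated uppercase SHA-1 fingerprint of DER cert bytes."""
    digest = hashlib.sha1(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def matches_pin(der_bytes: bytes, pin: str = PINNED_SHA1) -> bool:
    # A real caller would pass the bytes of the downloaded .crt file.
    return sha1_fingerprint(der_bytes) == pin
```

This is essentially what a single-root "program" amounts to: one pinned fingerprint instead of a full root store.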


Enjoy

Jakob


Re: On GitHub, Leaked Keys, and getting practical about revocation

2017-06-22 Thread Jakob Bohm via dev-security-policy

On 22/06/2017 15:02, Ryan Sleevi wrote:

On Thu, Jun 22, 2017 at 1:59 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


> (Snip long repeat of the same opinion)

You seem to argue:

- Because the recent research on efficient central CRL distribution was
 based on a novel optimization of a previously inefficient algorithm,
 it is nothing new and should be ignored.

- Operating a central CRL distribution service is a suspect commercial
 enterprise with a suspect business model, not a service to the
 community.

- OCSP stapling of intermediary certificates is inefficient because we
 should all just use central CRL distribution in a form where not all
 revocations are included.

- Because most/all browsers contain security holes in their revocation
 checking, making those holes bigger is not a problem.

- Revocation by the issuing CA is not trustworthy, thus nothing is,
 therefore everybody should just trust compromised keys as if they were
 not compromised.

- An attacker with a stolen/compromised key not doing OCSP stapling is
 the fault of the legitimate key holder.

- Forcing people to use OCSP stapling will magically cause software that
 allows this to spring into existence overnight.  If this doesn't happen
 it is the fault of the server operator, not because the demand was
 premature.



Enjoy

Jakob


Re: On GitHub, Leaked Keys, and getting practical about revocation

2017-06-22 Thread Jakob Bohm via dev-security-policy
 in readiness and incentivize deployment of OCSP 
must-staple at the operations level.  Maybe CAs are able to / encouraged to 
offer longer validity periods on OCSP must-staple certificates?

In terms of dollars and cents, I think the various giant organizations (Hi, 
Google, how are you today?) most interested and best positioned to benefit from 
a secure web would regard the costs of a coordinated effort such as this pretty 
insubstantial.

Through a little Google digging, I find numerous comments and references from 
well informed parties going back quite a few years lamenting the poor state 
of support of OCSP stapling in both Apache HTTPD and NGINX.  I'm well aware of 
the rising power that is Caddy, but it's not there yet.  The whole ecosystem 
could be greatly helped by making the default shipping versions of those two 
daemons in the major distros be ideal OCSP-stapling ready.



Please note that Apache and NGINX are far from the only TLS servers
that will need working OCSP stapling code before must-staple can become
default or the only method checked by Browsers and other TLS clients.

What is needed is:

1. An alternative checking mechanism, such as the compact super-CRL
  developed by some researchers earlier this year.  This needs to be
  deployed and used *before* turning off traditional CRL and OCSP
  checking in relying party code, since leaving a relying party
  application without any checking for revoked certificates is a pretty
  obvious security hole.

2. Full OCSP stapling support in all TLS libraries, including the LTS
  branches of OpenSSL, mbedTLS, NSS, Java, Android etc.  Providing this
  only in the "next great release that is incompatible with many
  existing users" is a common malpractice among security library
  vendors, just as there are still systems that refuse to support TLS
  1.2 and ubiquitous SHA-256 signatures in today's world, simply because
  the original system/library vendor refuses to backport the needed
  changes.

   Here is the status as I know it so far:
  OpenSSL 1.0.x (LTS, backward compatible): Partial support for leaf
OCSP stapling, no code to actually help programs maintain fresh
OCSP responses to be stapled, no support for full chain OCSP
stapling.
  OpenSSL 1.1.x (current, redesigned API): Maybe more implemented, not
sure.
  OpenSSL clones (LibreSSL, BoringSSL etc.): Unknown status, probably no
better than OpenSSL itself.
  mbedTLS: Apparently no OCSP stapling support
  NSS as a client: Apparently at least some OCSP stapling support.
  NSS as a server: Unknown status.
  Java runtime (Oracle/OpenJDK): Unknown status
  Android Java runtime/BouncyCastle: Unknown status
  Microsoft SCHANNEL: Apparently at least full support for leaf OCSP
stapling, at least since the version shipped with Windows 6.0.
The version shipped with Windows 6.0 doesn't support TLS 1.1/1.2.
  Microsoft CryptoAPI (for non-TLS cert checks): Support for SHA-256
signatures is inconsistent between Windows 5.x and Windows 6.x
because different API identifiers are used for SHA-256 related
operations.  Related code signing doesn't support SHA-256 signatures
until Windows 6.1.

3. Once #2 is achieved, actual TLS servers and clients using those
  libraries can begin to enable OCSP stapling.

4. Once #3 is achieved and deployed, then OCSP stapling might become
  mandatory by default.

Note that the real world time to achieve #2..#4 is unfortunately
significant, hence my suggesting to use #1 as a faster solution.
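The client-side shape of #1 can be sketched roughly as follows. This is a toy illustration only: the actual research design uses filter cascades built over the set of all known certificates to eliminate false positives, and all class and method names here are hypothetical:

```python
import hashlib

class CompactRevocationSet:
    """Toy sketch of a centrally pushed revocation set (names hypothetical).

    The real proposals use filter cascades over the set of all known
    certificates to rule out false positives; this only shows the
    client-side lookup shape: a small set of truncated hashes of
    (issuer, serial) pairs, shipped to relying parties out of band.
    """

    def __init__(self, revoked):
        self._hashes = {self._key(issuer, serial) for issuer, serial in revoked}

    @staticmethod
    def _key(issuer: str, serial: int) -> bytes:
        # Truncated hash keeps the pushed data set compact.
        return hashlib.sha256(f"{issuer}/{serial:x}".encode()).digest()[:8]

    def is_revoked(self, issuer: str, serial: int) -> bool:
        return self._key(issuer, serial) in self._hashes

# Example data, entirely made up:
crl_set = CompactRevocationSet([("CN=Example CA", 0x1234)])
assert crl_set.is_revoked("CN=Example CA", 0x1234)
assert not crl_set.is_revoked("CN=Example CA", 0x5678)
```

The point is that the relying-party check is a local set lookup, with no per-handshake network fetch at all.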


Enjoy

Jakob


Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-22 Thread Jakob Bohm via dev-security-policy

On 21/06/2017 22:01, andrewm@gmail.com wrote:

On Wednesday, June 21, 2017 at 1:35:13 PM UTC-5, Matthew Hardeman wrote:

Regarding localhost access, you are presently incorrect.  The browsers do not 
allow access to localhost via insecure websocket if the page loads from a 
secure context.  (Chrome and Firefox at least, I believe do not permit this 
presently.)  I do understand that there is some question as to whether they may 
change that.


Right, I wasn't talking about WebSockets in particular, but about any possible 
form of direct communication between the web app and desktop application. 
That's why I pointed to plain old HTTP requests as an example.


As for whether or not access to localhost from an externally sourced web site is 
"inherently a bad thing".  I understand that there are downsides to proxying 
via the server in the middle in order to communicate back and forth with the locally 
installed application.  Having said that, there is a serious advantage:

From a security perspective, having the application make and maintain a 
connection or connections out to the server that will act as the intermediary 
between the website and the application allows for the network administrator to 
identify that there is an application installed that is being manipulated and 
controlled by an outside infrastructure.  This allows for visibility to the fact 
that it exists and allows for appropriate mitigation measures if any are needed.

For a website to silently contact a server application running on the loopback 
and influence that software while doing so in a manner invisible to the network 
infrastructure layer is begging to be abused as an extremely covert command and 
control architecture when the right poorly written software application comes 
along.


I guess I don't completely understand what your threat model here is. Are you 
saying you're worried about users installing insecure applications that allow 
remote code execution for any process that can send HTTP requests to localhost?

Or are you saying you're concerned about malware already installed on the 
user's computer using this mechanism for command and control?

Both of those are valid concerns. I'm not really sure whether they're 
significant enough though to break functionality over, since they both require 
the user to already be compromised in some way before they're of any use to 
attackers. Though perhaps requiring a permissions prompt of some kind before 
allowing requests to localhost may be worth considering...

As I said though, this is kinda straying off topic. If the ability of web apps 
to communicate with localhost is something that concerns you, consider starting 
a new topic on this mailing list so we can discuss that in detail without 
interfering with the discussion regarding TLS certificates here.



The most obvious concern to me is random web servers, possibly through
hidden web elements (such as script tags) gaining access to anything
outside the Browser's sandbox without clear and separate user
action.  For example, if I visit a site that carries an advertisement
for Spotify, I don't want that site to have any access to my locally
running Spotify software, its state or even its existence.

The most obvious way to have a local application be managed from a local
standard web browser while also using resources obtained from a central
application web site is for the local application to proxy those
resources from the web site.  Thus the Browser will exclusively be
talking to a localhost URL, probably over plain HTTP or over HTTPS with
a locally generated localhost certificate, which may or may not be based
on existing machine certificate facilities in some system configurations.

In other words, the user might open http://localhost:45678 to see the
App user interface, consisting of local elements and some elements which
the app backend might dynamically download from the vendor before
serving them within the http://localhost:45678/ URL namespace.

This greatly reduces the need for any mixing of origins in the Browser,
and also removes the need to have publicly trusted certificates revealed
to such local applications.

For some truly complex scenarios, more complex techniques are needed to
avoid distributing private keys, but that's not needed for the cases
discussed here.
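The proxying pattern described above can be sketched as follows (a minimal illustration; the port, vendor URL, and routing rule are all hypothetical):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

VENDOR_BASE = "https://app.example.com"  # hypothetical vendor web site
LOCAL_UI = b"<html><body>Local app UI</body></html>"

def resolve(path: str):
    """Route a request: vendor resources are proxied, everything else is local."""
    if path.startswith("/vendor/"):
        return ("proxy", VENDOR_BASE + path[len("/vendor"):])
    return ("local", LOCAL_UI)

class AppHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        kind, target = resolve(self.path)
        if kind == "proxy":
            # The app backend fetches from the vendor; the browser never
            # leaves http://localhost:45678/, so no public cert is needed.
            body = urlopen(target).read()
        else:
            body = target
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("127.0.0.1", 45678), AppHandler).serve_forever()
```

From the browser's point of view there is a single localhost origin, so no mixing of origins and no publicly trusted certificate ever reaches the local application.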


Enjoy

Jakob


Re: Symantec response to Google proposal

2017-06-20 Thread Jakob Bohm via dev-security-policy

On 20/06/2017 08:08, Gervase Markham wrote:

On 20/06/17 01:21, Jakob Bohm wrote:

2. For any certificate bundle that needs to be incorporated into the
   Mozilla root stores, a significant period (3 to 6 months at least)
   will be needed between acceptance by Mozilla and actual trust by
   Mozilla users.


Not if the roots were cross-signed by the old PKI.



Then they don't "need to be incorporated into the Mozilla root stores",
although incorporating them may be useful as part of removing the old
roots.


Enjoy

Jakob


Re: [EXT] Mozilla requirements of Symantec

2017-06-20 Thread Jakob Bohm via dev-security-policy

On 20/06/2017 09:05, Ryan Sleevi wrote:

On Mon, Jun 19, 2017 at 7:01 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


NSS until fairly recently was in fact used for code signing of Firefox
extensions using the public PKI (this is why there is a defunct code
signing trust bit in the NSS root store).



This is not an accurate representation on why there is a code signing trust
bit in the NSS root store.



Then what is an accurate representation?

I am of course aware that before the xpi format was even invented,
Netscape, Mozilla and Sun/Oracle used the NSS key store and possibly the
NSS code to validate signatures on Java applets.  But I am unsure if and
when they stopped doing that.




I also believe that the current checking of "AMO" signatures is still
done by NSS, but not using the public PKI.



If you mean with respect to code, sure, but that is a generic signature
checking.



Really?  I would have thought it was the same validation code previously
used for public PKI signatures on the same file types.

Anyway, it is most certainly checking signatures on code in a way
consistent with the general concept of "code signing" (the exact
placement and formatting of "code signing" signatures is extremely
vendor and file format dependent).




This makes it completely reasonable for other users of the NSS libraries
to still use it for code signing, provided that the "code signing" trust
bits in the NSS root store are replaced with an independent list,
possibly based on the CCADB.



This is not correct. The NSS team has made it clear the future of this code
with respect to its suitability as a generic "code signing" functionality -
that is, that it is not.



Pointer?

Was this communicated in a way visible to all NSS using software?




It also makes it likely that systems with long development / update
cycles have not yet deployed their own replacement for the code signing
trust bits in the NSS root store, even if they have a semi-automated
system importing changes to the NSS root store.  That would of course be
a mistake on their part, but a very likely mistake.



This was always a mistake, not a recent one. But a misuse of the API does
not make a valid use case.



How was it a mistake back when Mozilla was using NSS for "code signing"?
(Whenever that was).

P.S.

I am following the newsgroup, no need to CC me on replies.


Enjoy

Jakob


Re: Symantec response to Google proposal

2017-06-19 Thread Jakob Bohm via dev-security-policy

Enjoy

Jakob


Re: [EXT] Mozilla requirements of Symantec

2017-06-19 Thread Jakob Bohm via dev-security-policy

On 12/06/2017 22:12, Nick Lamb wrote:

On Monday, 12 June 2017 17:31:58 UTC+1, Steve Medin  wrote:

We think it is critically important to distinguish potential removal of support 
for current roots in Firefox versus across NSS. Limiting Firefox trust to a 
subset of roots while leaving NSS unchanged would avoid unintentionally 
damaging ecosystems that are not browser-based but rely on NSS-based roots such 
as code signing, closed ecosystems, libraries, etc.


Abusing NSS to support code signing or "closed ecosystems" would be an error 
regardless of what happens to Symantec, it makes no real sense for us to prioritize 
supporting such abuse. To the extent that m.d.s.policy represents consumers of NSS 
certdata other than Firefox, they've _already_ represented very strongly that what they 
want is for this data to follow Mozilla's trust decisions more closely not less.



I believe you are exaggerating in that assertion.

NSS until fairly recently was in fact used for code signing of Firefox
extensions using the public PKI (this is why there is a defunct code
signing trust bit in the NSS root store).

I also believe that the current checking of "AMO" signatures is still
done by NSS, but not using the public PKI.

This makes it completely reasonable for other users of the NSS libraries
to still use it for code signing, provided that the "code signing" trust
bits in the NSS root store are replaced with an independent list,
possibly based on the CCADB.

It also makes it likely that systems with long development / update
cycles have not yet deployed their own replacement for the code signing
trust bits in the NSS root store, even if they have a semi-automated
system importing changes to the NSS root store.  That would of course be
a mistake on their part, but a very likely mistake.


Enjoy

Jakob


Re: New undisclosed intermediates

2017-06-09 Thread Jakob Bohm via dev-security-policy

On 09/06/2017 12:29, Rob Stradling wrote:

On 09/06/17 11:16, Jakob Bohm via dev-security-policy wrote:


What in the policy says they become in-scope from a certificate chain
that isn't "anchored" at a Mozilla trusted root?

And would someone please post those alleged certificate chains 
*explicitly* here, not just say they saw it "somehow".


Hi Jakob.  Let me run through one of them as an example:

https://crt.sh/?id=12977063 is a self-signed root certificate that is 
also an NSS built-in trust anchor.


https://crt.sh/?id=149444544 is a self-signed root certificate that is 
_not_ an NSS built-in trust anchor.




Ah, that wasn't clear from the previous posts in this thread.

So basically, this is *identical* to one of the trusted roots, but with
a different self-signature hash algorithm (and for at least this pair, a
different serial number).

This seems to directly violate the often proposed (but apparently not
yet enacted) rule that different root certs must have different keys
(if that rule has been incorporated into a current policy).

It's also risky cryptographic practice, although for RSA, the PKCS#1
padding ensures no direct collision risk (but still, once the weaker
hash is broken, each of the certs previously signed with that hash
becomes a reason to distrust the root in software that does not filter
the hash algorithm for each issued certificate).  The safer design would
have been to create a new key pair and subject name, then set it up as a
cross-signed root, which would be a SubCA for those only trusting the
older root.

Without the no-reuse rule, the most reasonable interpretation of such a
certificate is as a refresh of the same root CA, which must be disclosed
in the same way as any other such refresh (such as a change in the date
fields).  Both certificates must be subjected to the same audits,
disclosures and trust bits.  Both certificates must be somehow listed
in the entry for that root CA.
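Given an inventory of root certificates, spotting such same-key pairs is straightforward. A sketch with entirely made-up data (fingerprints and SPKI hashes are placeholders):

```python
from collections import defaultdict

# Hypothetical inventory rows: (cert fingerprint, subject, SPKI hash, sig alg).
roots = [
    ("certA", "CN=Example Root", "spki-1", "sha1WithRSAEncryption"),
    ("certB", "CN=Example Root", "spki-1", "sha256WithRSAEncryption"),
    ("certC", "CN=Other Root",   "spki-2", "sha256WithRSAEncryption"),
]

by_key = defaultdict(list)
for fingerprint, subject, spki, sig_alg in roots:
    by_key[spki].append((fingerprint, sig_alg))

# Keys appearing in more than one root cert should share one audit/disclosure entry.
shared = {spki: certs for spki, certs in by_key.items() if len(certs) > 1}
assert list(shared) == ["spki-1"]
```

A root program could run exactly this kind of grouping over its store to find certificate pairs that must be treated as the same root CA.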




Enjoy

Jakob


Re: New undisclosed intermediates

2017-06-09 Thread Jakob Bohm via dev-security-policy

On 09/06/2017 11:57, Rob Stradling wrote:

On 09/06/17 03:16, Peter Bowen via dev-security-policy wrote:

On Thu, Jun 8, 2017 at 7:09 PM, Jonathan Rudenberg via
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:


On Jun 8, 2017, at 20:43, Ben Wilson via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:


I don't believe that disclosure of root certificates is the 
responsibility
of a CA that has cross-certified a key.  For instance, the CCADB 
interface
talks in terms of "Intermediate CAs".  Root CAs are the 
responsibility of

browsers to upload.  I don't even have access to upload a "root"
certificate.


I think the Mozilla Root Store policy is pretty clear on this point:

All certificates that are capable of being used to issue new 
certificates, and which directly or transitively chain to a 
certificate included in Mozilla’s CA Certificate Program, MUST be 
operated in accordance with this policy and MUST either be 
technically constrained or be publicly disclosed and audited.


The self-signed certificates in the present set are all in scope for 
the disclosure policy because they are capable of being used to issue 
new certificates and chain to a certificate included in Mozilla’s CA 
Certificate Program. From the perspective of the Mozilla root store 
they look like intermediates because they can be used as 
intermediates in a valid path to a root certificate trusted by Mozilla.


There are two important things about self-issued certificates:

1) They cannot expand the scope of what is allowed.
Cross-certificates can create alternative paths with different
restrictions.  Self-issued certificates do not provide alternative
paths that may have fewer constraints.

2) There is no way for a "parent" CA to prevent them from existing.
Even if the only cross-sign has a path length constraint of zero, the
"child" CA can issue self-issued certificates all day long.  If they
are self-signed there is no real value in disclosing them, given #1.

I think that it is reasonable to say that self-signed certificates are
out of scope.


There's a signature chain, so they're clearly in scope (as far as the 
current policy is concerned).


The policy would need to be updated before we could say that they "*are* 
out of scope".


(FWIW, I agree that it's pointless for them to be in scope.  However, 
the policy trumps my opinion).




What in the policy says they become in-scope from a certificate chain
that isn't "anchored" at a Mozilla trusted root?

And would someone please post those alleged certificate chains 
*explicitly* here, not just say they saw it "somehow".




Enjoy

Jakob


Re: New undisclosed intermediates

2017-06-08 Thread Jakob Bohm via dev-security-policy

On 09/06/2017 04:09, Jonathan Rudenberg wrote:



On Jun 8, 2017, at 20:43, Ben Wilson via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

I don't believe that disclosure of root certificates is the responsibility
of a CA that has cross-certified a key.  For instance, the CCADB interface
talks in terms of "Intermediate CAs".  Root CAs are the responsibility of
browsers to upload.  I don't even have access to upload a "root"
certificate.


I think the Mozilla Root Store policy is pretty clear on this point:


All certificates that are capable of being used to issue new certificates, and 
which directly or transitively chain to a certificate included in Mozilla’s CA 
Certificate Program, MUST be operated in accordance with this policy and MUST 
either be technically constrained or be publicly disclosed and audited.


The self-signed certificates in the present set are all in scope for the 
disclosure policy because they are capable of being used to issue new 
certificates and chain to a certificate included in Mozilla’s CA Certificate 
Program. From the perspective of the Mozilla root store they look like 
intermediates because they can be used as intermediates in a valid path to a 
root certificate trusted by Mozilla.



This is getting awfully confusing.

What exactly about which specific certificates makes them both
"self-signed" and "part of a chain to a root in the Mozilla root store"?

This seems to be a direct logical contradiction.

Here is how I see it:

If you are talking about two different certificates with the same public
key, then each such certificate needs to be considered separately.

If you are talking about two different certificates with the same
Subject Distinguished Name and optional id, but with different contents,
then again they need to be considered separately, as there is nothing
guaranteeing uniqueness of those fields.

If you are talking about two different certificates that differ only in 
the Issuer and Signature fields (and maybe in the serial number too), 
then again they need to be considered separately.


Chaining is defined by the tuple [Public Key, Subject, Optional Key ID],
deviating in either public key or subject certainly makes them unrelated
entities for chain building (except that sending irrelevant certificates
to the Browser can cause it to waste time comparing them).  Deviating in
the key ID may or may not cause certificates to not chain together
depending on the certificate checking library used (NSS being most
important for Mozilla, BouncyCastle and BoringSSL being of additional
interest for Chrome).
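That chaining rule can be sketched as follows (a deliberately simplified model: real libraries also verify signatures, validity periods, and constraints, and they differ on how strictly the key-ID part is enforced):

```python
from typing import NamedTuple, Optional

class Cert(NamedTuple):
    subject: str
    issuer: str
    public_key: str            # stand-in for the SPKI
    ski: Optional[str] = None  # Subject Key Identifier
    aki: Optional[str] = None  # Authority Key Identifier

def may_chain(child: Cert, parent: Cert) -> bool:
    """Simplified chain-building: the child's issuer must match the
    parent's subject, and if both key IDs are present they must agree."""
    if child.issuer != parent.subject:
        return False
    if child.aki is not None and parent.ski is not None:
        return child.aki == parent.ski
    return True

root = Cert("CN=Root", "CN=Root", "key-A", ski="01")
intermediate = Cert("CN=Sub", "CN=Root", "key-B", ski="02", aki="01")
unrelated = Cert("CN=Sub", "CN=Other Root", "key-B", ski="02", aki="99")
assert may_chain(intermediate, root)
assert not may_chain(unrelated, root)
```

Two certificates that differ in any of these linkage fields are simply separate nodes as far as path building is concerned.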

Certificates (not keys or names) that actually chain (as defined above)
to a root certificate in the CCADB are in scope for CCADB policy,
others are not.

Certificates that are in scope and have CA:TRUE in basic constraints
and/or the keyCertSign bit in Key Usage or (maybe) have an equivalent
bit set in the historic Netscape certificate type attribute belong in
CCADB.

Certificates that are in scope but lack all of these CA-usage flags are
end entity certificates that must satisfy Mozilla policy on correct
issuance (such as ownership checks, no RFC1918 addresses as SANs etc.).

Certificates that are not in scope are not in scope.

For example, if a virus-scanning middle box has a locally generated root
CA trusted by the local clients (via configuration), and that middle box
generates on-the-fly certificates matching the names (but not the keys)
in the real public certificates it sees, then those on-the-fly
certificates may have the same Subject as the real certificates, but
don't become in scope that way, even if they leak back out through
various forms of "telemetry", because there is no actual way in which
they would be trusted by a Mozilla browser that hasn't been locally
reconfigured to trust that local root CA.



Enjoy

Jakob


Re: Policy 2.5 Proposal: Make it clear that Mozilla policy has wider scope than the BRs

2017-06-08 Thread Jakob Bohm via dev-security-policy

On 08/06/2017 18:47, David E. Ross wrote:

On 6/8/2017 2:38 AM, Gervase Markham wrote:

On 02/06/17 11:28, Gervase Markham wrote:

Proposal: add a bullet to section 2.3, where we define BR exceptions:

"Insofar as the Baseline Requirements attempt to define their own scope,
the scope of this policy (section 1.1) overrides that. Mozilla expects
CA operations relating to issuance of all SSL certificates in the scope
of this policy to conform to the Baseline Requirements."


Implemented as specced.

Gerv



This seems self-contradictory.

How about adding only 2 words ("Nevertheless" and "also") to the second
sentence:

Insofar as the Baseline Requirements attempt to define their own scope,
the scope of this policy (section 1.1) overrides that. Nevertheless,
Mozilla expects CA operations relating to issuance of all SSL
certificates in the scope of this policy to conform also to the Baseline
Requirements.



How about the following, which seems more correct:

Insofar as the Baseline Requirements attempt to define their own scope,
the scope of this policy (section 1.1) overrides that. Mozilla
thus requires CA operations relating to issuance of all SSL certificates
in the scope of this policy to conform to the Baseline Requirements.


Enjoy

Jakob


Re: Mozilla requirements of Symantec

2017-06-08 Thread Jakob Bohm via dev-security-policy

On 08/06/2017 18:52, Peter Bowen wrote:

On Thu, Jun 8, 2017 at 9:38 AM, Jakob Bohm via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:


As the linked proposal was worded (I am not on Blink mailing lists), it
seemed obvious that the original timeline was:

   Later: Once the new roots are generally accepted, Symantec can actually
issue from the new SubCAs.

   Long term: CRL and OCSP management for the managed SubCAs remain with the
third party CAs.  This continues until the managed SubCAs expire or are
revoked.


I don't see this last part in the proposal.  Instead the proposal
appears to specifically contemplate the SubCAs being transferred to
Symantec once the new roots are accepted in the required trust stores.



That last part was derived purely from the logistical difficulty of
moving private keys, compared to just keeping CRL and OCSP running in an
infrastructure that would keep running anyway (for the hosting CAs' own
CA certificates).




Enjoy

Jakob


Re: Mozilla requirements of Symantec

2017-06-08 Thread Jakob Bohm via dev-security-policy

On 08/06/2017 11:09, Gervase Markham wrote:

On 07/06/17 22:30, Jakob Bohm wrote:

Potential clarification: By "New PKI", Mozilla apparently refers to the
"Managed CAs", "Transition to a New Symantec PKI" and related parts of
the plan, not to the "new roots" for the "modernized platform" / "new
infrastructure".


I expect those things to be interlinked; by "New PKI" I was referring to
them both.

Symantec has not yet stated how they plan to structure their new
arrangements, but I would expect that the intermediate certs run by the
managed CAs would in some way become part of Symantec's new PKI,
operated by them, once it was up and running. Ryan laid out a way
Symantec could structure this on blink-dev, I believe, but the final
structure is up to them.



As the linked proposal was worded (I am not on Blink mailing lists), it 
seemed obvious that the original timeline was:


  August 2017: All new certs issued by Managed SubCAs that chain to the 
old Symantec roots.  Private keys for these SubCAs reside at the 
third-party CAs in secure hardware, which will presumably prevent 
sharing them with Symantec.


  Much later: The new infrastructure passes all readiness audits.

  Then: A signing ceremony creates the new roots and their first set of 
SubCAs.  Cross signatures are created from the old roots to the new 
roots.  Cross signatures may or may not also be created from the new 
roots to the managed SubCAs.


  Next: Symantec reapplies for inclusion with the new roots.

  Later: Once the new roots are generally accepted, Symantec can 
actually issue from the new SubCAs.


  Long term: CRL and OCSP management for the managed SubCAs remain with 
the third party CAs.  This continues until the managed SubCAs expire or 
are revoked.



Enjoy

Jakob


Re: Mozilla requirements of Symantec

2017-06-07 Thread Jakob Bohm via dev-security-policy

Hi Gervase,

there seems to be a slight inconsistency between the terminology in the 
plan posted at


https://groups.google.com/a/chromium.org/d/msg/blink-dev/eUAKwjihhBs/ovLalSBRBQAJ

And the official letter quoted below.  I have added potential 
clarifications to fix this; please indicate, for the benefit of the 
community and Symantec, whether those clarifications are correct 
interpretations.


On 07/06/2017 20:51, Gervase Markham wrote:

Hi Steve,

I'm writing to you in your role as the Primary Point of Contact for
Symantec with regard to the Mozilla Root Program. I am writing with a
list of Mozilla-specific additions to the consensus remediation proposal
for Symantec, as documented by Google.

We note that you have raised a number of objections and queries with
regard to the consensus proposal. As you know, we are considering our
responses to those. We reserve the right to make additional requests of
Symantec in relation to any changes which might be made to that
proposal, or for other reasons.

However, we have formulated an initial list of Mozilla-specific addenda
to the consensus proposal and feel now is a good time to pass them on to
Symantec for your official consideration and comment. We would prefer
comments in mozilla.dev.security.policy (to which this notice has been
CCed), and in any event by close of business on Monday 12th June.

1) Mozilla would wish, after the 2017-08-08 date as documented in the
consensus proposal, to alter Firefox such that it trusts certificates
issued in the "new PKI" directly by embedding a set of certs or trust
anchors which are part of that PKI, and can therefore distrust any new
cert which is issued by the old PKI on a "notBefore" basis. We therefore
require that Symantec arrange their new PKI and provide us with
sufficient information in good time to be able to do that.



Potential clarification: By "New PKI", Mozilla apparently refers to the 
"Managed CAs", "Transition to a New Symantec PKI" and related parts of 
the plan, not to the "new roots" for the "modernized platform" / "new 
infrastructure".



2) Mozilla would wish, at some point in the future sooner than November
2020 (39 months after 2017-08-08, the date when Symantec need to be
doing new issuance from the new PKI), to be certain that we are fully
distrusting the old PKI. As things currently stand technically,
distrusting the old PKI would mean removing the roots, and so Symantec
would have to move their customers to the new PKI at a rate faster than
natural certificate expiry. Rather than arbitrarily set a date here, we
are willing to discuss what date might be reasonable with Symantec, but
would expect it to be some time in 2018.

As you know, Firefox currently does not act upon embedded CT
information, and so CT-based mechanisms are not a suitable basis for us
to determine trust upon. Were that to change, we may be able to consider
a continued trust of CT-logged certs, but would still want to dis-trust
non-CT-logged certs sooner than November 2020.

3) If any additional audit is performed by Symantec, including but not
limited to one that "that includes a description of the auditor’s tests
of controls and results", then the intended users of the audit report
must also include persons who assist in decisions related to the trusted
status of Certification Authorities within Mozilla products. For any
audit to unusually detailed criteria, it is permitted to place this
information behind a login (or require it to be so placed) as long as
Mozilla is allowed to give access to any member of our community that we
wish.



Potential clarification: Mozilla's #3 requirement applies to both the 
"new PKI" and the "new roots" for the "new infrastructure".



We look forward to hearing Symantec's response to these requirements.

With best wishes,

Gerv




Enjoy

Jakob


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-07 Thread Jakob Bohm via dev-security-policy

On 07/06/2017 17:41, Ryan Sleevi wrote:

On Wed, Jun 7, 2017 at 11:25 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Note that I also had a second, related, point: The possibility that such
a new piece of infrastructure was, for other reasons, not endorsed by
Mozilla, but of great interest to one of the other root programs (not
all of which are browser vendors).

Conversely consider the possibility that some other root programs had a
similarly restrictive policy and refused to support the introduction of
CT PreCertificates.  This could have stopped a useful improvement that
Mozilla seems to like.

The Golden rule would then imply that Mozilla should not reserve to
itself a power it would not want other root programs to have.




So much rhetoric, so little substance...



As much as I love an application of the Golden Rule as the next person, I
think it again misunderstands the realities on the ground in the Web PKI,
and puts an ideal ahead of the practical security implications.
Root programs do sometimes disagree. For example, Microsoft reserves the
right to revoke certificates it disagrees with. That right means that other
browser programs cannot effectively use CRLs or OCSP for revocation, as to
do so is to recognize Microsoft's ability to (negatively) affect their
users. They have means of working out those disagreements.


Bad example, but illustrates that reasonable agreement is not always
obtainable.



You're positioning the hypothetical as if it's inflexible - but the reality
is, each root program exists to first and foremost best reflect the needs
of its userbase. For Microsoft, they've determined that's best accomplished
by giving themselves unilateral veto power over the CAs they trust. For
Mozilla, recognizing its global mission, it tries to operate openly and
best serving the ecosystem.



I'm saying that if someone not interested in some good protocol had veto
power over CAs using that protocol, then that someone could be
inflexible to the detriment of everyone else.  Hence the desire not to
establish a precedent for such veto power.

The goodness of Mozilla does not minimize that precedent, only increases
the danger it might be cited by less honorable root programs.


The scenario you remark as undesirable - the blocking of precertificates -
is, in fact, a desirable and appropriate outcome. As Nick has noted, these
efforts do not spring forth like Athena, fully grown and robust, but
iteratively develop through the community process - leaving ample time for
Mozilla to update and respond. This, too, has ample evidence of it
happening - see the discussions related to CAA, or the validation methods
employed by ACME.


So what would/should the CT project have done if some inflexible root 
program had vetoed its deployment?


And ample time to respond does not guarantee a fair response, especially
when considering the extension of such veto power to less flexible root
programs.



The idealized view of SDOs - that they are infallible organizations or
represent some neutral arbitration - is perhaps misguided. For example,
it's trivial to publish an I-D within the IETF as a draft, and it's not
unreasonable to have an absolutely terrible technology assigned an RFC (for
example, an informational submission documenting an existing, but insecure,
technology). As Nick mentions, the reality - and far more common occurrence
- is someone being clever and a half and doing it with good intentions, but
bad security. And those decisions - and that flexibility - create
significantly more risk for the overall ecosystem.


When talking about the IETF as an SDO, I am obviously talking about IETF
standards track working groups, not individual submission RFCs or BOF
talks at IETF meetings.



I suspect we will continue to disagree on this point, so I doubt I can
offer more meaningful arguments to sway you, and certainly would not appeal
to authority. I would simply note that the scenarios you raise as
hypotheticals do not and have not played out as you suggest, and if the
overall goal of this discussion is to ensure the security of Mozilla users,
then the contents, provenance, and correctness of what is signed plays an
inescapable part of that security, and the suggested flexibility is
inimical to that security.




Enjoy

Jakob


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-07 Thread Jakob Bohm via dev-security-policy

On 07/06/2017 16:43, Nick Lamb wrote:

On Tuesday, 6 June 2017 21:08:54 UTC+1, Ryan Sleevi  wrote:

Standards defining organization.


More usually a Standards _Development_ Organization. I wouldn't usually feel 
the need to offer this correction but in this context we care a good deal about 
the fact that SDOs are where the actual engineering is done, where the 
expertise about the particular niche being standardised exists.

Even in the IETF, which is unusual in having some pretty technical people 
making its top level decisions, the serious work is mostly done in specialist 
working groups, with their products percolating up afterwards. For most 
standards bodies the top level stuff is purely politics - at the ITU the 
members are (notionally) sovereign nations themselves, same for the UPU, at ISO 
they're entire national standards bodies, and so on - utterly unsuitable to the 
meat of standards development itself. Most expertise is instead present in 
smaller, specialised SDOs in these cases.

Anyway, to Jakob's point it is _extremely_ unlikely that a new piece of 
infrastructure will spring into existence fully formed and ready for use in 
anger in the Web PKI without enough time for Mozilla, and m.d.s.policy to 
evaluate it and if necessary update the relevant policy documents. Much more 
likely, in my opinion, is that something half-baked is tried by a CA, and later 
realised to have opened an unsuspected hole in security.

Nothing even prevents this policy being updated to permit, for example, trials 
of some particular promising new idea that needs testing at scale, although I 
think in most cases that won't be necessary. Consider the CRL signing idea, 
this can be tested perfectly well without using any trusted CA or subCA keys at 
all. A final production version would probably use trusted keys, but you don't 
need to start with them to see it work.



Note that I also had a second, related, point: The possibility that such
a new piece of infrastructure was, for other reasons, not endorsed by
Mozilla, but of great interest to one of the other root programs (not
all of which are browser vendors).

Conversely consider the possibility that some other root programs had a
similarly restrictive policy and refused to support the introduction of
CT PreCertificates.  This could have stopped a useful improvement that
Mozilla seems to like.

The Golden rule would then imply that Mozilla should not reserve to
itself a power it would not want other root programs to have.


Enjoy

Jakob


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-07 Thread Jakob Bohm via dev-security-policy

On 07/06/2017 12:55, Rob Stradling wrote:

On 06/06/17 22:26, Jakob Bohm wrote:

On 06/06/2017 22:08, Ryan Sleevi wrote:



Signing data is heavily reliant on CA competency, and that's in
unfortunately short supply, as the economics of the CA market make it
easy to fire all the engineers, while keeping the sales team, and
outsourcing the rest.


Ryan, thankfully at least some CAs have some engineers.  :-)


Which is why I am heavily focused on allowing new technology to be
developed by competent non-CA staff (such as in the IETF),


Jakob, if I interpret that literally it seems you're objecting to CA 
staff contributing to IETF efforts.  If so, may I advise you to beware 
of TLS Feature (aka Must Staple), CAA, CT v1 (RFC6962) and especially CT 
v2 (6962-bis)?




No, I was just stating that if (as suggested by Mr. Sleevi) the Mozilla
root program does not trust CA engineers to design new to-be-signed data
formats, maybe Mozilla could at least trust designs that have been
positively peer reviewed in organizations such as the IETF, the NIST
computer security/crypto groups, etc. etc.

I was in no way suggesting that CA engineers do not participate in those
efforts, giving as an example their participation in early CT
deployments together with Google engineers.


Enjoy

Jakob


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-06 Thread Jakob Bohm via dev-security-policy

On 06/06/2017 22:08, Ryan Sleevi wrote:

On Tue, Jun 6, 2017 at 2:28 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


I am saying that setting an administrative policy for inclusion in a
root program is not the place to do technical reviews of security
protocols.



Of course it is. It is the only one that has reliably worked in the history
of the Web PKI. I would think that would be abundantly evident over the
past five years.



I have yet to see (though I have studied ancient archives) the root
program and/or the CAB/F doing actual review of technical security
protocols and data formats.




And I proceeded to list places that *do* perform such peer
review at the highest level of competency, but had to note that the list
would be too long to enumerate in a stable root program policy.



Except none of them are, as evidenced by what they've turned out. The only
place where Mozilla users are considered, en masse, is in Mozilla policy.
It is the one and only place Mozilla can ensure its needs are appropriately
and adequately reflected.



SDO?  Unfamiliar with that TLA.



Standards defining organization.



Ah, like the very examples I gave of competent protocol review
organizations that should do this.




And why should Mozilla (and every other root program) be consulted to
unanimously preapprove such technical work?  This will create a massive
roadblock for progress.  I really see no reason to create another FIPS
140 style bureaucracy of meaningless rule enforcement (not to be
confused with the actual security tests that are also part of FIPS 140
validation).



This is perhaps the disconnect. It's not meaningless. A significant amount
of the progress made in the past five years in the Web PKI has come from
one of two things:
1) Mozilla or Google forbidding something
2) Mozilla or Google requiring something



Yes, but there is a fundamental difference between Mozilla/Google
enforcing best practices and Mozilla/Google arbitrarily banning
progress.


The core of your argument seems to be that you don't believe Mozilla can
update its policy in a timely fashion (to which this list provides ample
counter-evidence), or that the Mozilla community should not be consulted
about what is appropriate for the Mozilla community (which is, on its
face, incorrect).



No, I am saying that the root program is the wrong place to do technical
review and acceptance/rejection of additional CA features that might
improve security with non-Mozilla code, with the potential that at some
future point in time Mozilla might decide to start including such
facilities.

For example, the Mozilla root program was not the right place to discuss
if CAs should be allowed to do CT logging at a time when only Google
code was actually using that.

The right place was Google submitting the CT system to a standard
organization (in this case the IETF), and once any glaring security
holes had been reviewed out, begin to have some CAs actually do this,
before the draft RFC had the implementations justifying publication as a
standards-track RFC - which is, I believe, exactly what happened.  The
Mozilla root policy did not need to change to allow this work to be
done.

One litmus-test for a good policy would be "If this policy had existed
before CT, and Mozilla was not involved with CT at all, would this
policy had interfered with the introduction of CT by Google".



Look, you could easily come up with a dozen examples of improved validation

methods - but just because they exist doesn't mean keeping the "any other
method" is good. And, for what it's worth, of those that did shake out of
the discussions, many of them _were_ insecure at first, and evolved
through
community discussion.



Interestingly, the list of revocation checking methods supported by
Chrome (and proposed to be supported by future Firefox versions) is
essentially _empty_ now.  Which is completely insecure.



Not really "interestingly", because it's not a response to the substance of
the point, but in fact goes to an unrelated (and technically incorrect)
tangent.

Rather than engage with you on that derailment, do you agree with the
easily-supported (by virtue of the CABF Validation WG's archives) that CAs
proposed the use of insecure methods for domain validation, and those were
refined in time to be more appropriately secure? That's something easily
supported.



I am not at all talking about "domain validation" and the restrictions
that had to be imposed to stop bad CA practices.

I am talking about allowing non-Mozilla folk, working with competent
standard defining organizations to create additional security
measures requiring signatures from involved CAs.





Within *this thread* proposed policy language would have banned that.




And neither I, nor any other participant seemed to realize this specific
omission until my post this morning.



Yes, and? You're showing exa

Re: Symantec response to Google proposal

2017-06-06 Thread Jakob Bohm via dev-security-policy
he high-frequency 3/6 month one proposed.



Sounds fair.


7) Detailed Audits

Google proposal: Symantec may be requested to provide "SOC2" (more
detailed) audits of their new infrastructure prior to it being ruled
acceptable for use.
Symantec proposal: such audits should be provided only under NDA.
Rationale: they include detailed information of a sensitive nature.



I don't see a problem in access to this being subject to a reasonable
NDA that allows Mozilla to show it to their choice of up to 50 external
experts (I don't expect to be one of those 50).



Enjoy

Jakob


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-06 Thread Jakob Bohm via dev-security-policy

On 06/06/2017 07:45, Ryan Sleevi wrote:

On Mon, Jun 5, 2017 at 6:21 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


If you read the paper, it contains a proposal for the CAs to countersign
the computed super-crl to confirm that all entries for that CA match the
actual revocations and non-revocations recorded by that CA.  This is not
currently deployed, but is an example of something that CAs could safely
do using their private key, provided sufficient design competence by the
central super-crl team.



I did read the paper - and provide feedback on it.

And that presumption that you're making here is exactly the reason why you
need a whitelist, not a blacklist. "provided sufficient design competence"
does not come for free - it comes with thoughtful peer review and community
feedback. Which can be provided in the aspect of policy.



I am saying that setting an administrative policy for inclusion in a
root program is not the place to do technical reviews of security
protocols.  And I proceeded to list places that *do* perform such peer
review at the highest level of competency, but had to note that the list
would be too long to enumerate in a stable root program policy.




Another good example could be signing a "certificate white-list"
containing all issued but not revoked serial numbers.  Again, someone
(not a random CA) should provide a well-thought-out data format
specification that cannot be maliciously confused with any of the
current data types.
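As a hedged sketch of what such a non-confusable whitelist format might
look like (the magic prefix and field layout here are entirely made up
for illustration; a real design would of course go through standards
review first):

```python
import hashlib

MAGIC = b"EXAMPLE-CERT-WHITELIST/v1\x00"  # hypothetical type tag, not a real format

def whitelist_tbs(issuer_id, serials):
    """Build deterministic to-be-signed bytes for a 'not revoked' serial
    whitelist.  The magic prefix guarantees the blob can never parse as
    a TBSCertificate (which starts with a DER SEQUENCE tag, 0x30)."""
    body = MAGIC + issuer_id.encode() + b"\x00"
    for serial in sorted(serials):
        body += serial.to_bytes(20, "big")  # fixed width + sorted => canonical
    return body

tbs = whitelist_tbs("CN=Example CA", [3, 1, 2])
digest = hashlib.sha256(tbs).hexdigest()  # what the CA would actually sign
print(tbs[0] != 0x30)  # True: first byte cannot be a DER SEQUENCE tag
```

The canonical (sorted, fixed-width) encoding means two parties building
the list independently sign identical bytes.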



Or a bad example. And that's the point - you want sufficient technical
review (e.g. an SDO ideally, but minimally m.d.s.p review).


SDO?  Unfamiliar with that TLA.

And why should Mozilla (and every other root program) be consulted to
unanimously preapprove such technical work?  This will create a massive
roadblock for progress.  I really see no reason to create another FIPS
140 style bureaucracy of meaningless rule enforcement (not to be
confused with the actual security tests that are also part of FIPS 140
validation).



Look, you could easily come up with a dozen examples of improved validation
methods - but just because they exist doesn't mean keeping the "any other
method" is good. And, for what it's worth, of those that did shake out of
the discussions, many of them _were_ insecure at first, and evolved through
community discussion.



Interestingly, the list of revocation checking methods supported by
Chrome (and proposed to be supported by future Firefox versions) is
essentially _empty_ now.  Which is completely insecure.




Here's one item no-one listed so far (just to demonstrate our collective
lack of imagination):



This doesn't need imagination - it needs solid review. No one is
disagreeing with you that there can't be improvements. But let's start with
the actual concrete matters at hand, appropriately reviewed by the
Mozilla-using community that serves a purpose consistent with the mission,
or doesn't pose risks to users.



Within *this thread* proposed policy language would have banned that.

And neither I, nor any other participant seemed to realize this specific
omission until my post this morning.




However the failure mode for "signing additional CA operational items"
would be a lot less risky and a lot less reliant on CA competency.



That is demonstrably not true. Just look at the CAs who have had issues
with their signing ceremonies. Or the signatures they've produced.


Did any of those involve erroneously signing non-certificates of a
wholly inappropriate data type?





It is restrictions for restrictions sake, which is always bad policy
making.



No it's not. You would have to reach very hard to find a single security
engineer would argue that a blacklist is better than a whitelist for
security. It's not - you validate your inputs, you don't just reject the
badness you can identify. Unless you're an AV vendor, which would explain
why so few security engineers work at AV vendors.


I am not an AV vendor.

Technical security systems work best with whitelists wherever possible.

Human-to-human policy making works best with blacklists wherever
possible.

Root inclusion policies are human-to-human policies.





If necessary, one could define a short list of technical characteristics
that would make a signed item non-confusable with a certificate.  For
example, it could be a PKCS#7 structure, or any DER structure whose
first element is a published specification OID nested in one or more
layers of SEQUENCE or SET tags, perhaps more safe alternatives could be
added to this.
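A minimal sketch of such a distinguishing check (a toy DER tag reader,
assuming definite-length encoding and doing no real error handling):

```python
def first_inner_tag(der):
    """Return the DER tag of the first element inside an outer SEQUENCE.
    Toy parser: assumes well-formed input with definite lengths."""
    assert der[0] == 0x30, "outer tag must be SEQUENCE"
    i = 2
    if der[1] & 0x80:                # long-form length octets
        i = 2 + (der[1] & 0x7F)
    return der[i]

# A TBSCertificate starts with [0] EXPLICIT version (0xA0) or, for v1
# certificates, the serialNumber INTEGER (0x02) ...
tbs_like = bytes([0x30, 0x05, 0xA0, 0x03, 0x02, 0x01, 0x02])
# ... while a "safe" structure as proposed would start with an OID (0x06).
oid_first = bytes([0x30, 0x05, 0x06, 0x03, 0x2A, 0x03, 0x04])
print(hex(first_inner_tag(tbs_like)))   # 0xa0
print(hex(first_inner_tag(oid_first)))  # 0x6
```

A verifier could refuse to treat anything whose first inner element is
an OID as a certificate, which is the non-confusability property argued
for above.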



You could try to construct such a definition - but that's a needless
technical complexity with considerable ambiguity for a hypothetical
situation that you are the only one advocating for, and using an approach
that has repeatedly lead to misinterpretations and security failures.



Indeed, and I was trying not to until forced by posts rejecting simply
saying 

Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-05 Thread Jakob Bohm via dev-security-policy

On 02/06/2017 17:12, Ryan Sleevi wrote:

On Fri, Jun 2, 2017 at 10:09 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On 02/06/2017 15:54, Ryan Sleevi wrote:

On Fri, Jun 2, 2017 at 9:33 AM, Peter Bowen <pzbo...@gmail.com> wrote:


On Fri, Jun 2, 2017 at 4:27 AM, Ryan Sleevi <r...@sleevi.com> wrote:
Yes, my concern is that this could make SIGNED{ToBeSigned} considered
misissuance if ToBeSigned is not a TBSCertificate.  For example, if I
could sign an ASN.1 sequence which had the following syntax:

TBSNotCertificate ::= SEQUENCE {
    notACertificate  UTF8String,
    COMPONENTS OF TBSCertificate
}

Someone could argue that this is mis-issuance because the resulting
"certificate" is clearly corrupt, as it fails to start with an
INTEGER.  On the other hand, I think that this is clearly not
mis-issuance of a certificate, as there is no sane implementation that
would accept this as a certificate.



Would it be a misissuance of a certificate? Hard to argue, I think.

Would it be a misuse of key? I would argue yes, unless the
TBSNotCertificate is specified/accepted for use in the CA side (e.g. IETF
WD, at the least).

As a practical matter, this largely only applies to the use of signatures
for which collisions are possible - since, of course, the TBSNotCertificate
might be constructed in such a way to collide with the TBSCertificate.
As a "assume a jackass genie is interpreting the policy" matter, what about
situations where a TBSNotCertificate has the same structure as
TBSCertificate? The fact that they are identical representations
on-the-wire could be argued as irrelevant, since they are non-identical
representations "in the spec". Unfortunately, this scenario has come up
once before already - in the context of RFC 6962 (and hence the
clarifications in the Baseline Requirements) - so it's not unreasonable a
scenario to expect.

The general principle I was trying to capture was one of "Only sign these
defined structures, and only do so in a manner conforming to their
appropriate encoding, and only do so after validating all the necessary
information. Anything else is 'misissuance' - of a certificate, a CRL, an
OCSP response, or a Signed-Thingy"



Thing is, there is still serious work involving the definition of
new CA-signed things, such as the recent (2017) paper on a super-
compressed CRL-equivalent format (available as a Firefox plugin).



This does not rely on CA signatures - but also perfectly demonstrates the
point - that these things should be getting widely reviewed before
implementing.



If you read the paper, it contains a proposal for the CAs to countersign
the computed super-crl to confirm that all entries for that CA match the
actual revocations and non-revocations recorded by that CA.  This is not
currently deployed, but is an example of something that CAs could safely
do using their private key, provided sufficient design competence by the
central super-crl team.

Another good example could be signing a "certificate white-list"
containing all issued but not revoked serial numbers.  Again, someone
(not a random CA) should provide a well thought out data format
specification that cannot be maliciously confused with any of the
current data types.
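To make the idea concrete, a sketch of what such a white-list might carry
(the OID arc, field names and layout are invented for illustration; no such
specification exists):

```python
import hashlib

# Invented example OID arc -- a real specification would register its own.
WHITELIST_FORMAT_OID = "1.3.6.1.4.1.99999.1"

def build_whitelist(serials):
    """Assemble a hypothetical 'issued but not revoked' white-list.
    Opening the signed structure with a distinguishing format OID
    (rather than the fields that open a TBSCertificate) is what keeps
    it from being maliciously confused with a certificate."""
    listed = sorted(hex(s) for s in serials)
    return {
        "format": WHITELIST_FORMAT_OID,   # first, distinguishing element
        "serials": listed,
        "digest": hashlib.sha256(",".join(listed).encode()).hexdigest(),
    }
```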





Banning those by policy would be as bad as banning the first OCSP
responder because it was not yet on the old list {Certificate, CRL}.



This argument presumes technical competence of CAs, for which collectively
there is no demonstrable evidence.


In this case, it would presume that technical competence exists at high
end crypto research / specification teams defining such items, not at
any CA or vendor.  For example, any such format could come from the
IETF, ITU-T, NIST, IEEE, ICAO, or any of the big crypto research centers
inside/outside the US (too many to enumerate in a policy).

Here's one item no-one listed so far (just to demonstrate our collective
lack of imagination):

Using the CA private key to sign a CSR to request cross-signing from
another CA (trusted or untrusted by Mozilla).



Functionally, this is identical to banning the "any other method" for
domain validation. Yes, it allowed flexibility - but at the extreme cost to
security.



However the failure mode for "signing additional CA operational items"
would be a lot less risky and a lot less reliant on CA competency.


If there are new and compelling things to sign, the community can review
them and the policy can be updated. I cannot understand the argument
against this basic security sanity check.



It is restrictions for restrictions sake, which is always bad policy
making.





Hence my suggested phrasing of "Anything that resembles a certificate"
(my actual wording a few posts up was more precise, of course).



Yes, and I think that wording is insufficient and dangerous, despite your
understandable goals, for the reasons I outlined.




If n

Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-02 Thread Jakob Bohm via dev-security-policy

On 02/06/2017 15:54, Ryan Sleevi wrote:

On Fri, Jun 2, 2017 at 9:33 AM, Peter Bowen <pzbo...@gmail.com> wrote:


On Fri, Jun 2, 2017 at 4:27 AM, Ryan Sleevi <r...@sleevi.com> wrote:
Yes, my concern is that this could make SIGNED{ToBeSigned} considered
misissuance if ToBeSigned is not a TBSCertificate.  For example, if I
could sign an ASN.1 sequence which had the following syntax:

TBSNotCertificate ::= {
notACertificate UTF8String,
COMPONENTS OF TBSCertificate
}

Someone could argue that this is mis-issuance because the resulting
"certificate" is clearly corrupt, as it fails to start with an
INTEGER.  On the other hand, I think that this is clearly not
mis-issuance of a certificate, as there is no sane implementation that
would accept this as a certificate.



Would it be a misissuance of a certificate? Hard to argue, I think.

Would it be a misuse of key? I would argue yes, unless the
TBSNotCertificate is specified/accepted for use in the CA side (e.g. IETF
WD, at the least).

As a practical matter, this largely only applies to the use of signatures
for which collisions are possible - since, of course, the TBSNotCertificate
might be constructed in such a way to collide with the TBSCertificate.
As a "assume a jackass genie is interpreting the policy" matter, what about
situations where a TBSNotCertificate has the same structure as
TBSCertificate? The fact that they are identical representations
on-the-wire could be argued as irrelevant, since they are non-identical
representations "in the spec". Unfortunately, this scenario has come up
once before already - in the context of RFC 6962 (and hence the
clarifications in the Baseline Requirements) - so it's not unreasonable a
scenario to expect.

The general principle I was trying to capture was one of "Only sign these
defined structures, and only do so in a manner conforming to their
appropriate encoding, and only do so after validating all the necessary
information. Anything else is 'misissuance' - of a certificate, a CRL, an
OCSP response, or a Signed-Thingy"



Thing is, there is still serious work involving the definition of
new CA-signed things, such as the recent (2017) paper on a super-
compressed CRL-equivalent format (available as a Firefox plugin).

Banning those by policy would be as bad as banning the first OCSP
responder because it was not yet on the old list {Certificate, CRL}.

Hence my suggested phrasing of "Anything that resembles a certificate"
(my actual wording a few posts up was more precise, of course).

Note that signing a wrong CRL or OCSP response is still bad, but not
mis-issuance.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Add definition of "mis-issuance"

2017-06-01 Thread Jakob Bohm via dev-security-policy

On 31/05/2017 18:04, Gervase Markham wrote:

It has been suggested we need a formal definition of what we consider
mis-issuance. The closest we have is currently a couple of sentence in
section 7.3:

"A certificate that includes domain names that have not been verified
according to section 3.2.2.4 of the Baseline Requirements is considered
to be mis-issued. A certificate that is intended to be used only as an
end entity certificate but includes a keyUsage extension with values
keyCertSign and/or cRLSign or a basicConstraints extension with the cA
field set to true is considered to be mis-issued."

This is clearly not an exhaustive list; one would also want to include
BR violations, RFC violations, and insufficient EV vetting, at least.

The downside of defining it is that CAs might try and rules-lawyer us in
a particular situation.

Here's some proposed text which provides more clarity while hopefully
avoiding rules-lawyering:

"The category of mis-issued certificates includes (but is not limited
to) those issued to someone who should not have received them, those
containing information which was not properly validated, those having
incorrect technical constraints, and those using algorithms other than
those permitted."



How about: Any issued certificate which violates any applicable policy,
requirement or standard, which was not requested by all its alleged
subject(s) or which should otherwise not have been issued, is by
definition mis-issued.  Policies and requirements include but are not
limited to this policy, the CCADB policy, the applicable CPS, and the
baseline requirements.  Any piece of data which technically resembles an
X.509 or PKCS#6 extended certificate, and which is signed by the CA
private key is considered an issued certificate.

(Note: I mention the ancient PKCS#6 certificate type because mis-issuing
those is still mis-issuance, even if there is no current reason to
validly issue those).

(Note: The last sentence above was phrased to try to cover semi-garbled
certificates, without accidentally banning things like CRLs and OCSP
responses).


Enjoy

Jakob


Re: Mozilla Policy and CCADB Disclosure scope

2017-05-23 Thread Jakob Bohm via dev-security-policy

On 23/05/2017 18:18, Ryan Sleevi wrote:

On Tue, May 23, 2017 at 11:52 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Note as this is about a proposed future policy, this is about validation
code updated if and when such a policy is enacted.  Current validation
code has no reason to check a non-existent policy.



Mozilla strives, to the best possible way, to be interoperable with other
vendors, and not introduce security risks that would affect others, nor
unduly require things that would inhibit others.

In this aspect, the proposal of TCSCs - and the rest of the radical changes
you propose - are incompatible with many other libraries.



NOT my proposal, I was trying to help out with technical details of
Matthew's proposal, that's all.


While it's true that Mozilla could change their code at any point, much
of the Web Platform's evolution - and in particular, TLS - has been
achieved through multi-vendor collaboration.



Which I repeatedly referred to, in my latest e-mail I phrased it as "If
and when" such a policy would be enacted.


This is why it's important, when making proposals, to not simply work on a
blank canvas and attempt to sketch something, but to be aware of the lines
in the ecosystem that exist and the opportunities for collaboration - and
the times in which it's important to "go it alone".



I fully agree with that, and wrote so.


What part of "Has DNS/e-mail name constraints to at least second-level

domains or TLDs longer than 3 chars", "Has DN name constraints that
limit at least O and C", "Has EKU limitations that exclude AnyEKU and
anything else problematic", "Has lifetime and other general constraints
within the limits of EE certs" AND "Has a CTS" cannot be detected
programmatically?



These are not things that can be reliably implemented across the ecosystem,
nor would they be reasonable costs to bear for the proposed benefits, no.



You seem keen to reject things out of hand, with no explanation.
Good luck convincing Matthew or others that way.




Or could this be solved by requiring such "TCSC light" SubCA certs to
carry a specific CAB/F policy OID with CT-based community enforcement
that all SubCA certs with this policy OID comply with the more stringent
non-computable requirements likely to be in such a policy (if passed)?



No.



I am trying to limit the scope of this to the kind of TCSC (Technically
Constrained SubCA) that Matthew was advocating for.  Thus none of this
applies to long lived or public SubCAs.

If an organization wants ongoing TCSC availability, they may subscribe
to getting a fresh TCSC halfway through the lifetime of the previous
one, to provide a constantly overlapping chain of SubCAs.



Except this doesn't meaningfully address the "day+1" issuance problem that
was highlighted, unless you are proposing that the non-nesting constraints
that I mentioned aren't relevant.


The idea would be: TCSC issued for BR maximum period (N years plus M
months), fresh TCSC issued every M months, customer can always issue up
to at least N years.

I do realize the M months in the BRs are for another business purpose
related to renewal payments, but because TCSCs issue certificates to
non-paying internal users, they don't need those months for the payment
use case.
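The arithmetic behind that overlap can be sketched as follows (N and M are
example values, not the actual BR figures):

```python
# Each TCSC is valid for N years + M months; a fresh one is issued every
# M months.  The holder then never has less than N years of issuable
# lifetime left on the newest TCSC.
N_YEARS, M_MONTHS = 2, 3                     # example values only
LIFETIME = N_YEARS * 12 + M_MONTHS           # per-TCSC validity, in months

def months_remaining(now: int) -> int:
    """Months left on the newest TCSC at month `now` (issuance at 0, M, 2M...)."""
    newest_issued = (now // M_MONTHS) * M_MONTHS
    return newest_issued + LIFETIME - now

# The holder never drops below N years of remaining validity.
assert all(months_remaining(t) >= N_YEARS * 12 for t in range(120))
```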





It would more be like disclaimer telling their customers that if they
issue a SHA-1 cert after 2016-01-01 from their SHA-256 TCSC, it probably
won't work in a lot of browsers, please for your own protection, issue
only SHA-256 or stronger certs.  So the incentive for the issuing CA is
to minimize tech support calls and angry customers.

If the CA fails to inform their customers, the customer will get angry,
but the WebPKI will be unaffected.



And I'm trying to tell you that your model of the incentives is wrong, and
it does not work like that, as can be shown by every other real world
deprecation.

If they made the disclaimer, and yet still 30% of sites had these, browsers
would not turn it off. As such, the disclaimer would be pointless - the
incentive structure is such that browsers aren't going to start throwing
users under the bus.

When the browser makes the change, the issuing CA does not get the calls.
The site does not get the calls. The browser gets the anger. This is
because "most recent to change is first to blame" - and it was the browser,
not the CA, that made the most recent change.

This is how it has worked out for every change in the past. And while I
appreciate your optimism that it would work with TCSCs, there's nothing in
this proposal that would change that incentive structure, such as to ensure
that you don't have 30% of the Internet doing "Whatever thing will be
deprecated", and as a consequence, _it will not be deprecated_.



OK, that is a sad state of affairs, that someone will have to solve for
this to fly.



One could also 

Re: Mozilla Policy and CCADB Disclosure scope

2017-05-23 Thread Jakob Bohm via dev-security-policy

On 23/05/2017 16:22, Ryan Sleevi wrote:

On Tue, May 23, 2017 at 9:45 AM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


* TCSCs can, by their existing definition, be programmatically
  recognized by certificate validation code e.g. in browsers and other
  clients.



In theory, true.
In practice, not even close.




Note as this is about a proposed future policy, this is about validation
code updated if and when such a policy is enacted.  Current validation
code has no reason to check a non-existent policy.

What part of "Has DNS/e-mail name constraints to at least second-level
domains or TLDs longer than 3 chars", "Has DN name constraints that
limit at least O and C", "Has EKU limitations that exclude AnyEKU and
anything else problematic", "Has lifetime and other general constraints
within the limits of EE certs" AND "Has a CTS" cannot be detected
programmatically?

Or could this be solved by requiring such "TCSC light" SubCA certs to
carry a specific CAB/F policy OID with CT-based community enforcement
that all SubCA certs with this policy OID comply with the more stringent
non-computable requirements likely to be in such a policy (if passed)?
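A sketch of how programmatic those checks could be, over a pre-parsed
description of a SubCA certificate (the field names, lifetime cap, and input
shape are all illustrative; a real checker would read the X.509 extensions
directly):

```python
ANY_EKU = "2.5.29.37.0"   # anyExtendedKeyUsage OID

def is_tcsc_light(sub_ca: dict) -> bool:
    """Illustrative versions of the checks listed above."""
    # Name constraints reach at least a second-level domain, or a TLD
    # longer than 3 characters.
    names_ok = bool(sub_ca["permitted_dns"]) and all(
        "." in n.lstrip(".") or len(n.lstrip(".")) > 3
        for n in sub_ca["permitted_dns"])
    # DN constraints limit at least O and C.
    dn_ok = {"O", "C"} <= set(sub_ca["permitted_dn_attrs"])
    # EKUs present, with AnyEKU excluded.
    eku_ok = bool(sub_ca["ekus"]) and ANY_EKU not in sub_ca["ekus"]
    # Lifetime within EE-cert limits (example cap, in months).
    life_ok = sub_ca["lifetime_months"] <= 27
    # Has SCTs, i.e. is CT logged.
    ct_ok = sub_ca["has_scts"]
    return names_ok and dn_ok and eku_ok and life_ok and ct_ok
```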


* If TCSCs are limited, by requirements on BR-compliant unconstrained
  SubCAs, to lifetimes that are the BR maximum of N years + a few months
  (e.g. 2 years + a few months for the latest CAB/F requirements), then
  any new CAB/F requirements on the algorithms etc. in SubCAs will be
  phased in as quickly as for EE certs.



I'm not sure what you're trying to say here, but the limits of lifetime to
EE certs are different than that of unconstrained subCAs (substantially)


I am trying to limit the scope of this to the kind of TCSC (Technically
Constrained SubCA) that Matthew was advocating for.  Thus none of this
applies to long lived or public SubCAs.

If an organization wants ongoing TCSC availability, they may subscribe
to getting a fresh TCSC halfway through the lifetime of the previous
one, to provide a constantly overlapping chain of SubCAs.





* If TCSCs cannot be renewed with the same public key, then TCSC issued
  EEs are also subject to the same phase in deadlines as regular EEs.



Renewing with the same public key is a problematic practice that should be
stopped.



Some other people seem to disagree, however in this case I am
constraining the discussion to a specific case where this would be
forbidden (And enforced via CT logging of the TCSC certs).  Thus no
debate on that particular issue.




* When issuing new/replacement TCSCs, CA operators should (by policy) be
  required to inform the prospective TCSC holders which options in EE
  certs (such as key strengths) will not be accepted by relying parties
  after certain phase-out dates during the TCSC lifetime.  It would then
  be foolish (and of little consequence to the WebPKI as a whole) if any
  TCSC holders ignore those restrictions.



This seems to be operating on an ideal world theory, not a real world
incentives theory.

First, there's the obvious problem that "required to inform" is
fundamentally problematic, and has been pointed out to you by Gerv in the
past. CAs were required to inform for a variety of things - but that
doesn't change market incentives. For that matter, "required to inform" can
be met by white text on a white background, or a box that clicks through,
or a default-checked "opt-in to future communications" requirement. The
history of human-computer interaction (and the gamification of regulatory
action) shows this is a toothless and not meaningful action.

I understand your intent is to be like "Surgeon General's Warning" on
cigarettes (in the US), or more substantive warnings in other countries,
and while that is well-intentioned as a deterrent - and works for some
cases - is to otherwise ignore the public health risk or to try to sweep it
under the rug under the auspices of "doing something".



It would more be like disclaimer telling their customers that if they
issue a SHA-1 cert after 2016-01-01 from their SHA-256 TCSC, it probably
won't work in a lot of browsers, please for your own protection, issue
only SHA-256 or stronger certs.  So the incentive for the issuing CA is
to minimize tech support calls and angry customers.

If the CA fails to inform their customers, the customer will get angry,
but the WebPKI will be unaffected.


Similarly, the market incentives are such that the warning will ultimately
be ineffective for some segment of the population. Chrome's own warnings
with SHA-1 - warnings that CAs felt were unquestionably 'too much' - still
showed how many sites were ill-prepared for the SHA-1 breakage (read: many).

Warnings feel good, but they don't do (enough) good. So the calculus comes
down to those making the decision - Gerv and Kathleen on behalf of Mozilla,
or folks like Andrew and I

Re: Mozilla Policy and CCADB Disclosure scope

2017-05-23 Thread Jakob Bohm via dev-security-policy
ny of the administrative burdens
imposed on public SubCAs.



Enjoy

Jakob


Re: Google Plan for Symantec posted

2017-05-22 Thread Jakob Bohm via dev-security-policy

On 22/05/2017 18:33, Gervase Markham wrote:

On 19/05/17 21:04, Kathleen Wilson wrote:

- What validity periods should be allowed for SSL certs being issued
in the old PKI (until the new PKI is ready)?


Symantec is required only to be issuing in the new PKI by 2017-08-08 -
in around ten weeks time. In the mean time, there is no restriction
beyond the normal one on the length they can issue. This makes sense,
because if certs issued yesterday will expire 39 months from yesterday,
then certs issued in 10 weeks will only expire 10 weeks after that - not
much difference.



Note that the plan (at least as I read it), involves two major phases:

1. The transition "Managed SubCAs", these will continue to chain to the
  old PKI during the transition, but it is possible for clients and root
  programs to limit the trust to those specific "Managed SubCAs" instead
  of the sprawling old certificate trees.  This does not involve CT
  checking in clients, just trust decisions.

2. The truly "new infrastructure", built properly to modern standards
  will not be ready until some time has passed, and will be a new root
  program applicant with new root CA certs.  Once those roots become
  accepted by multiple root programs (at least Google and Mozilla), the
  new root CAs can begin to issue via "new infrastructure" SubCAs that
  are signed by both "new root CAs" (for updated clients) and old root
  CAs (for old clients).


I prefer that this be on
the order of 13 months, and not on the order of 3 years, so that we
can hope to distrust the old PKI as soon as possible. I prefer to not
have to wait 3 years to stop trusting the old PKI for SSL, because a
bunch of 3-year SSL certs get issued this year.


If we want to distrust the old PKI as soon as possible, then instead of
trying to limit issuance period now, we should simply set a date after
which we are doing this, and require Symantec to have moved all of their
customers across to the new PKI by that time.

Google are doing a phased distrust of old certs, but they have not set a
date in their plan for total distrust of the old PKI. We should ask them
what their plans are for that.



I understood certs issued by the old systems (except the listed Managed
SubCAs) will be trusted only if issued and CT logged between 2016-06-01
and 2017-08-08, and will be subject to the BR lifetime requirements for
such certs.  Thus no such certs will remain trusted after approximately
2020-08-08 plus the slack in the BRs.

Clients without SCT checking (NSS ?) cannot check the presence of SCTs, 
but can still check the limited range of notBefore dates.
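That notBefore check, as I read the plan, reduces to a simple date-window
test (dates taken from the message above; a hypothetical client-side sketch):

```python
from datetime import date

# Legacy-PKI Symantec certs (outside the Managed SubCAs) would be trusted
# only if issued within this window, per the plan as read above.
WINDOW_START, WINDOW_END = date(2016, 6, 1), date(2017, 8, 8)

def legacy_cert_acceptable(not_before: date) -> bool:
    """True if the cert's notBefore falls inside the allowed window."""
    return WINDOW_START <= not_before <= WINDOW_END
```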



- I'm not sold on the idea of requiring Symantec to use third-party
CAs to perform validation/issuance on Symantec's behalf. The most
serious concerns that I have with Symantec's old PKI is with their
third-party subCAs and third-party RAs. I don't have particular
concern about Symantec doing the validation/issuance in-house. So, I
think it would be better/safer for Symantec to staff up to do the
validation/re-validation in-house rather than using third parties. If
the concern is about regaining trust, then add auditing to this.


Of course, if we don't require something but Google do (or vice versa)
then Symantec will need to do it anyway. But I will investigate in
discussions whether some scheme like this might be acceptable to both
the other two sides and might lead to a quicker migration timetable to
the new PKI.

Gerv




Enjoy

Jakob


Re: Google Plan for Symantec posted

2017-05-22 Thread Jakob Bohm via dev-security-policy

Comments inline

On 20/05/2017 16:49, Michael Casadevall wrote:

Comments inline.

On 05/19/2017 05:10 PM, Jakob Bohm wrote:

Suggested trivial changes relative to the proposal for Mozilla use:

3. All non-expired Symantec issued certificates of any kind (including
SubCAs and revoked certificates) shall be CT logged as modified by #4
below.  All Symantec referenced OCSP responders shall return SCTs for
all such certificates, if possible even for revoked certificates.  This
also applies to expired certificates that were intended for use with
validity extending timestamping, such as the code signing certificate
issued to Mozilla Corporation with serial number 25 cc 37 35 e9 ec 1f
c9 71 67 0e 73 e3 69 c7 91.  Independent parties or root stores may at
their option use this data to generate public trust whitelists.

   Necessity: Whitelists in various forms based on such CT log entries,
as well as the SCTs in OCSP responses can provide an alternative for
relying parties checking current certificates even if the cleanup at
Symantec reveals a catastrophic breach during the past 20+ years.

   Proportionality: This should be relatively easy for the legitimate
certificates issued by Symantec, since the underlying data is still
used for OCSP response generation.



Sanity check here, but I thought that OCSP-CT-Stapling required SCTs to
be created at the time of issuance. Not sure if there's a way to
backdate this requirement. If this is only intended for the new roots
then just a point of clarification.


4. All stated requirements shall also apply to S/MIME certificates.




I *really* like this since it solves the problem of S/MIME + CT, but I
think this has to get codified into a specification. My second thought
here though is that there's no way to independently check if the CT logs
correspond to reality unless you have the public certificate since the
hashed fields would cause the signature to break.

I'd love to see this go somewhere but probably needs a fair bit of
thought and possible use of a different CT log vs. the primarily webPKI
ones.



The ideas here are:

1. To establish a temporary ad-hoc solution that can be handled by
existing CT log software logging the redacted precertificates.  This is
so solving the Symantec problem won't have to wait for general
standardization, which has stalled on this issue.  A standardized form
would be more compact and involve at least one "CT Extension" attribute.

2. By definition, any redaction would prevent CT log watchers from
checking if the unredacted cert signatures are valid.  This is
unavoidable, but not a problem for any known good uses of CT logs.

3. The design is intended to ensure that any process seeing an actual
cert can check it against SCTs obtained in any way (e.g. present in
cert, present in OCSP response, direct CT query, ...) by forming at most
one candidate redacted form, using mostly code likely to be already
present in such processes.

4. The design is intended to prevent recovering redacted data by
dictionary attacks (= guess and check).  This means that for existing
certs without a strong nonce attribute, logging the signature over the
unredacted final cert is also out of the question, such old certs need
to be logged as precerts only.
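Point 4 can be demonstrated in a few lines: without a strong nonce, a hash
over a redacted label falls to guess-and-check (a toy sketch, not any actual
CT redaction format):

```python
import hashlib

def redact(label: bytes, nonce: bytes = b"") -> str:
    """Toy redaction: publish only a hash of the hidden label."""
    return hashlib.sha256(nonce + label).hexdigest()

published = redact(b"secret-host.example.com")       # no nonce bound in

# An attacker hashing a dictionary of plausible names recovers the
# "redacted" value -- hence old certs without a strong nonce attribute
# would need to be logged as precerts only.
guesses = [b"www.example.com", b"mail.example.com", b"secret-host.example.com"]
recovered = [g for g in guesses if redact(g) == published]

# With a random nonce bound into the cert, the same dictionary fails.
salted = redact(b"secret-host.example.com", nonce=b"\x00" * 16)
```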




5. All stated requirements shall also apply to SubCA certificates other
than the specially blessed "Managed CA" SubCAs.  These shall never be
redacted.  As a special exception, the root programs may unanimously on
a one-by-one basis authorize the signing of additional Managed SubCAs
and/or new infrastructure cross certificates, subject to full
validation and signing ceremonies.  The root programs will authorize
enough new infrastructure cross signatures if and when they include the
roots of the new infrastructure.



Believe this was already covered by the PKI concerns that Symantec would
not be allowed to use third-party validation. Not sure if we can
realistically do a technical measure here since if we put a NotBefore
check in NSS, we have no easy way to change it in the future if it
becomes necessary for a one-off.



This would be an administrative requirement not checked by client
software directly, except that client software can check for the
presence of SCTs in any new SubCAs, and root programs can check the logs
for non-approved SubCA issuance.



6. All stated requirements except premature expiry and the prohibition
against later issuance shall apply to delegated OCSP signing, CRL
signing and other such revocation/validation related signatures made
by the existing Symantec CAs and SubCAs, but after the first deadline
(end of August), such shall only be used for the continued provision of
revocation information, and shall have corresponding EKUs.  This
corresponds to standard rules for dead CA certs, but adds CT logging of
any delegated revocation signing certificates.  These shall never be
redacted.



I think this can be more easily put as "intermediate certificates
r

Re: Google Plan for Symantec posted

2017-05-19 Thread Jakob Bohm via dev-security-policy
ge at small cost to all
parties (including Symantec).  By revealing only a strong hash of the
full PKCS#7 structure (which includes a signature blob), no information
is leaked about the time stamped objects, as required by RFC3161.

8. As Mozilla products don't currently trust any code or object signing
certificates, Mozilla places no requirements on those, but observes
that other root stores may have such requirements.  Ditto for any other
certificate types not mentioned.

  Necessity: Since this imposes no measure, it is the smallest possible
action.

  Proportionality: This is the lightest possible touch.

9. Symantec shall be allowed and obliged to continue operation of the
special "managed signing" services for which it has in the past been
granted a technically enforced monopoly by various platform vendors,
but may delegate this ability to one or more of the vendor independent
Managed SubCA operators if legally and technically possible.  This is
for the good of the community (in particular the relying parties using 
affected platforms), as I don't think it affects any Mozilla products

(I don't think Fennec was ever released for those platforms).
 Google may provide requirements for the handling of Google Play
private keys managed by Symantec on behalf of content vendors.

  Necessity: End users of certain systems (such as Windows Mobile) are
prevented from installing any software not signed by a special Symantec
online procedure.  Symantec may have similar relationships with other
platforms, or may be the custodian of signing keys for legacy systems
where the prior CA operator function relied on Symantec validations and
has since been shut down.  This is one of the specific situations where 
Symantec is too big to fail.

  For the Google Play private key hosting ("managed signing"), Symantec
has put itself in contractual relationships requiring it to provide this 
service on an ongoing basis.  (Note to those unfamiliar: Each

Google Play content creator signs content with one or more self-signed
long term (25+ years) certificates, and Google products enforces that
content (including app) updates must be signed with the same such
certificate as version 1 of each piece of content.  Actual identity
validation occurs when version 1 is accepted into the Google Play
store.  Based on the technology used for its monopoly managed signing
services, Symantec convinced a number of content creators that it could
protect their private keys better than they could).

  Proportionality: This requires Symantec to continue to serve 
communities for which it has acquired an actual monopoly.  Symantec 
already charges for these services, thus the economic burden on Symantec 
is limited to the fact that fixed costs can no longer be shared with the 
CA activities that have been handed to Managed SubCAs.

Symantec is not prohibited from reasonable price hikes on these
services, thus further lessening their burden.  Eventually, these
services will need to be re-established in the new infrastructure once
it is ready.
  For the Google Play case, Symantec created these situations
themselves.



10. Symantec shall be allowed, for marketing purposes but not in legal
documents, to present itself as the vendor behind the Managed SubCA
issuances, and may collect a profit or loss on the fees for each
certificate request.  However, Symantec may not claim, nor actually
obtain, any ownership of, authority over, or affiliation with the
Managed SubCA operators, nor vice versa.  I believe Symantec is very
capable of formulating effective marketing materials etc. that comply
with this.
This permission is at the grace of the root stores and may be revoked
if further Symantec misbehavior occurs after 2017-05-20 at 00:00 PDT,
especially but not only if Symantec fails to file a Mozilla bug
admitting the incident before the latter of the independent reporting
of the incident or 48 hours after said incident.  Mozilla will
independently timestamp such reports using its existing mechanisms and
share such timestamps and reports with Google and other cooperating
root programs.


  Necessity: This is not necessary, but provides Symantec with
something to eat during the transition while providing the root stores
with a sanction short of distrust for minor infractions during the
transition.
  The statement of clear deadlines for self-reporting issues is due to
the lateness of recent Symantec reports and responses.

  Proportionality: This is mostly what would be implied by the plan,
except that some marketing shenanigans are tolerated.  The Mozilla
timestamping is imagined to simply be the time stamps and sequential
numbers that Mozilla already assigns to all bug reports.  The 48 hours
allowed post-incident even if someone else is quick to report the
incident is intended simply to give Symantec management enough time to
literally wake up, go to work, notice the misbehavior or other
incident and then quickly issue a report.  The exact number

Re: Google Plan for Symantec posted

2017-05-19 Thread Jakob Bohm via dev-security-policy

On 19/05/2017 22:04, Kathleen Wilson wrote:

On Friday, May 19, 2017 at 8:42:40 AM UTC-7, Gervase Markham wrote:


I have passed that document to Kathleen, and I hope she will be
endorsing this general direction soon, at which point it will no longer
be a draft.

Assuming she does, this will effectively turn into a 3-way conversation
between Symantec, Google and Mozilla, to iron out the details of what's
required, with the Google proposal as a base. (Which I'm fine with as a
starting point.)

Comments are therefore invited on what modifications to the plan or
additional requirements Mozilla might want to suggest/impose, and
(importantly) why those suggestions/impositions are necessary and
proportionate.




Gerv, thank you for all the effort you have been putting into this 
investigation into Symantec's mis-issuances, and in identifying the best way to 
move forward with the primary goal being to help keep end-users safe.

I fully support requiring Symantec to set up a new PKI on new infrastructure, 
and to transition to it in phases, in order to minimize the impact and reduce
the risk for end-users.

I think the general direction is correct, but I think there are a few details 
to be ironed out, such as:

- What validity periods should be allowed for SSL certs being issued in the old 
PKI (until the new PKI is ready)? I prefer that this be on the order of 13 
months, and not on the order of 3 years, so that we can hope to distrust the 
old PKI as soon as possible. I prefer to not have to wait 3 years to stop 
trusting the old PKI for SSL, because a bunch of 3-year SSL certs get issued 
this year.



I think this can be solved by having either the new Symantec PKI (if
trusted) or some other trusted CA cross sign the Managed SubCA, then
including that cross cert in the P11 object in Mozilla products.

This would disconnect the new certs (even if they have 35 months left)
from the old Symantec PKI for any client that includes the cross certs
in its standard store, allowing the old CA certs to be removed in the
very same release (unless of course there are pre-transition certs that
should still be trusted).
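The cross-signing idea can be sketched with OpenSSL (hypothetical names throughout): the same SubCA key and CSR are signed both by the old root and by a second root, yielding two CA certificates with identical subject and key, so a client shipping only the cross certificate can build chains without the old root.

```shell
# Two roots: the old PKI root and a cross-signing root.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=Old Root" -keyout old-root.key -out old-root.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=Cross Root" -keyout cross-root.key -out cross-root.crt

# One SubCA key/CSR, signed by both roots.
openssl req -newkey rsa:2048 -nodes -subj "/CN=Managed SubCA" \
  -keyout subca.key -out subca.csr
openssl x509 -req -in subca.csr -CA old-root.crt -CAkey old-root.key \
  -CAcreateserial -days 365 -out subca-from-old.crt
openssl x509 -req -in subca.csr -CA cross-root.crt -CAkey cross-root.key \
  -CAcreateserial -days 365 -out subca-cross.crt

# Each certificate chains to its respective root, so a store
# holding only cross-root.crt can drop the old root entirely.
openssl verify -CAfile old-root.crt subca-from-old.crt
openssl verify -CAfile cross-root.crt subca-cross.crt
```

A real cross-sign would also carry CA:TRUE basic constraints and matching extensions; those are omitted here for brevity.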


- Perhaps the new PKI should only be cross-signed by a particular intermediate 
cert of a particular root cert, so that we can begin to distrust the rest of 
the old PKI as soon as possible.



Again, the idea would be that once the new PKI cross signs the
transitional managed SubCAs, root stores can ship the new root CAs and
the new cross certs for the SubCAs while instantly distrusting the old
PKI.  In other words the Managed/transitional SubCAs would serve the
role of your proposed "particular intermediate cert".



- I'm not sold on the idea of requiring Symantec to use third-party CAs to 
perform validation/issuance on Symantec's behalf. The most serious concerns 
that I have with Symantec's old PKI is with their third-party subCAs and 
third-party RAs. I don't have particular concern about Symantec doing the 
validation/issuance in-house. So, I think it would be better/safer for Symantec 
to staff up to do the validation/re-validation in-house rather than using third 
parties. If the concern is about regaining trust, then add auditing to this.



That seems to be a matter of debate.  I have been arguing the same
point without success.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-05-19 Thread Jakob Bohm via dev-security-policy

On 19/05/2017 16:37, Gervase Markham wrote:

On 19/05/17 15:16, Inigo Barreira wrote:

What about those for gmail, Hotmail, etc.? Are they out of scope?


I'm not sure what you mean. If Gmail wants a TCSC for @gmail.com, they
can have one. They would presumably need to set the dirName to "" or
null, because no dirName can cover all of their customers, as their
customers don't represent Google?

Gerv



Or it could be O=GMail Canada Users, C=CA for @gmail.ca, with the SubCA
itself being O=Google, etc., for each country-specific GMail domain.
@gmail.com would be C=US, or some C= value indicating unknown country
(if permitted in the X.500 standards).

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-05-19 Thread Jakob Bohm via dev-security-policy

On 19/05/2017 16:15, Gervase Markham wrote:

On 19/05/17 14:58, Jakob Bohm wrote:

Because the O and other dirname attributes may be shown in an e-mail
client (current or future) as a stronger identity than the technical
e-mail address.


Do you know of any such clients?



No, but it would be similar to how Fx displays that field in EV certs,
so a future Thunderbird, or a non-Mozilla client could reasonably do
something similar, even at OV level.


Imagine a certificate saying that ge...@wosign.cn is "CN=Gervase
Markham, O=Mozilla Corporation, ST=California, C=US", issued by a
SubCA name constrained to "@wosign.cn", but not to any range of DNs.


Surely such a certificate would be misissued? Although I guess the issue
here is that we are excluding them from scope...

So the idea would be to say that dirName had to be constrained to either
be empty (is that possible?) or to contain a dirNames validated as
correctly representing an organization owning at least one of the domain
name(s) in the cert?



Rather: It should be constrained to an X.500 subtree identifying an
organization validated to at least BR compliant OV level (EV level if
SubCA notBefore after some policy date) as for a ServerAuth certificate
for the same domain names specified in the rfc822name restrictions.

This keeps it short and simple, and subject to well-understood policies.
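As a rough sketch of what such a paired constraint could look like in OpenSSL extension syntax (all names illustrative, not a policy proposal):

```shell
# Issue a hypothetical SubCA constrained to @example.com mail
# addresses plus a validated-organization dirName subtree.
cat > tcsc.cnf <<'EOF'
[ req ]
distinguished_name = dn
prompt = no
[ dn ]
CN = Example Constrained SubCA
[ tcsc_ext ]
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage = critical, keyCertSign, cRLSign
nameConstraints = critical, permitted;email:example.com, permitted;dirName:nc_dirname
[ nc_dirname ]
C = US
O = Example Corp
EOF

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -config tcsc.cnf -extensions tcsc_ext \
  -keyout tcsc.key -out tcsc.crt

# The printed extension shows both permitted subtrees.
openssl x509 -in tcsc.crt -noout -text | grep -A8 "Name Constraints"
```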

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-05-19 Thread Jakob Bohm via dev-security-policy

On 19/05/2017 15:13, Gervase Markham wrote:

On 08/05/17 11:32, Dimitris Zacharopoulos wrote:

On 8/5/2017 1:18 μμ, Gervase Markham wrote:

  + dirName entries scoped in the Organizational name and
location

Help me understand how dirName interacts with id-kp-emailProtection?


When the Subscriber belongs to an Organization that needs to be included
in the subjectDN.


Right, but why do we need name constraints here?

It seems to me that positive constraints on rfc822Name are sufficient
for an intermediate to be a TCSC.

Gerv



Because the O and other dirname attributes may be shown in an e-mail
client (current or future) as a stronger identity than the technical
e-mail address.

Imagine a certificate saying that ge...@wosign.cn is "CN=Gervase
Markham, O=Mozilla Corporation, ST=California, C=US", issued by a
SubCA name constrained to "@wosign.cn", but not to any range of DNs.

It would be problematic for such a SubCA to be considered a TCSC
excluded from all usual checks and balances.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-17 Thread Jakob Bohm via dev-security-policy

On 17/05/2017 13:29, Gervase Markham wrote:

On 16/05/17 18:04, Jakob Bohm wrote:

Could you please point out where in certdata.txt the following are
expressed, as I couldn't find it in a quick scan:

1. The date restrictions on WoSign-issued certificates.

2. The EV trust bit for some CAs.


I am surprised you are engaging in a discussion on this topic without
being pretty familiar with:
https://wiki.mozilla.org/CA/Additional_Trust_Changes



That is /human readable/ information, not /computer readable/ data that
can be imported by other libraries when those are used with the Mozilla
root program.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Jakob Bohm via dev-security-policy

On 16/05/2017 18:10, Peter Bowen wrote:

On Tue, May 16, 2017 at 9:00 AM, Jakob Bohm via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:

Your post above is the first response actually saying what is wrong
with the Microsoft format and the first post saying all the
restrictions are actually in the certdata.txt file, and not just in the
binary file used by the NSS library.


What "binary file" are you referring to?  NSS is distributed as source
and I'm unaware of any binary file used by the NSS library for trust
decisions.



Source code for Mozilla products presumably includes some binary files
(such as PNG files), so why not a binary database file holding the data
that end users can view (and partially edit) in the Mozilla product
dialogs.  The existence of a file named "generate_certdata.py", which is
not easily grokked, also confused me into thinking that certdata.txt was
some kind of extracted subset.

Anyway, having now looked closer at the file contents (which does look
like computer output), I have been unable to find a line that actually
expresses any of the already established "graduated trust" rules.

Could you please point out where in certdata.txt the following are
expressed, as I couldn't find it in a quick scan:

1. The date restrictions on WoSign-issued certificates.

2. The EV trust bit for some CAs.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Jakob Bohm via dev-security-policy

On 16/05/2017 17:10, Peter Bowen wrote:



On May 16, 2017, at 7:42 AM, Jakob Bohm via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:

On 13/05/2017 00:48, Ryan Sleevi wrote:


And in the original message, what was requested was
"If Mozilla is interested in doing a substantial public service, this
situation could be improved by having Mozilla and MDSP define a static
configuration format that expresses the graduated trust rules as data, not
code."

Mozilla does express such graduated trust rules as data, not code, when
possible. This is available with in the certdata.txt module data as
expressive records using the NSS vendor attributes.

Not all such requirements can be expressed as data, not code, but when
possible, Mozilla does. That consuming applications do not make use of that
information is something that consuming applications should deal with.



I suggest you read and understand the OP in this thread, which is
*entirely* about using the Mozilla Root Store outside Mozilla code.

Yet you keep posting noise about using the Mozilla store with Mozilla
code such as NSS, with Mozilla internal database formats, etc. etc.

Just above you commented "Not all such requirements can be expressed as
code", which is completely backwards thinking when the request is for
putting all additional conditions in an open database in a *stable*
data format that can be easily and fully consumed by non-Mozilla code.


Jakob,

What I think Ryan has been trying to express is his view that this request is 
not possible.  A *stable* data format is unable to express future graduated 
trust rules.

To see why Ryan likely has this view, consider the authroot.stl file used by 
Microsoft Windows.  The structure is essentially a certificate plus a set of 
properties.  The properties are name value pairs.  The challenge in using this 
file is that the list of properties keeps extending.  New property names are 
added on a fairly routine basis.  For example, the last update added 
NOT_BEFORE_FILETIME and NOT_BEFORE_ENHKEY_USAGE.  This is great — we now know 
that certain roots have one or both of these properties, which both represent 
some sort of restriction.  However, we have zero clue what they mean or how to 
process them.

Now consider certdata.txt, the Mozilla trust store format.  It is similarly 
extensible; after all, it is just a serialization of a PKCS#11 token.  PKCS#11 
has objects which each have attributes.  Mozilla certdata.txt could take the 
exact same path as authroot.stl and just add attributes for each new rule.  
Imagine a new attribute on CKO_NSS_TRUST class objects called 
CKA_NAME_CONSTRAINTS, containing DER-encoded NameConstraints.  If this 
were suddenly added, what would existing libraries do?  Probably just ignore 
it, because they don’t query for CKA_NAME_CONSTRAINTS.  Taking this to an 
extreme, certain objects could even have attributes like 
CKA_CONSTRAINT_METHOD with a value that is the name of a function.

While this would be stable and would express all the rules, it isn’t clear that 
this is valuable to anyone, because you need matching code to query the 
attributes and do something with the value.  It also can lead to a false sense 
of security because using a new certdata.txt with an old library will not 
actually implement the trust changes.

Does this help explain the problem?



Your post above is the first response actually saying what is wrong
with the Microsoft format and the first post saying all the
restrictions are actually in the certdata.txt file, and not just in the
binary file used by the NSS library.

This is much more constructive than anything Ryan posted in this thread.

Thanks for this.
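To make the "expressive records" point concrete: certdata.txt trust objects are line-oriented attribute/type/value triples, so a non-NSS consumer can read them with trivial text tools.  A minimal illustrative sketch (the real file also has multi-line octal values and more object classes):

```shell
# A certdata.txt-style trust object (sample data, not the real file).
cat > sample-certdata.txt <<'EOF'
CKA_CLASS CK_OBJECT_CLASS CKO_NSS_TRUST
CKA_LABEL UTF8 "Example Root CA"
CKA_TRUST_SERVER_AUTH CK_TRUST CKT_NSS_TRUSTED_DELEGATOR
CKA_TRUST_EMAIL_PROTECTION CK_TRUST CKT_NSS_MUST_VERIFY_TRUST
EOF

# Each record is "attribute  PKCS#11-type  value"; pull one value out.
awk '$1 == "CKA_TRUST_SERVER_AUTH" { print $3 }' sample-certdata.txt
# prints CKT_NSS_TRUSTED_DELEGATOR
```

An unknown future attribute (say, the hypothetical CKA_NAME_CONSTRAINTS) would parse fine but be silently ignored by code that never asks for it, which is exactly the failure mode described.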


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Jakob Bohm via dev-security-policy

On 13/05/2017 00:48, Ryan Sleevi wrote:

On Fri, May 12, 2017 at 6:02 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


This SubThread (going back to Kurt Roeckx's post at 08:06 UTC) is about
suggesting a good format for sharing this info across libraries though.
Discussing that on a list dedicated to a single library (such as NSS or
OpenSSL) would be pointless.



And in the original message, what was requested was
"If Mozilla is interested in doing a substantial public service, this
situation could be improved by having Mozilla and MDSP define a static
configuration format that expresses the graduated trust rules as data, not
code."

Mozilla does express such graduated trust rules as data, not code, when
possible. This is available with in the certdata.txt module data as
expressive records using the NSS vendor attributes.

Not all such requirements can be expressed as code, not data, but when
possible, Mozilla does. That consuming applications do not make use of that
information is something that consuming applications should deal with.



I suggest you read and understand the OP in this thread, which is
*entirely* about using the Mozilla Root Store outside Mozilla code.

Yet you keep posting noise about using the Mozilla store with Mozilla
code such as NSS, with Mozilla internal database formats, etc. etc.

Just above you commented "Not all such requirements can be expressed as
code", which is completely backwards thinking when the request is for
putting all additional conditions in an open database in a *stable*
data format that can be easily and fully consumed by non-Mozilla code.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Jakob Bohm via dev-security-policy

On 16/05/2017 15:23, Alex Gaynor wrote:

That's not an appropriate way to participate in a mailing list, please
communicate civilly.



Sorry about the flaming, but he was constantly derailing that
particular discussion with this misconception, and I was frankly
getting fed up.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-16 Thread Jakob Bohm via dev-security-policy

On 11/05/2017 18:42, Ryan Sleevi wrote:

On Thu, May 11, 2017 at 11:57 AM, Alex Gaynor via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Ryan,

I think you've correctly highlighted that there's a problem -- the Mozilla
CA store is "designed" to be consumed from NSS, and CA-specific
remediations are a part of that (hash algorithms, maximum certificate
lifetimes, and any number of other important technical controls).

Unfortunately, we're currently in a position where near as I can tell, most
code (except Go code :P) making HTTPS requests are using a Mozilla-derived
CA store, and OpenSSL's verifier, which only provides a subset of the
technical controls browsers implement. This is unfortunate, particular
because these clients also do not check CT, so it's entirely possible to
serve them certs which are not publicly visible. In a large sense, browsers
currently act as canaries-in-the-coalmine, protecting non-browser clients.

Like Cory, I help maintain non-browser TLS clients. To that end, I think
it'd be outstanding if as a community we could find a way to get more of
these technical controls into non-browser clients -- some of this is just
things we need to do (e.g. add hash algorithm and lifetime checking to
OpenSSL or all consumers of it),



Yes :) There's a significant amount that needs to happen in the third-party
verifiers to understand and appreciate the risk of certain behaviours ;)



other's need coordination with Mozilla's
root program, and I think Cory's proposal highlights one way of making that
happen.



Right, but these already flow into the NSS trust store - when appropriate.
I'm sure you can understand when a piece of logic is _not_ implemented in
NSS (e.g. because it's not generic beyond the case of browsers), that it
seems weird to put it in/expose it in NSS :)

To be clear: I'm not trying to suggest it's an entirely unreasonable
request, merely an explanation of the constraints around it and why the
current approach is employed that tries to balance what's right for Mozilla
users and the overall NSS using community :)



Can you please get it into your thick skull that this thread is NOT
ABOUT NSS, IT IS ABOUT ALL THE OTHER X.509 LIBRARIES that can be
configured to use a copy of the Mozilla Root store!


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: April CA Communication: Results

2017-05-15 Thread Jakob Bohm via dev-security-policy

On 15/05/2017 15:53, Doug Beattie wrote:


...
Yes, it is certainly a bit dated.  Outlook 2013 and 2016 are not listed along 
with more recent versions of iMail and Thunderbird.



I believe the point of the document was only to list what was needed to
get SHA256 compatibility.  So for each vendor, product and use, it
lists the incompatible products and the first fully compatible product.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Symantec: Update

2017-05-15 Thread Jakob Bohm via dev-security-policy

On 15/05/2017 22:06, Michael Casadevall wrote:

On 05/15/2017 09:32 AM, Jakob Bohm wrote:

This won't work for the *millions* of legitimate, not-misissued,
end certificates that were issued before Symantec began SCT
embedding (hopefully in the past) and haven't expired before such
an early deadline.



Sorry, I could have been more clear here. What I'm proposing is that
after a specific TBD NotBefore date, we require SCTs to be in place on
the certificate to be trusted. Certificates from before that date
would remain trusted as-is (pending any reduction of expiration time).



Ok, that's much better.


I don't know if NSS has support for checking of SCTs (I can't pull the
source at the moment to check), but it should fail if the SCT is
missing, and otherwise behave like OCSP validation.


Also, since both Mozilla and Debian-derived systems such as Ubuntu
use the Mozilla store as the basis for S/MIME checking, it is
worth noting that CT-logging of S/MIME end certs under the current
Google- dominated specifications is a guaranteed spam disaster, as
it would publish all the embedded e-mail addresses for easy
harvesting.



I didn't consider the S/MIME use case here. A brief look at the root
store I'd be fine with the SCT restriction only applying when looking
at CKA_TRUST_SERVER_AUTH, and not in other cases. Looking at certdata,
it looks like at least some of the current Verisign/Symantec roots
have both the S/MIME and server auth bits enabled.

While I feel CT would be a nice thing for S/MIME, unfortunately, I
have to agree with this point that we don't need to make spammers
lives easier. That being said, part of me wonders if there would be
other undisclosed intermediates if one could easily evaluate S/MIME
issuances ...


Mandating the X509v3 extension for TLS certificates means that
downstream servers don't have to be updated for CT awareness, and
we should never be in a case where a Mozilla product is accepting
a certificate that we can't independently review at a later point
via the CT logs. It should also prevent an undisclosed
intermediate from going undetected (as we've seen with Issue
Y).



However it would mandate that they be updated with new
certificates instead.  A lot easier, but still a mountain not
easily moved.


See above on NotBefore.




I'd also like to add the following to the transition plans: -
Limit certificate expiration to nine months from the existing
roots for new certificates.


I strongly believe the "9 month" rule mysteriously proposed but
never explained by Google was designed specifically to make buying
certs from Symantec all but worthless, chasing away all their
customers.  People *paying* for certificates generally don't want
to buy from anyone selling in increments of less than 1 year,
preferably 2 or 3.  "9 months" is an especially bad duration, as it
means the renewal dates and number of renewals per fiscal year will
fluctuate wildly from an accounting perspective.



I can see the point here, but I'm not sure I agree. Every time we keep
digging, we keep finding more and more problems with these roots.
WebPKI depends on all certificates in the root store being
trustworthy, and Symantec as a whole has not exactly shown themselves
to be responsive or willing to communicate publicly on the various
issues brought up on the list.



Yes, that seems to be the trend.  But it has nothing to do with if the
"9 month" rule or some other measures are the best remedy.


There's a decent argument to be had to simply disallow new issuance
from the existing roots and allow the current certificates to age out
(in which case imposing SCT embedding as I propose is simple), but I'm
not sure we've gotten a complete picture of how far this rabbit hole
goes.



That wouldn't work, see below.


There's been a continual pattern of "this is everything", and then we
find another bunch of misissued certificates/undisclosed subCAs/etc.
Can we honestly say that we're comfortable with allowing these roots
to still be active at all?


- The above SCT requirement shall come into effect for the old
roots no less than three months from the date the proposal is
ratified. - Create a whitelist of intermediate certificates from
the root that can continue issuing certificates, but cutting off
RAs after an initial six month time period


Are there any RA's left for Symantec?



TBH, I'm not sure. I think Gervase asked for clarification on this
point, but its hard to keep track of who could issue as an RA. I know
quite a few got killed, but I'm not sure if there are any other subCAs
based off re-reading posts in this thread.



RAs (external companies that can decide if Symantec itself should issue
a cert) are completely different from external SubCAs (external
companies that have their own CA and a certificate chain back to a
Symantec root), which are different from internal SubCAs (CA
certificates for Symantec controlled keys, such as the SubCA that
signed 

Re: April CA Communication: Results

2017-05-15 Thread Jakob Bohm via dev-security-policy

On 15/05/2017 15:26, Gervase Markham wrote:

On 15/05/17 14:19, Doug Beattie wrote:

https://support.globalsign.com/customer/portal/articles/1216323


Thanks, Doug. There's no date on that doc - are you able to say when it
was written?

Gerv



I believe it is a "live" doc, that was regularly updated, at least
while relevant changes were happening in the world.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-15 Thread Jakob Bohm via dev-security-policy

On 15/05/2017 15:19, Gervase Markham wrote:

On 12/05/17 09:18, Cory Benfield wrote:

I try not to decide whether there is interest in features like this:
if they’re easy I’d just implement them and let users decide if they
want it. That’s what I’d be inclined to do here. If Mozilla added
such a flag, I’d definitely be open to adding an extra certifi
bundle. Certifi currently already ships with two bundles (one
labelled “weak”, which includes 1024-bit roots to work around
problems with older OpenSSLs), so we could easily add a third called
“strong” or “pedantic” or “I hate CAs” or something that removes any
CA that is subject to graduated trust in Firefox.


If people actually care enough to make a root store choice, should we be
encouraging them instead to use a store containing only the CA they care
about for the connection they are making (and perhaps a backup)? In
other words, is some sort of easy-to-use root store filtering/splitting
tool a better solution to this issue?



That obviously wouldn't be any good where the intent is to accept
general third parties (just like the intent in a Browser).

It would also recreate the specific problem of entities randomly stuck
with specific historic roots, which is part of our current Symantec
dilemma.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Symantec: Update

2017-05-15 Thread Jakob Bohm via dev-security-policy
that it's damn hard to
understand when exactly a certificate is good or when it expires, since
the dates in the X509 certificate don't necessarily correspond with
reality. Simply capping Symantec certificates to nine months puts us in
a position where moving to a new DV/EV root would be required for them
to remain competitive, while not drastically affecting the ecosystem as
a whole.



For starters, we can demand that Symantec never issues misdated
certificates lest they be thrown out instantly.  So far the only
misdatings known were a very few that were well documented with very
good technical excuses.

Forcing the stand-up of new roots operated by competent management and
staff is indeed good, but to do that safely (i.e. without false
security theatrics and same old flaws) would require that Symantec
spends several months to get their house in order before generating
the new root keys.


Maybe I'm off-kilter here, but I think this proposal would help keep
impact on WebPKI to a minimum but light a fairly serious fire to get
users moved to the new root stores ASAP. Please let me know if I am
seriously off base with my understanding of the situation or the
technologies involved; WebPKI is a complicated thing to understand :)



The fundamental questions are:

Should we light a fire under the sloppy Symantec management or under
their innocent users?

And should we allow enough of Symantec to keep standing to not leave
those users who cannot switch stranded with nowhere to go but a 404
webpage?



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-12 Thread Jakob Bohm via dev-security-policy

On 12/05/2017 23:45, Ryan Sleevi wrote:

On Fri, May 12, 2017 at 2:15 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On 12/05/2017 20:43, Ryan Sleevi wrote:

On Fri, May 12, 2017 at 1:50 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Could something be derived from / based on the ASN.1 format apparently
used by Microsoft in its root store, with OpenSSL/Mozilla OIDs added
for things that have no Microsoft notation yet.



Why? It's a poor format.



Another starting point (if not the same) could be the "trusted
certificate" format that some openssl commands can generate.



Why? It's a poor format.

You missed that NSS already has these expressions in the form that is
appropriate for NSS. Why change?



The topic of this thread is to get the information in a format
appropriate for use in *other* libraries, such as OpenSSL or
BouncyCastle, both of which are used in Android.



I'm afraid that may be misstating things. The topic is to get the
information at all - which, in cases, it is made available in the NSS trust
DB.

How that is exported is something better suited for those applications, not
this list or discussion. The discussion here is whether that information is
consistently made available in the NSS trust DB (which has its own format)
at all.

I can see how those may be confusing, but hopefully with that clarification
you can understand the difference between discussing format versus
discussing functionality.



This subthread (going back to Kurt Roeckx's post at 08:06 UTC) is about
suggesting a good format for sharing this info across libraries though.
Discussing that on a list dedicated to a single library (such as NSS or
OpenSSL) would be pointless.

I am trying not to be overly technical in my suggestions, using
descriptive names for the formats rather than going into bits, bytes,
and source code.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-12 Thread Jakob Bohm via dev-security-policy

On 12/05/2017 20:43, Ryan Sleevi wrote:

On Fri, May 12, 2017 at 1:50 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Could something be derived from / based on the ASN.1 format apparently
used by Microsoft in its root store, with OpenSSL/Mozilla OIDs added
for things that have no Microsoft notation yet.



Why? It's a poor format.



Another starting point (if not the same) could be the "trusted
certificate" format that some openssl commands can generate.



Why? It's a poor format.

You missed that NSS already has these expressions in the form that is
appropriate for NSS. Why change?



The topic of this thread is to get the information in a format
appropriate for use in *other* libraries, such as OpenSSL or
BouncyCastle, both of which are used in Android.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Require CAs to operate in accordance with their CPs and CPSes

2017-05-12 Thread Jakob Bohm via dev-security-policy

On 12/05/2017 15:21, Gervase Markham wrote:

Mozilla policy requires that certificates issued in contravention of a
CA's CP/CPS should be revoked. Other than that, Mozilla policy does not
directly require that a CA operate in accordance with its CP and CPS. We
require this indirectly because the audits that we require, require it.
This perhaps surprising omission was brought to light by the Let's
Encrypt blocklist incident. Discussion:
https://groups.google.com/forum/#!topic/mozilla.dev.security.policy/_pSjsrZrTWY

The proposal is to have Mozilla policy directly require that CAs operate
in accordance with the appropriate CP/CPS for the root(s) in our store
on an ongoing basis.

Specifically, we could add text to the top of section 5.2 ("Forbidden
and Required Practices"):

"CA operations MUST at all times be in accordance with the applicable CP
and CPS."



Perhaps tweak the wording to make the document submitted to the CCADB
binding, rather than any CP/CPS published elsewhere.


This is: https://github.com/mozilla/pkipolicy/issues/43

---

This is a proposed update to Mozilla's root store policy for version
2.5. Please keep discussion in this group rather than on Github. Silence
is consent.

Policy 2.4.1 (current version):
https://github.com/mozilla/pkipolicy/blob/2.4.1/rootstore/policy.md
Update process:
https://wiki.mozilla.org/CA:CertPolicyUpdates




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Configuring Graduated Trust for Non-Browser Consumption

2017-05-12 Thread Jakob Bohm via dev-security-policy

On 12/05/2017 10:06, Kurt Roeckx wrote:

On 2017-05-11 17:57, Alex Gaynor wrote:

Ryan,

I think you've correctly highlighted that there's a problem -- the
Mozilla CA store is "designed" to be consumed from NSS, and CA-specific
remediations are a part of that (hash algorithms, maximum certificate
lifetimes, and any number of other important technical controls).

Unfortunately, we're currently in a position where, as near as I can
tell, most code (except Go code :P) making HTTPS requests is using a
Mozilla-derived CA store and OpenSSL's verifier, which only provides a
subset of the technical controls browsers implement. This is
unfortunate, particularly because these clients also do not check CT,
so it's entirely possible to serve them certs which are not publicly
visible. In a large sense, browsers currently act as
canaries-in-the-coalmine, protecting non-browser clients.

Like Cory, I help maintain non-browser TLS clients. To that end, I
think it'd be outstanding if as a community we could find a way to get
more of these technical controls into non-browser clients -- some of
this is just things we need to do (e.g. add hash algorithm and lifetime
checking to OpenSSL or all consumers of it), others need coordination
with Mozilla's root program, and I think Cory's proposal highlights one
way of making that happen.


From past discussion on the OpenSSL list, I understand that we want to
support a trust store that supports all such kind of attributes. Some
things like for what it's trusted are currently supported by using an
X509_AUX structure instead of an X509 structure but then OpenSSL is the
only thing that can read it, so this isn't really used.

What we need is a format that can be used by all libraries. It probably
needs to be extensible. It should probably support both all
certificates in one file and each certificate in a separate file.



Could something be derived from / based on the ASN.1 format apparently
used by Microsoft in its root store, with OpenSSL/Mozilla OIDs added
for things that have no Microsoft notation yet.

Another starting point (if not the same) could be the "trusted
certificate" format that some openssl commands can generate.

Ideally in the future, code support for a new restriction type can be
implemented (in both NSS and other libraries) before the community
decision to enforce that restriction against a given CA comes into
effect.  For example, some time has already passed since Google
proposed their set of Symantec restrictions, but the trigger has not
been pulled yet on any of those.
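As a sketch of what such an extensible, library-neutral trust record
could look like (all field names here are invented for illustration;
none of this comes from NSS, OpenSSL, Microsoft, or this thread):

```python
from datetime import datetime, timezone

# One possible per-root trust record. The point is that unknown keys
# are simply ignored, so a new restriction type can ship in the data
# before any library enforces it.
record = {
    "sha256_fingerprint": "AB:CD:...",            # identifies the root
    "trust": ["serverAuth", "emailProtection"],   # trust bits
    "distrust_after": "2017-12-01T00:00:00Z",     # graduated-trust cutoff
    "require_ct": True,                           # example extra restriction
}

def cert_trusted_for(rec, usage, issued):
    """Apply the record to a leaf certificate issued at `issued`."""
    if usage not in rec.get("trust", []):
        return False
    cutoff = rec.get("distrust_after")
    if cutoff is not None:
        cutoff_dt = datetime.strptime(
            cutoff, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
        if issued >= cutoff_dt:
            return False                 # issued after graduated distrust
    return True

print(cert_trusted_for(record, "serverAuth",
                       datetime(2017, 6, 1, tzinfo=timezone.utc)))  # True
print(cert_trusted_for(record, "serverAuth",
                       datetime(2018, 1, 1, tzinfo=timezone.utc)))  # False
```

A real format would of course need an agreed schema and encoding; the
sketch only shows how per-CA restrictions could travel with the store.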

Just for clarity, could you confirm that the current (1.0.2 / 1.1.0)
OpenSSL verifiers do not check if issuing CA has a conflicting EKU list?


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-05-09 Thread Jakob Bohm via dev-security-policy

On 08/05/2017 12:16, Gervase Markham wrote:

On 05/05/17 22:21, Jakob Bohm wrote:

The issue would be implementations that only check the EE cert for
their desired EKU (such as ServerAuth checking for a TLS client or
EmailProtection checking for a mail client).  In other words, relying
parties whose software would accept a chain such as

root CA (no EKUs) => SubCA (EmailProtection) => EE cert (ServerAuth).


Do you know of any such implementations?


I am not sure.  I suspect such simple implementations (that only check
for the specifically desired EKU in the EE cert) were common in the
past, and I don't know if all implementations have switched to the
interpretation that CA EKUs act as constraints on child EKUs.

This simple kind of implementation would correspond to interpreting the
EKUs in a CA cert as describing the abilities of the CA cert itself
(i.e. it could reasonably list only CA-related uses such as CertSign,
CRLSign, OCSPSign).  (Names not checked for typos.)
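The two interpretations can be contrasted with toy data (purely
hypothetical code, not any library's actual verifier):

```python
# Two ways a verifier can treat Extended Key Usage (EKU) in a chain.
# Certificates are modeled as dicts; an eku of None means the
# certificate asserts no EKU restriction. Names are illustrative only.

def ee_only_check(chain, desired_eku):
    """Naive check: inspect only the end-entity certificate."""
    ee = chain[-1]
    return ee["eku"] is None or desired_eku in ee["eku"]

def chained_check(chain, desired_eku):
    """Constraint interpretation: every cert's EKU set (when present)
    must also permit the desired usage."""
    return all(cert["eku"] is None or desired_eku in cert["eku"]
               for cert in chain)

# The chain from the discussion:
# root CA (no EKUs) => SubCA (EmailProtection) => EE cert (ServerAuth)
chain = [
    {"name": "root CA", "eku": None},
    {"name": "SubCA",   "eku": {"emailProtection"}},
    {"name": "EE",      "eku": {"serverAuth"}},
]

print(ee_only_check(chain, "serverAuth"))  # True  - naive verifier accepts
print(chained_check(chain, "serverAuth"))  # False - constraint-aware rejects
```

The naive variant accepts exactly the kind of chain described above,
which is why it matters whether such implementations still exist.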




One other question: Does your proposal allow a TCSC that covers both
ServerAuth and EmailProtection for the domains of the same organization?


I don't believe my proposal forbids this. Do you think it should?



These questions were directed at Dimitris' wording.


Does Mozilla as a Browser implementer have any policy or technical
requirements on certificates that Mozilla products can use for
ClientAuth


No policy requirements to my knowledge. There may be technical
requirements (e.g. now we've turned off SHA-1 support, I doubt that
works with ClientAuth either).





Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-05-05 Thread Jakob Bohm via dev-security-policy

On 05/05/2017 22:45, Dimitris Zacharopoulos wrote:



On 5/5/2017 10:58 μμ, Peter Bowen wrote:

On Fri, May 5, 2017 at 11:58 AM, Dimitris Zacharopoulos via
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:


On 5/5/2017 9:49 μμ, Peter Bowen via dev-security-policy wrote:

On Fri, May 5, 2017 at 11:44 AM, Dimitris Zacharopoulos via
dev-security-policy <dev-security-policy@lists.mozilla.org> wrote:

Looking at https://github.com/mozilla/pkipolicy/issues/69

do you have a proposed language that takes all comments into account?
From
what I understand, the Subordinate CA Certificate to be considered
Technically Constrained only for S/MIME:

   * MUST include an EKU that has the id-kp-emailProtection value AND
   * MUST include a nameConstraints extension with
     o a permittedSubtrees with
       + rfc822Name entries scoped in the Domain (@example.com) or
         Domain Namespace (@example.com, @.example.com) controlled by
         an Organization and
       + dirName entries scoped in the Organizational name and
         location
     o an excludedSubtrees with
       + a zero-length dNSName
       + an iPAddress GeneralName of 8 zero octets (covering the IPv4
         address range of 0.0.0.0/0)
       + an iPAddress GeneralName of 32 zero octets (covering the
         IPv6 address range of ::0/0)

Why do we need to address dNSName and iPAddress if the only EKU is
id-kp-emailProtection?

Can we simplify this to just requiring at least one rfc822Name entry
in the permittedSubtrees?


I would be fine with this but there may be implementations that
ignore the
EKU at the Intermediate CA level.

I've only ever heard of people saying that adding EKU at the
intermediate level breaks things, not that things ignore it.


You are probably right. Two relevant threads:

 * https://www.ietf.org/mail-archive/web/pkix/current/msg33507.html and
 * an older one from year 2000
   (https://www.ietf.org/mail-archive/web/pkix/current/msg06821.html)

I don't know if all implementations doing path validation use the EKUs
at the CA level, but it seems that the most popular applications do.



The issue would be implementations that only check the EE cert for
their desired EKU (such as ServerAuth checking for a TLS client or
EmailProtection checking for a mail client).  In other words, relying 
parties whose software would accept a chain such as


root CA (no EKUs) => SubCA (EmailProtection) => EE cert (ServerAuth).




So, if we want to align with both the CA/B
Forum BRs section 7.1.5 and the Mozilla Policy for S/MIME, perhaps we
should
keep the excludedSubtrees.

The BRs cover serverAuth.


Of course they do, I was merely trying to re-use the same language for
S/MIME usage :)


Dimitris.


If you look at
https://imagebin.ca/v/3LRcaKW9t2Qt, you will see that TCSC will end up
being two independent tests.




One other question: Does your proposal allow a TCSC that covers both
ServerAuth and EmailProtection for the domains of the same organization?

Or put another way, would your proposed language force an organization
wanting to run under its own TCSC(s) to obtain two TCSCs, one for their
S/MIME needs and another for their TLS needs?

P.S.

Does Mozilla as a Browser implementer have any policy or technical
requirements on certificates that Mozilla products can use for
ClientAuth (e.g. does Firefox only offer certs with the ClientAuth (or
no) EKU when prompting a user which cert to send to a webserver,
similar for Thunderbird doing ClientAuth to a TLS protected e-mail
server).


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Symantec: Draft Proposal

2017-05-05 Thread Jakob Bohm via dev-security-policy

On 05/05/2017 17:37, Gervase Markham wrote:

On 04/05/17 19:30, Jakob Bohm wrote:

1. Issue D actually seems to conflate three *completely different*
  issues:


Are you sure you are not referring to the Issues List document here
rather than the proposal?



I am referring to the "summary" of D in the proposal, which talks about 
"Test Certificate Misissuance - issuance of several thousand test 
certs...", the descriptions I have found in the newsgroup talk about the 
separate situations I call D1, D2 and D3, each of which was different in 
their degree of misissuance and in their quantity.


I think summarizing it as several thousand misissuances is an
exaggeration, and suggests that problems in one part also occurred in
another.


From my understanding of past discussions:

D1 had actual misissuance, violation of procedures and apparently no
separation of duties.  But it was a small number of certs, the private
keys never left Symantec, the certs were quickly revoked, it was years
ago and other concrete measures were taken.

D2 only violated a form requirement (that an actual company name should
be in the O field), but it was a deliberate violation of procedure, a
lack of RA oversight, collusion between separated duties at the RA and
a large number of certificates.  And it was relatively recent.

D3 had actual misissuance (where CrossCert didn't control the test
domains) a deliberate violation of procedure, a lack of RA oversight,
collusion between separated duties at the RA and it was relatively
recent.  But it was apparently a smallish number of certificates (I
don't think there was an actual counting of how many certs were
actually in category D3 versus false positives from issue D2
and valid uses of the word test).

Please correct me if I missed some other test certificate misissuance 
incidents.




2. If the remaining unconstrained SubCAs are operated by Symantec and
  subject to (retroactive if necessary) compliance audits showing that
  they don't issue certs that could not (under the BR and Mozilla
  policies) be issued from a public Symantec CA by an "Enterprise RA"
  (as defined in the BRs), could those SubCAs not simply be
  reclassified as "public SubCAs" for Mozilla/BR policy purposes while
  remaining further usage limited by actual Symantec practices and
  contractual arrangements beyond the BR/Mozilla policies?


I'm afraid I just don't understand this.



Symantec has claimed that some of the remaining unconstrained SubCAs
identified before my post (they had not yet commented on the most
recent batch found) are hosted within Symantec's infrastructure, which
means that Symantec probably has some direct control over issuance
limitations, as well as complete logs of all issued certs.  This could
mean that reinterpreting their status as "public SubCAs subject to
Symantec policies and audits" rather than "flawed TCSCs lacking proper
'technical' constraints" could make them compliant.

Since such a formal/legal reinterpretation of the ground facts would be
retroactive going back from some time in May 2017 to when each SubCA
was issued, the related rephrased policy documents and additional BR
etc. audits would have to be similarly retroactive.

The goal of such a paperwork exercise would be to avoid revocation and
replacement of the EE certs issued from the SubCAs thus rescued.



   - Is it really necessary to outsource this to bring the Symantec PKI
under control?  Or was this simply copy/pasted from the
WoSign/StartCom situation?


Nothing like this was proposed for WoSign/StartCom.


I seem to recall that making WoSign/StartCom a (possibly disguised)
reseller of certs from another CA was a suggestion made as to what
WoSign/StartCom could do to keep their customer relationships during
their minimum 1 year of complete distrust.




   - If this is outsourced as suggested, how can/should Symantec
continue to serve customers wanting certificates that chain to
older CA certs in the old hierarchy.


The old cross-signs the new.


Some of the examples I mention seem to have tree height or other
technical limitations barring this.  Which means that to serve those
segments, Symantec would need to operate the core of their CA
infrastructure while not having the large volume of WebPKI and S/MIME
certificates to fund basic operations.

Also some of these operate in peculiar (CP specified) modes that
most/all other CAs don't have ready systems to handle.  One variant
involve submitting the item to be signed to Symantec for (automated
and/or manual) inspection of the item, issuance of a one-off
certificate for signing that item followed by the immediate destruction
of the private key (this allows individual signatures to be revoked by
revoking the associated one-off cert).
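A minimal conceptual model of that one-off-certificate scheme (no real
cryptography is performed; the class and all names are invented for
illustration):

```python
import hashlib
import os

# Model: each item is signed under a one-off certificate whose private
# key is destroyed immediately after use, so revoking that certificate
# revokes exactly that one signature.
class OneOffSigningCA:
    def __init__(self):
        self.revoked = set()

    def sign_item(self, item: bytes):
        # 1. (Automated and/or manual inspection of `item` happens here.)
        serial = os.urandom(8).hex()     # serial of the one-off cert
        # A hash stands in for a real signature in this model.
        signature = hashlib.sha256(serial.encode() + item).hexdigest()
        # 2. The private key for `serial` is destroyed at this point;
        #    only the signature and the certificate survive.
        return serial, signature

    def revoke_signature(self, serial):
        self.revoked.add(serial)         # revokes this one signature only

    def signature_valid(self, serial, item, signature):
        if serial in self.revoked:
            return False
        return hashlib.sha256(serial.encode() + item).hexdigest() == signature

ca = OneOffSigningCA()
serial, sig = ca.sign_item(b"some app binary")
print(ca.signature_valid(serial, b"some app binary", sig))  # True
ca.revoke_signature(serial)
print(ca.signature_valid(serial, b"some app binary", sig))  # False
```

This is why the scheme is hard for other CAs to take over: it requires
per-signature issuance and revocation machinery, not just certificate
issuance.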





   - Could some of the good SubCAs under the "Universal" and "Georoot"
program be salvaged by signing 

Re: Symantec: Draft Proposal

2017-05-04 Thread Jakob Bohm via dev-security-policy

On 01/05/2017 16:16, Gervase Markham wrote:

Here is my analysis and proposal for what actions the Mozilla CA
Certificates module owner should take in respect of Symantec.

https://docs.google.com/document/d/1RhDcwbMeqgE2Cb5e6xaPq-lUPmatQZwx3Sn2NPz9jF8/edit#

Please discuss the document here in mozilla.dev.security.policy. A good
timeframe for discussion would be one week; we would aim to finalise the
plan and pass it to the module owner for a decision next Monday, 8th
May. Note that Kathleen is not around until Wednesday, and may choose to
read rather than comment here. It is not a given that she will agree
with me, or the final form of the proposal :-)



Some notes on the text now in the document:

1. Issue D actually seems to conflate three *completely different*
  issues:

  D1) Misissuance of actual test certificates for real-world domains
 that had not approved that issuance.  I think this was mostly the
 old issue involving a very small number of improper in-house test
 certs by Symantec.

  D2) Dubious / non-permitted substitution of the word "test" for the
 organization name in otherwise fully validated OV certificates as
 a service to legitimate domain holders wanting to test Symantec
 certificates before paying for final issuance of certs with their
 actual name.  This was much less harmful, but was done in large
 numbers by the CrossCert RA.

  D3) Dubious and actual misissuance of certificates for some domains
 containing "test" as a substring under the direction of the
 CrossCert RA.  These vary in seriousness but I think their total
 number is much smaller than those in category D2.

  Splitting these three kinds of "test" certificates will probably give
  a much clearer picture of what was going on and how serious it was.

2. If the remaining unconstrained SubCAs are operated by Symantec and
  subject to (retroactive if necessary) compliance audits showing that
  they don't issue certs that could not (under the BR and Mozilla
  policies) be issued from a public Symantec CA by an "Enterprise RA"
  (as defined in the BRs), could those SubCAs not simply be
  reclassified as "public SubCAs" for Mozilla/BR policy purposes while
  remaining further usage limited by actual Symantec practices and
  contractual arrangements beyond the BR/Mozilla policies?

3. The plan involving a new hierarchy outsourced to a Symantec
  competitor leaves some big questions when seen from outside the
  Google perspective:

   - Is it really necessary to outsource this to bring the Symantec PKI
under control?  Or was this simply copy/pasted from the
WoSign/StartCom situation?

   - If this is outsourced as suggested, how can/should Symantec
continue to serve customers wanting certificates that chain to
older CA certs in the old hierarchy.  For example some brands of
Smartphones require that all apps installed are signed by specific
old Symantec hierarchies via exclusivity deals that were in place
when those Smartphones were manufactured, and no changes to that
requirement are feasible at this point for the already sold phones.
 Similar issues may exist in the WebPKI for jobs such as providing
HTTPS downloads of Firefox to machines whose existing browser is
outdated and contains an outdated root store.

   - Should Symantec be allowed to cross-sign SubCAs of the new roots
directly from the old roots, thus keeping the chain length to the
old roots short?

   - Could some of the good SubCAs under the "Universal" and "Georoot"
program be salvaged by signing them from new roots and adding the
cross certs to default Mozilla and Chrome installations (so servers
don't need to install them)?  For example, if the legit EV SubCAs
under "Universal" are cross-signed by a (new) "EV-only" root, could
Mozilla move the EV trust to that new root, thus removing the
risk of EV-trusting any other "Universal" subCAs.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Remove the bullet about "fraudulent use"

2017-05-04 Thread Jakob Bohm via dev-security-policy
sets.
  For example, an application for the domain exampie.com is high risk
 from an entity other than the entity controlling exampLe.com.  And
 vice versa.

Note that "High Risk Certificate Requests" can still be fulfilled,
they just require extra checks of their legitimacy, as per BR 4.2.1.




*From: *Gervase Markham
*Sent: *Tuesday, May 2, 2017 5:46 AM
*To: *Peter Kurrasch; mozilla-dev-security-pol...@lists.mozilla.org
*Subject: *Re: Policy 2.5 Proposal: Remove the bullet about "fraudulent use"


On 02/05/17 01:55, Peter Kurrasch wrote:

I was thinking that fraud takes many forms generally speaking and that
the PKI space is no different. Given that Mozilla (and everyone else)
work very hard to preserve the integrity of the global PKI and that the
PKI itself is an important tool to fighting fraud on the Internet, it
seems to me like it would be a missed opportunity if the policy doc made
no mention of fraud.

Some fraud scenarios that come to mind:

- false representation as a requestor
- payment for cert services using a stolen credit card number
- malfeasance on the part of the cert issuer


Clearly, we have rules for vetting (in particular, EV) which try and
avoid such things happening. It's not like we are indifferent. But
stolen CC numbers, for example, are a factor for which each CA has to
put in place whatever measures they feel appropriate, just as any
business does. It's not really our concern.


- requesting and obtaining certs for the furtherance of fraudulent

activity


Regarding that last item, I understand there is much controversy over
the prevention and remediation of that behavior but I would hope there
is widespread agreement that it does at least exist.


It exists, in the same way that cars are used for bank robbery getaways,
but the Highway Code doesn't mention bank robberies.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA Validation quality is failing

2017-05-02 Thread Jakob Bohm via dev-security-policy

On 02/05/2017 17:30, Rob Stradling wrote:

On 02/05/17 16:11, Alex Gaynor via dev-security-policy wrote:

I know several CAs are using certlint
(https://github.com/awslabs/certlint)
as a pre-issuance check that the cert they're about to issue doesn't have
any programmatically detectable deficiencies; if it doesn't already cover
some of these cases, it'd be great to add them as a technical means for
ensuring that this doesn't regress -- things like N/A should be an easy
enough check to add I'd think.


Simple project idea (perhaps for https://github.com/cabforum):

A CSV file that contains 2 items per line:
1. An optional comma-separated list of Subject attribute shortnames.
2. A string that a CA should probably not encode as a complete Subject
attribute.

e.g.,
"OU,ST,L","N/A"
,"."
"O","Internet Widgits Pty Ltd"

Anyone (CA representatives, industry researchers, etc, etc) would be
able to submit PRs, CAs would be invited to consult this list when
evaluating certificate requests, and certlint would be able to report on
"violations".



For simplicity and consistency with usual best development practices
("3rd normal form"), perhaps allow at most one attribute shortname in
column 1.


e.g. Your example would be written as:

"OU","N/A"
"ST","N/A"
"L","N/A"
,"."
"O","Internet Widgits Pty Ltd"

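As a sketch of how a pre-issuance linter might consume such a CSV (the
file contents repeat the example above; the helper names are invented
and are not certlint's actual API):

```python
import csv
import io

# The denylist in the one-attribute-per-row form suggested above.
# An empty first column means the value is forbidden in any attribute.
CSV_DATA = """\
"OU","N/A"
"ST","N/A"
"L","N/A"
,"."
"O","Internet Widgits Pty Ltd"
"""

def load_denylist(fileobj):
    """Map attribute shortname (None = any attribute) -> forbidden values."""
    deny = {}
    for attr, value in csv.reader(fileobj):
        deny.setdefault(attr or None, set()).add(value)
    return deny

def violations(subject, deny):
    """Yield (attr, value) pairs a CA should probably not have encoded."""
    for attr, value in subject:
        if value in deny.get(attr, set()) or value in deny.get(None, set()):
            yield attr, value

deny = load_denylist(io.StringIO(CSV_DATA))
subject = [("O", "Internet Widgits Pty Ltd"), ("OU", "N/A"),
           ("CN", "example.com")]
print(list(violations(subject, deny)))
# -> [('O', 'Internet Widgits Pty Ltd'), ('OU', 'N/A')]
```

The point of the normalized form is that a check like this stays a flat
dictionary lookup, and PRs adding rows cannot break existing ones.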



...




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-05-02 Thread Jakob Bohm via dev-security-policy

On 01/05/2017 10:55, Gervase Markham wrote:

Does anyone have any thoughts about this issue, below?

I sent out a message saying that I had adopted this change as proposed,
but that was an error. It has not yet been adopted, because I am
concerned about the below.

The first option is simpler, because it does not need to get into the
details of who controls or who has issuance authority for the
intermediate, and what their relationship is with the domain names in
question. Some concrete proposed text for that option is:

"Each entry in permittedSubtrees must either be or end with a Public
Suffix." (And we'd need to link to publicsuffix.org)

The second option is harder to spec, because I don't know the uses to
which TCSCs for email are put. Is the idea that they get handed to a
customer, and so it's OK to say that the domain names have to be
validated as being owned by the entity which has authority to command
issuance? Or are there scenarios I'm missing?




On 20/04/17 14:26, Gervase Markham wrote:

On 12/04/17 10:47, Gervase Markham wrote:

"If the certificate includes the id-kp-emailProtection extended key
usage, it MUST include the Name Constraints X.509v3 extension with
constraints on rfc822Name, with at least one name in permittedSubtrees."


As worded, this would allow for a set of constraints which looked like:

".com, .net, .edu, .us, .uk, ..."

The SSL BRs require:

"(a) For each dNSName in permittedSubtrees, the CA MUST confirm that the
Applicant has registered the dNSName or has been authorized by the
domain registrant to act on the registrant's behalf in line with the
verification practices of section 3.2.2.4."

That's not possible for e.g. ".com", so the problem is avoided.

Do we need to say that each entry in permittedSubtrees must be a Public
Suffix? Or do we need to require that each rfc822Name is
ownership-validated in an analogous way to the dNSNames in the BRs?

Gerv



Some cases to consider:

  If the permittedSubtrees specifies a public suffix, the SubCA should
  not be considered a TCSC.  It could however be a public SubCA
  dedicated to some part of the Internet, such as a country.

  If the permittedSubtrees ends with, but isn't, a public suffix, then
  the ability to issue under the TCSC should be limited to a single
  entity which has been validated to have full authority over each
  domain specified.

  If the permittedSubtrees specifies a domain that is *not* a public
  suffix, but is an IANA TLD (the classic example would be a
  hypothetical .cocacola brand TLD), then the ability to issue under the
  TCSC should be limited to a single entity which has been validated to
  have full authority over each domain specified.

Thus perhaps it would be more appropriate to require that in a TCSC,
the permittedSubtrees must NOT list any public suffix or a suffix of a
public suffix.  And that control of the TCSC must be exclusively by
someone properly validated to act on behalf of each domain listed in
permittedSubtrees.
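The rule proposed above could be checked mechanically. The snippet below is a non-authoritative sketch; `PUBLIC_SUFFIXES` is a two-entry stand-in for the real list maintained at publicsuffix.org:

```python
# Sketch of the proposed TCSC rule: a permittedSubtrees entry must not be
# a public suffix, nor a suffix of one (e.g. "uk" when "co.uk" is listed).
# A real implementation would load the full list from publicsuffix.org.
PUBLIC_SUFFIXES = {"com", "co.uk"}

def entry_allowed_in_tcsc(entry):
    entry = entry.lstrip(".").lower()
    if entry in PUBLIC_SUFFIXES:
        return False  # the entry is itself a public suffix
    for suffix in PUBLIC_SUFFIXES:
        # the entry is a suffix of a public suffix, e.g. "uk" vs "co.uk"
        if suffix.endswith("." + entry):
            return False
    return True
```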

I am using the phrase "control" as a placeholder for whatever phrase
would be consistent with wording elsewhere in the policy for someone
who either:

- Is in exclusive possession of the TCSC private key material and makes
 their own decisions regarding issuance

OR

- Holds exclusive RA authority for non-technical certificates issued
 under the TCSC, while the private key is stored elsewhere (e.g. at the
 parent CA or at some other CA outsourcing facility).

OR

- Holds blocking RA authority for the TCSC such that all non-technical
 certificates issued under the TCSC require approval by the entity, but
 that additional policy requirements may be checked/enforced by some
 other entity (for example the CA outsourcing provider may be doing
 some/all of the same automated checks it would have done for an EE
 cert issued from a public SubCA).  This would be a common case where
 the private key is hosted by the parent CA under some "Enterprise
 customer" user interface.

I am using the phrase "technical certificates" as a placeholder for
whatever phrase would be consistent with wording elsewhere in the
policy for such things as OCSP signing certificates, CRL signing
certificates, time stamping certificates and other such CA operational
certificate types now known or yet to be invented.




Enjoy

Jakob


Re: Google's past discussions with Symantec

2017-04-27 Thread Jakob Bohm via dev-security-policy
 certified by these sub-CAs must be
   covered by the same CP/CPS, management’s assertion, and audit reports as
   the sub-CA itself. That is, any sub-CAs beneath these sub-CAs must be part
   of the same infrastructure and operation of the non-affiliated organization.
   - In order to be trusted, issued certificates must be “CT Qualified”,
   as defined in the Certificate Transparency in Chrome Policy
   <https://www.chromium.org/Home/chromium-security/certificate-transparency>
   .
   - In order to be trusted, subscriber certificates, including those of
   technically constrained subordinate CAs, must have validity periods of no
   greater than three hundred and ninety-seven (397) days. This is an
   unambiguous way of clarifying thirteen (13) month validity.

The goal of these controls is to try to meaningfully signal that the new
certificates issued will not suffer from the same flawed processes,
controls, and infrastructure, and as a result, are fully deserving of the
trust, including that of EV certificates. As audits are not perfect, and
consistent with our overall goals for the ecosystem, having such
certificates as valid for thirteen months represents an acceptable
balancing of the concerns.
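The 397-day cap quoted above is simple to check mechanically. This sketch compares notBefore/notAfter timestamps directly and glosses over the subtlety that, per RFC 5280, the validity period is inclusive of both endpoints:

```python
from datetime import datetime, timedelta

# The 397-day cap proposed in the text above (an "unambiguous" 13 months).
MAX_VALIDITY = timedelta(days=397)

def validity_ok(not_before, not_after):
    """True if the certificate's validity period is at most 397 days."""
    return (not_after - not_before) <= MAX_VALIDITY
```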




Upon *[Removed]*, Symantec can reapply to have new root certificates
trusted, much in the way that organizations such as CNNIC have done
<https://bugzilla.mozilla.org/show_bug.cgi?id=1312957> or as StartCom plans
to do
<https://groups.google.com/d/msg/mozilla.dev.security.policy/jDRxCN9qoQI/aNKZ0932GQAJ>.
Subject to the appropriate review, this can serve to re-establish trust by
eliminating uncertainty and providing greater transparency. Trust in these
new root certificates is contingent upon them not cross-signing any of the
existing infrastructure or certificates, but they may be cross-signed by
such existing infrastructure or certificates.




Hopefully, this provides a meaningful suggestion about ways in which
Symantec can minimize any disruptions due to a lack of trust when using
Chrome. These suggestions are meant to provide further guidance about the
set of concerns we discussed, and to illustrate ways in which they can be
meaningfully and objectively addressed, in a way that can be consistently
applied to all CAs.



Enjoy

Jakob


Re: Criticism of Google Re: Google Trust Services roots

2017-04-24 Thread Jakob Bohm via dev-security-policy

On 25/04/2017 05:04, Ryan Sleevi wrote:

On Mon, Apr 24, 2017 at 9:42 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On 25/04/2017 03:10, Peter Kurrasch wrote:


Fair enough. I propose the following for consideration:

Prior to ‎transferring ownership of a root cert contained in the trusted
store (either on an individual root basis or as part of a company
acquisition), a public attestation must be given as to the intended
management of the root upon completion of the transfer. "Intention" must
be one of the following:

A) The purchaser has been in compliance with Mozilla policies for more
than 12 months and will continue to administer (operate? manage?) the
root in accordance with those policies.

B) The purchaser has not been in compliance with Mozilla policies for
more than 12 months but will ‎do so before the transfer takes place. The
purchaser will then continue to administer/operate/manage the root in
accordance with Mozilla policies.

How about:


B2) The purchaser is not part of the Mozilla root program and has not
been so in the recent past, but intends to continue the program
membership held by the seller.  The purchaser intends to complete
approval negotiations with the Mozilla root program before the transfer
takes place.  The purchaser intends to retain most of the expertise,
personnel, equipment etc. involved in the operation of the CA, as will
be detailed during such negotiations.

This, or some other wording, would be for a complete purchase of the
business rather than a merge into an existing CA, similar to what
happened when Symantec purchased Verisign's original CA business years
ago, or (on a much smaller scale) when Nets purchased the TDC's CA
business unit and renamed it as DanID.



Why is that desirable? If anything, such acquisitions seem to be more
harmful than helpful to the CA ecosystem. That is, why _wouldn't_ a merge
be useful/desirable? What problems are you attempting to solve?



Look at my two examples of past acquisitions.  As far as I remember, in
neither case was the purchasing security company previously a trusted
CA, and they took over practically the whole operation with no initial
changes besides a name change.

Another variant of this scenario is when a CA restructures its formal
ownership structure, such as inserting or removing one or more levels
of companies between the ultimate owners and the CA operations
activity.  In many cases this would technically be an acquisition by a
new company that isn't a CA itself (as the acquiring company would
often be an empty shell).  One example would be the recent creation of
GTS from part of Google Inc, since GTS was a new company with no past
CA activity, while the acquired Google division had a past as a SubCA
and a recently acquired root cert.


Enjoy

Jakob


Re: Criticism of Google Re: Google Trust Services roots

2017-04-24 Thread Jakob Bohm via dev-security-policy

On 25/04/2017 03:10, Peter Kurrasch wrote:

Fair enough. I propose the following for consideration:

Prior to ‎transferring ownership of a root cert contained in the trusted
store (either on an individual root basis or as part of a company
acquisition), a public attestation must be given as to the intended
management of the root upon completion of the transfer. "Intention" must
be one of the following:

A) The purchaser has been in compliance with Mozilla policies for more
than 12 months and will continue to administer (operate? manage?) the
root in accordance with those policies.

B) The purchaser has not been in compliance with Mozilla policies for
more than 12 months but will ‎do so before the transfer takes place. The
purchaser will then continue to administer/operate/manage the root in
accordance with Mozilla policies.


How about:

B2) The purchaser is not part of the Mozilla root program and has not
been so in the recent past, but intends to continue the program
membership held by the seller.  The purchaser intends to complete
approval negotiations with the Mozilla root program before the transfer
takes place.  The purchaser intends to retain most of the expertise, 
personnel, equipment etc. involved in the operation of the CA, as will
be detailed during such negotiations.

This, or some other wording, would be for a complete purchase of the
business rather than a merge into an existing CA, similar to what
happened when Symantec purchased Verisign's original CA business years
ago, or (on a much smaller scale) when Nets purchased the TDC's CA
business unit and renamed it as DanID.


C) The purchaser does not intend to operate the root in accordance with
Mozilla policies. Mozilla should remove trust from the root upon
completion of the transfer.


The wording of the above needs some polish and perhaps clarification.
The idea is that the purchaser must be able to demonstrate some level of
competence at running a CA--perhaps by first cutting their teeth as a
sub-CA? If an organization is "on probation" with Mozilla, I don't think
it makes sense to let them assume more control or responsibility for
cert issuance so there should be a mechanism to limit that.

I also think we should allow for the possibility that someone may
legitimately want to remove a cert from the Mozilla program. Given the
disruption that such a move can cause, it is much better to learn that
up front so that appropriate plans can be made.


*From: *Gervase Markham via dev-security-policy
*Sent: *Tuesday, April 11, 2017 11:36 AM
*To: *mozilla-dev-security-pol...@lists.mozilla.org
*Reply To: *Gervase Markham
*Subject: *Re: Criticism of Google Re: Google Trust Services roots


On 11/04/17 14:05, Peter Kurrasch wrote:

Is there room to expand Mozilla policy in regards to ownership issues?


Subject to available time (which, as you might guess by the traffic in
this group, there's not a lot of right now, given that this is not my
only job) there's always room to reconsider policy. But what we need is
a clearly-stated and compelling case that changing the way we think
about these things would have significant and realisable benefits, and
that any downsides are fairly enumerated and balanced against those gains.




Enjoy

Jakob


Re: Certificate issues

2017-04-24 Thread Jakob Bohm via dev-security-policy

On 21/04/2017 21:29, Nick Lamb wrote:

On Tuesday, 18 April 2017 18:33:29 UTC+1, Jakob Bohm  wrote:

I believe the point was to check the prospective contents of the
TBSCertificate *before* CT logging (noting that Ryan Sleevi has been
violently insisting that failing to do that shall be punished as
harshly as actual misissuance) and *before* certificate signing.


I come to this as always as someone focused on prevention of future harm. I can't speak for Ryan 
but I'm not interested in "punishing" anybody because retribution does not avoid future 
harm in itself. For example distrust of a CA is not a "punishment" of that CA, but a step 
taken to protect relying parties from certificates which shouldn't exist.

Detecting already bad situations still counts as prevention of future harm, 
because an undetected bad situation will almost always get worse. 
This is why we fit smoke alarms - it would be bad if my flat was on fire, but 
it would be much worse if in the absence of an alarm it simply burned down with 
me inside it.

If some CA comes to m.d.s.policy twice a year with a problem where a 
certificate was issued that shouldn't have been, but they've cured it and 
altered their systems so that won't happen again - I can't say I'm ecstatic to 
see that, but at least they're paying attention.

In contrast if they're here twice a year because an independent researcher 
found a year-old certificate that shouldn't exist, and Gerv has to ask them for 
comment, then they investigate what went wrong and promise to cure it, I have 
to say I look on that much less kindly, and I suspect Ryan does too.



I wrote this in the context of your previous post about why this
prevention code would have the ability to accidentally alter the
certificate before it was signed.  My point was to explain, in
general, why such code is forced to run before signing and in a
context where preventing the checking code from altering the
certificate has a (small but) non-zero cost, unlike a check done
after issuance and/or after CT submission.

As for Ryan, I think his own response was quite illustrative.


Enjoy

Jakob


Re: Policy 2.5 Proposal: Remove BR duplication: reasons for revocation

2017-04-20 Thread Jakob Bohm via dev-security-policy

On 21/04/2017 00:36, Ryan Sleevi wrote:

On Thu, Apr 20, 2017 at 6:15 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Technically, the part after the @ could also be a bang!path, though
this is rare these days.



No, technically, it could not.

RFC 5280, Section 4.2.1.6.  Subject Alternative Name
   When the subjectAltName extension contains an Internet mail address,
   the address MUST be stored in the rfc822Name.  The format of an
   rfc822Name is a "Mailbox" as defined in Section 4.1.2 of [RFC2821].
   A Mailbox has the form "Local-part@Domain".  Note that a Mailbox has
   no phrase (such as a common name) before it, has no comment (text
   surrounded in parentheses) after it, and is not surrounded by "<" and
   ">".  Rules for encoding Internet mail addresses that include
   internationalized domain names are specified in Section 7.5.

Note that RFC 2821 was OBSOLETEd by RFC 5321. RFC 5321 Section 4.1.2 states

   Mailbox          = Local-part "@" ( Domain / address-literal )
   address-literal  = "[" ( IPv4-address-literal /
                      IPv6-address-literal /
                      General-address-literal ) "]"
                      ; See Section 4.1.3
   Domain           = sub-domain *("." sub-domain)
   sub-domain       = Let-dig [Ldh-str]
   Let-dig          = ALPHA / DIGIT
   Ldh-str          = *( ALPHA / DIGIT / "-" ) Let-dig

Section 4.1.3 states
   IPv4-address-literal  = Snum 3("."  Snum)
   IPv6-address-literal  = "IPv6:" IPv6-addr
   General-address-literal  = Standardized-tag ":" 1*dcontent
   Standardized-tag  = Ldh-str
 ; Standardized-tag MUST be specified in a
 ; Standards-Track RFC and registered with IANA

To confirm, I also checked the IANA registry established, which is
https://www.iana.org/assignments/address-literal-tags/address-literal-tags.xhtml

The only address literal defined is IPv6.

Could you indicate where you believe RFC 5280 supports the conclusion that
a "bang!path" is permitted and relevant to Mozilla products?



The older RFC 2459 allowed the full RFC 822 syntax (minus comments),
which included bang paths.  While RFC 2459 and RFC 822 are obsolete, it
is still possible, sans explicit policy to the contrary, for a CA to
issue valid certificates for this older address type, perhaps as a
service to someone running historic system configurations.

Even then, I think wording is still needed to state that the
"domain"/"address-literal" part of the RFC5321 syntax is the part to
which the "domain name" revocation BRs apply.

Plus there is the additional situation of an e-mail domain
operator/owner telling a CA that an e-mail cert should be revoked for
various reasons (misissued cert, misissued e-mail address, e-mail
address removed from cert holder, possibly others).
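As a sketch of what such wording would operate on, the check below validates the part after the "@" against the RFC 5321 `Domain` production quoted earlier in this message (address-literals like `[IPv6:...]` are omitted for brevity; this is syntax only, not ownership validation):

```python
import re

# sub-domain = Let-dig [Ldh-str]; Ldh-str = *( ALPHA / DIGIT / "-" ) Let-dig
SUB_DOMAIN = r"[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?"
# Domain = sub-domain *("." sub-domain)
DOMAIN_RE = re.compile(rf"^{SUB_DOMAIN}(?:\.{SUB_DOMAIN})*$")

def rfc5321_domain_ok(mailbox):
    """True if the part after "@" matches the RFC 5321 Domain grammar."""
    local, sep, domain = mailbox.rpartition("@")
    return bool(sep) and bool(DOMAIN_RE.fullmatch(domain))
```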


Enjoy

Jakob


Re: Policy 2.5 Proposal: Remove BR duplication: reasons for revocation

2017-04-20 Thread Jakob Bohm via dev-security-policy

On 20/04/2017 21:15, Ryan Sleevi wrote:

Gerv,

I must admit, I'm not sure I understand what you consider irrelevant
reasons for 4.9.1 in the context of e-mail addresses.

The only one I can think of is
"7. The CA is made aware that a Wildcard Certificate has been used to
authenticate a fraudulently misleading
subordinate Fully-Qualified Domain Name;"

But that's because such e-mail CAs are effectively wildcards (e.g. they can
issue for subdomains, unless a nameconstraint includes a leading . to
indicate for host not domain)


I believe this is about end certificates, not constrained Intermediary
CA certificates.



But given that e-mail addresses include Domain portions (after all, that is
the definition, localpart@domain), and Fully-Qualified Domain Name doesn't
imply a sAN of type dNSName, this all seems... ok as is?



Technically, the part after the @ could also be a bang!path, though
this is rare these days.

So maybe we need some wording in the Mozilla e-mail end cert
requirements stating how the phrase "Domain Name" in the TLS cert BRs
maps to rfc822Names.
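For reference, the leading-dot semantics mentioned above can be sketched as follows, per the rfc822Name constraint rules in RFC 5280 section 4.2.1.10:

```python
# RFC 5280 rfc822Name constraint matching, as discussed above:
#   "example.com"   -> any mailbox on the host example.com exactly
#   ".example.com"  -> mailboxes on hosts *within* the example.com domain
#   "a@example.com" -> that particular mailbox only
def rfc822_constraint_matches(constraint, address):
    constraint = constraint.lower()
    local, _, host = address.lower().rpartition("@")
    if "@" in constraint:
        return address.lower() == constraint   # particular mailbox
    if constraint.startswith("."):
        return host.endswith(constraint)       # subdomain constraint
    return host == constraint                  # exact host constraint
```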


Enjoy

Jakob


Re: CA Validation quality is failing

2017-04-20 Thread Jakob Bohm via dev-security-policy


One thing:

Could this be a result of the common (among CAs) bug of requiring entry
of a US/Canada State/Province regardless of country, forcing applicants
to fill in random data in that field?

On 20/04/2017 03:48, Jeremy Rowley wrote:

FYI - still looking into this. I should have a report tomorrow.

-Original Message-
From: dev-security-policy 
[mailto:dev-security-policy-bounces+jeremy.rowley=digicert@lists.mozilla.org]
 On Behalf Of Jeremy Rowley via dev-security-policy
Sent: Wednesday, April 19, 2017 2:27 PM
To: r...@sleevi.com; Mike vd Ent <pasarellaph...@gmail.com>
Cc: Ben Wilson <ben.wil...@digicert.com>; mozilla-dev-security-policy 
<mozilla-dev-security-pol...@lists.mozilla.org>
Subject: RE: CA Validation quality is failing

I’m looking into it right now. I’ll report back shortly.



Jeremy



From: Ryan Sleevi [mailto:r...@sleevi.com]
Sent: Wednesday, April 19, 2017 2:25 PM
To: Mike vd Ent <pasarellaph...@gmail.com>
Cc: mozilla-dev-security-policy <mozilla-dev-security-pol...@lists.mozilla.org>; Jeremy 
Rowley <jeremy.row...@digicert.com>; Ben Wilson <ben.wil...@digicert.com>
Subject: Re: CA Validation quality is failing







On Wed, Apr 19, 2017 at 3:47 PM, Mike vd Ent via dev-security-policy 
<dev-security-policy@lists.mozilla.org 
<mailto:dev-security-policy@lists.mozilla.org> > wrote:

Ryan,

My answers on the particular issues are stated inline.
But the thing I want to address is how could (in this case DigiCert) validate 
such data and issue certificates? I am investigating more of them and am afraid 
even linked company names or registration numbers could be false. Shouldn't 
those certificates be revoked?



You are correct that it appears these certificates should not have issued. 
Hopefully Jeremy and Ben from DigiCert can comment on this thread ( 
https://groups.google.com/d/msg/mozilla.dev.security.policy/DgeLqKMzIds/ig8UmHT2DwAJ
 for the archive) with details about the issues and the steps taken.




Enjoy

Jakob


Re: Certificate issues

2017-04-18 Thread Jakob Bohm via dev-security-policy

On 18/04/2017 18:47, Nick Lamb wrote:

Hi Jeremy

Given the small number of certificates involved, it might make sense to just 
convert them to text and mention them inline, or put them somewhere we can all 
see them - if it's inconvenient to put them into the CT logs.

I think this situation will be useful as evidence of the value of not putting 
all ones eggs in one basket when it comes to subCA EKUs. If these had come from 
a subCA which wasn't constrained, we'd have a bigger (though at five specific 
certificates still manageable) issue.


On Tuesday, 18 April 2017 17:10:39 UTC+1, Jeremy Rowley  wrote:

The bug was introduced, ironically, in code we deployed to detect potential
errors in cert profiles. This error caused the specified code signing
certificates to think they needed dNSnames and serverAuth. Let me know if
you have questions.


This irony might perhaps have been avoided if the code in question didn't have 
permission (never mind intent) to alter anything. I appreciate that what one 
would ideally like is to never issue a bad certificate, and to achieve that any 
checks must happen in a trusted part of the system. But it seems to me that 
there's a good deal of benefit, at practically zero risk, in building tooling 
that focuses on monitoring stuff which has already been signed, outside of the 
protected signing environment; and only subsequently after proving it there 
giving it the earlier, more powerful (and thus risky) role.



I believe the point was to check the prospective contents of the
TBSCertificate *before* CT logging (noting that Ryan Sleevi has been
violently insisting that failing to do that shall be punished as
harshly as actual misissuance) and *before* certificate signing.

Thus the checks would have to occur before signing, but it would still
be useful (architecturally) to run the checks without the ability to
change the request (other than to reject it with an error message).
Such separation will however have non-zero cost as the prospective
TBSCertificate or its description needs to be passed between additional
processes.
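The separation described here can be illustrated even in-process: give the checking code only a read-only view, so it can reject a prospective certificate but never alter it. The field names below are hypothetical stand-ins; a real pre-issuance linter (certlint and similar tools) works on the DER-encoded TBSCertificate:

```python
from types import MappingProxyType

def check_tbs(tbs_fields):
    """Run profile checks against a read-only view; return error strings.

    MappingProxyType means the checks cannot mutate the request, only
    reject it -- the architectural point made above.
    """
    view = MappingProxyType(tbs_fields)
    errors = []
    # Example check, modeled loosely on the incident under discussion:
    if view.get("eku") == ["codeSigning"] and view.get("dns_names"):
        errors.append("code signing cert must not carry dNSName SANs")
    return errors
```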


Of course I know nothing of your specific circumstances, this is just a general 
observation about how I think I'd approach the question of improving issuance 
quality with minimal risk.




Enjoy

Jakob


Re: Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection

2017-04-18 Thread Jakob Bohm via dev-security-policy

On 13/04/2017 15:46, Gervase Markham wrote:

Hi Rob,

You either have a great memory or good search-fu; well done for digging
this out!

On 12/04/17 22:14, Rob Stradling wrote:

Gerv, FYI what you're proposing here
(https://github.com/mozilla/pkipolicy/issues/69) was slated to appear in
v2.1 of the policy, but it was vetoed by Symantec.

Here's why...

https://groups.google.com/forum/#!msg/mozilla.dev.security.policy/l1BAEHjKe8Q/mey4WREKpooJ


Hmm. I note we didn't end up using Symantec's proposed text either.

I'm not sure I entirely understand their objection. They wanted to
confirm via "business controls" that the customer was authorized to
issue email certs for the domain. What sort of thing might that be, and
how is it different to a technical control? Does it just involve the
customer pinky-swearing that it's OK for them to issue such certs?

I can see that CAs might want to issue email certs for almost any
domain, if the controller of an email address comes and asks for one.
But in that sort of case, I wouldn't expect them to be using a TCSC.
TCSCs are for "Hi, I'm Company X, and have 100,000 employees with
@companyx.com email addresses, and want to issue them publicly-trusted
email certs. Give me a TCSC for @companyx.com." Whereupon the CA would
get them to prove they own that domain, then provide them with such a
certificate.



Could the difference be one of outsourcing: Suppose Company X has
outsourced e-mail server operations (but not employee identity
checking) to big-name email provider Y.  Then Y has technical control
over @companyx.com, but Company X has business control and the
authority to decide who should and shouldn't get @companyx.com e-mail
certs.  For @companyx.maily.net e-mail addresses, that authority may
also be divorced from ownership of the maily.net domain.



Enjoy

Jakob


Re: Policy 2.5 Proposal: Expand requirements about what an Audit Statement must include

2017-04-12 Thread Jakob Bohm via dev-security-policy

On 12/04/2017 12:44, Kurt Roeckx wrote:

On 2017-04-12 11:47, Gervase Markham wrote:

There are some items that it would be very helpful for auditors to state
in their public-facing audit documentation so that we can be clear about
what was covered and what was not. The policy already has some
requirements here, in section 3.1.3, mostly relating to dates.

The proposal is to add the following bullets to section 3.1.3 ("Public
Audit Information"), perhaps reordering the list as appropriate:

* name of the company being audited
* name and address of the organization performing the audit
* DN and SHA1 or SHA256 fingerprint of each root and intermediate
certificate that was in scope


The SHA256 of what? The certificate? There can be multiple certificates
for the same CA. It should probably be made more clear, like a hash of
the subject DN.




The operation "certificate fingerprint" is well defined and generally
covers most/all of the DER-encoded certificate, not just the DN.  For
example, this value is displayed when looking at CA certificates in the
Firefox options dialog.

For public CAs that don't rely on certificate distribution through
browser inclusion, it is also common to provide an authoritative
out-of-band copy of the fingerprint as something relying parties should
check when installing the CA root cert.
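A sketch of that operation: the fingerprint is a hash over the whole DER-encoded certificate, conventionally displayed as colon-separated byte pairs. The input below is a placeholder, not a real certificate:

```python
import hashlib

def fingerprint(der_bytes, algorithm="sha256"):
    """Hash the entire DER-encoded certificate (not just the subject DN)."""
    digest = hashlib.new(algorithm, der_bytes).hexdigest().upper()
    # Conventional display form: colon-separated byte pairs
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```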


Enjoy

Jakob


Re: Policy 2.5 Proposal: Expand requirements about what an Audit Statement must include

2017-04-12 Thread Jakob Bohm via dev-security-policy

On 12/04/2017 11:47, Gervase Markham wrote:

There are some items that it would be very helpful for auditors to state
in their public-facing audit documentation so that we can be clear about
what was covered and what was not. The policy already has some
requirements here, in section 3.1.3, mostly relating to dates.

The proposal is to add the following bullets to section 3.1.3 ("Public
Audit Information"), perhaps reordering the list as appropriate:

* name of the company being audited
* name and address of the organization performing the audit
* DN and SHA1 or SHA256 fingerprint of each root and intermediate
certificate that was in scope


Maybe just SHA256, since SHA1 is mostly dead.


* audit criteria (with version number) that were used to audit each of
the certificates
* For ETSI, a statement that the audit was a full audit, and which parts
of the criteria were applied, e.g. DVCP, OVCP, NCP, NCP+, LCP, EVCP,
EVCP+, Part1 (General Requirements), and/or Part 2 (Requirements for
trust service providers).

This is: https://github.com/mozilla/pkipolicy/issues/58 and
https://github.com/mozilla/pkipolicy/issues/28 .

---

This is a proposed update to Mozilla's root store policy for version
2.5. Please keep discussion in this group rather than on Github. Silence
is consent.

Policy 2.4.1 (current version):
https://github.com/mozilla/pkipolicy/blob/2.4.1/rootstore/policy.md
Update process:
https://wiki.mozilla.org/CA:CertPolicyUpdates




Enjoy

Jakob


Re: Policy 2.5 Proposal: Require qualified auditors unless agreed in advance

2017-04-12 Thread Jakob Bohm via dev-security-policy

On 12/04/2017 11:47, Gervase Markham wrote:

Way back when, Mozilla wrote some requirements for auditors which were
more liberal than "be officially licensed by the relevant audit scheme".
This was partly because organizations like CACert, who were at the time
pondering applying for inclusion, might need to use
unofficially-qualified auditors to keep cost down.

This is no longer a live issue, and this exception/expansion causes
confusion and means that we cannot unambiguously require that auditors
be qualified.

Therefore, I propose we switch our auditor requirements to requiring
qualified auditors, and saying that exceptions can be applied for in
writing to Mozilla in advance of the audit starting, in which case
Mozilla will make its own determination as to the suitability of the
suggested party or parties.

Proposed changes:

* Remove sections 3.2.1 and 3.2.2.

* Change section 3.2 to say:

In normal circumstances, Mozilla requires that audits MUST be performed
by a Qualified Auditor, as defined in the Baseline Requirements section 8.2.

If a CA wishes to use auditors who do not fit that definition, they MUST
receive written permission from Mozilla to do so in advance of the start
of the audit engagement. Mozilla will make its own determination as to
the suitability of the suggested party or parties, at its sole discretion.

* Change section 2.3, first bullet, to read:

- Mozilla reserves the right to accept audits by auditors who do not
meet the qualifications given in section 8.2 of the Baseline Requirements.




Does this (accidentally?) remove the ability of Mozilla to explicitly
distrust a specific formally qualified auditor, such as EY Hong Kong?


This is: https://github.com/mozilla/pkipolicy/issues/63

---

This is a proposed update to Mozilla's root store policy for version
2.5. Please keep discussion in this group rather than on Github. Silence
is consent.

Policy 2.4.1 (current version):
https://github.com/mozilla/pkipolicy/blob/2.4.1/rootstore/policy.md
Update process:
https://wiki.mozilla.org/CA:CertPolicyUpdates




Enjoy

Jakob


Re: Symantec Response X

2017-04-11 Thread Jakob Bohm via dev-security-policy

On 10/04/2017 16:58, Steve Medin wrote:

Issue X: Incomplete RA Program Remediation (February - March 2017)

The only Symantec RAs capable of authorizing and issuing publicly trusted 
SSL/TLS certificates are: CrossCert, Certisign, Certsuperior and Certisur. 
Symantec continues to maintain a partner program for non-TLS certificates. 
E-Sign SA and MSC Trustgate are amongst these partners.



Please note that the Mozilla root program covers both SSL/TLS and
e-mail certificates (with slightly different inclusion policies).

Thus while the CABF BR rules may not apply to e-mail certificates,
Mozilla root program requirements do apply to such certificates and the
roots that are trusted to issue them.

Enjoy

Jakob


Re: Symantec Issues doc updated

2017-04-11 Thread Jakob Bohm via dev-security-policy

On 11/04/2017 18:53, Gervase Markham wrote:

On 11/04/17 17:34, Ryan Sleevi wrote:

Can you clarify what issues you believe this to be related?


That is a fair question. And also hard work to answer  :-)


Given that Symantec has a routine habit of exceeding any reasonable
deadline for response, at what point do you believe it is appropriate for
the Mozilla Root Store to begin discussing what steps can or should be
taken with respect to the documented and supported incidents, for which
Symantec has not provided counter-factual data?


Yes, fair enough. Rick and Steve: I will be taking Symantec's statements
to this group as of one week from today as the sum total of what you
have to say on the subjects under discussion. After that point, we will
draw conclusions based on the available data and decide on what course
of action we may take. I hope that by then you will be able to answer my
8 questions, and provide responses or comments to any of Ryan's or other
people's questions that you wish to address.

Kathleen is on vacation this week, and so no decisions could be taken
until next week at the earliest anyway.



Please consider the fact that this is Easter week, and most of the
industry, including many people (on both the Browser and Symantec sides
of the process) are likely to be unavailable for precisely this week of
the entire year.

In general, sending deadline mail (by paper, e-mail, process server or
otherwise) to anyone during a public holiday, demanding action during
that holiday, is considered morally deficient at a minimum.




Enjoy

Jakob


Re: Criticism of Google Re: Google Trust Services roots

2017-04-06 Thread Jakob Bohm via dev-security-policy

On 06/04/2017 23:49, Ryan Sleevi wrote:

On Thu, Apr 6, 2017 at 1:42 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:



Here are some ideas for reasonable new/enhanced policies (rough
sketches to be discussed and honed before insertion into a future
Mozilla policy version):



Are you suggesting that the current policies that have been pointed out are
insufficient to address these cases?



Others have suggested they were insufficient, I just tried to come up
with a wording for what others said was lacking.

Enjoy

Jakob


Re: Criticism of Google Re: Google Trust Services roots

2017-04-06 Thread Jakob Bohm via dev-security-policy
ing behind this land of hypotheticals, it seems to me the policy as
written is weaker than it ought to be. My own opinion is that only a
member CA should be allowed to purchase a root cert (and assets),
regardless of whether it's only one cert or the whole company. If that's going
too far, I think details are needed for what "regular business
operations" are allowed during the period between acquisition of the
root and acceptance into the Mozilla root program. And should there be a
maximum time allowed to become such a member?


*From: *Nick Lamb via dev-security-policy
*Sent: *Tuesday, April 4, 2017 3:42 AM
*To: *mozilla-dev-security-pol...@lists.mozilla.org
*Reply To: *Nick Lamb
*Subject: *Re: Criticism of Google Re: Google Trust Services roots


On Monday, 3 April 2017 23:34:44 UTC+1, Peter Kurrasch wrote:

I must be missing something still? The implication here is that a

purchaser who is not yet part of the root program is permitted to take
possession of the root cert private key and possibly the physical space,
key personnel, networking infrastructure, revocation systems, and
responsibility for subordinates without having first demonstrated any
competence at ‎running a CA organization.

This appears to me to simply be a fact, not a policy.

Suppose Honest Achmed's used car business has got him into serious debt.
Facing bankruptcy, Achmed is ordered by a court to immediately sell the
CA to another company Rich & Dick LLC, which has never historically
operated a CA but has made informal offers previously.

Now, Mozilla could say, OK, if that happens we'll immediately distrust
the root. But to what end? This massively inconveniences everybody,
there's no upside except that in the hypothetical scenario where Rick &
Dick are bad guys the end users are protected (eventually, as distrust
trickles out into the wild) from bad issuances they might make. But a
single such issuance would trigger that distrust already under the
policy as written and we have no reason to suppose they're bad guys.

On the other hand, if Rich & Dick are actually an honest outfit, the
policy as written lets them talk to Mozilla, make representations to
m.d.s.policy and get themselves trusted, leaving the existing Honest
Achmed subscribers with working certificates while everything is
straightened out, all Rich & Dick need to do is leave issuance switched
off while they reach an agreement.

Because continuing trust is always at Mozilla's discretion if something
truly egregious happened (e.g. Achmed's CA is declared bankrupt, a San
Francisco start-up with four employees and $6Bn of capital buys their
hardware for pennies on the dollar and announces it'll be issuing free
MITM SSL certificates starting Monday morning) then Mozilla is still
free to take extraordinary action and distrust Achmed's root immediately
without waiting until Monday morning.





Enjoy

Jakob


Re: GlobalSign BR violation

2017-04-06 Thread Jakob Bohm via dev-security-policy

On 04/04/2017 22:25, Doug Beattie wrote:




-Original Message-
From: dev-security-policy [mailto:dev-security-policy-
bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of Nick
Lamb via dev-security-policy

I have a question: These certificates appear to be not only forbidden by the BRs
but also technically unlikely to function as desired by the subscriber. Did any
customers report problems which (perhaps in hindsight now that the problem is
understood at GlobalSign) show they'd noticed the certificates they obtained
did not work ?


I'm not aware of any communications to us about certificates they received that
didn't work.  If they had called support, I would have expected the order
to have been cancelled/revoked or otherwise updated to fix the problem, but
none of that happened.  All I can assume is that they didn't actually use the
certificate to secure those SANs and only used it on the CN.



Just a tip: what about the account with 34 reissues?  That may have been
a failed attempt to self-service the bad certificate by requesting
reissue when it didn't work.  Maybe you should check your history with
this customer to see whether some kind of reach-out would be appropriate
from a customer-service perspective.
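For context on why such a certificate would not "work": clients following RFC 6125 ignore the CN whenever a subjectAltName list is present, so any name missing from the SANs fails validation. A rough sketch of the dNSName matching rule (illustrative only, not any client's actual code):

```python
def matches_san(hostname: str, sans: list) -> bool:
    """RFC 6125-style check: hostname must match a dNSName SAN;
    a '*' wildcard covers exactly one left-most label."""
    host_labels = hostname.lower().split(".")
    for san in sans:
        san_labels = san.lower().split(".")
        if len(san_labels) != len(host_labels):
            continue  # wildcard covers one label, so lengths must agree
        head, tail = san_labels[0], san_labels[1:]
        if (head == "*" or head == host_labels[0]) and tail == host_labels[1:]:
            return True
    return False
```

Under this rule, a certificate whose SAN list omits the customer's real hostnames validates for none of them, consistent with the "only used it on the CN" theory above.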



Enjoy

Jakob


Re: Grace Period for Sub-CA Disclosure

2017-04-03 Thread Jakob Bohm via dev-security-policy

On 04/04/2017 05:30, Ryan Sleevi wrote:

On Mon, Apr 3, 2017 at 11:18 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:




So why does Mozilla want disclosure and not just a blanket X on a form
stating that all SubCAs are adequately audited, follow BRs etc.?

What use does Mozilla have for any of that information if not to act on
it in relation to trust decisions?



The incorrect part is that you're assuming it's a blocking process. It's
not - it's entirely asynchronous. Us folks who actually review CP/CPSes are
barely handling it at the root layer, let alone the intermediate. That's
why the CCADB - and the automation being developed by Microsoft and the
standardization I've been pushing - is key and useful.

The tradeoff has always been that CAs are granted the flexibility to
delegate, which intentionally allows them to bypass any blocking browser
dependencies, but at the risk to the issuing CA that the issuing CA may be
suspended if they do an insufficient job. It's a distribution of workload,
in which the issuing CA accepts the liability to be "as rigorous" as the
browser programs in return for the non-blocking flexibility of the
subscriber CA. In turn, that risk proposition (of the issuing CA) is offset
by the cost they impose on the subscriber CA.



That permitted asynchronicity was never clearly stated, and I was led
astray by explicit language prohibiting issuance by the SubCA before
the moment of disclosure, which is an explicit synchronization
requirement.  Once it is clear that it is permitted to disclose after
most *other* associated risks begin, then the entire problem goes away.

That simple inconsistency is what led to all this.




Enjoy

Jakob


Re: Grace Period for Sub-CA Disclosure

2017-04-03 Thread Jakob Bohm via dev-security-policy

On 04/04/2017 00:31, Peter Bowen wrote:

On Mon, Apr 3, 2017 at 1:45 PM, Jakob Bohm via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:

On 03/04/2017 21:48, Ryan Sleevi wrote:


On Mon, Apr 3, 2017 at 3:36 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:



The assumptions are:

0. All relevant root programs set similar/identical policies or they
  get incorporated into the CAB/F BRs on a future date.



This is not correct at present.



It is simply an application of the rule that policies should not fall
apart if others in the industry do the same.  It's related to the
"Golden Rule".




1. When the SubCA must be disclosed to all root programs upon the
  *earlier* of issuance + grace period OR outside facility SubCA
  receiving the certificate (no grace period).



This is not correct as proposed.



It is intended to prevent SubCAs issued to "new" parties from
meaningfully issuing trusted certificates before root programs have had
a chance to check the contents of the disclosure (CP/CPS, point in time
audit, whatever each root program requires).


I don't see this as part of the proposed requirement.  The requirement
is simply disclosure, not approval.



I see it as part of the underlying reasoning.  Mozilla et al wants
disclosure in order to take action if the disclosed facts are deemed
unacceptable (under policy or otherwise).  Upon receiving the
disclosure, the root program gains the ability to take counteractions,
such as protesting to the issuing CA, or manually blocking the unwanted
SubCA cert via mechanisms such as OneCRL.  The rules don't make the CAs
wait for the root programs to object, but must at least allow time for
this to happen.


2. The SubCA must not issue any certificate (other than not-yet-used
  SubCAs, OCSP certs and other such CA operation certs generated in the
  same ceremony) until Disclosure to all root programs has been
  completed.



This is correct.



3. Disclosing to an operational and not-on-holiday root program team
  (such as the CCADB most of the time) indirectly makes the SubCA
  certificate available to the SubCA operator, *technically* (not
  legally) allowing that SubCA to (improperly) start issuing before
  rule #2 is satisfied.



And given that this disclosure (in the CCADB) satisfies #2, why is this an
issue?



It is merely a step in the detailed logic argument that Ryan Sleevi
requested.

Note that no Browser or other client will trust certificates from the
new SubCA until the new SubCA or its clients can send the browser the
signed SubCA cert.


(Note, I split my paragraph to connect your reply to the specific
sentence it applies to)


 This technical point is also crucial for after-
the-fact cross certificates.


This is a more interesting case.  Going back to the start:

"The CA with a certificate included in Mozilla's CA Certificate
Program MUST disclose this information before any such subordinate CA
is allowed to issue certificates."

This implies that the subordinate CA is not already issuing
certificates.  If a CA signs a certificate naming an existing CA as
the subject, then what?


Indeed.  The easy case would be if the existing CA was already in the
CCADB and accepted in the Mozilla root program, making the cross-cert a
mere convenience for older browsers (old Mozilla or non-Mozilla), with
an added requirement that the cross-signing CA must now do its own
audited checking and referencing of the existing CA's status and
compliance (the cost of which is hopefully included in the fee charged
for doing so, but that's their money to gain or lose).




If the Mozilla program member certifies a CA that is not a terminal CA
(e.g. not pathlen:0) and that CA then issues to another CA, how does
that certificate get into the CCADB?



The issuing CA would gain (by existing policy, I presume) an obligation
to disclose those too, and thus an incentive to contractually require
this disclosure from the SubCA.


5. SubCA Disclosure and processing of said disclosure should be done
  nearly simultaneously to minimize the problem mentioned in 3.



I believe you're suggesting simultaneously across all root programs, is
that correct? But that's not a requirement (and perhaps based on the
incorrect and incomplete understanding of point 1)



Yes, across all root programs, that is the key point, see #0.

Also, it is argued as a logical consequence of #3, #2, #0, i.e.
assume another root program enacts similar rules.  Once the SubCA cert
is disclosed on the CCADB for Mozilla and Chrome, the SubCA operator
can download the SubCA cert from the CCADB and use it to make users of
that other root program trust issued certificates before that other
root program received the disclosure.


I see zero problem with the SubCA receiving the certificate
immediately from the issuing CA, even prior to disclosure in the
CCADB.  The proposed requirement is th

Re: Grace Period for Sub-CA Disclosure

2017-04-03 Thread Jakob Bohm via dev-security-policy

On 03/04/2017 21:48, Ryan Sleevi wrote:

On Mon, Apr 3, 2017 at 3:36 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


The assumptions are:

0. All relevant root programs set similar/identical policies or they
  get incorporated into the CAB/F BRs on a future date.



This is not correct at present.



It is simply an application of the rule that policies should not fall
apart if others in the industry do the same.  It's related to the
"Golden Rule".




1. When the SubCA must be disclosed to all root programs upon the
  *earlier* of issuance + grace period OR outside facility SubCA
  receiving the certificate (no grace period).



This is not correct as proposed.



It is intended to prevent SubCAs issued to "new" parties from
meaningfully issuing trusted certificates before root programs have had
a chance to check the contents of the disclosure (CP/CPS, point in time
audit, whatever each root program requires).





2. The SubCA must not issue any certificate (other than not-yet-used
  SubCAs, OCSP certs and other such CA operation certs generated in the
  same ceremony) until Disclosure to all root programs has been
  completed.



This is correct.



3. Disclosing to an operational and not-on-holiday root program team
  (such as the CCADB most of the time) indirectly makes the SubCA
  certificate available to the SubCA operator, *technically* (not
  legally) allowing that SubCA to (improperly) start issuing before
  rule #2 is satisfied.



And given that this disclosure (in the CCADB) satisfies #2, why is this an
issue?


It is merely a step in the detailed logic argument that Ryan Sleevi
requested.

Note that no Browser or other client will trust certificates from the
new SubCA until the new SubCA or its clients can send the browser the
signed SubCA cert.  This technical point is also crucial for after-
the-fact cross certificates.






5. SubCA Disclosure and processing of said disclosure should be done
  nearly simultaneously to minimize the problem mentioned in 3.



I believe you're suggesting simultaneously across all root programs, is
that correct? But that's not a requirement (and perhaps based on the
incorrect and incomplete understanding of point 1)


Yes, across all root programs, that is the key point, see #0.

Also, it is argued as a logical consequence of #3, #2, #0, i.e.
assume another root program enacts similar rules.  Once the SubCA cert
is disclosed on the CCADB for Mozilla and Chrome, the SubCA operator
can download the SubCA cert from the CCADB and use it to make users of
that other root program trust issued certificates before that other
root program received the disclosure.

By symmetry, if Mozilla has to shut down the CCADB for maintenance for
2 days, another root program might receive and publish the disclosure
first, causing the same problem for users of Mozilla and Chrome
products.




I think the rest of the argument now falls apart.



7. Thus between performing the audited root key access ceremony to
  issue one or more SubCA certificates etc., and actually disclosing
  those SubCA certificates to the root programs, an issuing CA may have
  to wait for all the root programs to be *simultaneously* ready to
  receive the SubCA certificate, without violating the grace period as
  per assumption #1.



This is definitely not correct, or at the least, this is not Mozilla's
problem to solve.


Again it is a logical consequence of the other items, not an explicit
rule or assumption.




Thanks for clarifying your response. It's clear now we disagree because you
expect Mozilla to accommodate the entirety of all needs for all other
browser programs. That is something I fundamentally disagree with. It is
unnecessary to introduce complexity to the Mozilla process for hypothetical
third-parties. That is, to some degree, indistinguishable from concern
trolling (if insisted upon the hypothetical abstract, without evidence),
but is otherwise, not Mozilla's problem to solve the challenges for
hypothetical French and Russian programs. I think
https://wiki.mozilla.org/CA:FAQ#Can_I_use_Mozilla.27s_set_of_CA_certificates.3F
is relevant to that case as it is to the general case.



See my answer to #0 why I consider this a valid consideration.  Think
of it as "scalability".


Enjoy

Jakob


<    1   2   3   4   5   6   >