Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-18 Thread Jakob Bohm via dev-security-policy

On 2020-11-18 16:36, Ryan Sleevi wrote:

On Wed, Nov 18, 2020 at 8:19 AM Nils Amiet via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


We have carefully read your email, and believe we’ve identified the
following
important points:

1. Potential feasibility issue due to lack of path building support

2. Non-robust CRL implementations may lead to security issues

3. Avoiding financial strain is incompatible with user security.



I wasn't trying to suggest that they are fundamentally incompatible (i.e.
user security benefits by imposing financial strain), but rather, the goal
is user security, and avoiding financial strain is a "nice to have", but
not an essential, along that path.



#1 Potential feasibility issue due to lack of path building support

If a relying party does not implement proper path building, the EE
certificate may not be validated up to the root certificate. It is true
that the new solution adds some complexity to the certificate hierarchy.
More specifically, it would turn a linear hierarchy into a non-linear
one. However, we consider that complexity still manageable, especially
since there are usually few levels in the hierarchy in practice. If a CA
hierarchy has utilized the Authority Information Access extension, the
chance that any PKI library can build the path seems to be very high.
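The path building being assumed here is a graph search over all candidate issuers, not a walk up a single fixed chain. A minimal sketch, with toy certificate objects (in practice, candidate issuers might also be fetched via the AIA caIssuers URL, which is stubbed out here):

```python
from collections import deque

# Toy certificate: just a subject and issuer name; no parsing, signatures,
# or AIA fetching -- purely to illustrate the search.
class Cert:
    def __init__(self, subject, issuer):
        self.subject = subject
        self.issuer = issuer

def build_path(leaf, pool, trust_anchors):
    """Breadth-first search from the leaf toward any trust anchor,
    considering *every* candidate issuer, not just the first match."""
    queue = deque([[leaf]])
    while queue:
        path = queue.popleft()
        tip = path[-1]
        if tip.issuer in trust_anchors:
            return path
        for cand in pool:
            if cand.subject == tip.issuer and cand not in path:
                queue.append(path + [cand])
    return None

ee    = Cert("EE",   "SICA")
sica1 = Cert("SICA", "ICA")   # old branch, no longer reaches a trusted root
sica2 = Cert("SICA", "RCA2")  # replacement branch, same subject name
path = build_path(ee, [sica1, sica2], {"RCA2"})
print([c.subject for c in path])  # ['EE', 'SICA'] -- found via SICA2
```

A library that only tries the first subject-name match would dead-end on sica1; the search above is what "robust path building" buys.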



I'm not sure which PKI libraries you're using, but as with the presumption
of "robust CRL" implementations, "robust AIA" implementations are
unquestionably the minority. While it's true that Chrome, macOS, and
Windows implement AIA checking, a wide variety of popular browsers
(Firefox), operating systems (Android), and libraries (OpenSSL, cURL,
wolf/yassl, etc etc) do not.

Even if the path could not be built, this would lead to a fail-secure
situation where the EE certificate is considered invalid and it would
not raise a security issue. That would be different if the EE
certificate were erroneously considered valid.



This perspective too narrowly focuses on the outcome, without considering
the practical deployment expectations. Failing secure is a desirable
property, surely, but there's no question that a desirable property for CAs
is "Does not widely cause outages" and "Does not widely cause our support
costs to grow".



The focus seems to be on limiting the ecosystem risks associated with
non-compliant 3rd party SubCAs.  Specifically, in the (already important)
situation where SICA was operated by a 3rd party (not the root CA
operator), their proposal would enforce the change (by replacing and
revoking the root CA controlled ICA), whereas the original procedure
would rely on auditing a previously unaudited 3rd party, such as a name
constrained corporate CA.


Consider, for example, that despite RFC 5280 having a "fail close" approach
for nameConstraints (by MUST'ing the extension be critical), the CA/B Forum
chose to deviate and make it a MAY. The reason for this MAY was that the
lack of support for nameConstraints on legacy macOS (and OpenSSL) versions
meant that if a CA used nameConstraints, it would fail closed on those
systems, and such a failure made it impractical for the CA to deploy. The
choice to make such an extension non-critical was precisely because
"Everyone who implements the extension is secure, and everyone who doesn't
implement the extension works, but is not secure".
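The tradeoff Ryan describes hinges on RFC 5280's rule for unrecognized critical extensions: a client that does not implement an extension must reject the certificate if the extension is critical, but ignores it if non-critical. A toy illustration (the extension sets are made up):

```python
# Legacy client that implements some extensions but not nameConstraints.
# Extensions are modeled as (name, critical) pairs.
SUPPORTED = {"basicConstraints", "keyUsage"}

def accepts(extensions):
    for name, critical in extensions:
        if name not in SUPPORTED and critical:
            return False   # RFC 5280: unknown critical extension -> reject
    return True            # unknown non-critical extensions are ignored

# nameConstraints critical (RFC 5280's MUST): legacy client fails closed.
print(accepts([("nameConstraints", True)]))   # False
# CA/B Forum's MAY (non-critical): legacy client works, but without
# enforcing the constraint -- functional yet not secure.
print(accepts([("nameConstraints", False)]))  # True
```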

It is unfortunate-but-seemingly-necessary to take this mindset into
consideration: the choice between hard-failure for a number of clients (but
guaranteed security) or the choice between less-than-ideal security, but
practical usability, is often made for the latter, due to the socioeconomic
considerations of CAs.

Now, as it relates to the above point, the path building complexity would
inevitably cause a serious spike in support overhead, because it requires
that in order to avoid issues, every server operator would need to take
action to reconfigure their certificate chains to ensure the "correct"
paths were built to support such non-robust clients (for example, of the
OpenSSL forks, only LibreSSL supports path discovery, and only within the
past several weeks). As a practical matter, that is the same "cost" of
replacing a certificate, but with less determinism of behaviour: instead of
being able to test "am I using the wrong cert or the right one", a server
operator would need to consider the entire certification path, and we know
they'll get that wrong.



Their proposal would fall directly into the "am I using the wrong cert
or the right one" category.  EE subscribers would just have to do a
SubCA certificate replacement, just like they would if a CA-internal
cross-certificate expires and is reissued (as happened in 2019 for
GlobalSign's R1-R3 cross cert, which was totally workable for us
affected subscribers, though insufficient notices were sent out).


This is part of why I'm dismissive of the solution; not because it isn't
technically workable, but because it's 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-17 Thread Jakob Bohm via dev-security-policy

On 2020-11-16 23:17, Ryan Sleevi wrote:

On Mon, Nov 16, 2020 at 8:40 AM Nils Amiet via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Hi Nils,

This is interesting, but unfortunately, doesn’t work. The 4-certificate
hierarchy makes the very basic, but understandable, mistake of assuming
the Root CA revoking the SICA is sufficient, but this thread already
captures why it isn’t.

Unfortunately, as this is key to the proposal, it all falls apart from
there, and rather than improving security, leads to a false sense of it.
To put more explicitly: this is not a workable or secure solution.


Hello Ryan,

We agree that revoking SICA would not be sufficient and we mentioned that
in section 2.3 in the above message.

The new solution described in section 2.3, not only proposes to revoke
SICA (point (iv)) but also to revoke ICA (point (ii)) in the 4-level
hierarchy (RCA -> ICA -> SICA -> EE).

We believe this makes a substantial difference.



Sorry, I should have been clearer that even the assumptions involved in the
four-certificate design were flawed.

While I do want to acknowledge this has some interesting edges, I don't
think it works as a general solution. There are two critical assumptions
here that don't actually hold in practice, and are amply demonstrated as
causing operational issues.

The first assumption here is the existence of robust path building, as
opposed to path validation. In order for the proposed model to work, every
server needs to know about SICA2 and ICA2, so that it can properly inform
the client, so that the client can build a correct path and avoid ICA /
SICA1. Yet, as highlighted by
https://medium.com/@sleevi_/path-building-vs-path-verifying-the-chain-of-pain-9fbab861d7d6
, very few libraries actually implement path building. Assuming they
support revocation, but don't support path building, it's entirely
reasonable for them to build a path between ICA and SICA and treat the
entire chain as revoked. Major operating systems had this issue as recently
as a few years ago, and server-side, it's even more widespread and rampant.
The way to try to minimize (though not prevent) this issue is to require
servers to reconfigure the intermediates they supply to clients, in
order to try to serve a preferred path, but that has the same
operational impact as revoke-and-replace, under the existing, dominant
approach.
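This failure mode can be sketched with toy certificates (no real parsing or signature checking): a client that checks revocation but cannot backtrack takes the first issuer whose subject matches, lands on the revoked cross-sign, and rejects the whole chain even though a valid path exists.

```python
# Toy certificate with a revocation flag standing in for a CRL hit.
class Cert:
    def __init__(self, subject, issuer, revoked=False):
        self.subject, self.issuer, self.revoked = subject, issuer, revoked

def validate_first_match(leaf, pool, anchors):
    """Path *verification* only: follow the first matching issuer,
    with no backtracking to alternative paths."""
    cert = leaf
    while cert.issuer not in anchors:
        cert = next((c for c in pool if c.subject == cert.issuer), None)
        if cert is None:
            return False
        if cert.revoked:
            return False  # whole chain treated as revoked; no second try
    return not leaf.revoked

ee    = Cert("EE", "SICA")
sica1 = Cert("SICA", "RCA", revoked=True)  # encountered first
sica2 = Cert("SICA", "RCA")                # valid replacement, never tried
print(validate_first_match(ee, [sica1, sica2], {"RCA"}))  # False
# Only after the server reorders the intermediates it serves:
print(validate_first_match(ee, [sica2, sica1], {"RCA"}))  # True
```

The second call shows why every server operator would have to act, which is the operational cost being compared to revoke-and-replace.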



Correct, thus most affected EE-certificate holders would still have to
reinstall the certificate configuration for their (unchanged) EE cert,
which is still less work than requesting and getting a new certificate.


The second assumption here is with an assumption on robust CRL support,
which I also hope we can agree is hardly the case, in practice. In order
for the security model to be realized by your proposed 4-tier plan, the
clients MUST know about the revocation event of SICA/ICA, in order to
ensure that SICA cannot spoof messages from ICA. In the absence of this,
the risk is entirely borne by the client. Now, you might think it somehow
reasonable to blame the implementer here, but as the recent outage of Apple
shows, there are ample reasons for being cautious regarding revocation. As
already discussed elsewhere on this thread, although Mozilla was not
directly affected due to a secondary check (prevent CAs from signing OCSP
responses), it does not subject responder certificates to OneCRL checks,
and thus there's still complexity lurking.


Actually, for the current major browsers, the situation is better:

1. Chrome would distribute the revocation of the "ICA" cert in its
  centralized CRL mechanism.

2. Non-Chrome Microsoft browsers would actually check CRLs and/or OCSP
 for the root CA to discover that the ICA cert is revoked.  This would
 be done against the CRL/OCSP servers of the root CA, not those of ICA.

3. Firefox would distribute the revocation of ICA (or any other
  intermediary CA) through its centralized "SubCA revocation" mechanism.






As it relates to the conclusions, I think the risk surface is quite
misguided, because it overlooks these assumptions with handwaves such as
"standard PKI procedures", when in fact, they don't, and have never, worked
that way in practice. I don't think this is intentional dismissiveness, but
I think it might be an unsupported rosy outlook on how things work. The
paper dismisses the concern that "key destruction ceremonies can't be
guaranteed", but somehow assumes complete and total ubiquity for deployment
of CRLs and path verification; I think that is, at best, misguided.
Although Section 3.4 includes the past remarks about the danger of
solutions that put all the onus on application developers, this effectively
proposes a solution that continues more of the same.

As a practical matter, I think we may disagree on some of the potential
positive outcomes from the path currently adopted by a number of CAs, such
as the separation of certificate hierarchies, a better awareness and
documentation 

Re: Policy 2.7.1: MRSP Issue #186: Requirement to Disclose Self-signed Certificates

2020-11-12 Thread Jakob Bohm via dev-security-policy

On 2020-11-12 05:15, Ben Wilson wrote:

Here is an attempt to address the comments received thus far. In Github,
here is a markup:

https://github.com/BenWilson-Mozilla/pkipolicy/commit/ee19ee89c6101c3a6943956b91574826e34c4932

This sentence would be deleted: "These requirements include all
cross-certificates which chain to a certificate that is included in
Mozilla’s CA Certificate Program."

And the following would be added:

"A certificate is deemed to directly or transitively chain to a CA
certificate included in Mozilla’s CA Certificate Program if:

(1)   the certificate’s Issuer Distinguished Name matches (according to the
name-matching algorithm specified in RFC 5280, section 7.1) the Subject
Distinguished Name in a CA certificate or intermediate certificate that is
in scope according to section 1.1 of this Policy, and

(2)   the certificate is signed with a Private Key whose corresponding
Public Key is encoded in the SubjectPublicKeyInfo of that CA certificate or
intermediate certificate.
Thus, these requirements also apply to so-called reissued/doppelganger CA
certificates (roots and intermediates) and to cross-certificates."
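The two-part test in Ben's proposal can be sketched as follows. This is an illustrative approximation only: the DN matching crudely folds case and whitespace rather than fully implementing RFC 5280 section 7.1, and signature verification is stood in for by comparing fingerprints of the signing key against the CA's SubjectPublicKeyInfo.

```python
import hashlib

def dn_match(a, b):
    """Rough stand-in for RFC 5280 section 7.1 name matching."""
    norm = lambda dn: [" ".join(rdn.split()).lower() for rdn in dn.split(",")]
    return norm(a) == norm(b)

def spki_fp(key_bytes):
    return hashlib.sha256(key_bytes).hexdigest()

def chains_to_program(cert, program_cas):
    """(1) issuer DN matches an in-scope subject DN, and
       (2) the cert was signed with the key pair whose public key is
           encoded in that CA certificate's SPKI."""
    return any(
        dn_match(cert["issuer"], ca["subject"])
        and spki_fp(cert["signing_key"]) == spki_fp(ca["spki"])
        for ca in program_cas
    )

ca = {"subject": "CN=Example Root, O=Example", "spki": b"root-key"}
# A reissued/doppelganger cert: cosmetically different DN encoding,
# same issuing key -- caught by the two-part test.
doppelganger = {"issuer": "cn=example root,  o=example",
                "signing_key": b"root-key"}
print(chains_to_program(doppelganger, [ca]))  # True: disclosure required
```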

I think it is important not to lose sight of the main reason for this
proposed change-- there has been confusion about whether re-issued root CA
certificates need to be disclosed in the CCADB.

I look forward to your additional comments and suggestions.

Thank you,

Ben


On Mon, Nov 2, 2020 at 11:14 AM Corey Bonnell via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


As an alternate proposal, I suggest replacing the third paragraph of
section 5.3, which currently reads:

"These requirements include all cross-certificates which chain to a
certificate that is included in Mozilla’s CA Certificate Program."

with:

"A certificate is considered to directly or transitively chain to a
certificate included in Mozilla’s CA Certificate Program if there is a CA
or Intermediate certificate in scope (as defined in section 1.1 of this
Policy) where both of the following are true:
1)  The certificate’s Issuer Distinguished Name matches (according to
the name-matching algorithm specified in RFC 5280, section 7.1) the Subject
Distinguished Name of the certificate in scope, and
2)  The certificate is signed with a Private Key whose corresponding
Public Key is encoded in the SubjectPublicKeyInfo of the certificate in
scope."

This proposal better defines the meaning of chaining to certificates
included in the Mozilla CA program and covers the various scenarios that
have caused issues historically concerning cross-certificates and
self-signed certificates.

Thanks,
Corey

On Wednesday, October 28, 2020 at 8:25:50 PM UTC-4, Ben Wilson wrote:

Issue #186 in Github
deals with the disclosure of CA certificates that directly or
transitively chain up to an already-trusted, Mozilla-included root. A
common scenario for the situation discussed in Issue #186 is when a CA
creates a second (or third or fourth) root certificate with the same key
pair as the root that is already in the Mozilla Root Store. This problem
exists at the intermediate-CA-certificate level, too, where a
self-signed intermediate/subordinate CA certificate is created and not
reported.

Public disclosure of such certificates is already required by section
5.3 of the MRSP, which reads, "All certificates that are capable of
being used to issue new certificates, and which directly or transitively
chain to a certificate included in Mozilla’s CA Certificate Program,
MUST be operated in accordance with this policy and MUST either be
technically constrained or be publicly disclosed and audited."

There have been several instances where a CA operator has not disclosed
a CA certificate under the erroneous belief that because it is
self-signed it cannot be trusted in a certificate chain beneath the
already-trusted, Mozilla-included CA. This erroneous assumption is
further discussed in Issue #186.

The third paragraph of MRSP section 5.3 currently reads, " These
requirements include all cross-certificates which chain to a certificate
that is included in Mozilla’s CA Certificate Program."

I recommend that we change that paragraph to read as follows:

"These requirements include all cross-certificates *and self-signed
certificates (e.g. "Issuer" DN is equivalent to "Subject" DN and public
key is signed by the private key) that* chain to a CA certificate that
is included in Mozilla’s CA Certificate Program*, and CAs must disclose
such CA certificates in the CCADB*.

I welcome your recommendations on how we can make this language even
more clear.



How would that phrasing cover doppelgangers of intermediary SubCAs under 
an included root CA?




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.

Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2020-11-06 Thread Jakob Bohm via dev-security-policy

On 2020-11-06 18:31, Jeff Ward wrote:
> ...


Audit reports, whether for WebTrust, financial statements, or other forms of 
engagement reports providing assurance to users of the information, do not 
include specific audit team members’ names.  Simply stated, this desire to 
include individual auditor’s qualifications in a public report is not 
consistent with any other compliance reporting methods or reporting 
requirements for CAs, or any other auditee for that matter.



Most paper-based auditing schemes for company financial records (the
historic work area of auditors) include, on each report, the personal
signature and corresponding printed name of the responsible auditor,
optionally with an abbreviation of their national qualification level
(such as an abbreviation of "Examplarian State Authorized Public
Accountant").  From there, it would be possible for interested parties
to check that a physical person by that name is/was indeed on the roster
of such authorized individuals, but not if/why the State of Exemplar
decided to so include that person.  Furthermore, the auditor person
and/or their company may have voluntarily published further details of
their qualifications (in brochures, on websites etc.) and may have
applicable original degree documents framed and hanging on their walls
for all concerned to readily inspect.

In terms of GDPR, the state would have published rules for how to get
added/removed from the public roster, and each auditor would have the
opportunity, at all times, to retract their self-descriptions and/or
remove some or all of their framed documents from their public office.

A modern equivalent procedure for CA audits could be:

1. Each Auditor has their name and a unique public nickname registered
in a non-public roster at either CPA Canada or the relevant European
counterpart.  This is done to fulfill the contractual obligation of
their professional oath of responsibility.  The roster organization
might optionally provide alias e-mails based on the nicks.

2. Each non-public roster operates a public online service which will
confirm or deny the presence of a name/nick pair, with appropriate
safeguards against attempts to extract the roster by systematic polling
of made up names.

Unless otherwise stated in public by Mozilla (such as the statements
made a few years ago about certain auditors from E), any auditor on
these rosters shall be presumed sufficiently qualified to sign audits
used by Mozilla.

3. Each auditor person signs his public audit letters with his name,
nick, a reference to the roster-keeping organization and any other
honorific titles he/she may legitimately choose to use.  He does this to
satisfy his contractual obligation to provide the CA with that letter.
Any official physical copies will have his physical signature above his
name and may also carry a physical stamp or seal of him or his
organization, as dictated by local legal traditions.

4. Each such public audit letter is submitted to a public repository
operated by the roster-keeping organization, using a procedure that
verifies that the letter was submitted exactly as given, by that named
auditor from their roster.  This is done to satisfy the contractual
obligation of the auditor towards the CA in accordance with a
contractual reference to terms of the roster-keeping organization.

5. The roster-keeping organization publishes the public audit letters in
both a traditional paper journal deposited at major public archives and
as an online, readily accessible web site with a Merkle hash tree
providing public verification that each letter was in the public record
on or before the stated inclusion date.  Should hash algorithms fall to
future cracks, the roster-keeping organization retains the ability to
regenerate the signatures using new algorithms, based on its offline
archive of originals, including a signed public statement of said
regeneration.  This publication of records that include the identity of
both the actual auditors as well as relevant principal CA Officers is
done to further satisfy the contractual obligations in #4.  As is common
in paper-based book-keeping, retractions can be filed as separate
letters of correction, and the retracted documents may be made invisible
to the public without invalidating the hash-tree.
  For public access, each public letter is given a unique permanent URL
to which the CA may publicly refer, including in the CADB and on its
website.

6. Each auditor shall submit for publication by the roster organization
a self-authored but roster verified statement of qualifications, usually
just a few paragraphs.  Each such statement similarly gets a permanent
URL, but remains visible only until superseded or retracted by either
the auditor or roster-keeper.  This publication is done as part of the
auditor's contractual obligations to the roster-keeping organization,
and the ability to retract provides the GDPR right of deletion of any
included details.  Links to the current 

Re: Policy 2.7.1: MRSP Issue #153: Cradle-to-Grave Contiguous Audits

2020-11-06 Thread Jakob Bohm via dev-security-policy

On 2020-11-05 22:43, Tim Hollebeek wrote:

So, I'd like to drill down a bit more into one of the cases you discussed.
Let's assume the following:

1. The CAO [*] may or may not have requested removal of the CAC, but removal
has not been completed.  The CAC is still trusted by at least one public
root program.

2. The CAO has destroyed the CAK for that CAC.

The question we've been discussing internally is whether destruction alone
should be sufficient to get you out of audits, and we're very skeptical
that's desirable.

The problem is that destruction of the CAK does not prevent issuance by
subCAs, so issuance is still possible.  There is also the potential
possibility of undisclosed subCAs or cross relationships to consider,
especially since some of these cases are likely to be shutdown scenarios for
legacy, poorly managed hierarchies.  Removal may be occurring *precisely*
because there are doubts about the history, provenance, or scope of previous
operations and audits.

We're basically questioning whether there are any scenarios where allowing
someone to escape audits just because they destroyed the key is likely to
lead to good outcomes as opposed to bad ones.  If there aren't reasonable
scenarios where it is necessary to be able to remove CACs from audit scope
through key destruction while they are still trusted by Mozilla, it's
probably best to require audits as long as the CACs are in scope for
Mozilla.

Alternatively, if there really are cases where this needs to be done, it
would be wise to craft language that limits this exception to those
scenarios.



I believe that destruction of the Root CA Key should only end audit
requirements for the corresponding Root CA itself, not for any of its
still trusted SubCAs.

One plausible (but hypothetical) sequence of events is this:

1. Begin Root ceremony with Auditors present.

1.1 Create Root CA Key pair
1.2 Sign Root CA SelfCert
1.3 Create 5 SubCA Key pairs
1.4 Sign 5 SubCA pre-certificates
1.5 Request CT Log entries for the 5 SubCA pre-certificates
1.6 Sign 5 SubCA certificates with embedded CTs
1.7 Sign, but do not publish a set of post-dated CRLs for various 
contingencies
1.8 Sign, but do not publish a set of post-dated revocation OCSP 
responses for those contingencies
1.9 Sign, but do not yet publish, a set of post-dated non-revocation 
OCSP responses confirming that the SubCAs have not been revoked on each 
date during their validity.

1.10 Destroy Root CA Key pair.

2. Initiate audited storage of the unreleased CRL and OCSP signatures.

3. End Root ceremony, end root CAC audit period.

4. Release the public audit report of this ceremony; this ends the
ordinary audits required for the Root CA Cert.  However, audit reports
confirming that only the correct contingency and continuation OCSP/CRL
signatures were released from storage remain technically needed.


5. Maintain revocation servers that publish the prepared CRLs and OCSP
answers according to their embedded dates.  Feed their publication queue
from audited batch releases from the storage.
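The publication queue in step 5 can be sketched as a store of pre-signed, post-dated blobs that an online service merely releases on schedule; nothing is ever signed online, so a server compromise cannot mint new statements. Dates and payload names below are made up:

```python
from datetime import date

# Everything here was signed during the ceremony, before key destruction.
presigned = [
    {"not_before": date(2021, 1, 1), "blob": "ocsp-good-jan"},
    {"not_before": date(2021, 2, 1), "blob": "ocsp-good-feb"},
    {"not_before": date(2021, 3, 1), "blob": "crl-contingency-mar"},
]

def due_for_release(store, today):
    """Release each pre-signed statement once its embedded date arrives;
    future-dated material stays in audited storage."""
    return [e["blob"] for e in store if e["not_before"] <= today]

print(due_for_release(presigned, date(2021, 2, 15)))
# ['ocsp-good-jan', 'ocsp-good-feb']
```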

6. Operate the 5 SubCAs under appropriate security and audit schemes 
detailed in CP/CPS document pairs.


7. Apply for inclusion in the Mozilla root program.


In the above hypothetical scenario, there would be no way for the CAO
to misissue new SubCAs or otherwise misuse the root CA Key Pair, but
the usual risks associated with the 5 SubCA operations would remain.


Also the CAO would have no way to increase the set of top level SubCAs
or issue revocation statements in any yet-to-be-invented data formats,
even if doing so would be legitimate or even required by the root
programs.

Thus the hypothetical scenario could land the CAO in an impossible
situation if root program requirements or common CA protocols change
and those changes require even one additional signature by the
root CA Key Pair.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7.1: MRSP Issue #186: Requirement to Disclose Self-signed Certificates

2020-11-02 Thread Jakob Bohm via dev-security-policy

On 2020-10-30 18:45, Ryan Sleevi wrote:

On Fri, Oct 30, 2020 at 12:38 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On 2020-10-30 16:29, Rob Stradling wrote:

Perhaps add: "And also include any other certificates sharing the same
private/public key pairs as certificates already included in the
requirements."  (this covers the situation you mentioned where a
self-signed certificate shares the key pair of a certificate that chains
to an included root).




Already rephrased to the following in my Friday post, as you actually 
quote below.


Perhaps add: "And also include any other certificates sharing the same
private/public key pairs as CA certificates already included in the
requirements."  (this covers the situation Rob mentioned where a
self-signed certificate shares the key pair of a certificate that chains
to an included root).


Jakob,

I agree that that would cover that situation, but your proposed language

goes way, way too far.


Any private CA could cross-certify a publicly-trusted root CA.  How

would the publicly-trusted CA Operator discover such a cross-certificate?
Why would such a cross-certificate be of interest to Mozilla anyway?  Would
it really be fair for non-disclosure of such a cross-certificate to be
considered a policy violation?



I agree with Rob that, while the intent is not inherently problematic, I
think the language proposed by Jakob is problematic, and might not be
desirable.



See above (and below) rephrasing to limit to reusing the private key 
from a CA certificate.





How would my wording include that converse situation (a CA not subject
to the Mozilla policy using their own private key to cross sign a CA
subject to the Mozilla policy)?

I do notice though that my wording accidentally included the case where
the private key of an end-entity cert is used as the key of a private
CA, because I wrote "as certificates" instead of "as CA certificates".



Because "as certificates already included in the requirements" is ambiguous
when coupled with "any other certificates". Rob's example here, of a
privately-signed cross-certificate *is* an "any other certificate", and the
CA who was cross-signed is a CA "already included in the requirements"

I think this intent to restate existing policy falls in the normal trap of
"trying to say the same thing two different ways in policy results in two
different interpretations / two different policies"

Taking a step back, this is the general problem with "Are CA
(Organizations) subject to audits/requirements, are CA Certificates, or are
private keys", and that's seen an incredible amount of useful discussion
here on m.d.s.p. that we don't and shouldn't relitigate here. I believe
your intent is "The CA (Organization) participating in the Mozilla Root
Store shall disclose every Certificate that shares a CA Key Pair with a CA
Certificate subject to these requirements", and that lands squarely on this
complex topic.



This is precisely what I am trying to state in a concise manner, to
avoid overbloating policy with wordy sentences like the one you just
used.


A different way to achieve your goal, and to slightly tweak Ben's proposal
(since it appears many CAs do not understand how RFC 5280 is specified) is
to take a slight reword:

"""
These requirements include all cross-certificates that chain to a CA
certificate that is included in Mozilla’s CA Certificate Program, as well
as all certificates (e.g. including self-signed certificates and non-CA
certificates) issued by the CA that share the same CA Key Pair. CAs must
disclose such certificates in the CCADB.
"""



The words "issued by the CA" are problematic.  I realize that you are 
trying to limit the scope to certificates generated by the 
CA-organization, but as written it could be misconstrued as 
"certificates issued by the CA certificate that share the same CA Key Pair".


Proposed better wording of your text:

These requirements include all cross-certificates that chain to a CA
certificate that is included in Mozilla's CA Certificate Program, as
well as all certificates (e.g. including self-signed certificates and
non-CA certificates) created by the CA organization that share the same
CA Key Pair as any CA certificate or cross-certificate that chains to an
included CA certificate.

The intent of my final words is to also cover reuse of keys belonging
to SubCA certificates.
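The rule being converged on here can be sketched as a key-fingerprint check: starting from the CA certificates already in scope, flag every other certificate that reuses one of those CA key pairs. The certificate records and fingerprint scheme below are illustrative, not any real CCADB format:

```python
import hashlib

fp = lambda spki: hashlib.sha256(spki).hexdigest()

def must_disclose(all_certs, in_scope_cas):
    """Certificates sharing a CA Key Pair (same SPKI) with an in-scope
    CA certificate: doppelganger roots, cross-certs, even non-CA certs."""
    scoped_keys = {fp(c["spki"]) for c in in_scope_cas}
    return [c["name"] for c in all_certs
            if fp(c["spki"]) in scoped_keys and c not in in_scope_cas]

root = {"name": "Root R1", "spki": b"key-A"}
certs = [
    root,
    {"name": "Root R1 (reissued 2020)", "spki": b"key-A"},  # doppelganger
    {"name": "Cross-signed R1",         "spki": b"key-A"},
    {"name": "Unrelated Root",          "spki": b"key-B"},
]
print(must_disclose(certs, [root]))
# ['Root R1 (reissued 2020)', 'Cross-signed R1']
```

Note this deliberately keys on the CA Key Pair, not on who issued the certificate, which is the loophole in "issued by the CA" wording discussed above.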


However, this might be easier with a GitHub PR, as I think Wayne used to
do, to try to make sure the language is precise in context. It tries to
close the loophole Rob is pointing out about "who issued", while also
trying to address what you seem to be concerned about (re: key reuse). To
Ben's original challenge, it tries to avoid "chain to", since CAs
apparently are confus

Re: Policy 2.7.1: MRSP Issue #186: Requirement to Disclose Self-signed Certificates

2020-10-30 Thread Jakob Bohm via dev-security-policy

On 2020-10-30 16:29, Rob Stradling wrote:

Perhaps add: "And also include any other certificates sharing the same
private/public key pairs as certificates already included in the
requirements."  (this covers the situation you mentioned where a
self-signed certificate shares the key pair of a certificate that chains
to an included root).


Jakob,

I agree that that would cover that situation, but your proposed language goes 
way, way too far.

Any private CA could cross-certify a publicly-trusted root CA.  How would the 
publicly-trusted CA Operator discover such a cross-certificate?  Why would such 
a cross-certificate be of interest to Mozilla anyway?  Would it really be fair 
for non-disclosure of such a cross-certificate to be considered a policy 
violation?


How would my wording include that converse situation (a CA not subject
to the Mozilla policy using their own private key to cross sign a CA
subject to the Mozilla policy)?

I do notice though that my wording accidentally included the case where
the private key of an end-entity cert is used as the key of a private
CA, because I wrote "as certificates" instead of "as CA certificates".




____________
From: Jakob Bohm via dev-security-policy 
Sent: 29 October 2020 14:57
To: mozilla-dev-security-pol...@lists.mozilla.org 

Subject: Re: Policy 2.7.1: MRSP Issue #186: Requirement to Disclose Self-signed 
Certificates


On 2020-10-29 01:25, Ben Wilson wrote:

Issue #186 in Github <https://github.com/mozilla/pkipolicy/issues/186>
deals with the disclosure of CA certificates that directly or transitively
chain up to an already-trusted, Mozilla-included root. A common scenario
for the situation discussed in Issue #186 is when a CA creates a second (or
third or fourth) root certificate with the same key pair as the root that
is already in the Mozilla Root Store. This problem exists at the
intermediate-CA-certificate level, too, where a self-signed
intermediate/subordinate CA certificate is created and not reported.

Public disclosure of such certificates is already required by section 5.3
of the MRSP, which reads, "All certificates that are capable of being used
to issue new certificates, and which directly or transitively chain to a
certificate included in Mozilla’s CA Certificate Program, MUST be operated
in accordance with this policy and MUST either be technically constrained
or be publicly disclosed and audited."

There have been several instances where a CA operator has not disclosed a
CA certificate under the erroneous belief that because it is self-signed it
cannot be trusted in a certificate chain beneath the already-trusted,
Mozilla-included CA. This erroneous assumption is further discussed in Issue
#186 <https://github.com/mozilla/pkipolicy/issues/186>.

The third paragraph of MRSP section 5.3 currently reads, " These
requirements include all cross-certificates which chain to a certificate
that is included in Mozilla’s CA Certificate Program."

I recommend that we change that paragraph to read as follows:

"These requirements include all cross-certificates *and self-signed
certificates (e.g. "Issuer" DN is equivalent to "Subject" DN and public key
is signed by the private key) that* chain to a CA certificate that is
included in Mozilla’s CA Certificate Program*, and CAs must disclose such
CA certificates in the CCADB*.

I welcome your recommendations on how we can make this language even more
clear.



Perhaps add: "And also include any other certificates sharing the same
private/public key pairs as certificates already included in the
requirements."  (this covers the situation you mentioned where a
self-signed certificate shares the key pair of a certificate that chains
to an included root).




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: TLS certificates for ECIES keys

2020-10-30 Thread Jakob Bohm via dev-security-policy

On 2020-10-30 01:50, Matthew Hardeman wrote:

On Thu, Oct 29, 2020 at 6:30 PM Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

The way I read Jacob's description of the process, the subscriber is
"misusing" the certificate because they're not going to present it to TLS
clients to validate the identity of a TLS server, but instead they (the
subscriber) presents the certificate to Apple (and other OS vendors?) when
they know (or should reasonably be expected to know) that the certificate is
not going to be used for TLS server identity verification -- specifically,
it's instead going to be presented to Prio clients for use in some sort of
odd processor identity parallel-verification dance.



To my knowledge, caching/storing a leaf certificate isn't misuse.  While
they appear to be presenting it in some manner other than via a TLS
session, I don't believe there's any prohibition against such a thing.
Would it cure the concern if they also actually ran a TLS server that does
effectively nothing at the host name presented in the SAN dnsName?




Certainly, whatever's going on with the certificate, it most definitely
*isn't* TLS, and so absent an EKU that accurately describes that other
behaviour,
I can't see how it doesn't count as "misuse", and since the subscriber has
presented the certificate for that purpose, it seems reasonable to describe
it as "misuse by the subscriber".



Not all distribution of a leaf certificate is "use", let alone "misuse".
There are applications which pin certificates rather than keys.  Is that
misuse?




Although misuse is problematic, the concerns around agility are probably
more concerning, IMO.  There's already more than enough examples where
someone has done something "clever" with the WebPKI, only to have it come
back and bite everyone *else* in the arse down the track -- we don't need
to add another candidate at this stage of the game.  On that basis alone, I
think it's worthwhile to try and squash this thing before it gets any more
traction.



My question is by what rule do you squash this thing that doesn't also
cover a future similar use by a third-party relying party that makes
"additional" use of some subscriber's certificate.




Given that Apple is issuing another certificate for each processor anyway,
I don't understand why they don't just embed the processor's SPKI directly in
that certificate, rather than a hash of the SPKI.  P-256 public keys (in
compressed form) are only one octet longer than a SHA-256 hash.  But
presumably there's a good reason for not doing that, and this isn't the
relevant forum for discussing such things anyway.
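The size comparison above can be checked directly: a compressed P-256 point is a 1-byte prefix plus the 32-byte x-coordinate, versus SHA-256's 32-byte digest. A small sketch, assuming the Python `cryptography` library:

```python
import hashlib

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())  # P-256

# Compressed EC point: 0x02/0x03 prefix byte + 32-byte x-coordinate = 33 bytes.
compressed = key.public_key().public_bytes(
    serialization.Encoding.X962,
    serialization.PublicFormat.CompressedPoint,
)

# SHA-256 over the DER SubjectPublicKeyInfo = 32 bytes.
spki = key.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
digest = hashlib.sha256(spki).digest()

print(len(compressed), len(digest))  # 33 32
```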



Presumably this is so that the data processors can choose a key for the
encryption of their data shards and bind that to a DNS name demonstrated to
be under the data processor's control via a standard CA issuance process
without abstracting the whole thing away to certificates controlled by
Apple and/or Google.  To demonstrate that the fractional data shard
holder's domain was externally validated by a party that isn't Apple or
Google.

People scrape and analyze other parties' leaf certificates all the time.
What those third parties do with those certificates (if anything) is up to
those third parties.

If a third party can do things which cause a subscriber's certificate to
be revocable for misuse without having derived or acquired the private key,
I hesitate to call that ridiculous, but it is probably unsustainable.
Extending upon that, if the mere fact that the subscriber and the author of
the relying party validation agent are part of the same corporate hierarchy
changes the decision for the same set of circumstances, that's suspect.



Cryptographically, I think the concern is this:

In this scheme, the authenticated server B will use the corresponding
private key in a mathematically complex protocol that is neither TLS nor
CMS.  It is conceivable that said protocol may have a weakness that
allows a clever opponent M to exchange traffic with B in order to 
discover B's private key.


Thus using a WebPKI "Server Authentication" certificate to bind a public
key used by party A to identify party B in the "prio" protocol creates a
potential risk of a family of certificates with easily compromised keys.

Thus it makes sense for the involved CAs (such as Let's Encrypt) to 
issue these certificates with a unique EKU other than the generic 
"Server Authentication" traditionally associated with TLS.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: TLS certificates for ECIES keys

2020-10-29 Thread Jakob Bohm via dev-security-policy

On 2020-10-29 19:06, Jacob Hoffman-Andrews wrote:

Hi all,

ISRG is working with Apple and Google to deploy Prio, a "privacy-preserving
system for the collection of aggregate statistics:"
https://crypto.stanford.edu/prio/. Mozilla has previously demonstrated Prio
for use with telemetry data:
https://hacks.mozilla.org/2018/10/testing-privacy-preserving-telemetry-with-prio/
and
https://blog.mozilla.org/security/2019/06/06/next-steps-in-privacy-preserving-telemetry-with-prio.
Part of the plan involves using Web PKI certificates in an unusual way, so
I wanted to get feedback from the community and root programs.

In Prio, clients (mobile devices in this case) generate "shares" of data to
be sent to non-colluding processors. Those processors calculate aggregate
statistics without access to the underlying data, and their output is
combined to determine the overall statistic - for instance, the number of
users who clicked a particular button. The goal is that no party learns the
information for any individual user.

As part of this particular deployment, clients encrypt their shares to each
processor (offline), and then send the resulting encrypted "packets" of
share data via Apple and Google servers to the processors (of which ISRG
would be one). The encryption scheme here is ECIES (
https://en.wikipedia.org/wiki/Integrated_Encryption_Scheme).

The processors need some way to communicate their public keys to clients.
The current plan is this: A processor chooses a unique, public domain name
to identify its public key, and proves control of that name to a Web PKI
CA. The processor requests issuance of a TLS certificate with
SubjectPublicKeyInfo set to the P-256 public key clients will use to
encrypt data share packets to that processor. Note that this certificate
will never actually be used for TLS.

The processor sends the resulting TLS certificate to Apple. Apple signs a
second, non-TLS certificate from a semi-private Apple root. This root is
trusted by all Apple devices but is not in other root programs.
Certificates chaining to this root are accepted for submission by most CT
logs. This non-TLS certificate has a CN field containing text that is not a
domain name (i.e. it has spaces). It has no EKUs, and has a special-purpose
extension with an Apple OID, whose value is the hash of the public key from
the TLS certificate (i.e. the public key that will be used by clients to
encrypt data share packets). This certificate is submitted to CT and uses
the precertificate flow to embed SCTs.

The Prio client software on the devices receives both the TLS and non-TLS
certificate from their OS vendor, and validates both, checking OCSP and CT
requirements, and checking that the public key hash in the non-TLS
certificate's special purpose extension matches the SubjectPublicKeyInfo in
the TLS certificate. If validation passes, the client software will use
that public key to encrypt data share packets.
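The matching step described above can be sketched in a few lines. The hash value would be read from the special-purpose extension of the non-TLS certificate; the use of SHA-256 is an assumption here, since the description only says "hash of the public key":

```python
import hashlib

from cryptography import x509
from cryptography.hazmat.primitives import serialization

def spki_hash_matches(tls_cert: x509.Certificate, ext_hash: bytes) -> bool:
    # ext_hash is the value carried in the non-TLS certificate's
    # special-purpose extension (Apple OID not public in this thread).
    spki = tls_cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    # Client proceeds to use the key only if the hashes agree.
    return ext_hash == hashlib.sha256(spki).digest()
```

In the deployed system the client would additionally validate both chains, OCSP, and CT before performing this comparison.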

The main issue I see is that the processor (a Subscriber) is using the TLS
certificate for a purpose not indicated by that certificate's EKUs. RFC
5280 says (https://tools.ietf.org/html/rfc5280#section-4.2.1.12):



Maybe allocate your own OID (within ISRG's own OID space) for this
purpose and put it in the EKU.  Something like

iso.org.dod.internet.private.enterprise(1.3.6.1.4.1).isrg(44947).isrg-divisionX(???).prio-processor(1)
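As a sketch of that suggestion using the Python `cryptography` library — the arc components after isrg(44947) are placeholders, exactly as in the dotted name above:

```python
from cryptography import x509

# Hypothetical OID under ISRG's enterprise arc (1.3.6.1.4.1.44947);
# the "999.1" tail is a placeholder the thread leaves unassigned.
PRIO_PROCESSOR_EKU = x509.ObjectIdentifier("1.3.6.1.4.1.44947.999.1")

# A dedicated EKU, attached instead of id-kp-serverAuth at issuance:
eku = x509.ExtendedKeyUsage([PRIO_PROCESSOR_EKU])
# builder = builder.add_extension(eku, critical=False)
```

With such an EKU, relying parties and CT policies could distinguish these key-distribution certificates from genuine TLS server certificates.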


4.2.1.12.  Extended Key Usage

   If the extension is present, then the certificate MUST only be used
   for one of the purposes indicated.


The BRs say (
https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-1.7.3.pdf):


4.9.1.1 Reasons for Revoking a Subscriber Certificate

The CA SHOULD revoke a certificate within 24 hours and MUST revoke a
Certificate within 5 days if one or more of the following occurs:

2. The CA obtains evidence that the Certificate was misused;


"Misused" is not defined here but I think a reasonable interpretation would
be that a Subscriber would trigger the revocation requirement by violating
the EKU MUST from RFC 5280.

I also have a concern about ecosystem impact. The Web PKI and Certificate
Transparency ecosystems have been gradually narrowing their scope - for
instance by requiring single-purpose TLS issuance hierarchies and planning
to restrict CT logs to accepting only certificates with the TLS EKU. New
key distribution systems will find it tempting to reuse the Web PKI by
assigning additional semantics to certificates with the TLS EKU, but this
may make the Web PKI less agile.

I've discussed the plan with Apple, and they're fully aware this is an
unusual and non-ideal use of the Web PKI, and hope to propose a timeline
for a better system soon. One of the constraints operating here is that
Apple has already shipped software implementing the system described above,
and plans to use it in addressing our current, urgent public health crisis.
As far as I know, no publicly trusted Web PKI certificates are currently in
use for this purpose.

So, mdsp folks and 

Re: Policy 2.7.1: MRSP Issue #186: Requirement to Disclose Self-signed Certificates

2020-10-29 Thread Jakob Bohm via dev-security-policy

On 2020-10-29 01:25, Ben Wilson wrote:

Issue #186 in Github 
deals with the disclosure of CA certificates that directly or transitively
chain up to an already-trusted, Mozilla-included root. A common scenario
for the situation discussed in Issue #186 is when a CA creates a second (or
third or fourth) root certificate with the same key pair as the root that
is already in the Mozilla Root Store. This problem exists at the
intermediate-CA-certificate level, too, where a self-signed
intermediate/subordinate CA certificate is created and not reported.

Public disclosure of such certificates is already required by section 5.3
of the MRSP, which reads, "All certificates that are capable of being used
to issue new certificates, and which directly or transitively chain to a
certificate included in Mozilla’s CA Certificate Program, MUST be operated
in accordance with this policy and MUST either be technically constrained
or be publicly disclosed and audited."

There have been several instances where a CA operator has not disclosed a
CA certificate under the erroneous belief that because it is self-signed it
cannot be trusted in a certificate chain beneath the already-trusted,
Mozilla-included CA. This erroneous assumption is further discussed in Issue
#186 .

The third paragraph of MRSP section 5.3 currently reads, " These
requirements include all cross-certificates which chain to a certificate
that is included in Mozilla’s CA Certificate Program."

I recommend that we change that paragraph to read as follows:

"These requirements include all cross-certificates *and self-signed
certificates (e.g. "Issuer" DN is equivalent to "Subject" DN and public key
is signed by the private key) that* chain to a CA certificate that is
included in Mozilla’s CA Certificate Program*, and CAs must disclose such
CA certificates in the CCADB*.

I welcome your recommendations on how we can make this language even more
clear.



Perhaps add: "And also include any other certificates sharing the same
private/public key pairs as certificates already included in the 
requirements."  (this covers the situation you mentioned where a 
self-signed certificate shares the key pair of a certificate that chains 
to an included root).



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: EJBCA performs incorrect calculation of validities

2020-10-28 Thread Jakob Bohm via dev-security-policy

On 2020-10-28 20:54, Ryan Sleevi wrote:

On Wed, Oct 28, 2020 at 10:50 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


This aspect of RFC5280 section 4.1.2.5 is quite unusual in computing,
where the ends of intervals are typically encoded such that subtracting
the interval ends (as pure numbers) yields the interval length.




>= notBefore, <= notAfter is as classic as "< size - 1" in
0-indexed for-loops (i.e. a size of 1 indicates there's one element - at
index 0), or "last - first" needs to add +1 if counting elements in a
pointer range.



0-indexed for-loops typically use "< size" or "<= size - 1", 
illustrating how hard it is to get such cases right.





As a data point, the reference CA code in the OpenSSL library,
version 1.1.0



Generates a whole bunch of completely invalid badness that is completely
non-compliant, and is hardly a "reference" CA (c.f. the long time to switch
to utf8only for DNs as just one of the basic examples)


The "ca" "app" inside the openssl command line tool is officially 
defined as being "reference code only", not intended as an actual 
production CA implementation, although many people have found it

useful as a component of low volume production CA implementations.

This "reference" or "sample" code and its sample configuration contains
comments as to what choices are compatible with older Mozilla code,
including that using UTF-8 strings in DNs would cause older Mozilla code
to fail.



So this seems another detail where the old IETF working group made
things unnecessarily complicated for everybody.



https://www.youtube.com/watch?v=HMqZ2PPOLik

https://tools.ietf.org/rfcdiff?url2=draft-ietf-pkix-new-part1-01.txt dated
2000. This is 2020.

Where does that change come from?
https://www.itu.int/rec/T-REC-X.509-23-S/en (aka ITU's X.509), which in
2000, stated "TA indicates the period of validity of the certificate, and
consists of two dates, the first and last on which the certificate is
valid."

Does that mean this was a change in 2000? Nope.  It's _always been there_,
as far back as ITU-T's X.509 (11/88) -
https://www.itu.int/rec/T-REC-X.509-198811-S/en



ITU's X.509 (10/2012) doesn't seem to contain the sentence quoted and
seems to be completely silent as to the inclusive/exclusive
interpretation of the ends of the validity interval.  And this doesn't 
seem to change in corrigendum 2 from 04/2016.



It helps to do research before casting aspersions or proposing to
reinterpret meanings that are older than some members here.



The overarching problem is that some people love to do language
lawyering with the exact meanings of specifications, and strictly
enforcing every minute detail, such as the fact that RFC5280 from 2008
insists that these two ASN.1 fields should be interpreted as
"inclusive", while 398 days and 1 second is technically more than 398
days.
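The off-by-one at issue can be shown in a few lines of date arithmetic. This sketch only illustrates the inclusive reading of RFC 5280's validity field, not any particular CA implementation:

```python
from datetime import datetime, timedelta

not_before = datetime(2020, 10, 1, 0, 0, 0)
limit = timedelta(days=398)

# Exclusive reading (the common off-by-one): one second too long.
naive_not_after = not_before + limit
# Inclusive reading per RFC 5280: notAfter itself counts as valid,
# so a compliant notAfter must be one second earlier.
compliant_not_after = not_before + limit - timedelta(seconds=1)

def inclusive_validity(nb: datetime, na: datetime) -> timedelta:
    # Both endpoints count, hence the extra second.
    return na - nb + timedelta(seconds=1)

print(inclusive_validity(not_before, naive_not_after))      # 398 days, 0:00:01
print(inclusive_validity(not_before, compliant_not_after))  # 398 days, 0:00:00
```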


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: EJBCA performs incorrect calculation of validities

2020-10-28 Thread Jakob Bohm via dev-security-policy

On 2020-10-28 11:55, Mike Kushner wrote:

Hi all,

We were alerted to the fact that EJBCA does not calculate certificate and OCSP validities 
in accordance with RFC 5280, which has been a requirement since BR 1.7.1.  The word 
"inclusive" was not caught, meaning that a certificate/response issued by EJBCA 
will have a validity of one second longer than intended by the RFC.

This will only cause an incident for certificates of a validity of exactly 398 
days - any certificates with shorter validities are still within the 
requirements.

This has been fixed in the coming EJBCA 7.4.3, and all PrimeKey customers were 
alerted a week ago and recommended to review their certificate profiles and 
responder settings to be within thresholds.

While investigating this we noticed that several non-EJBCA CAs seem to issue 
certificates with the same non RFC-compliant validity calculation (but still 
within the 398 day limit), so as a professional courtesy we would like to alert 
other vendors to review their implementations and lessen the chance of any 
misissuance.



Any response from the Mozilla NSS team as to the correct implementation
of this detail in all related NSS code (including, but not limited to,
the client-side code interpreting the validity data in received
certificates, OCSP responses etc.) would be appreciated.

This aspect of RFC5280 section 4.1.2.5 is quite unusual in computing,
where the ends of intervals are typically encoded such that subtracting
the interval ends (as pure numbers) yields the interval length.

As a data point, the reference CA code in the OpenSSL library,
version 1.1.0 also treats the "Not after" time as exclusive when
generating certificates and the OpenSSL client code treats both
timestamps as exclusive when validating certificates.

So this seems another detail where the old IETF working group made
things unnecessarily complicated for everybody.

From a policy perspective, if enough code out there has the same
interpretation as old EJBCA versions, maybe it would make more sense
for the policy bodies to override RFC5280.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: PEM of root certs in Mozilla's root store

2020-10-19 Thread Jakob Bohm via dev-security-policy

On 2020-10-17 01:38, Ryan Sleevi wrote:

On Fri, Oct 16, 2020 at 5:27 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


RFC4180 section 3 explicitly warns that there are other variants and
specifications of the CSV format, and thus the full generalizations in
RFC4180 should not be exploited to their extremes.



You're referring to this section, correct?

"""
Interoperability considerations:
   Due to lack of a single specification, there are considerable
   differences among implementations.  Implementors should "be
   conservative in what you do, be liberal in what you accept from
   others" (RFC 793 [8]) when processing CSV files.  An attempt at a
   common definition can be found in Section 2.

   Implementations deciding not to use the optional "header"
   parameter must make their own decision as to whether the header is
   absent or present.
"""



Splitting the input at newlines before parsing for quotes and commas is
a pretty common implementation strategy as illustrated by my examples of
common tools that actually do so.



This would appear to be at fundamental odds with "be liberal in what you
accept from others" and, more specifically, ignoring the remark that
Section 2 is an admirable effort at a "common" definition, which is so
called as it minimizes such interoperability differences.

As your original statement was the file produced was "not CSV", I believe
that's been thoroughly dispelled by highlighting that, indeed, it does
conform to the grammar set forward in RFC 4180, and is consistent with the
IANA mime registration for CSV.

Although you also raised concern that naive and ill-informed attempts at
CSV parsing, which of course fail to parse the grammar of RFC 4180, there
are thankfully alternatives for each of those concerns. With awk, you have
FPAT. With Perl, you have Text::CSV. Of course, as the cut command is too
primitive to handle a proper grammar, there are plenty of equally
reasonable alternatives to this, and which take far less time than the
concerns and misstatements raised on this thread, which I believe we can,
thankfully, end.



Please stop trolling; the section you quoted clearly states that section
2 of the RFC was only an *attempt* at a common definition.


Ideally, a CSV parser should be liberal in what it accepts, but a CSV
producer (which is the subject of this thread) should be conservative in
what it provides, thus avoiding the least implemented/tested aspects
of the generalized grammar in that section 2.

Putting line feeds inside CSV fields is like using bang paths in an
RFC822 To: field.  Theoretically permitted and implemented by the most
complete e-mail parsing libraries in the world, but not something that
one can expect to work for a global audience.
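The difference between an RFC 4180-aware parser and the line-splitting strategy discussed earlier can be illustrated with Python's standard `csv` module (a sketch; the field contents are made up):

```python
import csv
import io

# A record whose second field contains an embedded, quoted newline.
data = 'name,pem\nroot1,"line1\nline2"\n'

# An RFC 4180-aware parser keeps the quoted newline inside one field:
rows = list(csv.reader(io.StringIO(data)))
# -> [['name', 'pem'], ['root1', 'line1\nline2']]

# Splitting at newlines before parsing quotes tears the field apart,
# yielding three bogus "records" instead of two:
naive = [line.split(",") for line in data.splitlines()]
```

This is why a conservative producer avoids embedded newlines even though the RFC 4180 grammar permits them.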



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: PEM of root certs in Mozilla's root store

2020-10-16 Thread Jakob Bohm via dev-security-policy

On 2020-10-16 14:11, Ryan Sleevi wrote:

On Thu, Oct 15, 2020 at 7:44 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On 2020-10-15 11:57, Ryan Sleevi wrote:

On Thu, Oct 15, 2020 at 1:14 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


For example, embedded new lines are discussed in 2.6 and the ABNF
therein.




The one difference from RFC4180 is that CR and LF are not part of the
alternatives for the inner part of "escaped".



Again, it would do a lot of benefit for everyone if you would be more
precise here.

For example, it seems clear and unambiguous that what you just stated is
factually wrong, because:

escaped = DQUOTE *(TEXTDATA / COMMA / CR / LF / 2DQUOTE) DQUOTE



I was stating the *difference* from RFC4180 being precisely that
"simple, traditional CSV" doesn't accept the CR and LF alternatives in
that syntax production.



Ah, that would explain my confusion: you’re using “CSV” in a manner
different than what is widely understood and standardized. The complaint
about newlines would be as technically accurate and relevant as a complaint
that “simple, traditional certificates should parse as JSON” or that
“simple, traditional HTTP should be delivered over port 23”; which is to
say, it seems like this concern is not relevant.

As the CSVs comply with RFC 4180, which is widely recognized as what “CSV”
means, I think Jakob’s concern here can be disregarded. Any implementation
having trouble with the CSVs produced is confused about what a CSV is, and
thus not a CSV parser.



RFC4180 section 3 explicitly warns that there are other variants and 
specifications of the CSV format, and thus the full generalizations in 
RFC4180 should not be exploited to their extremes.


Splitting the input at newlines before parsing for quotes and commas is
a pretty common implementation strategy as illustrated by my examples of
common tools that actually do so.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Sectigo to Be Acquired by GI Partners

2020-10-16 Thread Jakob Bohm via dev-security-policy

On 2020-10-16 12:33, Rob Stradling wrote:

...clarification of what meaning was intended.


Merely this...

"Hi Ryan.  Tim Callan posted a reply to your questions last week, but his message 
has not yet appeared on the list.  Is it stuck in a moderation queue?"



The part needing clarification started with:

> In addition to the questions posted by Wayne, I think it'd be useful
> to confirm:
> ...

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Sectigo to Be Acquired by GI Partners

2020-10-15 Thread Jakob Bohm via dev-security-policy

On 2020-10-15 16:46, Rob Stradling wrote:

Hi Jakob.  I don't believe that this list mandates any particular posting style 
[https://en.wikipedia.org/wiki/Posting_style].

Although interleaved/inline posting is my preferred style, I'm stuck using 
Outlook365 as my mail client these days.  (Sadly, Thunderbird's usability 
worsened dramatically for me after Sectigo moved corporate email to Office365 a 
few years ago).  So this is the situation I find myself in...

"This widespread policy in business communication made bottom and inline posting so 
unknown among most users that some of the most popular email programs no longer support 
the traditional posting style. For example, Microsoft Outlook, AOL, and Yahoo! make it 
difficult or impossible to indicate which part of a message is the quoted original or do 
not let users insert comments between parts of the original."
[https://en.wikipedia.org/wiki/Posting_style#Quoting_support_in_popular_mail_clients]



I realized that the problem was caused by broken client software, and
was pointing out than in this case, it had led to a specific lack of
clarity and was asking for clarification of what meaning was intended.




From: dev-security-policy  on behalf 
of Jakob Bohm via dev-security-policy 
Sent: 12 October 2020 22:41
To: mozilla-dev-security-pol...@lists.mozilla.org 

Subject: Re: Sectigo to Be Acquired by GI Partners

Hi Rob,

The e-mail you quote below seems to be inadvertently "confirming" some
suspicions that someone else posed as questions. I think the group as a
whole would love to have actual specific answers to those original
questions.

Remember to always add an extra layer of ">" indents for each level of
message quoting, so as to not misattribute text.

On 2020-10-12 10:43, Rob Stradling wrote:

Hi Ryan.  Tim Callan posted a reply to your questions last week, but his 
message has not yet appeared on the list.  Is it stuck in a moderation queue?


From: dev-security-policy  on behalf 
of Ryan Sleevi via dev-security-policy 
Sent: 03 October 2020 22:16
To: Ben Wilson 
Cc: mozilla-dev-security-policy 
Subject: Re: Sectigo to Be Acquired by GI Partners


In a recent incident report [1], a representative of Sectigo noted:

The carve out from Comodo Group was a tough time for us. We had twenty
years’ worth of completely intertwined systems that had to be disentangled
ASAP, a vast hairball of legacy code to deal with, and a skeleton crew of
employees that numbered well under half of what we needed to operate in any
reasonable fashion.



This referred to the previous split [2] of the Comodo CA business from the
rest of Comodo businesses, and rebranding as Sectigo.

In addition to the questions posted by Wayne, I think it'd be useful to
confirm:

1. Is it expected that there will be similar system and/or infrastructure
migrations as part of this? Sectigo's foresight of "no effect on its
operations" leaves it a bit ambiguous whether this is meant as "practical"
effect (e.g. requiring a change of CP/CS or effective policies) or whether
this is meant as no "operational" impact (e.g. things will change, but
there's no disruption anticipated). It'd be useful to frame this response
in terms of any anticipated changes at all (from mundane, like updating the
logos on the website, to significant, such as any procedure/equipment
changes), rather than observed effects.

2. Is there a risk that such an acquisition might further reduce the crew
of employees to an even smaller number? Perhaps not immediately, but over
time, say the next two years, such as "eliminating redundancies" or
"streamlining operations"? I recognize that there's an opportunity such an
acquisition might allow for greater investment and/or scale, and so don't
want to presume the negative, but it would be good to get a clear
commitment as to that, similar to other acquisitions in the past (e.g.
Symantec CA operations by DigiCert)

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1648717#c21
[2]
https://groups.google.com/g/mozilla.dev.security.policy/c/AvGlsb4BAZo/m/p_qpnU9FBQAJ

On Thu, Oct 1, 2020 at 4:55 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


   As announced previously by Rob Stradling, there is an agreement for
private investment firm GI Partners, out of San Francisco, CA, to acquire
Sectigo. Press release:
https://sectigo.com/resource-library/sectigo-to-be-acquired-by-gi-partners
.


I am treating this as a change of legal ownership covered by section 8.1
<https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#81-change-in-legal-ownership>
of the Mozilla Root Store Policy, which states:


If the receiving or acquiring company is new to the Mozilla root program,
it must demonstrate compliance with the entirety of this policy and there
MUST be a public discussion regarding their admittance to the root program,
which Mozilla must resolve with a positive conclusion in order for the
affected certificate(s) to remain in the root program.

Re: PEM of root certs in Mozilla's root store

2020-10-15 Thread Jakob Bohm via dev-security-policy

On 2020-10-15 11:57, Ryan Sleevi wrote:

On Thu, Oct 15, 2020 at 1:14 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


For example, embedded new lines are discussed in 2.6 and the ABNF

therein.




The one difference from RFC4180 is that CR and LF are not part of the
alternatives for the inner part of "escaped".



Again, it would do a lot of benefit for everyone if you would be more
precise here.

For example, it seems clear and unambiguous that what you just stated is
factually wrong, because:

escaped = DQUOTE *(TEXTDATA / COMMA / CR / LF / 2DQUOTE) DQUOTE



I was stating the *difference* from RFC4180 being precisely that
"simple, traditional CSV" doesn't accept the CR and LF alternatives in
that syntax production.
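For illustration, the distinction being argued can be checked mechanically. A minimal sketch, assuming Python's csv module as a stand-in for an RFC 4180-style parser (neither correspondent names a specific tool):

```python
import csv
import io

# Per RFC 4180, an "escaped" (double-quoted) field may contain CR/LF,
# so a value with an embedded newline still belongs to one record.
data = 'name,pem\nroot1,"line one\nline two"\n'

records = list(csv.reader(io.StringIO(data)))
# One header record plus one data record, newline preserved in the field.
assert records == [['name', 'pem'], ['root1', 'line one\nline two']]

# A line-oriented "simple, traditional CSV" reader (awk, cut, perl -F,)
# instead sees three physical lines and splits the record.
assert len(data.splitlines()) == 3
```

The same input thus parses to two records under the RFC 4180 grammar but three "records" under the line-per-record reading, which is exactly the difference under discussion.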



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: PEM of root certs in Mozilla's root store

2020-10-14 Thread Jakob Bohm via dev-security-policy

On 2020-10-15 04:52, Ryan Sleevi wrote:

On Wed, Oct 14, 2020 at 7:31 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Only the CSV form now contains CSV artifacts.  And it isn't really CSV
either (even if Microsoft Excel handles it).



Hi Jakob,

Could you be more precise here? Embedded new lines within quoted values
are well-defined for CSV.

Realizing that there are unfortunately many interpretations of what
“well-defined for CSV” means here, perhaps you can frame your concerns
in the terms set out in
https://tools.ietf.org/html/rfc4180 . This at least helps make sure we
are working from the same understanding.

For example, embedded new lines are discussed in 2.6 and the ABNF therein.



The one difference from RFC4180 is that CR and LF are not part of the
alternatives for the inner part of "escaped".




Enjoy

Jakob


Re: PEM of root certs in Mozilla's root store

2020-10-14 Thread Jakob Bohm via dev-security-policy

On 2020-10-15 00:16, Kathleen Wilson wrote:
The text version has been updated to have each line limited to 64 
characters.


Text:

https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMTxt?TrustBitsInclude=Websites 



https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMTxt?TrustBitsInclude=Email 



CSV:
https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMCSV?TrustBitsInclude=Websites 



https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMCSV?TrustBitsInclude=Email 




1. The reports that contain /only/ PEM data.  I argue that the
traditional format of concatenated PEM files (as used by e.g. the
openssl command line tool) without CSV embellishments would be
preferable, and that the reports in the latest post by Kathleen
lacked the PEM line wrapping while still containing CSV
artifacts. 


Jakob, please explain what you mean by "still containing CSV artifacts". 
Does the new version of the text report have the problem?




Only the CSV form now contains CSV artifacts.  And it isn't really CSV
either (even if Microsoft Excel handles it).




2. The reports that contain other data in CSV format.  ...


Opening the CSV version of the reports in Excel and Numbers works fine 
for me. So I don't understand what the problem is with having this 
report be a direct extract of the PEM data that the CCADB has for each 
root certificate.




However, it might also be reasonable that if these concerns aren't easily
addressable, perhaps not offering the feed is another option, since it
seems a lot of work for something that should be and is naturally
discouraged.


It's not a lot of work, just a matter of understanding what is most 
useful to folks.


I think it would be good to have downstreams using current data, e.g. 
data published directly via the CCADB. I think that providing the data 
in an easily consumable format is better than having folks extract the 
data from certdata.txt.


Thanks,
Kathleen




Enjoy

Jakob


Re: PEM of root certs in Mozilla's root store

2020-10-14 Thread Jakob Bohm via dev-security-policy

On 2020-10-13 16:32, Ryan Sleevi wrote:

Jakob,

I had a little trouble following your mail, despite being quite familiar
with PEM, so hopefully you'll indulge me in making sure I've got your
criticisms/complaints correct.

Your objection to the text report is that RFC 7468 requires generators to
wrap lines (except the last line) at exactly 64 characters, right? That is,
the textual report includes the base-64 data with no embedded newlines, and
this causes your PEM decoder trouble, despite being able to easily inject
them programmatically after you download the file.



I was commenting on the /general/ usability of the reports mentioned in 
the first message on this thread, by considering what naive parsers 
would do upon reading the files.  As I have no direct need to parse 
these files, I have no actual parser failing to do so.


My comments fell in two categories:

1. The reports that contain /only/ PEM data.  I argue that the
traditional format of concatenated PEM files (as used by e.g. the
openssl command line tool) without CSV embellishments would be
preferable, and that the reports in the latest post by Kathleen
lacked the PEM line wrapping while still containing CSV
artifacts.

2. The reports that contain other data in CSV format.  I argue that
those reports would be more useful without in-field line breaks, thus
having the Base64 encoded certificates as long strings without PEM
embellishments.  Goal is to make them traditional CSV files with one 
record per line, commas only between fields and optional double quotes

around non-numerical field values.  A sample parser would be awk, cut or
the perl command line option "-F,".

Simply viewing each report in a basic text viewer should make the 
problematic format deviations clear.




I'm not sure I fully understand the CSV objection, despite having inspected
the file, so perhaps you can clarify a bit more.

Perhaps the simplest approach would be that you could attach versions that
look how you'd want.



Here are the top 3 lines of IncludedCACertificateWithPEMReport.csv
reformatted (but with extra line breaks every 68 chars for posting
purposes):

"Owner","Certificate Issuer Organization","Certificate Issuer Organ
izational Unit","Common Name or Certificate Name","Certificate Seri
al Number","SHA-256 Fingerprint","Subject + SPKI SHA256","Valid Fro
m [GMT]","Valid To [GMT]","Public Key Algorithm","Signature Hash Al
gorithm","Trust Bits","Distrust for TLS After Date","Distrust for S
/MIME After Date","EV Policy OID(s)","Approval Bug","NSS Release Wh
en First Included","Firefox Release When First Included","Test Webs
ite - Valid","Test Website - Expired","Test Website - Revoked","Moz
illa Applied Constraints","Company Website","Geographic Focus","Cer
tificate Policy (CP)","Certification Practice Statement (CPS)","Sta
ndard Audit","BR Audit","EV Audit","Auditor","Standard Audit Type",
"Standard Audit Statement Dt","PEM Info"
"AC Camerfirma, S.A.","AC Camerfirma SA CIF A82743287","http://www.
chambersign.org","Chambers of Commerce Root","00","0C258A12A5674AEF
25F28BA7DCFAECEEA348E541E6F5CC4EE63B71B361606AC3","BC2FD9EA61581CB2
2BB859690D61430E7D222D1119E8C41649B9B1D556D439A4","2003.09.30","203
7.09.30","RSA 2048 bits","SHA1WithRSA","Email","","","Not EV","http
s://bugzilla.mozilla.org/show_bug.cgi?id=261778","","Firefox 1","",
"","","","http://www.camerfirma.com","Spain","","https://www.camerf
irma.com/publico/DocumentosWeb/politicas/CPS_eidas_EN_1.2.12.pdf","
https://www.csqa.it/getattachment/Sicurezza-ICT/Documenti/Attestazi
one-di-Audit-secondo-i-requisiti-ETSI/2020-03-CSQA-Attestation-CAME
RFIRMA-rev-2-signed.pdf.aspx?lang=it-IT","https://bugzilla.mozilla.
org/attachment.cgi?id=8995930","","CSQA Certificazioni srl","ETSI E
N 319 411","2020.03.05","MIIEvTCCA6WgAwIBAgIBADANBgkqhkiG9w0BAQUFAD
B/MQswCQYDVQQGEwJFVTEnMCUGA1UEChMeQUMgQ2FtZXJmaXJtYSBTQSBDSUYgQTgyN
zQzMjg3MSMwIQYDVQQLExpodHRwOi8vd3d3LmNoYW1iZXJzaWduLm9yZzEiMCAGA1UE
AxMZQ2hhbWJlcnMgb2YgQ29tbWVyY2UgUm9vdDAeFw0wMzA5MzAxNjEzNDNaFw0zNzA
5MzAxNjEzNDRaMH8xCzAJBgNVBAYTAkVVMScwJQYDVQQKEx5BQyBDYW1lcmZpcm1hIF
NBIENJRiBBODI3NDMyODcxIzAhBgNVBAsTGmh0dHA6Ly93d3cuY2hhbWJlcnNpZ24ub
3JnMSIwIAYDVQQDExlDaGFtYmVycyBvZiBDb21tZXJjZSBSb290MIIBIDANBgkqhkiG
9w0BAQEFAAOCAQ0AMIIBCAKCAQEAtzZV5aVdGDDg2olUkfzIx1L4L1DZ77F1c2VHfRt
bunXF/KGIJPov7coISjlUxFF6tdpg6jg8gbLL8bvZkSM/SAFwdakFKq0fcfPJVD0dBm
pAPrMMhe5cG3nCYsS4No41XQEMIwRHNaqbYE6gZj3LJgqcQKH0XZi/caulAGgq7YN6D
6IUtdQis4CwPAxaUWktWBiP7Zme8a7ileb2R6jWDA+wWFjbw2Y3npuRVDM30pQcakjJ
yfKl2qUMI/cjDpwyVV5xnIQFUZot/eZOKjRa3spAN2cMVCFVd9oKDMyXroDclDZK9D7
ONhMeU+SsTjoF7Nuucpw4i9A5O4kKPnf+dQIBA6OCAUQwggFAMBIGA1UdEwEB/wQIMA
YBAf8CAQwwPAYDVR0fBDUwMzAxoC+gLYYraHR0cDovL2NybC5jaGFtYmVyc2lnbi5vc
mcvY2hhbWJlcnNyb290LmNybDAdBgNVHQ4EFgQU45T1sU3p26EpW1eLTXYGduHRooow
DgYDVR0PAQH/BAQDAgEGMBEGCWCGSAGG+EIBAQQEAwIABzAnBgNVHREEIDAegRxjaGF
tYmVyc3Jvb3RAY2hhbWJlcnNpZ24ub3JnMCcGA1UdEgQgMB6BHGNoYW1iZXJzcm9vdE
BjaGFtYmVyc2lnbi5vcmcwWAYDVR0gBFEwTzBNBgsrBgEEAYGHLgoDATA+MDwGCCsGA

Re: PEM of root certs in Mozilla's root store

2020-10-13 Thread Jakob Bohm via dev-security-policy

On 2020-10-12 20:50, Kathleen Wilson wrote:

On 10/7/20 1:09 PM, Jakob Bohm wrote:
Please note that at least the first CSV download is not really a CSV 
file, as there are line feeds within each "PEM" value, and only one 
column.  It would probably be more useful as a simple concatenated PEM 
file, as used by various software packages as a root store input format.





Here's updated reports...

Text:

https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMTxt?TrustBitsInclude=Websites 



https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMTxt?TrustBitsInclude=Email 


These two are bad multi-pem files, as each certificate lacks the
required line wrapping.

The useful multi-pem format would keep the line wrapping from the
old report, but not insert stray quotes and commas.
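A minimal sketch of the repair described here, assuming each report entry carries one unwrapped base64 body (`wrap_pem_body` is a hypothetical helper name, not anything from the actual reports):

```python
import textwrap

def wrap_pem_body(b64_body: str, width: int = 64) -> str:
    """Re-wrap an unwrapped base64 certificate body at 64 columns,
    as RFC 7468 requires of generators (all lines but the last)."""
    return "\n".join(textwrap.wrap(b64_body, width))

body = "A" * 150          # stand-in for a real base64 certificate body
pem = ("-----BEGIN CERTIFICATE-----\n"
       + wrap_pem_body(body)
       + "\n-----END CERTIFICATE-----\n")
lines = wrap_pem_body(body).splitlines()
assert [len(l) for l in lines] == [64, 64, 22]
```

Concatenating such blocks, with no quotes or commas between them, yields the traditional openssl-style bundle format.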



CSV:
https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMCSV?TrustBitsInclude=Websites 



https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMCSV?TrustBitsInclude=Email 



These two are like the old bad multi-pem reports, with the stray
internal line feeds after/before the BEGIN/END marker lines, but without the
internal linefeeds in the Base64 data.

Useful actual-csv reports like IncludedCACertificateWithPEMReport.csv
would have to omit the marker lines and their linefeeds as well as the
internal linefeeds in the Base64 data in order to keep each CSV record
entirely on one line.
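The conversion described above can be sketched in a few lines of Python (an illustration only; it assumes the PEM sits in quoted CSV fields with embedded newlines, and `flatten_pem_csv` is a hypothetical name):

```python
import csv
import io

def flatten_pem_csv(text: str) -> str:
    """Rewrite a CSV whose fields contain multi-line PEM blocks so that
    each record fits on one line: drop the BEGIN/END marker lines and
    join the base64 into a single unbroken string."""
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    for row in csv.reader(io.StringIO(text)):
        writer.writerow([
            "".join(line for line in field.splitlines()
                    if not line.startswith("-----"))
            for field in row
        ])
    return out.getvalue()

src = ('"root1","-----BEGIN CERTIFICATE-----\n'
       'AAAA\nBBBB\n-----END CERTIFICATE-----"\n')
assert flatten_pem_csv(src) == "root1,AAAABBBB\n"
```

The output keeps each record on one physical line, so line-oriented tools like awk or cut can process it directly.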






If the Text reports look good, I'll add them to the wiki page, 
https://wiki.mozilla.org/CA/Included_Certificates



Thanks,
Kathleen




Enjoy

Jakob


Re: Sectigo to Be Acquired by GI Partners

2020-10-12 Thread Jakob Bohm via dev-security-policy

Hi Rob,

The e-mail you quote below seems to be inadvertently "confirming" some
suspicions that someone else posed as questions. I think the group as a
whole would love to have actual specific answers to those original
questions.

Remember to always add an extra layer of ">" indents for each level of
message quoting, so as to not misattribute text.

On 2020-10-12 10:43, Rob Stradling wrote:

Hi Ryan.  Tim Callan posted a reply to your questions last week, but his 
message has not yet appeared on the list.  Is it stuck in a moderation queue?


From: dev-security-policy  on behalf 
of Ryan Sleevi via dev-security-policy 
Sent: 03 October 2020 22:16
To: Ben Wilson 
Cc: mozilla-dev-security-policy 
Subject: Re: Sectigo to Be Acquired by GI Partners


In a recent incident report [1], a representative of Sectigo noted:

The carve out from Comodo Group was a tough time for us. We had twenty

years’ worth of completely intertwined systems that had to be disentangled
ASAP, a vast hairball of legacy code to deal with, and a skeleton crew of
employees that numbered well under half of what we needed to operate in any
reasonable fashion.



This referred to the previous split [2] of the Comodo CA business from the
rest of Comodo businesses, and rebranding as Sectigo.

In addition to the questions posted by Wayne, I think it'd be useful to
confirm:

1. Is it expected that there will be similar system and/or infrastructure
migrations as part of this? Sectigo's foresight of "no effect on its
operations" leaves it a bit ambiguous whether this is meant as "practical"
effect (e.g. requiring a change of CP/CS or effective policies) or whether
this is meant as no "operational" impact (e.g. things will change, but
there's no disruption anticipated). It'd be useful to frame this response
in terms of any anticipated changes at all (from mundane, like updating the
logos on the website, to significant, such as any procedure/equipment
changes), rather than observed effects.

2. Is there a risk that such an acquisition might further reduce the crew
of employees to an even smaller number? Perhaps not immediately, but over
time, say the next two years, such as "eliminating redundancies" or
"streamlining operations"? I recognize that there's an opportunity such an
acquisition might allow for greater investment and/or scale, and so don't
want to presume the negative, but it would be good to get a clear
commitment as to that, similar to other acquisitions in the past (e.g.
Symantec CA operations by DigiCert)

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1648717#c21
[2]
https://groups.google.com/g/mozilla.dev.security.policy/c/AvGlsb4BAZo/m/p_qpnU9FBQAJ

On Thu, Oct 1, 2020 at 4:55 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


  As announced previously by Rob Stradling, there is an agreement for
private investment firm GI Partners, out of San Francisco, CA, to acquire
Sectigo. Press release:
https://sectigo.com/resource-library/sectigo-to-be-acquired-by-gi-partners
.


I am treating this as a change of legal ownership covered by section 8.1
<https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#81-change-in-legal-ownership>
of the Mozilla Root Store Policy, which states:


If the receiving or acquiring company is new to the Mozilla root program,
it must demonstrate compliance with the entirety of this policy and there
MUST be a public discussion regarding their admittance to the root

program,

which Mozilla must resolve with a positive conclusion in order for the
affected certificate(s) to remain in the root program.


In order to comply with policy, I hereby formally announce the commencement
of a 3-week discussion period for this change in legal ownership of Sectigo
by requesting thoughtful and constructive feedback from the community.

Sectigo has already stated that it foresees no effect on its operations due
to this ownership change, and I believe that the acquisition announced by
Sectigo and GI Partners is compliant with Mozilla policy.

Thanks,

Ben






Enjoy

Jakob


Re: PEM of root certs in Mozilla's root store

2020-10-07 Thread Jakob Bohm via dev-security-policy

On 2020-10-06 23:47, Kathleen Wilson wrote:

All,

I've been asked to publish Mozilla's root store in a way that is easy to 
consume by downstreams, so I have added the following to 
https://wiki.mozilla.org/CA/Included_Certificates


CCADB Data Usage Terms


PEM of Root Certificates in Mozilla's Root Store with the Websites 
(TLS/SSL) Trust Bit Enabled (CSV)
 



PEM of Root Certificates in Mozilla's Root Store with the Email (S/MIME) 
Trust Bit Enabled (CSV)
 




Please let me know if you have feedback or recommendations about this.



Please note that at least the first CSV download is not really a CSV 
file, as there are line feeds within each "PEM" value, and only one 
column.  It would probably be more useful as a simple concatenated PEM 
file, as used by various software packages as a root store input format.
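Such a concatenated PEM bundle is also trivial to split back into individual certificates. A minimal sketch (illustrative only; `split_pem_bundle` is a hypothetical name):

```python
def split_pem_bundle(text: str):
    """Split a concatenated PEM file (openssl-style root bundle) into
    individual certificate blocks, markers included."""
    blocks, current = [], []
    for line in text.splitlines():
        if line.startswith("-----BEGIN "):
            current = [line]           # start a new block
        elif line.startswith("-----END "):
            current.append(line)
            blocks.append("\n".join(current))
            current = []
        elif current:
            current.append(line)       # base64 body line
    return blocks

bundle = ("-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n"
          "-----BEGIN CERTIFICATE-----\nBBBB\n-----END CERTIFICATE-----\n")
assert len(split_pem_bundle(bundle)) == 2
```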


I have also noted that at least one downstream root store (Debian) takes
all Mozilla-trusted certificates and labels them as simply 
"mozilla/cert-public-name", even though more useful naming can be 
extracted from the last (most complete) report, after finding a non-GUI 
tool that can actually parse CSV files with embedded newlines in string 
values.





Enjoy

Jakob


Re: Temporary WebTrust Seal for COVID Issues

2020-08-24 Thread Jakob Bohm via dev-security-policy

On 2020-08-20 20:34, Ben Wilson wrote:

All,

Some CAs have inquired about Mozilla's acceptance of WebTrust's temporary,
6-month seal related to COVID19 issues.
See
https://www.cpacanada.ca/en/business-and-accounting-resources/audit-and-assurance/overview-of-webtrust-services

According to that WebTrust webpage, the temporary seal will be offered only
in situations that meet the following criteria:

- The practitioner report has been qualified,
- The qualification is directly related to government-imposed COVID-19
scope restrictions only and is disclosed in the practitioner report, and
- There are no qualifications due to control deficiencies in the period.

It also states, "When a temporary seal has been granted, it is expected
that a practitioner will be able to perform the procedures that could not
be completed initially which gave rise to the scope limitation before the
temporary seal expires. Where the practitioner is able to perform such
procedures and is able to issue subsequently an unqualified report for the
CA, the unqualified report could then be submitted to CPA Canada to obtain
the traditional seal."

For purposes of obtaining a timely audit, it appears that such a timely
filed report would satisfy Mozilla Policy 3.1.3's annual audit filing
requirements (
https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#313-audit-parameters)
and therefore it would not be a "delay".  For context see
https://wiki.mozilla.org/CA/Audit_Statements#Audit_Delay
and https://wiki.mozilla.org/CA/Audit_Statements#WebTrust_Audits.


So as further guidance on the above page, I am proposing clarification that
the Temporary WebTrust Seal for COVID-19-related qualified reports does not
require the CA to file an Incident Report, but rather that we will create a
CA Compliance bug in Bugzilla simply to track the expiration of the
temporary seal.



As a relying party (end user of Mozilla products that use the root
store), I appreciate this, however I have a suggested simplification:

Simply mark the (early) expiry date of the temporary seal in the CCADB,
such that the usual audit-renewal procedures will trigger at the
appropriate time.

This obviously presumes that there are CCADB fields to mark an audit
report as having a shorter-than-one-year validity, as public discussions
in this group have frequently mentioned 3-month audits in specific cases.

Besides, as this crisis is expected to last closer to a full year than
6 months, one must wonder if auditors would have to inspect CA
facilities wearing full disposable hazmat suits to avoid transporting
the virus between redundant backup CA offices that have been kept
separate to ensure CA operations continue even if every person at one
office becomes critically ill.


Thanks,
Ben Wilson
Mozilla Root Store Manager





Enjoy

Jakob


Re: [FORGED] Re: How Certificates are Verified by Firefox

2019-12-09 Thread Jakob Bohm via dev-security-policy

On 2019-12-09 11:44, Ben Laurie wrote:

On Wed, 4 Dec 2019 at 22:13, Ryan Sleevi  wrote:


Yes, I am one of the ones who actively disputes the notion that AIA
considered harmful.

I'm (pleasantly) surprised that any CA would be opposed to AIA (i.e.
supportive of "considered harmful", since it's inherently what gives them
the flexibility to make their many design mistakes in their PKI and still
have certificates work. The only way "considered harmful" would work is if
we actively remove the flexibility afforded CAs in this realm, which I'm
highly supportive of, but which definitely encourages more distinctive PKIs
(i.e. more explicitly reducing the use of Web PKI in non-Web cases)

Of course, AIA is also valuable in helping browsers push the web forward,
so I can see why "considered harmful" is useful, especially in that it
helps further the notion that root certificates are a thing of value (and
whose value should increase with age). AIA is one of the key tools to
helping prevent that, which we know is key to ensuring a more flexible, and
agile, ecosystem.

The flaw, of course, in a "considered harmful", is the notion that there's
One Chain or One Right Chain. That's not the world we have, nor have we
ever. The notion that there's One Right Chain for a TLS server to send
presumes there's One Right Set of CA Trust Anchors. And while that's
definitely a world we could pursue, I think we know from the past history
of CA incidents, there's incredible benefit to users to being able to
respond to CA security incidents differently, to remove trust in
deprecated/insecure things differently, and to set policies differently.
And so we can't expect servers to know the Right Chain because there isn't
One Right Chain, and AIA (or intermediate preloading with rapid updates)
can help address that.



It would be a whole lot more efficient and private if the servers did the
chasing.



It would also greatly help if:

1. More clients (especially non-browser client such as libraries used by
  IoT devices) supported treating the received "chain" more like a pool
  of potential intermediaries and accepted any acceptable combination of
  received certs and locally trusted roots.  This would allow servers to
  send an appropriate collection of intermediaries for different client
  needs.  Clients that detect different levels of trust (such as the
  Qualys checkers and EV clients) also need to choose the best of the
  offered set, as the alternative certs are obviously for use by other
  clients.  In particular, clients should not panic and block on the
  presence of any "bad" certificates in the pool, if a valid chain can
  be assembled without that certificate.
   This is already in the CMS/PKCS#7 spec and apparently also in the TLS
  1.3 spec, but remains seemingly optional when TLS 1.2 or older is
  negotiated.
   I recently became aware of at least one IoT-focused TLS library that
  doesn't support the "pool" interpretation due to lack of a memory-
  efficient chain building algorithm.

2. Certain CAs made it a lot easier to get the recommended-to-send list
  of certificates ("chain") for server operators to configure.  Some
  CAs make server operators manually chase down links to each cert in
  the list, and some don't send out pre-emptive notification of changes
  to paying subscribers.

3. Certain TLS libraries didn't refuse to provide server software
  vendors with working stapling code, especially the code to collect
  and cache OCSP responses.

P.S. One commonly vilified server brand actually does use AIA to build
  the server chain.
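The "pool" interpretation from point 1 above can be sketched as a toy chain assembler. This is purely illustrative: certificates are reduced to hypothetical (subject, issuer) name pairs, there is no signature or validity checking, and a production path builder would also need backtracking when several certificates share a subject:

```python
def build_chain(leaf, pool, trusted_roots):
    """Greedily assemble a path from the leaf to a locally trusted root,
    treating the received extra certificates as an unordered pool."""
    chain, seen, current = [leaf], {leaf}, leaf
    while current[1] not in trusted_roots:      # current[1] is the issuer
        nxt = next((c for c in pool
                    if c[0] == current[1] and c not in seen), None)
        if nxt is None:
            return None                         # fail secure: no valid path
        chain.append(nxt)
        seen.add(nxt)
        current = nxt
    return chain

leaf = ("www.example.com", "Intermediate B")
pool = [("Intermediate A", "Root 1"),
        ("Intermediate B", "Intermediate A"),
        ("Unrelated CA", "Root 2")]             # ignored by the search
assert build_chain(leaf, pool, {"Root 1"}) is not None
assert build_chain(leaf, pool, {"Root 9"}) is None
```

Note that an "Unrelated CA" entry in the pool does not block validation; the assembler simply ignores certificates it does not need, which is the behavior argued for above.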



Enjoy

Jakob


Re: Certificate OU= fields with missing O= field

2019-11-01 Thread Jakob Bohm via dev-security-policy

[Top posting because previous post did]

As a relying party and a subscriber of some certificates, I would
consider each of the following combinations as something that should be
both permitted and useful under any future rules, even if the current
BRs don't allow it.

1. O=Actual Org name and OU=An actual company name for a division that
  is not obviously misleading, for example "HQ", "Accounting", "East
  campus", "Virginia servers", even if there is no direct way for any
  regular CA to verify the reality at time of issuance (will that
  certificate actually be used only at the company HeadQuarters?  Does
  the organization actually have an accounting department other than an
  old shoe-box filled with receipts, do they really have any servers in
  either of the Virginia states?).

2. O=Actual Org name, OU=Actual company division, GivenNameEtc=An actual
  person in that division.

3. O=Actual Org name, OU=Actual company division, No specific individual
  listed because certificate is for entire division.

4. OU=Domain Validated or OU=Extended Validation etc. to indicate the
  level of validation to relying parties that lack the skills to extract
  this from the more formal fields such as policy OIDs.  While this is
  not in itself the subject identity, it is a hierarchical part of a
  structured designation of the subject, similar to adding ST=California
  to a DN that already contains L=Los Angeles and C=US.

5. 1, 2 or 3 combined with 4

The following would not be allowed:

6. OU=Google when the subject is not part of that company and has no
  rights to that trademark.

7. OU=Ministry of Defence when the subject is not a (quasi) government
  that could have such.

The following would be routinely revoked as no-longer-valid-but-not-
a-CA-incident if later discovered.  (Similar to the BR rules about a
subscriber losing their legal domain control during certificate
validity).

8. OU="Virginia servers" when it is found that the subject owns or
  operates no servers in the Virginia States.  Further sanctions against
  subject would depend if the certificate was ever used elsewhere and if
  the subject had actual servers in Virginia at a different time and
  used the cert only for those.

9. Similar to 8 but for other such cases, see 1. for examples.




On 01/11/2019 17:31, Jeremy Rowley wrote:

My view is that the OU field is a subject distinguished name field and that a 
CA must have a process to prevent unverified information from being included in 
the field.

Subject Identity Information is defined as information that identifies the 
Certificate Subject.

I suppose the answer to your question depends on a) what you consider as 
information that identifies the Certificate Subject and b) whether the process 
required establishes the minimum relationship between that information and your 
definition of SII.

From: Ryan Sleevi 
Sent: Friday, November 1, 2019 10:11 AM
To: Jeremy Rowley 
Cc: mozilla-dev-security-policy 
Subject: Re: Certificate OU= fields with missing O= field

Is your view that the OU is not Subject Identity Information, despite that 
being the Identity Information that appears in the Subject? Are there other 
fields and values that you believe are not SII? This seems inconsistent with 
7.1.4.2, the section in which this is placed.

As to the .com in the OU, 7.1.4.2 also prohibits this:
CAs SHALL NOT include a Domain Name or IP Address in a Subject attribute except 
as specified in Section 3.2.2.4 or Section 3.2.2.5.

On Fri, Nov 1, 2019 at 8:41 AM Jeremy Rowley via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:
A mistake in the BRs (I wrote the language unfortunately so shame on me for not 
matching the other sections of org name or the given name). There's no 
certificate that ever contains all of these fields. How would you ever have 
that?

There's no requirement that the OU field information relate to the O field 
information as long as the information is verified.

-Original Message-
From: dev-security-policy <dev-security-policy-boun...@lists.mozilla.org>
 On Behalf Of Alex Cohn via dev-security-policy
Sent: Friday, November 1, 2019 9:13 AM
To: Kurt Roeckx <k...@roeckx.be>
Cc: Matthias van de Meent <matthias.vandeme...@cofano.nl>; MDSP
<dev-security-policy@lists.mozilla.org>
Subject: Re: Certificate OU= fields with missing O= field

On Fri, Nov 1, 2019 at 5:14 AM Kurt Roeckx via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:


On Fri, Nov 01, 2019 at 11:08:23AM +0100, Matthias van de Meent via 
dev-security-policy wrote:

Hi,

I recently noticed that a lot of leaf certificates [0] have
organizationalUnitName specified without other organizational
information such as organizationName. Many times this field is used
for branding purposes, e.g. "issued through "
or "SomeBrand SSL".

BR v1.6.6 § 7.1.4.2.2i has guidance on usage of the OU field: "The
CA SHALL implement a process that 

Re: Firefox removes UI for site identity

2019-10-23 Thread Jakob Bohm via dev-security-policy

On 23/10/2019 01:49, Matt Palmer wrote:

On Tue, Oct 22, 2019 at 03:35:52PM -0700, Kirk Hall via dev-security-policy 
wrote:

I also have a question for Mozilla on the removal of the EV UI.


This is a mischaracterisation.  The EV UI has not been removed, it has been
moved to a new location.



It was moved entirely off screen, and replaced with very subtle
differences in the contents of a pop-up.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CAs cross-signing roots whose subjects don't comply with the BRs

2019-10-08 Thread Jakob Bohm via dev-security-policy

On 08/10/2019 13:41, Corey Bonnell wrote:

On Monday, October 7, 2019 at 10:52:36 AM UTC-4, Ryan Sleevi wrote:

I'm curious how folks feel about the following practice:

Imagine a CA, "Foo", that creates a new Root Certificate ("Root 1"). They
create this Root Certificate after the effective date of the Baseline
Requirements, but prior to Root Programs consistently requiring compliance
with the Baseline Requirements (i.e. between 2012 and 2014). This Root
Certificate does not comply with the BRs' rules on Subject: namely, it
omits the Country field.

...

> ...


Given that there is discussion about mandating the use of ISO3166 or other 
databases for location information, the profile of the subjectDN may change 
such that future cross-signs cannot be done without running afoul of policy.

With this issue and Ryan’s scenario in mind, I think there may need to be some 
sort of grandfathering allowed for roots so that cross-signs can be issued 
without running afoul of policy. What I’m less certain on, is to what extent 
this grandfathering clause would allow for non-compliance of the current 
policies, as that is a very slippery slope and hinders progress in creating a 
saner webPKI certificate profile. For the CA that Ryan brings up, I’m less 
inclined to allow for a “grandfathering” as the root certificate in question 
was originally mis-issued. But for a root certificate that was issued in 
compliance with the policy at the time but now no longer has a compliant 
subjectDN, perhaps a carve-out in Mozilla Policy to allow for a cross-sign 
(using the now non-compliant subjectDN) is warranted.



Please note the situation explained in the first paragraph of Ryan's
scenario: The (hypothetical) Root 1 without a C element may have been
issued before Browser Policy made BR compliance mandatory.  In other
words, BR non-compliance may not have been actual non-compliance at
that time.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: CAs cross-signing roots whose subjects don't comply with the BRs

2019-10-07 Thread Jakob Bohm via dev-security-policy
On 07/10/2019 17:35, Ryan Sleevi wrote:
> On Mon, Oct 7, 2019 at 11:26 AM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> On 07/10/2019 16:52, Ryan Sleevi wrote:
>>> I'm curious how folks feel about the following practice:
>>>
>>> Imagine a CA, "Foo", that creates a new Root Certificate ("Root 1"). They
>>> create this Root Certificate after the effective date of the Baseline
>>> Requirements, but prior to Root Programs consistently requiring
>> compliance
>>> with the Baseline Requirements (i.e. between 2012 and 2014). This Root
>>> Certificate does not comply with the BRs' rules on Subject: namely, it
>>> omits the Country field.
>>
>> Clarification needed: Does it omit Country from the DN of the root 1
>> itself, from the DN of intermediary CA certs and/or from the DN of End
>> Entity certs?
>>
> 
> It's as I stated: The Subject of the Root Certificate omits the Country
> field.

You were unclear if Root 1 omitted the C element from its own name
(a BR requirement for new roots), or from various aspects of the
issuance from root 1 (also BR requirements).

It is now clear that the potential BR violation is only in the DN of
Root 1 itself, and for the purpose of this hypothetical, we can assume
that all other aspects of Root 1 operation are BR compliant.

> 
> 
>>>
>>> Later, in 2019, Foo takes their existing Root Certificate ("Root 2"),
>>> included within Mozilla products, and cross-signs the Subject. This now
>>> creates a cross-signed certificate, "Root 1 signed-by Root 2", which has
>> a
>>> Subject field that does not comport with the Baseline Requirements.
>>
>> Nit: Signs the Subject => Signs Root 1
>>
> 
> Perhaps it would be helpful if you were clearer about what you believe you
> were correcting.
> 

A minor typo (nit) in your original post.  You wrote "signs the 
Subject" instead of "signs Root 1".


> I thought I was very precise here, so it's useful to understand your
> confusion:
> 
> Root 2, a root included in Mozilla products, cross-signs Root 1, a root
> which omits the Country field from the Subject.
> 
> This creates a certificate, whose issuer is Root 2 (a Root included in
> Mozilla Products), and whose Subject is Root 1. The Subject of Root 1 does
> not meet the BRs requirements on Subjects for intermediate/root
> certificates: namely, the certificate issued by Root 2 omits the C, because
> Root 1 omits the C.
> 

This is now clear after the clarification that C was only omitted in the
DN of Root 1 itself.


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: CAs cross-signing roots whose subjects don't comply with the BRs

2019-10-07 Thread Jakob Bohm via dev-security-policy

On 07/10/2019 16:52, Ryan Sleevi wrote:

I'm curious how folks feel about the following practice:

Imagine a CA, "Foo", that creates a new Root Certificate ("Root 1"). They
create this Root Certificate after the effective date of the Baseline
Requirements, but prior to Root Programs consistently requiring compliance
with the Baseline Requirements (i.e. between 2012 and 2014). This Root
Certificate does not comply with the BRs' rules on Subject: namely, it
omits the Country field.


Clarification needed: Does it omit Country from the DN of the root 1
itself, from the DN of intermediary CA certs and/or from the DN of End
Entity certs?

Also, is the omission limited to historic certs issued before some date,
or also in new certs issued in 2019 (not counting the cross cert below).



Later, in 2019, Foo takes their existing Root Certificate ("Root 2"),
included within Mozilla products, and cross-signs the Subject. This now
creates a cross-signed certificate, "Root 1 signed-by Root 2", which has a
Subject field that does not comport with the Baseline Requirements.


Nit: Signs the Subject => Signs Root 1



To me, this seems like a clear-cut violation of the Baseline Requirements,
and "Foo" could have pursued an alternative hierarchy to avoid needing to
cross-sign. However, I thought it interesting to solicit others' feedback
on this situation, before opening the CA incident for Foo.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: DigiCert OCSP services returns 1 byte

2019-09-17 Thread Jakob Bohm via dev-security-policy
On 17/09/2019 00:58, Wayne Thayer wrote:
> On Mon, Sep 16, 2019 at 5:02 AM Rob Stradling  wrote:
> 
>> On 14/09/2019 00:27, Andrew Ayer via dev-security-policy wrote:
>> 
>>
>> If a certificate (with embedded SCTs and no CT poison extension) is
>> "presumed to exist" but the CA has not actually issued it, then to my
>> mind that's a "certificate that has not been issued"; and therefore, the
>> OCSP 'responder SHOULD NOT respond with a "good" status'.
>>
>> However, this is Schrödinger's "certificate that has not been issued",
>> because a Precertificate has been issued that has the same serial number
>> (as the "certificate presumed to exist" that doesn't actually exist).
>>
>> And so at this point ISTM that the OCSP responder is expected to
>> implement two conflicting requirements for the serial number in question:
>> (1) MUST respond "good", because an unrevoked/unexpired
>> precertificate exists (and because BR 4.9.9 mandates a signed OCSP
>> response).
>> (2) SHOULD NOT respond "good" (see BR 4.9.10).
>>
>>
> If I'm reading BR 4.9.10 correctly, the situation is worse because it goes
> on to state "Effective 1 August 2013, OCSP responders for CAs which are not
> Technically Constrained in line with Section 7.1.5 MUST NOT respond with a
> "good" status for such certificates." (referring to 'certificates that have
> not been issued' from the prior paragraph)
> 
> If the desired outcome is for CAs to respond "good" to a precertificate
> without a corresponding certificate, we could override the BRs in Mozilla
> policy, but I'd want to get the BRs updated quickly as Rob suggested to
> avoid audit findings.
> 
> The other piece of this policy that's still unclear to me relates to the
> "unknown" OCSP status. Specifically, Is it currently forbidden for a CA to
> provide an "unknown" OCSP response for an issued certificate? If not,
> should it be? The implication here would be that CAs responding "unknown"
> to precertificates without corresponding certificates are doing the right
> thing, despite prior precedent indicating that this is a violation. [1]
> 

In any networked system, delays are inevitable.

If data (in this case replication status data) is to be reliably
replicated, there will be a further delay to ensure that bad data is
not forced onto all mirrors too quickly (Consider earlier incident(s)
where corrupted revocation data was rolled out to all the servers used
by a CA.  Also consider the delay in the recent rollout of incident-
initiated revocations by GTS).

Thus it is technically inevitable that some revocation servers will
respond "unknown" for a freshly issued (as in signed) certificate.

So the real question is which of the various steps in a CT-logged
issuance are allowed to happen while the certificate status is
replicated to all the OCSP and CRL servers operated by a CA.

It is of course possible to pretend that a signed but not yet submitted
certificate and/or preCertificate doesn't exist, creating a strict
serialization of steps:

  1. Allocate serial number and other content.
  2. Replicate this allocation to survive crashes.
  3. Sign the preCertificate, canned "good" OCSP responses and a fresh
CRL (if the CRL contains extensions describing the valid serial
numbers).
  4. Replicate the preCertificate and OCSP responses to survive crashes.
(Failure during this step results in having to sign the same
preCertificate again, usually getting the same signature).
  5. Wait for all those OCSP responses and CRLs to reach all mirrors.
  6. Record the completion of this rollout in the replicated dataset.
  7. Submit the preCertificate to CT logs (which unnecessarily check
that the mirrors report "good" instead of "unknown" according to
the strict policy);
  8. Wait for all the CT logs to return SCTs.
  9. Replicate the SCTs to survive crashes.
(failure during this step will require extracting SCTs from the
CT logs or revoking the preCertificate and starting over).
 10. Sign the actual certificate and any further canned OCSP responses
   and CRLs that refer to the contents or signature of the actual
   certificate.
 11. Replicate the actual certificate and any new revocation data to
   survive crashes.
(Failure during this step results in having to sign the same
preCertificate again, usually getting the same signature,
alternatively revoke and start over).
 12. Wait for any such OCSP responses and CRLs to reach all mirrors.
 13. Record the completion of this rollout in the replicated dataset.
(Failure during this step results in having to check the rollout 
status again).
 14. Submit the actual certificate to CT logs and await acknowledgement.
 15. Finally send the actual certificate to the subscriber.
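The strict serialization above can be sketched as an ordered pipeline, where each replicate/wait step acts as a barrier before the next signing or submission step. The step names below paraphrase the numbered list and are not drawn from any real CA software:

```python
# Illustrative only: each step is a stand-in name for the real CA
# operation; the point is the ordering and the replication barriers.
STRICT_PIPELINE = [
    "allocate_serial",
    "replicate_allocation",
    "sign_precert_and_initial_status",       # preCert, canned "good" OCSP, CRL
    "replicate_precert_and_status",
    "wait_for_status_on_all_mirrors",
    "record_rollout_complete",
    "submit_precert_to_ct",
    "await_scts",
    "replicate_scts",
    "sign_final_certificate",
    "replicate_final_cert_and_status",
    "wait_for_final_status_on_all_mirrors",
    "record_final_rollout_complete",
    "submit_final_cert_to_ct",
    "deliver_cert_to_subscriber",
]

def run(pipeline):
    # A real CA would retry or restart a step on crash, per the
    # failure notes in the list above; here we just record ordering.
    completed = []
    for step in pipeline:
        completed.append(step)
    return completed

done = run(STRICT_PIPELINE)
# OCSP mirrors are consistent before the first CT submission:
assert done.index("wait_for_status_on_all_mirrors") < done.index("submit_precert_to_ct")
# The subscriber only ever sees the certificate at the very end:
assert done[-1] == "deliver_cert_to_subscriber"
```

The replication barriers are exactly what makes this serialization slow, which is the motivation for the eventual-consistency alternative described next.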

My alternative 4 paragraph proposal (rest was rationale) was to accept
the existence of delays, aim for eventual consistency, maximize overlap
between the operations by allowing the initial CT submission to happen
before all revocation 

Re: DigiCert OCSP services returns 1 byte

2019-09-16 Thread Jakob Bohm via dev-security-policy
On 16/09/2019 19:08, Andrew Ayer wrote:
> On Fri, 13 Sep 2019 08:22:21 +
> Rob Stradling via dev-security-policy
>  wrote:
> 
>> Thinking aloud...
>> Does anything need to be clarified in 6962-bis though?
> 
> Yes, it's long past time that we clarified what this means:
> 
> "This signature indicates the CA's intent to issue the certificate.  This
> intent is considered binding (i.e., misissuance of the precertificate is
> considered equivalent to misissuance of the corresponding certificate)."
> 
> The goal is that a precertificate signature creates an unrebuttable
> presumption that the CA has issued the corresponding certificate. If a
> CA issues a precertificate, outside observers will treat the CA as if
> it had issued the corresponding certificate - whether or not the CA
> really did - so the CA should behave accordingly.
> 
> It's worth explicitly mentioning the implications of this:
> 
> * The CA needs to operate revocation services for the corresponding
> certificate as if the certificate had been issued.
> 
> * If the corresponding certificate would be misissued, the CA will be
> treated as if it had really issued that certificate.
> 
> Are there any other implications that 6962-bis should call out
> explicitly?
> 
> Regards,
> Andrew
> 

How about the following (Mozilla) policy rules:

If a CA submits a preCertificate to CT, then it MUST ensure that one of 
the following is true no more than 96 hours after submitting the 
preCertificate and no more than 24 hours after signing the corresponding 
actual certificate:

 - The corresponding actual certificate has been signed and submitted to 
  CT (regardless if said submission was accepted), and all CA provided 
  revocation mechanisms and servers report that the actual certificate 
  is valid and not revoked.

 - The corresponding actual certificate has been signed, SHOULD have 
  been submitted to CT (regardless if said submission was accepted) and 
  was later revoked.  All CA provided revocation mechanisms and servers 
  MUST report that the actual certificate exists and SHOULD report that 
  it has been revoked at a time no earlier than 1 second before the 
  notBefore time in the preCertificate (other policy requirements or BRs 
  state that they MUST so report before a different deadline).

 - The corresponding actual certificate has not been signed and MUST NOT 
  be subsequently signed.  All CA provided revocation mechanisms and 
  servers must report that the certificate serial number was revoked at 
  least 60 seconds before the time specified as "notBefore" in the 
  preCertificate.
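The two deadlines in the proposal (96 hours from preCert submission, and 24 hours from signing the actual certificate) combine as a minimum. A small sketch, with invented function and variable names, showing how the effective deadline would be computed:

```python
from datetime import datetime, timedelta
from typing import Optional

def precert_deadline(precert_submitted: datetime,
                     final_signed: Optional[datetime]) -> datetime:
    """Deadline by which one of the three conditions above must hold:
    96 hours after preCert submission, tightened to 24 hours after the
    actual certificate is signed, whichever comes first."""
    deadline = precert_submitted + timedelta(hours=96)
    if final_signed is not None:
        deadline = min(deadline, final_signed + timedelta(hours=24))
    return deadline

t0 = datetime(2019, 9, 16, 12, 0)
# Actual cert signed 12 hours after preCert submission: the 24-hour
# clock after signing governs (t0 + 36h < t0 + 96h).
assert precert_deadline(t0, t0 + timedelta(hours=12)) == t0 + timedelta(hours=36)
# Actual cert never signed: the 96-hour clock governs.
assert precert_deadline(t0, None) == t0 + timedelta(hours=96)
```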

Rationale:
 - Requirements explicitly refer to revocation of the corresponding 
  actual certificate, not the (perhaps differently signed) 
  preCertificate.

 - 96 hours is the existing BR deadline for updating revocation servers.

 - 24 hours is reasonable time for rolling out the "certificate good" 
  status throughout a CA infrastructure, before taking actions such 
  as submitting the actual certificate to CT.  In practice, a CA will 
  need to do this faster if it wants to provide the actual certificate 
  to subscribers faster than this.

 - If a CA wishes to make an actually/possibly signed certificate never 
  valid, it can report it as revoked 1 or 0 seconds before the notBefore 
  time.  This is appropriate where a problem report indicates that the 
  certificate should never have been issued, or if signing the actual 
  certificate was initiated but the result was not successfully stored.

 - Revocation times of 2 to 59 seconds before notBefore are reserved for 
  use in future policies.

 - A CA system can be compliant while having only transaction-level 
  reliability.

 - This caters to fast online CAs that complete the entire process in a 
  few minutes or seconds.

 - This also caters to CAs that keep the private CA key offline and 
  require human confirmation of signing actions.  Such a CA may perform 
  human confirmation of PreCertificate signing on a Friday, only accept 
  problem report revocations during the weekend, then manually confirm 
  signing of actual certificate, CRL and canned OCSP responses on 
  Monday.  Root CAs with signing ceremonies would be a typical case.  
  Dedicated high security issuing SubCAs would be another case.

 - This also caters to the scenario where a usually fast online CA
  experiences a technical glitch that prevents completion of issuance,
  followed by a few days to detect the situation and revoke the non-
  issued certificates.

 - It also caters to the scenario of a CT log failing to issue the
  expected CT proofs, and the CA timing out the wait for that CT log
  after 24 to 72 hours.

 - It also caters to the scenario where a CT log fails to accept
  submission of the signed actual certificate.

 - Issuing a CT logged certificate but keeping it locked up in the CA
  building explicitly becomes a policy violation after just a few days,
  because it is then explicitly required to be published 

Re: Question about the issuance of OCSP Responder Certificates by technically constrained CAs

2019-09-04 Thread Jakob Bohm via dev-security-policy
On 04/09/2019 17:14, Ryan Sleevi wrote:
> On Wed, Sep 4, 2019 at 11:06 AM Ben Wilson  wrote:
> 
>> I thought that the EKU "id-kp-OCSPSigning" was for the OCSP responder
>> certificate itself (not the CA that issues the OCSP responder certificate).
>> I don't think I've encountered a problem before, but I guess it would
>> depend
>> on the implementation?
> 
> 
> Correct. Mozilla does not require the EKU chaining, in technical
> implementation or in policy. The aforementioned comments, however, indicate
> CAs have reported that Microsoft does. That is, the assertion is that
> Microsoft requires that issuing CAs bear an overlapping set of EKUs that
> align with their issued certificates, whether subordinate CAs, end-entity,
> or OCSP responders. Mozilla requires the same thing with respect to
> id-kp-serverAuth, but the Mozilla code has a special carve-out for
> id-kp-OCSPSigning that both doesn't require it on intermediate CAs, but
> also allows it to be present, precisely because of the presumed Microsoft
> requirement.
> 

This Microsoft requirement is highly unfortunate: unless there is an 
explicit RFC permission that allows OCSP clients to reject responses 
from delegated OCSP responder certificates with CA:TRUE, yet accept 
them from issuing CA certificates with CA:TRUE, clients will be 
required by the RFCs to accept bogus OCSP responses signed by SubCA 
certificates that carry the EKU for Microsoft compatibility.

This is especially bad if the SubCA is controlled by an entity other 
than its direct parent CA.


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” for Some Precertificates

2019-09-02 Thread Jakob Bohm via dev-security-policy
On 03/09/2019 00:54, Ryan Sleevi wrote:
> On Mon, Sep 2, 2019 at 2:14 PM Alex Cohn via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> On Mon, Sep 2, 2019 at 12:42 PM Jakob Bohm via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>>> If an OCSP server supports returning (or always returns) properties of
>>> the actual cert, such as the CT proofs, then it really cannot do its
>>> usual "good" responses until the process of retrieving CT proofs and
>>> creating the final TBScertificate (and possibly signing it) has been
>>> completed.
>>>
>>> Thus as a practical matter, treating a sign-CT-sign-CT in-process state
>>> as "unknown serial, may issue in future" may often be the only practical
>>> solution.
>>>
>>
>> Waiting until CT submission of the final certificate is complete to return
>> "good" OCSP responses is definitely wrong. OCSP should return "good" at the
>> moment the final certificate is issued, which means in practice that there
>> might be a "good" OCSP response that doesn't contain SCTs yet.
>>
>> I don't know if any log does this, but RFC6962 allows logs to check for
>> certificate revocation before issuing a SCT; if the OCSP responder doesn't
>> return "good" the CA might never get the needed SCTs?
> 
> 
> Correct. Which is why I recommend CAs ignore Jakob Bohm’s advice here, as
> it can lead to a host of complications for CAs, Subscribers, and Relying
> Parties.
> 

I was not advising either course of action; I was trying to explore 
conflicting requirements between the BRs, PKIX and the CT spec, which 
have apparently led to confusion as to what OCSP should return for 
soon-to-be-issued and not-yet-issued certificates.

This particular CT requirement contradicted a common Google practice 
across its services, which made me suspect it might be a specification 
oversight rather than intentional.


> 
>>
>> Also, if a CA is signing a precert, getting SCTs for that precert, then
>> embedding the SCTs in the final cert, they are already satisfying the
>> browsers' CT requirements. It would not be necessary for them to
>> additionally embed SCTs for the final cert in their OCSP responses.
>>
>> Now depending on interpretations, I am unsure if returning "revoked" for
>>> the general case of "unknown serial, may issue in future" would violate
>>> the ban on unrevoking certificates.
>>>
>>
>> RFC6960 section 2.2 documents a technique for indicating "unknown serial,
>> may issue in future" that involves returning "revoked" with a revocation
>> date of 1970-1-1 and a reason of certificateHold. I don't know if this
>> technique is used anywhere in practice - IIRC it requires the OCSP signing
>> key to be online and able to sign responses for arbitrary serial numbers in
>> real time.
> 
> 
> The BRs explicitly prohibit this.
> 
> You cannot unrevoke or suspend.
> 

That was my interpretation too.

> (Are any CAs even using the OCSP SCT delivery option? I haven't come across
>> this technique in the wild)
> 
> 
> Yes, several are.
> 


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” for Some Precertificates

2019-09-02 Thread Jakob Bohm via dev-security-policy

On 02/09/2019 20:13, Alex Cohn wrote:

On Mon, Sep 2, 2019 at 12:42 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


If an OCSP server supports returning (or always returns) properties of
the actual cert, such as the CT proofs, then it really cannot do its
usual "good" responses until the process of retrieving CT proofs and
creating the final TBScertificate (and possibly signing it) has been
completed.

Thus as a practical matter, treating a sign-CT-sign-CT in-process state
as "unknown serial, may issue in future" may often be the only practical
solution.



Waiting until CT submission of the final certificate is complete to return
"good" OCSP responses is definitely wrong. OCSP should return "good" at the
moment the final certificate is issued, which means in practice that there
might be a "good" OCSP response that doesn't contain SCTs yet.

I don't know if any log does this, but RFC6962 allows logs to check for
certificate revocation before issuing a SCT; if the OCSP responder doesn't
return "good" the CA might never get the needed SCTs?



This seems to be an unfortunate aspect of the CT spec that wasn't
thought through properly.  In particular, it can cause unnecessary
delay if a CA updates its OCSP servers using "eventual consistency"
principles.  Maybe it should be fixed in the next update of the spec.

The BRs have the following requirements that are best satisfied by
delayed update of OCSP servers:

BR 4.10.2: The OCSP servers must be up 24x7.  Thus rolling out updates
to different servers at different times would be a typical best practice.

BR 4.9.10: The OCSP servers only need to be updated twice a week (4 days).
This could only satisfy the CT requirement if issued certificates
were somehow withheld from the subscriber for up to 4 days.




Also, if a CA is signing a precert, getting SCTs for that precert, then
embedding the SCTs in the final cert, they are already satisfying the
browsers' CT requirements. It would not be necessary for them to
additionally embed SCTs for the final cert in their OCSP responses.

Now depending on interpretations, I am unsure if returning "revoked" for
the general case of "unknown serial, may issue in future" would violate
the ban on unrevoking certificates.



RFC6960 section 2.2 documents a technique for indicating "unknown serial,
may issue in future" that involves returning "revoked" with a revocation
date of 1970-1-1 and a reason of certificateHold. I don't know if this
technique is used anywhere in practice - IIRC it requires the OCSP signing
key to be online and able to sign responses for arbitrary serial numbers in
real time.



The question was whether this technique would be in violation of the BRs,
as those generally prohibit the use of "certificateHold".
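For concreteness, the RFC 6960 section 2.2 convention quoted above can be expressed as a field-level template. This is a plain data structure for illustration, not a real OCSP encoder; the class and field names are invented for this sketch:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class RevokedInfo:
    revocation_time: datetime
    reason: str

def not_issued_response() -> RevokedInfo:
    """RFC 6960 section 2.2 convention: signal "serial not issued, may
    be issued in future" as revoked at the Unix epoch with reason
    certificateHold.  Note the BRs generally forbid certificateHold,
    which is exactly the conflict discussed in this thread."""
    return RevokedInfo(
        revocation_time=datetime(1970, 1, 1, tzinfo=timezone.utc),
        reason="certificateHold",
    )

resp = not_issued_response()
assert resp.revocation_time.year == 1970
assert resp.reason == "certificateHold"
```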


(Are any CAs even using the OCSP SCT delivery option? I haven't come across
this technique in the wild)

Alex




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” for Some Precertificates

2019-09-02 Thread Jakob Bohm via dev-security-policy
If an OCSP server supports returning (or always returns) properties of 
the actual cert, such as the CT proofs, then it really cannot do its 
usual "good" responses until the process of retrieving CT proofs and 
creating the final TBScertificate (and possibly signing it) has been 
completed.

Thus as a practical matter, treating a sign-CT-sign-CT in-process state 
as "unknown serial, may issue in future" may often be the only practical 
solution.

Now depending on interpretations, I am unsure if returning "revoked" for 
the general case of "unknown serial, may issue in future" would violate 
the ban on unrevoking certificates.

On 31/08/2019 17:07, Jeremy Rowley wrote:
> Obviously I think good is the best answer based on my previous posts. A 
> precert is still a cert. But I can see how people could disagree with me.
> 
> From: dev-security-policy  on 
> behalf of Jeremy Rowley via dev-security-policy 
> 
> Sent: Saturday, August 31, 2019 9:05:24 AM
> To: Tomas Gustavsson ; 
> mozilla-dev-security-pol...@lists.mozilla.org 
> 
> Subject: Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” 
> for Some Precertificates
> 
> I don't recall the CAB Forum ever contemplating or discussing OCSP for 
> precertificates. The requirement to provide responses is pretty clear, but 
> what that response should be is a little confusing, imo.
> ...


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” for Some Precertificates

2019-08-30 Thread Jakob Bohm via dev-security-policy
On 30/08/2019 01:36, Jacob Hoffman-Andrews wrote:
> Also filed at https://bugzilla.mozilla.org/show_bug.cgi?id=1577652
> 
> On 2019.08.28 we read Apple’s bug report at 
> https://bugzilla.mozilla.org/show_bug.cgi?id=1577014 about DigiCert’s OCSP 
> responder returning incorrect results for a precertificate. This prompted us 
> to run our own investigation. We found in an initial review that for 35 of 
> our precertificates, we were serving incorrect OCSP results (“unauthorized” 
> instead of “good”). Like DigiCert, this happened when a precertificate was 
> issued, but the corresponding certificate was not issued due to an error.
> 
> We’re taking these additional steps to ensure a robust fix:
>- For each precertificate issued according to our audit logs, verify that 
> we are serving a corresponding OCSP response (if the precertificate is 
> currently valid).
>- Configure alerting for the conditions that create this problem, so we 
> can fix any instances that arise in the short term.
>- Deploy a code change to Boulder to ensure that we serve OCSP even if an 
> error occurs after precertificate issuance.
> 

CAs affected by OCSP misbehavior in this particular scenario (Pre-Cert 
issued and submitted to CT, actual cert not issued) should take a look 
at those error cases and subdivide them into:

1. No intent to actually issue the actual cert.  Best handling is to 
  treat as revoked in all revocation protocols and logs, but with audit 
  and incident systems reporting that the cert wasn't actually issued.
  ( *Most common case* ).

2. Intent to actually issue later, if/when something happens that just 
  takes longer than usual.  Here it makes sense for OCSP and other such 
  mechanisms to return "good", due to the ban on reporting "certificate 
  hold" or otherwise exiting a revoked state.  It of course remains 
  possible to later revoke such a half-issued cert if there is a reason 
  to do so.

3. Intent to issue in a few minutes, somehow OCSP was queried during the 
  short processing delay between CT submission and actual issuing of 
  cert with embedded CT proofs.  Because inserting the CT proofs in the 
  OCSP responses probably awaits the same technical condition as the 
  cert issuing, it makes sense to return "unknown" for those few 
  minutes, as when delivery of the cert status to the OCSP servers is 
  itself delayed by a few minutes (up to whatever limit policies set 
  for updating revocation servers with knowledge of new certs).
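A hypothetical mapping of the three cases above to OCSP answers (the scenario numbers follow the list; the function itself is purely illustrative and not from any real responder):

```python
# Sketch of the proposed handling; comments restate the rationale
# given in the three numbered cases above.
def ocsp_status_for(scenario: int) -> str:
    if scenario == 1:      # no intent to issue the actual cert
        return "revoked"   # plus audit note: cert never actually issued
    if scenario == 2:      # issuance intended, merely delayed
        return "good"      # cannot pass through "hold"/revoked and back
    if scenario == 3:      # minutes-long CT round-trip in progress
        return "unknown"   # status not yet replicated to responders
    raise ValueError("scenario must be 1, 2 or 3")

assert [ocsp_status_for(s) for s in (1, 2, 3)] == ["revoked", "good", "unknown"]
```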

Scenario 2 can be subdivided in two sub-cases for compliance purposes:

2A: Pre-cert (and thus cert) has a "valid from" date equal or near the 
  CT submission time.  Here the CT logged pre-cert provides proof that 
  this is not backdating, even though cert usage won't start until 
  later.

2B: Pre-cert (and thus cert) has a "valid from" date closer to the 
  intended issuing date for the final cert.  Here the CT logged pre-cert 
  provides proof that certain issuance decisions happened earlier, thus 
  affecting when some of the validity time limits are reached.
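For case 2A, the CT log entry itself rebuts any backdating claim; a 
sketch of that check, where the 24-hour tolerance is an assumed policy 
value, not a requirement from any guideline:

```python
from datetime import datetime, timedelta

def not_backdated(not_before, sct_timestamp, tolerance=timedelta(hours=24)):
    """Case 2A: the CT-logged precert proves the notBefore date is honest
    when it is at, or only slightly before, the CT submission time."""
    return not_before >= sct_timestamp - tolerance
```

A cert whose notBefore lies well before its SCT timestamp fails the 
check and would need explaining; case 2B (notBefore after CT 
submission) passes trivially.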

Note that for some cert types (such as certs for S/MIME SubCAs), a 
subscriber can trivially make use of the validity period before the 
cert actually exists, while in other cases that is not possible (beyond 
the difficulty of proving that the cert doesn't yet exist).



Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-29 Thread Jakob Bohm via dev-security-policy
On 29/08/2019 19:47, Nick Lamb wrote:
> On Thu, 29 Aug 2019 17:05:43 +0200
> Jakob Bohm via dev-security-policy
>  wrote:
> 
>> The example given a few messages above was a different jurisdiction
>> than those two easily duped company registries.
> 
> I see. Perhaps Vienna, Austria has a truly exemplary registry when it
> comes to such things. Do you have evidence of that? I probably can't
> read it even if you do.
> 

I have no specific knowledge, but Austrians probably do, drawing on 
empirical data from their life experience and press coverage of all 
kinds of business- and fraud-related mischief: Did fraudsters get away 
with registering misleading company names?  Were the registrations 
revoked by authorities soon after discovery?  Was there a serious 
police effort to prosecute?

Same for any other country as known by its citizens.

> But Firefox isn't a Viennese product, it's available all over the
> world. If only some handful of exemplary registries contain trustworthy
> information, you're going to either need to persuade the CAs to stop
> issuing for all other jurisdictions, or accept that it isn't actually
> helpful in general.
> 

The point is you keep bringing up examples from exactly two countries in 
a world with more than 100 countries.

The usefulness of knowing that a Mozilla-accepted and regularly audited 
CA has confirmed that the connection matches a government record in a 
country ties directly to the trust that people can reasonably attribute 
to that part of that government.  This in turn is approximately the 
same trust applicable to those government records being reflected in 
other parts of official life, such as phone books, building permits, 
business certificates posted in offices, etc.

In either case there is some residual risk of fraud, as always.


>> You keep making the logic error of concluding from a few examples to
>> the general.
> 
> The IRA's threat to Margaret Thatcher applies:
> 
> We only have to be lucky once. You will have to be lucky always.
> 
> Crooks don't need to care about whether their crime is "generally"
> possible, they don't intend to commit a "general" crime, they're going
> to commit a specific crime.
> 

Almost any anti-crime effort has some probability of success and 
failure.  If a measure would stop the IRA's attacks on Thatcher 99.5% 
of the time, they would, on average, have to try roughly 140 times to 
get a 50/50 chance of getting lucky.  If another measure took away 
another 80% of their remaining chance, her per-attempt odds would rise 
to 99.9%, and so on.  At some point her chances became good enough to 
actually retire and die of old age, having accumulated even more 
enemies.

One of the measures known to have saved her at least once was a number 
of barriers forcing an IRA rocket attack to fly just far enough to miss.  
Certainly not a perfect measure, but clearly better than nothing.
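The cumulative-probability arithmetic behind these figures is easy to 
make precise:

```python
import math

p = 0.005  # attacker's per-attempt success chance past a 99.5% measure

# Attempts needed for a 50% cumulative chance of at least one success:
n = math.log(0.5) / math.log(1 - p)   # about 138.3

# A second measure removing 80% of the remaining chance:
layered_p = p * 0.2                   # 0.001, i.e. 99.9% per-attempt survival
```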


>> A user can draw conclusions from their knowledge of the legal climate
>> in a jurisdiction, such as how easy it is to register fraudulent
>> untraceable business names there, and how quickly such fraudulent
>> business registrations are shut down by the legal teams of high
>> profile companies such as MasterCard Inc.
> 
> Do you mean knowledge here, or beliefs? Because it seems to me users
> would rely on their beliefs, that may have no relationship whatsoever
> to the facts.
> 

Of course they would use their imperfect knowledge (beliefs) about the 
country they live and survive in.  Knowing what kind of official 
paperwork to trust is a basic life skill in any society with common 
literacy (illiterates wouldn't be able to read what it says on an 
official document, nor read the words in a browser UI).

>> That opinion still is lacking in strong evidence of anything but spot
>> failures under specific, detectable circumstances.
> 
> We only have to be lucky once.
> 

When fighting a wave of similar crimes committed many times, reducing 
the number of times the criminals get lucky is a win, even if that crime 
is murder instead of theft.

>> Except that any event allowing a crook to hijack http urls to a
>> domain is generally sufficient for that crook to instantly get and
>> use a corresponding DV certificate.
> 
> If the crook hijacks the actual servers, game is over anyway,
> regardless of what type of certificate is used.
> 

Hijacking the authorized server is obviously game over.

Hijacking DNS or IP routing in any repeatable manner can be used to get 
a DV cert and then, a bit later, to present that cert to victim 
browsers.  Of course, if the hijacking ability happens not to include 
the view from any of the DV-issuing CAs, then that one-two punch fails.
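This is exactly the gap CAA (RFC 8659) is meant to close.  A minimal 
sketch of the "relevant record set" walk, with a plain dict standing in 
for DNS and only the `issue` property modeled:

```python
def caa_permits(domain, ca_id, tree):
    """Walk from the domain toward the root; the first label with any CAA
    records is authoritative (the RFC 8659 relevant record set)."""
    labels = domain.split(".")
    for i in range(len(labels)):
        node = ".".join(labels[i:])
        records = tree.get(node)
        if records:  # first non-empty CAA set wins
            issuers = [v for (tag, v) in records if tag == "issue"]
            if not issuers:
                # CAA present, but only non-restricting tags (e.g. iodef):
                # issuance is not restricted.
                return True
            return ca_id in issuers
    return True  # no CAA anywhere in the tree: any CA may issue
```

For example, with `tree = {"example.com": [("issue", "ca.example.net")]}`, 
only `ca.example.net` may issue for `www.example.com`, while an 
unrelated domain with no CAA records remains open to all CAs.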

> Domain owners can set CAA (now that it's actually enforced) to deny
> crooks the opportunity from an IP hijack. More sophisticat

Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-29 Thread Jakob Bohm via dev-security-policy
On 29/08/2019 10:58, Nick Lamb wrote:
> On Wed, 28 Aug 2019 11:51:37 -0700 (PDT)
> Josef Schneider via dev-security-policy
>  wrote:
> 
>> Not legally probably and this also depends on the jurisdiction. Since
>> an EV cert shows the jurisdiction, a user can draw conclusions from
>> that.
> 
> Yes it is true that crimes are illegal. This has not previously stopped
> criminals, and I think your certainty that it will now is misplaced.
> 
> What conclusions would you draw from the fact that the jurisdiction is
> the United Kingdom of Great Britain and Northern Ireland? Or the US
> state of Delaware ?
> 
> Those sound fine right? Lots of reputable businesses?
> 
> Yes, because those are great places to register a business,
> tremendously convenient. They have little if any regulation on
> registering businesses, light touch enforcement and they attract a
> modest fee for each one.
> 
> This is of course also exactly the right environment for crooks.
> 

The example given a few messages above was a different jurisdiction than 
those two easily duped company registries.

You keep making the logic error of concluding from a few examples to 
the general.

A user can draw conclusions from their knowledge of the legal climate in 
a jurisdiction, such as how easy it is to register fraudulent 
untraceable business names there, and how quickly such fraudulent 
business registrations are shut down by the legal teams of high profile 
companies such as MasterCard Inc.

Such knowledge can be expected to be greater among residents of said 
jurisdiction, who may also be in a much better position to recognize 
that an address is a building site, the city stadium etc.

> 
>> But removing the bar is also not the correct solution. If you find
>> out that the back door to your house is not secured properly, will
>> you remove the front door because it doesn't matter anyway or do you
>> strengthen the back door?
> 
> Certainly if crooks are seen to walk in through the back door and none
> has ever even attempted to come through the upstairs windows, it is
> strange to insist that removing the bars from your upstairs windows to
> let in more light makes the house easier to burgle.
> 

That no one attempted to enter through the visibly barred window says 
nothing about how many turned away upon seeing the bars.  Nor does it 
say anything about which other measures have been taken to deal with 
the known problems of the back door (maybe they make a habit of 
blocking the door between the back entrance and the stairwell whenever 
leaving the house).

>> The current
>> EV validation information in the URL works and is helpful to some
>> users (maybe only a small percentage of users, but still...)
> 
> Is it helpful, or is it misleading? If you are sure it's helpful, and
> yet as we saw above you don't really understand the nuances of what
> you're looking at (governments are quite happy to collect business
> registration fees from crooks) then I'd say that means it's misleading.
> 

You presume government failures in some jurisdictions imply such 
failures everywhere.  Some jurisdictions require various economic 
assurances of registered companies, often tiered by the desired level of 
incorporation.  Some require a known arrestable citizen or liability 
insured public accountant to securely sign off on the registration.

>> EV certificates do make more assurances about the certificate owner
>> than DV certificates. This is a fact. This information can be very
>> useful for someone that understands what it means. Probably most
>> users don't understand what it means. But why not improve the display
>> of this valuable information instead of hiding it?
> 
> The information is valuable to my employer, which does with it
> something that is useless to Mozilla's users and probably not in line
> with what EV certificate purchasers were intending, but I'm not on
> m.d.s.policy to speak for my employer, and they understood that
> perfectly well when they hired me.
> 
> In my opinion almost any conceivable display of this information is
> likely to mislead users in some circumstances and bad guys are ideally
> placed to create those circumstances. So downgrading the display is a
> reasonable choice especially when screen real estate is limited.
> 

That opinion still is lacking in strong evidence of anything but spot 
failures under specific, detectable circumstances.


>> Certificates cannot magically bring security. Certificates are about
>> identity. But the fact that the owner of the website somebank.eu is
>> the owner of the domain somebank.eu is not that helpful in
>> determining the credibility.
> 
> If I process a link (as browsers do many times in constructing even
> trivial web pages these days) then this assures me it actually links to
> what was intended.
> 
> This is enough to bootstrap WebAuthn (unphishable second factor
> credentials) and similar technologies, to safeguard authentication
> cookies and sandbox active code inside an 

Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-27 Thread Jakob Bohm via dev-security-policy
On 27/08/2019 08:03, Peter Gutmann wrote:
> Jakob Bohm via dev-security-policy  
> writes:
> 
>> <https://www.typewritten.net/writer/ev-phishing/> and
>> <https://stripe.ian.sh/> both took advantage of weaknesses in two
>> government registries
> 
> They weren't "weaknesses in government registries", they were registries
> working as designed, and as intended.  The fact that they don't work in
> they way EV wishes they did is a flaw in EV, not a problem with the
> registries.
> 

"Working as designed" doesn't mean "working as it should".

The confusion that could be created online by getting EV certificates 
matching those company registrations was almost the same as what could 
be created in the offline world by the registrations directly.


>> Both demonstrations caused the researchers real name and identity to become
>> part of the CA record, which was hand waved away by claiming that could
>> have been avoided by criminal means.
> 
> It wasn't "wished away", it's avoided without too much trouble by criminals,
> see my earlier screenshot of just one of numerous black-market sites where
> you can buy fraudulent EV certs from registered companies.  Again, EV may
> wish this wasn't the case, but that's not how the real world works.
> 

The screenshots you showed were for EV code signing certificates, not 
EV TLS certificates.  They seem related to a report a few years ago 
that spurred work to check the veracity of those screenshots and create 
appropriate countermeasures.

>> A 12-year-old study involving an equally outdated browser.
> 
> So you've published a more recent peer-reviewed academic study that
> refutes the earlier work?  Could you send us the reference?
> 

These two studies are outdated because they study the effects in a 
different overall situation (they were both made when the TLS EV concept 
had not yet been globally deployed).  They are thus based on entirely 
different facts (measured and unmeasured) than the situation in 2019.

Very early in this thread someone quoted from a very recent study 
published at USENIX, comparing the prevalence of malicious sites with 
different types of certificates.  The only response was platitudes, 
such as emphasizing that a small number is nonzero.

Someone is trying very hard to create a fait accompli without going 
through proper debate and voting in relevant organizations such as 
the CAB/F.  So when challenged they play very dirty, using every 
rhetorical trick they can find to overpower criticism of the action.



Enjoy

Jakob


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-26 Thread Jakob Bohm via dev-security-policy
On 26/08/2019 21:49, Jonathan Rudenberg wrote:
> On Mon, Aug 26, 2019, at 15:01, Jakob Bohm via dev-security-policy wrote:
>> <https://www.typewritten.net/writer/ev-phishing/> and
>> <https://stripe.ian.sh/> both took advantage of weaknesses in two
>> government registries to create actual dummy companies with misleading
>> names, then trying to get EV certs for those (with mixed success, as at
>> least some CAs rejected or revoked the certs despite the government
>> failures).
> 
> There were no "weaknesses" or "government failures" here, everything was 
> operating exactly as designed.
> 

The weakness is that those two government registries don't prevent 
conflicting or obviously bad registrations, not even by retroactively 
aborting the process within a few business days.

Even without the Internet this constitutes an obvious avenue for fraud.

> 
>> At least the first of those demonstrations involved a no
>> longer trusted CA (Symantec).
> 
> This doesn't appear to be relevant. The process followed was compliant with 
> the EVGLs, and Symantec was picked because they were one of the most popular 
> CAs at the time.
> 

Symantec was distrusted for sloppy operation, that document version 
(which we have since been informed was not the final version) claimed 
that the only other CA tried did in fact reject the cert application, 
indicating that issuing may not have been following "best current 
practice" at the time.  The revised link posted tonight reverses this 
information.

> 
>> Both demonstrations caused the
>> researchers real name and identity to become part of the CA record,
>> which was hand waved away by claiming that could have been avoided by
>> criminal means.
> 
> It's not handwaving to make the assertion that a fraudster would be willing 
> to commit fraud while committing fraud. Can you explain why you think this 
> argument is flawed?
> 

The EVG requires the CA to attempt to verify the personal identity 
information.  Stating without evidence that this verification is easily 
defrauded is hand waving it away.

> 
>> Studies quoted by Tom Ritter on 24/08/2019:
>>
>>>
>>> "By dividing these users into three groups, our controlled study
>>> measured both the effect of extended validation certificates that
>>> appear only at legitimate sites and the effect of reading a help file
>>> about security features in Internet Explorer 7. Across all groups, we
>>> found that picture-in-picture attacks showing a fake browser window
>>> were as effective as the best other phishing technique, the homograph
>>> attack. Extended validation did not help users identify either
>>> attack."
>>>
>>> https://www.adambarth.com/papers/2007/jackson-simon-tan-barth.pdf
>>>
>>
>> A 12-year-old study involving an equally outdated browser.
> 
> Can you explain why you believe the age this study is disqualifying? What 
> components of the study do you believe are no longer valid due to their age? 
> Are you aware of subsequent studies showing different results?
> 

IE7 may have had a bad UI since changed.  12 years ago, there had not 
been any big outreach campaigns telling users to look for the green bar, 
nor a 10 year build up of user expectation that it would be there for 
such sites.


> 
>>> "Our results showed that the identity indicators used in the
>>> unmodified FF3 browser did not influence decision-making for the
>>> participants in our study in terms of user trust in a web site. These
>>> new identity indicators were ineffective because none of the
>>> participants even noticed their existence."
>>>
>>> http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.543.2117&rep=rep1&type=pdf
>>>
>>
>> An undated(!) study involving highly outdated browsers.  No indication
>> this was ever in a peer reviewed journal.
> 
> This is a peer-reviewed paper that was published in the proceedings of 
> ESORICS 2008: 13th European Symposium on Research in Computer Security, 
> Málaga, Spain, October 6-8, 2008. Dates are actually (unfortunately) uncommon 
> on CS papers unless the publication metadata/frontmatter is intact.
> 

The link posted on Saturday did not in any way provide that publication 
data, attempting to remove the "type=pdf" parameter from the link just 
provided a 404, rather than the expected metadata page or link, which is 
probably a failure of the citeseerx software.

Once again, the study is more than 10 years old, not reflecting the 
public consciousness after years of outreach and user experience.


> 
>>> DV is sufficient. Why pay for something you d

Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-26 Thread Jakob Bohm via dev-security-policy

On 24/08/2019 05:55, Tom Ritter wrote:

On Fri, 23 Aug 2019 at 22:53, Daniel Marschall via dev-security-policy
 wrote:


Am Freitag, 23. August 2019 00:50:35 UTC+2 schrieb Ronald Crane:

On 8/22/2019 1:43 PM, kirkhalloregon--- via dev-security-policy wrote:

Whatever the merits of EV (and perhaps there are some -- I'm not
convinced either way) this data is negligible evidence of them. A DV
cert is sufficient for phishing, so there's no reason for a phisher to
obtain an EV cert, hence very few phishing sites use them, hence EV
sites are (at present) mostly not phishing sites.


Can you prove that your assumption "very few phishing sites use EV (only) because 
DV is sufficient" is correct?


As before, the first email in the thread references the studies performed.


The (obviously outdated) studies quoted below were NOT referenced by the
first message in this thread.  The first message only referenced two
highly unpersuasive demonstrations of the mischief possible in
controlled experiments.

<https://www.typewritten.net/writer/ev-phishing/> and
<https://stripe.ian.sh/> both took advantage of weaknesses in two
government registries to create actual dummy companies with misleading
names, then trying to get EV certs for those (with mixed success, as at
least some CAs rejected or revoked the certs despite the government
failures).  At least the first of those demonstrations involved a no
longer trusted CA (Symantec).  Both demonstrations caused the
researchers real name and identity to become part of the CA record,
which was hand waved away by claiming that could have been avoided by
criminal means.


Studies quoted by Tom Ritter on 24/08/2019:



"By dividing these users into three groups, our controlled study
measured both the effect of extended validation certificates that
appear only at legitimate sites and the effect of reading a help file
about security features in Internet Explorer 7. Across all groups, we
found that picture-in-picture attacks showing a fake browser window
were as effective as the best other phishing technique, the homograph
attack. Extended validation did not help users identify either
attack."

https://www.adambarth.com/papers/2007/jackson-simon-tan-barth.pdf



A 12-year-old study involving an equally outdated browser.


"Our results showed that the identity indicators used in the
unmodified FF3 browser did not influence decision-making for the
participants in our study in terms of user trust in a web site. These
new identity indicators were ineffective because none of the
participants even noticed their existence."

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.543.2117&rep=rep1&type=pdf



An undated(!) study involving highly outdated browsers.  No indication
this was ever in a peer reviewed journal.


DV is sufficient. Why pay for something you don't need?



Unproven claim, especially by studies from before free DV without
traceable credit card payments became the norm.


Enjoy

Jakob


Re: Jurisdiction of incorporation validation issue

2019-08-23 Thread Jakob Bohm via dev-security-policy
[Please note that the way MS Outlook marks quoted text doesn't work well 
with Mozilla mail programs].

On 23/08/2019 22:37, Jeremy Rowley wrote:
>> 1. I believe the BRs and/or underlying technical standards are very
>> clear on whether the ST field should be a full name ("California") or an
>> abbreviation ("CA").
> 
> This is only true of the EV guidelines and only for Jurisdiction of
> Incorporation.  There is no formatting requirement for place of business.
> I think requiring a format would help make the data more useful as you
> could consume it easier en masse.
> 
X.520 (10/2012) says this:

6.3.3 State or Province Name

The State or Province Name attribute type specifies a state or province. 
When used as a component of a directory name, it identifies a geographical 
subdivision in which the named object is physically located or with which 
it is associated in some other important way.

An attribute value for State or Province Name is a string, e.g., S = "Ohio".

stateOrProvinceName ATTRIBUTE ::= {
  SUBTYPE OF   name
  WITH SYNTAX  UnboundedDirectoryString
  LDAP-SYNTAX  directoryString.&id
  LDAP-NAME{"st"}
  ID   id-at-stateOrProvinceName }

The Collective State or Province Name attribute type specifies a state or 
province name for a collection of entries.

collectiveStateOrProvinceName ATTRIBUTE ::= {
  SUBTYPE OF  stateOrProvinceName
  COLLECTIVE  TRUE
  ID  id-at-collectiveStateOrProvinceName }

[End of X.520 section 6.3.3]

For the location (L and street attributes), X.520 is quite vague, but 
the remarkably similar "postalAddress" attribute is defined in terms 
of the F.401 specification.
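To illustrate how such an attribute surfaces in a certificate subject, 
here is a minimal RFC 4514-style DN serializer; the escaping is reduced 
to the common special characters (real implementations also handle a 
leading '#' and leading/trailing spaces):

```python
def rdn_escape(value):
    """Escape the common RFC 4514 special characters in an attribute value."""
    out = value
    for ch in ('\\', ',', '+', '"', ';', '<', '>'):
        out = out.replace(ch, '\\' + ch)
    return out

def dn_string(attrs):
    """Serialize [(type, value), ...], most-specific-first, per RFC 4514."""
    return ",".join("%s=%s" % (t, rdn_escape(v)) for t, v in attrs)
```

For example, `dn_string([("CN", "example.com"), ("ST", "Ohio"), ("C", "US")])` 
yields `CN=example.com,ST=Ohio,C=US`, with an `ST` value exactly as in 
the X.520 example above.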


>> 2. The fact that a country has subdivisions listed in the general ISO
>> standard for country codes doesn't mean that those are always part of
>> the jurisdiction of incorporation and/or address.
> 
> Right. For the EV Guidelines, what matters is the Jurisdiction of
> Registration or Jurisdiction of Incorporation, as that is what is used to
> determine the Jurisdiction of Incorporation/Registration information,
> including what goes into the Registration Number Field.

As I mentioned, these are issues seen with other CAs blindly importing 
ISO 3166-2 into their systems.  For example, one CA recently insisted 
that we fill the ST field with the equivalent of a county, because 
there was a political desire to eliminate elected officials at the 
equivalent of state level, and someone in government probably went 
ahead and submitted an update to ISO 3166-2 presuming success of that 
effort.

>   
> Incorporating Agency is defined as: In the context of a Private
> Organization, the government agency in the Jurisdiction of
> Incorporation under whose authority the legal existence of the entity
> is registered (e.g., the government agency that issues certificates 
> of formation or incorporation). In the context of a Government Entity,
> the entity that enacts law, regulations, or decrees establishing the
> legal existence of Government Entities
> 
> Registration Agency: A Governmental Agency that registers business
> information in connection with an entity's business formation or
> authorization to conduct business under a license, charter or other
> certification. A Registration Agency MAY include, but is not limited
> to (i) a State Department of Corporations or a Secretary of State;
> (ii) a licensing agency, such as a State Department of Insurance; or
> (iii) a chartering agency, such as a state office or department of
> financial regulation, banking or finance, or a federal agency such
> as the Office of the Comptroller of the Currency or Office of Thrift
> Supervision
> 
> This is broad. IMO we should reduce it to be the number listed on the
> certificate of formation/incorporation so there is consistency in what
> the registration means. We should also identify in the certificate the
> source of the registration number as it provides information to relying
> parties about the actual organization.

For most of the non-default numbering sources, the addition made in EVG 
1.7.0 appears to provide this.  Ideally, this should leave us with 
exactly one number-authority for each jurisdiction, org type and number 
format, subject of course to random changes in local legislation and/or 
government practice.

For my example of C=DK, the numbering system for government entities 
has changed multiple times in recent decades.  In the 1970s there were 
only some tiny numbering systems, such as 3-digit county numbers found 
in some obscure government records.  In the early 2000s it was decreed 
that all 
billing of government customers at all levels should use an XML format 
that identified each sub-entity by an EAN number (as in the 13 digit 
number system for product barcodes!), which was subsequently changed to 
many of the larger entities instead getting numbers from the companies 

Re: Jurisdiction of incorporation validation issue

2019-08-23 Thread Jakob Bohm via dev-security-policy

On 23/08/2019 04:29, Jeremy Rowley wrote:

I posted this tonight: https://bugzilla.mozilla.org/show_bug.cgi?id=1576013. It's sort of 
an extension of the "some-state" issue, but with the incorporation information 
of an EV cert.  The tl;dr of the bug is that sometimes the information isn't perfect 
because of user entry issues.

What I was hoping to do is have the system automatically populate the 
jurisdiction information based on the incorporation information. For example, 
if you use the Delaware secretary of state as the source, then the system 
should auto-populate Delaware as the State and US as the jurisdiction. And it 
does...with some.

However, you do have jurisdictions like Germany that consolidate 
incorporation information to www.handelsregister.de so you can't 
actually tell which area is the incorporation jurisdiction until you do 
a search.  Thus, the fields allow some user input.  That user input is 
what hurts.  In the end, we're implementing an address check that 
verifies the locality/state/country combination.

The more interesting part (in my opinion) is how to find and address 
these certs.  Right now, every time we have an issue or whenever a 
guideline changes we write a lot of code, pull a lot of certs, and 
spend a lot of time reviewing.  Instead of doing this every time, we're 
going to develop a tool that will run automatically every time we 
change a validation rule to find everything else that will fail the new 
rules.  In essence, building unit tests on the data.  What I like about 
this approach is that it ends up building a system that lets us see how 
all the rule changes interplay, since sometimes they may interact in 
weird ways.  It'll also let us more easily measure the impact of 
changes on the system.  Anyway, I like the idea.  Thought I'd share it 
here to get feedback and suggestions for improvement.  Still in spec 
phase, but I can share more info as it gets developed.

Thanks for listening.



Additional issues seen at some CAs (not necessarily Digicert):

1. I believe the BRs and/or underlying technical standards are very
  clear on whether the ST field should be a full name ("California") or
  an abbreviation ("CA").

2. The fact that a country has subdivisions listed in the general ISO
  standard for country codes doesn't mean that those are always part of
  the jurisdiction of incorporation and/or address.

3. The fact that a government data source lists the incorporation
  locality of a company, doesn't mean that this locality detail is
  actually a relevant part of the jurisdictionOfIncorporation.  This
  essentially depends on whether the rules in that country ensure uniqueness of
  both the company number and company name at a higher jurisdiction
  level (national or state) to the same degree as at the lower level.
   For example, in the US the company name "Stripe" is not unique
  nationwide.

In practice this means that validation specialists need to draw up
various common facts for each country served by a CA, and keep those up 
to date.


As a non-expert citizen, I believe the proper details for my own country 
(C=DK) are:


1. ST= should be Greenland or Grønland (Greenland self-governing
  territory aka .gl), Faeroe Islands or Færøerne (Faeroe Islands self-
  governing territory aka .fo) or omitted (main country, under the
  central government).  Other territories were lost more than 100 years
  ago and can't occur in current certificate subjects.

2. Company numbers for the main country are numbers from the online CVR
  database, they are the same as VAT numbers except: No leading DK, not
  all companies have a VAT registration and not all VAT registrations
  are companies (some are actually the social security numbers of the
  owners of sole proprietorships).  Other private organizations are
  often listed in CVR too.

3. Government institutions at all levels have numbers from a database
  used for electronic billing in OIO UBL XML formats.  Some of those
  numbers are CVR numbers (like for companies), some are SE numbers and
  some are EAN/GLN numbers.

4. Postal codes are 4 digits (leading 0 only occurs in some special
  cases, DK prefix is added on international physical mail, but is not
  actually part of the postal code).

5. The new code number types added in EV 1.7.0 require additional
  research on how they officially map to Danish public administration.

But a CA validation team should research this further to set up proper
templates and scripts for validating EV/OV/IV applicants claiming C=DK.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org

Re: CA handling of contact information when reporting problems

2019-08-19 Thread Jakob Bohm via dev-security-policy

On 20/08/2019 03:15, Corey Bonnell wrote:

On Monday, August 19, 2019 at 10:26:06 AM UTC-4, Mathew Hodson wrote:

Tom Wassenberg on Twitter reported an experience he had with Sectigo
when reporting a compromised private key.

https://twitter.com/tomwas54/status/1162114413148725248
https://twitter.com/tomwas54/status/1162114465065840640
https://twitter.com/tomwas54/status/1162114495017299976

"So a few weeks ago, I came across a private key used for a TLS
certificate, posted online. These should never be public (hence the
"private"), and every trusted CA is obliged to revoke any certificate
they issued when they become aware its private key is compromised.

"So when I informed the issuing CA (@SectigoHQ) about this, they
promptly revoked the cert. Two weeks later however, I receive an angry
email from the company using the cert (cc'd to their lawyer), blaming
me for a disruption in the services they provide.

"The company explicitly mentioned @SectigoHQ "was so kind" to give
them my contact info! It was a complete surprise for me that
@SectigoHQ would do this without my consent. Especially seeing how the
info was used to badger me."

If these situations were common, it could create a chilling effect on
problem reporting that would hurt the WebPKI ecosystem. Are specific
procedures and handling of contact information in these situations
covered by the BRs or Mozilla policy?


Many CAs disclose the reporter's name and email address as part of their 
response to item 1 of the incident report template [1]. So this information is 
already publicly available if the Subscriber were so inclined to look for it.

Section 9.6.3 of the BRs list the provisions that must be included in the 
Subscriber Agreement that every Applicant must agree to. Notably, one of them 
is protection of the private key. The Subscriber in this case materially 
violated the Subscriber Agreement by disclosing their private key, so I don't 
think they have much footing to go badgering others for problems that they 
brought on themselves.

Thanks,
Corey

[1] https://wiki.mozilla.org/CA/Responding_To_An_Incident#Incident_Report



The question was whether this is appropriate behavior when the incident is
not at the CA, but at a subscriber.  This is typically different from
the case of security researchers filing systemic CA issues, for which
they typically want public recognition.

Specifically, the question is one of whistleblower protection for the
reporter (in the general sense of whistleblower protection, not that of
any single national or other whistleblower-protection legal regime).

On the other hand there is the question of subscribers having a right
to face their accuser when there might be a question of trust of
subjectivity (example: Someone with trusted subscriber private key
access maliciously sending it to the CA to cause revocation for failure
to protect said key).

The situation would get much more complicated when the report claims that
a subscriber violates a subjective rule, such as malicious cert use or
name-ownership conflicts.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-17 Thread Jakob Bohm via dev-security-policy

On 17/08/2019 00:56, James Burton wrote:

If one compares the first EV specification with the current EV
specification one will notice that the EV specification hasn't changed that
much during its lifetime. The issues presented during the last years though
research have been known about since the first adoption of the EV
specification. If CAs really cared about EV they would have tried and
improved it during the past 10+ years but nothing happened. If browsers
decided to keep EV what would change? Nothing at all.


The latest change was May 21, 2019.  This added a way to include additional 
government identifiers, with very strict validation when the identifier is 
a banking identity.


The EV standards development has always been an adversarial thing with
relying party representatives (only browsers allowed to participate,
unfortunately) requiring stronger checks and CAs trying to minimize the
requirements imposed on them.

The alleged "issues presented during the last years through research"
are mostly red herrings.



There is no one point in discussing the removal of EV any further because
the EV specification had already died.


This change doesn't just remove EV, it kills OV too.  I have always
argued that the UI difference between EV and OV should be reduced or
removed, but removing the difference between EV and DV is so obviously
malicious it is indefensible.



On Fri, Aug 16, 2019 at 11:19 PM Matthew Hardeman via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Honestly the issues, as I see them, are twofold:

1.  When I visit a site for the first time, how do I know I should expect
an EV certificate?  I am conscientious about subsequent visits, especially
financial industry sites.

2.  The browsers seem to have a bias toward the average user, that user
literally being less ...smart/aware... than half of all users.  EV is a
feature that can only benefit people who are vigilant and know what to look
for.  It seems dismissive of the more capable users, but I suppose that's
their call.


A stronger, less confusing UI indicator of EV vs. DV (the important
distinction) would greatly reduce that problem.  Instead there has been
an agenda to gradually fade the indication into invisibility, and this
is apparently the final blow.

An obvious non-malicious way forward would be:

1. Reject and revert this malicious change from Mozilla based browsers
  and market this difference as a major benefit of Firefox over the 3
  platform browsers (IE/Edge, Safari and Chrome).  Minor browsers are
  thus shown that there still is a real choice to be better too.

2. Restore the clear rule that the entire address bar will be in the
  color of the certificate validation strength, (initially green for
  EV, white for DV/none, Red for clearly invalid).  Not just a partial
  coloration.

3. Fix the indicator weaknesses (spoofable color favicon in indicator
  area, not switching to the color of a weaker page element cert,
  showing identity fields that are not the same for all page elements).

4. Use different color for OV vs. no cert/DV to reflect the actual
  difference.

5. Display the information from OV certs like the same information in
  EV certs, even if it doesn't qualify for EV color.  For example
  company name and country in OV color where those values pass all
  checks except being EV.

6. Continue to push for stronger (but not excessive) validation of
  certificate elements other than the domain name.  For example there
  should be meaningful validation that the certificate requester
  controls the claimed street address, but having to send a man to
  physically visit every business address every few years would be too
  much for regular EV certs, but still appropriate for "high value/risk
  identities" (similar to "high value/risk domains", but different
  fields).




On Fri, Aug 16, 2019 at 5:15 PM Daniel Marschall via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


I have a few more comments/annotations:

(1) Pro EV persons argue "Criminals have problems getting an EV
certificate, so most of them are using only DV certificates".

Anti EV persons argue "Criminals just don't use EV certificates, because
they know that end users don't look at the EV indicator anyway".

I assume, we do not know which of these two assumptions fits to the
majority of criminals. So why should we make a decision (change of UI)
based on such assumptions?

(2) I am a pro EV person, and I do not have any financial benefit from EV
certificates. I do not own EV certificates, instead my own websites use
Let's Encrypt DV certificates. But when I visit important pages like

Google

or PayPal, I do look at the EV indicator bar, because I know that these
pages always have an EV certificate. If I would visit PayPal and only

see a

normal pad lock (DV), then I would instantly leave the page because I

know

that PayPal always has an EV certificate. So, at least for me, the UI
change is very negative (except if 

Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-16 Thread Jakob Bohm via dev-security-policy

On 17/08/2019 03:15, Peter Gutmann wrote:

Corey Bonnell via dev-security-policy writes:


the effectiveness of the EV UI treatment is predicated on whether or not the
user can memorize which websites always use EV certificates *and* no longer
proceed with using the website if the EV treatment isn't shown. That's a huge
cognitive overhead for everyday web browsing


In any case things like Perspectives and Certificate Patrol already do this
for you, with no overhead for the user, and it's not dependent on whether the
cert is EV or not.  They're great add-ons for detecting sudden cert changes.

Like EV certs though, they have no effect on phishing.  They do very
effectively detect MITM, but for most users it's phishing that's the real
killer.



Your legendary dislike for all things X.509 is showing.  You are
constantly arguing that because they are not perfect, they are useless,
while ignoring any and all improvements since your original write-ups.

You really should look at the long term agendas at work here and
reconsider what you may be inadvertently supporting.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-14 Thread Jakob Bohm via dev-security-policy

On 14/08/2019 18:18, Peter Bowen wrote:

On Tue, Aug 13, 2019 at 4:24 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


A policy of switching from positive to negative indicators of security
differences is no justification to switch to NO indication.  And it
certainly doesn't help user understanding of any indicator to
arbitrarily change it with 3 days of no meaningful discussion.

The only thing that was insecure with Firefox EV has been that the
original EV indicator only displayed the O= and C= field without enough
context (ST, L).  The change fixes nothing, but instead removes the direct
indication of
the validation strength (low-effort DV vs. EV) AND removes the one piece
of essential context that was previously there (country).

If something should be done, it would be to merge the requirements for
EV and OV with an appropriate transition period to cause the distinction
to disappear (so at least 2 years from new issuance policy).  UI
indication should continue to distinguish between properly validated OV
and the mere "enable encryption with no real checks" DV certificates.



I have to admit that I'm a little confused by this whole discussion.  While
I've been involved with PKI for a while, I've never been clear on the
problem(s) that need to be solved that drove the browser UIs and creation
of EV certificates.


EV was originally an initiative to make the CAs properly vet OV
certificates, and to mark those CAs that had done a proper job.
EV issuing CAs were permitted to still sell the sloppily validated
OV certs to compete against the CAs that hadn't yet cleaned up their
act.

This was before the BRs took effect, meaning that the bar for issuing OV
certs was very low.

To heavy-handedly pressure the bad CAs to get in line, Firefox 
simultaneously started to display exaggerated and untruthful warnings 
for OV certificates, essentially telling users they were merely DV 
certificates.


So the intended long term benefit would be that less reliable CAs would
exit the market, making the certificate information displayed more
reliable for users.

The intended short term benefit would be to prevent users from believing
unvalidated certificate information from CAs that didn't check things 
properly.


As BRs and audits for OV certs have been ramped up, the difference 
between OV and EV has become less significant, while the difference 
between DV and OV has massively increased.


Thus blurring the line between OV and EV could now be justified, but 
blurring the line between DV and EV can not.






On thing I've found really useful in working on user experience is to
discuss things using problem & solution statements that show the before and
after.  For example, "It used to take 10 minutes for the fire sprinklers to
activate after sensing excessive heat in our building.  With the new
sprinkler heads we installed they will activate within 15 seconds of
detecting heat above 200ºC, which will enable fire suppression long before
it spreads."



It used to be easy for fraudsters to get an OV certificate with untrue 
company information from smaller CAs.  By only displaying company 
information for more strictly checked EV certificates, it now becomes 
much more difficult for fraudsters to pretend to be someone else, making 
fewer users fall for such scams.


Displaying an overly truncated form of the company information, combined 
with genuine high-trust companies (banks, credit card companies) often 
using obscure subsidiary names instead of their user trusted company 
names for their EV certs has greatly reduced this benefit.





If we assume for a minute that Firefox had no certificate information
anywhere in the UI (no subject info, no issuer info, no way to view chains,
etc), what user experience problem would you be solving by adding
information about certificates to the UI?


This hasn't been the case since before Mozilla was founded.

But lets assume we started from there, the benefit would be to tell
users when they were dealing with the company they know from the
physical world versus someone almost quite unlike them.

Making this visible with as few (maybe 0) extra user actions increases
the likelihood that users will spot the problem when there is one.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-13 Thread Jakob Bohm via dev-security-policy

DO NOT SHIP THIS.  Revert the change immediately and request a CVE
number for the nightlies with this change included.

That Chrome does something harmful is not surprising, and is no
justification for a supposedly independent browser to do the same.

A policy of switching from positive to negative indicators of security
differences is no justification to switch to NO indication.  And it
certainly doesn't help user understanding of any indicator to
arbitrarily change it with 3 days of no meaningful discussion.

The only thing that was insecure with Firefox EV has been that the
original EV indicator only displayed the O= and C= field without enough
context (ST, L).  This was used to create tons of uninformed debate
in order to later present that noise as "extensive discusison [SIC] in
the security community about the usefulness of EV certificates".

The change fixes nothing, but instead removes the direct indication of
the validation strength (low-effort DV vs. EV) AND removes the one piece
of essential context that was previously there (country).

If something should be done, it would be to merge the requirements for
EV and OV with an appropriate transition period to cause the distinction
to disappear (so at least 2 years from new issuance policy).  UI
indication should continue to distinguish between properly validated OV
and the mere "enable encryption with no real checks" DV certificates.

On 12/08/2019 20:30, Wayne Thayer wrote:

Mozilla has announced that we plan to relocate the EV UI in Firefox 70,
which is expected to be released on 22-October. Details below.

If the before and after images are stripped from the email, you can view
them here:

Before:
https://lh4.googleusercontent.com/pSX4OAbkPCu2mhBfeleKKe842DgW28-xAIlRjhtBlwFdTzNhtNE7R43nqBS1xifTuB0L8LO979yhpPpLUIOtDdfJd3UwBmdxFBl7eyX_JihYi7FqP-2LQ5xw4FFvQk2bEObdKQ9F

After:
https://lh5.googleusercontent.com/kL-WUskmTnKh4vepfU3cSID_ooTXNo9BvBOmIGR1RPvAN7PGkuPFLsSMdN0VOqsVb3sAjTsszn_3LjRf4Q8eoHtkrNWWmmxOo3jBRoEJV--XJndcXiCeTTAmE4MuEfGy8RdY_h5u

- Wayne

-- Forwarded message -
From: Johann Hofmann 
Date: Mon, Aug 12, 2019 at 1:05 AM
Subject: Intent to Ship: Move Extended Validation Information out of the
URL bar
To: Firefox Dev 
Cc: dev-platform , Wayne Thayer <
wtha...@mozilla.com>


In desktop Firefox 70, we intend to remove Extended Validation (EV)
indicators from the identity block (the left hand side of the URL bar which
is used to display security / privacy information). We will add additional
EV information to the identity panel instead, effectively reducing the
exposure of EV information to users while keeping it easily accessible.

Before:


After:


The effectiveness of EV has been called into question numerous times over
the last few years; there are serious doubts whether users notice the
absence of positive security indicators, and proofs of concept have pitted
EV against domains for phishing.

More recently, it has been shown that EV
certificates with colliding entity names can be generated by choosing a
different jurisdiction. 18 months have passed since then and no changes
that address this problem have been identified.

The Chrome team recently removed EV indicators from the URL bar in Canary
and announced their intent to ship this change in Chrome 77.
Safari is also no longer showing the EV entity name instead of the domain
name in their URL bar, distinguishing EV only by the green color. Edge is
also no longer showing the EV entity name in their URL bar.



On our side a pref for this
(security.identityblock.show_extended_validation) was added in bug 1572389
(thanks :evilpie for working on it!). We're planning to flip this pref to
false in bug 1572936.

Please let us know if you have any questions or concerns,

Wayne & Johann




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: How to use Cross Certificates to support Root rollover

2019-08-05 Thread Jakob Bohm via dev-security-policy

One note:

As a company that actively supports users with old operating systems and
OS-provided root stores, we have been deliberately including your R1-R3
cross, and are battling problems with a few really old platforms that
plain don't support any certs currently available.  This isn't just
being overcautious about hypothetical compatibility, it's ongoing work
to keep compatibility with a specific list of platforms.

Oh, and as for chain selection, it is important to distinguish between
client and server behaviour.  Ideally, servers would send a compatible
collection while clients would use the first or best valid chain (thus
ending their search when hitting a trusted root DN with matching key).
The AIA-pointing-to-cross trick may work for many GUI browsers, thus
making it useful for servers that only care about those browsers, while
other servers could just continue to send the cross directly.
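The ideal client behaviour described above — stop the search on hitting a trusted root DN with a matching key — can be sketched with a toy model.  This is a hypothetical simplification (plain dicts instead of real certificates, no signature, validity or extension checking), not how any particular library implements it:

```python
# Toy model of chain selection during root rollover.  Each "certificate"
# is a dict naming its subject/key and its issuer's DN/key; trust anchors
# are (subject DN, key) pairs.  Real validators verify signatures too.

def build_chain(leaf, pool, trust_anchors):
    """Walk issuer links from 'leaf' through 'pool' (intermediates and
    cross certificates) until a trusted root is reached, or fail."""
    chain, current = [leaf], leaf
    while (current["issuer"], current["issuer_key"]) not in trust_anchors:
        # Find a pool certificate that issued 'current'.
        issuer = next((c for c in pool
                       if c["subject"] == current["issuer"]
                       and c["key"] == current["issuer_key"]), None)
        if issuer is None:
            return None  # fail secure: no path to any trusted root
        chain.append(issuer)
        current = issuer
    return chain  # search ended at a trusted root DN with matching key

ee    = {"subject": "EE",  "key": "k1", "issuer": "ICA", "issuer_key": "k2"}
ica   = {"subject": "ICA", "key": "k2", "issuer": "R3",  "issuer_key": "k3"}
cross = {"subject": "R3",  "key": "k3", "issuer": "R1",  "issuer_key": "k0"}

# A client trusting the new root R3 stops early and ignores the cross:
print(len(build_chain(ee, [ica, cross], {("R3", "k3")})))  # 2
# A client trusting only the old root R1 needs the cross certificate:
print(len(build_chain(ee, [ica, cross], {("R1", "k0")})))  # 3
```

This is why a server sending the cross directly still works for old clients, while clients that already trust the new root can terminate their search one step sooner.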


On 05/08/2019 16:02, Doug Beattie wrote:

Ryan,

...


We have some customers that mandate a complete SHA-256 chain, including the root.  We 
had been using our older SHA-1 root (R1) and recently moved to our newer SHA-256 
root, (R3).  We now can deliver certificates issued with SHA-256 at all levels, 
great!  In order to support some legacy applications that didn’t have R3 embedded, we 
created a cross certificate R1-R3.  You can get it here.

  


The customer came back and said, hey, it still chains to R1, what’s up?  Oh, 
it’s because the client has the cross certificate cached, don’t worry about 
that, some users will see the chain up to R1 and others to R3.  Hmm, not good 
they say.


Even if this specific web site didn’t configure the extra certificate (the 
R1-R3 cross certificate) into their configuration, the end users may have 
picked it up somewhere else and have it cached so their specific chain goes: 
SSL, Intermediate CA, R1-R3 Cross certificate, Root R1








They are stuck with inconsistent user experience and levels of “security” for 
their website.

  

  

  


At present (and this is changing), Chrome uses the CryptoAPI implementation, 
which is the same as IE, Edge, and other Windows applications.

You can read a little bit about Microsoft's logic here:

- 
https://blogs.technet.microsoft.com/pki/2010/05/12/certificate-path-validation-in-bridge-ca-and-cross-certification-environments/

And a little about how the IIS server selects which intermediates to include in 
the TLS handshake here:

- 
https://support.microsoft.com/en-us/help/2831004/certificate-validation-fails-when-a-certificate-has-multiple-trusted-c

The "short answer" is that, assuming both are trusted, either path is valid, 
and the preference for which path is going to be dictated by the path score, how you can 
influence that path score, and how ties are broken between similarly-scoring certificates.

It’s not clear how a CA can influence the path so that the “most secure” or 
“newest” one is selected.  Since CAs want to roll over to newer, “better” roots, 
how do we stop clients from continuing to use the older one during the transition?  
Is creating a cross certificate with a not-before that is equal to or predates the 
new root permitted?  Is it the only way we can be sure that the new path is 
selected?  Do most/all other web clients also follow this same logic?  Sorry 
for all the questions.

  

  


   * increases TLS handshake packet sizes (or extra packet?), and
   * increases the certificate path from 3 to 4 certificates (SSL, issuing
CA, Cross certificate, Root), which increases the path validation time and
is typically seen as a competitive disadvantage

  


I'm surprised and encouraged to hear CAs think about client performance. That 
certainly doesn't align with how their customers are actually deploying things, based 
on what I've seen from the httparchive.org data 
(suboptimal chains requiring AIA, junk stapled OCSP responses, CAs putting entire 
chains in OCSP responses).

It’s not really the answer I expected, but OK.  Since we don’t control how the 
web sites are configured it’s not clear how CAs can improve this (except for 
your last example).

  


Assuming some CAs want to provide certificates with optimal performance 
characteristics (ECDSA, shorter chains, smaller certificate size, etc.) it 
seems passing down an extra certificate in the handshake isn’t the best 
approach.  Maybe it’s so far in the noise it’s irrelevant.

  


As a practical matter, there are understandably tradeoffs. Yet you can allow 
your customers the control to optimize for their use case and make the decision 
best for them, which helps localize some of those tradeoffs. For example, when 
you (the CA) is first rolling out such a new root, you're right that your 
customers will likely want to include the cross-signed version back to the 
existing root within root stores. Yet as root stores update (which, in 

Re: Comodo password exposed in GitHub allowed access to internal Comodo files

2019-07-29 Thread Jakob Bohm via dev-security-policy

On 28/07/2019 00:41, Nick Lamb wrote:

On Sun, 28 Jul 2019 00:06:38 +0200
Ángel via dev-security-policy 
wrote:


A set of credentials mistakenly exposed in a public GitHub repository
owned by a Comodo software developer allowed access to internal Comodo
documents stored in OneDrive and SharePoint:

https://techcrunch.com/2019/07/27/comodo-password-access-data/


It doesn't seem that it affected the certificate issuance system, but
it's an ugly security incident nevertheless.


What was once the Comodo CA is named Sectigo these days, so conveniently
for us this makes it possible to simply ask whether the incident
affected Sectigo at all:

- Does Sectigo in practice share systems with Comodo such that this
   account would have access to Sectigo internal materials ?



Alternative problem scenario (and thus additional question):

- Did the Comodo systems or data compromised include sensitive
 information about Sectigo systems or operations, such as yet-to-be
 fixed security issues?

This could of course be an effect of this information being present in
files that remained in Comodo's possession, under a promise of secrecy or
deletion, after the operations were split.


In passing it's probably a good time to remind all programme
participants that Multi-factor Authentication as well as being
mandatory for some elements of the CA function itself (BR 6.5.1), is a
best practice for any security sensitive business like yours to be using
across ordinary business functions in 2019. Don't let embarrassing
incidents like this happen to you.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Nation State MITM CA's ?

2019-07-20 Thread Jakob Bohm via dev-security-policy

On 21/07/2019 03:21, My1 wrote:

Hello everyone,

I am new here but also want to share my opinion about some posts here. I know 
it's a lot of text, but I hope it's not too bad.

Am Freitag, 19. Juli 2019 23:42:47 UTC+2 schrieb dav...@gmail.com:

Wouldn't it be easier to just decree that HTTPS is illegal and block all 
outbound 443 (only plain-text readable comms are allowed)?  Then you would not 
have the decrypt-encrypt/decrypt-encrypt slowdown from the MITM.


If you want to block like half the internet or more, which is on the way to 
HTTPS-only, go ahead.


If you don't want to make everyone install a certificate:
Issue a double-wildcard certificate (*.*) that can impersonate any site, load 
it on a BlueCoat system, and sell it to a repressive regime:
https://www.dailydot.com/news/blue-coat-syria-iran-spying-software/

Both scenarios end up in the same place: Nobody trusts encryption/SSL or CAs 
anymore.


As far as I remember, certs only allow one wildcard, and only in the leftmost 
label, so they would need at least *.tld and *.common-sub.tld for all eTLDs 
(*.jp doesn't cover *.co.jp).

Also, you say that this is for not wanting everyone to install a certificate. 
Which trusted CA in their right mind would actually do this? CT is becoming FAR 
bigger as time goes on, and if any CA is caught issuing wildcards for public 
suffixes, that CA would be dead instantly.

Also browsers could just drop certificates with *.(public-suffix) entirely and 
not trust them, no matter the source.



I believe this is either done, or easy to add.
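The two rules discussed above — a single wildcard covering exactly one leftmost label, plus rejecting *.(public-suffix) certificates — can be sketched as follows.  The suffix set here is a tiny hypothetical stand-in for the real Public Suffix List:

```python
# Stand-in for the Public Suffix List; real implementations load the
# full list published at publicsuffix.org.
PUBLIC_SUFFIXES = {"com", "jp", "co.jp"}

def wildcard_matches(pattern, hostname):
    """Match a certificate name against a hostname, allowing only a
    single leftmost-label wildcard and rejecting public-suffix wildcards."""
    if not pattern.startswith("*."):
        return pattern.lower() == hostname.lower()
    parent = pattern[2:].lower()
    # Drop *.(public-suffix) entirely, e.g. *.jp or *.co.jp.
    if parent in PUBLIC_SUFFIXES:
        return False
    labels = hostname.lower().split(".")
    # The wildcard covers exactly one label: *.jp would not match a.co.jp.
    return len(labels) >= 2 and ".".join(labels[1:]) == parent

print(wildcard_matches("*.example.com", "www.example.com"))  # True
print(wildcard_matches("*.example.com", "a.b.example.com"))  # False: one label only
print(wildcard_matches("*.jp", "example.jp"))                # False: public suffix
```

Note that browsers' actual matching rules (per RFC 6125 and each vendor's policy) differ in details; this only illustrates the single-leftmost-label principle and the proposed public-suffix rejection.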




On Friday, July 19, 2019 at 1:27:17 PM UTC-7, Jakob Bohm wrote:

On 19/07/2019 21:13, andrey...@gmail.com wrote:

I am confused. Since when Mozilla is under obligation to provide customized 
solutions for corporate MITM? IMHO, corporations, if needed, can hire someone 
else to develop their own forks of Chrome/Firefox to do snooping on HTTPS 
connections.

In regular browsers, developed by community effort and with public funds, ALL 
MiTM certificates should be just permanently banned, no?



As others (and I) have mentioned, MitM is also how many ordinary
antivirus programs protect users from attacks.  The hard part is
how to distinguish between malicious and user-helping systems.


I fully agree that this is hard, but one also needs to be aware that antiviruses, 
while not unhelpful, also introduce risks by sitting VERY deep in the system. An 
unmonitored MITM action by one could end in disastrous results; in the worst 
case bank phishing could occur, since the user cannot verify the certificate via 
the browser.



Hence my breakdown and suggestions below, which seem to agree with yours
in most cases.


One suggestion I think might be helpful is to have the entire data available while 
keeping the HTTPS signature. There could be a mechanism that allows anti-virus 
software to check content before it is executed/loaded and, if needed, put a big "do 
not execute" flag on certain things like script blocks that are clearly malicious. 
The browser can then check the signature to verify that no content has actually been 
changed, but remove the flagged content while displaying a notification that content 
has been blocked.

That way anti-viruses could only remove elements, but not actually change
anything.



Am Freitag, 19. Juli 2019 22:23:00 UTC+2 schrieb Jakob Bohm:

As someone actually running a corporate network, I would like to
emphasize that it is essential that such mechanisms try to clearly
distinguish the 5 common cases (listed by decreasing harmfulness).

1. A known malicious actor is intercepting communication (such as the
nation state here discussed).

2. An unknown actor is intercepting communication (hard to identify
safely, but there are meaningful heuristic tests).

3. A local/site/company network firewall is intercepting communications
for well-defined purposes known to the user, such as blocking virus
downloads, blocking surreptitious access to malicious sites or
scanning all outgoing data for known parts of site secrets (for
example the Coca-Cola company could block all HTTPS posts containing
their famous recipe, or a hospital could block posts of patient
records to unauthorized parties).  This case justifies a non-blocking
notification such as a different-color HTTPS icon.

4. An on-device security program, such as a local antivirus, does MitM
for local scanning between the browser and the network.  Mozilla could
work with the AV community to have a way to explicitly recognize the
per machine MitM certs of reputable AV vendors (regardless of
political sanctions against some such companies).  For example,
browsers could provide a common cross-browser cross-platform API for
passing the decoded traffic to local antivirus products, without each
AV-vendor writing (sometimes unreliable) plugins for each browser
brand and version, while also not requiring browser vendors to write

Re: Nation State MITM CA's ?

2019-07-20 Thread Jakob Bohm via dev-security-policy

On 20/07/2019 09:31, simc...@gmail.com wrote:

I think it must be quickly blacklisted by Google, Mozilla and Microsoft all together, 
because it is a known state-scale MITM affecting citizens' "real" life.

The purpose of HTTPS is being defeated, and the companies that have tried to improve 
network security for the past decade have to react (yes, the security and privacy 
they work on are political).
If browser vendors do blacklist it, citizens will be able to rise against this 
privacy attack.

PS: When a MITM CA is known to operate at company scale, it is not that harmful 
IMHO, because citizens still have privacy at home.



Note that on a technical level, there is no difference between a (small)
company and a home (employees versus family members).

A small company to consider would be doctor or lawyer office, handling
other people's deepest secrets and under an obligation of secrecy from
even the authorities.

A large home to consider could be 4 generations living together, with
8 to 10 children and 4 spouses for each in each generation, but in
relative poverty.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Nation State MITM CA's ?

2019-07-19 Thread Jakob Bohm via dev-security-policy

On 19/07/2019 21:13, andrey.at.as...@gmail.com wrote:

I am confused. Since when is Mozilla under an obligation to provide customized 
solutions for corporate MITM? IMHO, corporations can, if needed, hire someone 
else to develop their own forks of Chrome/Firefox to do snooping on HTTPS 
connections.

In regular browsers, developed by community effort and with public funds, 
shouldn't ALL MitM certificates just be permanently banned?



As others (and I) have mentioned, MitM is also how many ordinary
antivirus programs protect users from attacks.  The hard part is
how to distinguish between malicious and user-helping systems.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Nation State MITM CA's ?

2019-07-19 Thread Jakob Bohm via dev-security-policy

On 19/07/2019 16:52, Troy Cauble wrote:

On Thursday, July 18, 2019 at 8:26:43 PM UTC-4, wolfgan...@gmail.com wrote:


Even on corporate hardware I would like at least a notification that this is
happening.



I like the consistency of a reminder in all cases, but this
might lead to corporate policies mandating the use of other browsers.



As someone actually running a corporate network, I would like to
emphasize that it is essential that such mechanisms try to clearly
distinguish the 5 common cases (listed by decreasing harmfulness).

1. A known malicious actor is intercepting communication (such as the
  nation state here discussed).

2. An unknown actor is intercepting communication (hard to identify
  safely, but there are meaningful heuristic tests).

3. A local/site/company network firewall is intercepting communications
  for well-defined purposes known to the user, such as blocking virus
  downloads, blocking surreptitious access to malicious sites or
  scanning all outgoing data for known parts of site secrets (for
  example the Coca-Cola company could block all HTTPS posts containing
  their famous recipe, or a hospital could block posts of patient
  records to unauthorized parties).  This case justifies a non-blocking
  notification such as a different-color HTTPS icon.

4. An on-device security program, such as a local antivirus, does MitM
  for local scanning between the browser and the network.  Mozilla could
  work with the AV community to have a way to explicitly recognize the
  per machine MitM certs of reputable AV vendors (regardless of
  political sanctions against some such companies).  For example,
  browsers could provide a common cross-browser cross-platform API for
  passing the decoded traffic to local antivirus products, without each
  AV-vendor writing (sometimes unreliable) plugins for each browser
  brand and version, while also not requiring browser vendors to write
  specific code for each AV product.  Maybe use the ICAP protocol
  employed for virus scanning in firewalls, but run against 127.0.0.1 /
  ::1 (RFC 3507 only discusses its use for HTTP filtering, but it has
  been widely used for scanning content from mail protocols etc., and
  much less for the insertion of advertising that is described in the RFC).

5. A site, organization or other non-member CA that issues only non-MitM
  certificates according to a user-accepted policy.  Those would
  typically only issue for domains that request this or are otherwise
  closely aligned with the user organization.  Such a CA would
  (obviously) not be bound by Mozilla or CAB/F policies, but may need to
  do some specific token gestures to programmatically clarify their
  harmlessness, such as not issuing certs for browser pinned domains,
  only issue for domains listing them in CAA records or outside public
  DNS or similar.
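One of the "token gestures" mentioned in point 5 — only issuing for domains that list the CA in CAA records — can be illustrated with a minimal sketch of the RFC 8659 "issue" check. This is an illustration only, not anyone's actual implementation: the record set is assumed to be already fetched and parsed, and issuewild handling, parameter validation, and climbing the DNS tree are all omitted.

```python
from typing import List, Tuple

def ca_authorized(caa_records: List[Tuple[int, str, str]], ca_domain: str) -> bool:
    """Return True if ca_domain may issue for a name, given its CAA record set.

    Each record is (flags, tag, value), e.g. (0, "issue", "corp-ca.example").
    Per RFC 8659: if any "issue" records exist, only the named CAs may issue;
    an empty relevant record set leaves every CA authorized.
    """
    issue_values = [value for flags, tag, value in caa_records if tag == "issue"]
    if not issue_values:
        # No CAA "issue" records at all: issuance is unrestricted.
        return True
    # A bare ";" value authorizes no CA; otherwise compare the issuer domain
    # (anything after ";" would be parameters, ignored in this sketch).
    return ca_domain in (v.split(";")[0].strip() for v in issue_values)
```

A site CA of the kind described could thus demonstrate harmlessness by refusing issuance whenever this check fails for its own domain.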

I am aware of at least one system being overly alarmist about our
internal type 5 situation, making it impossible to distinguish from a
type 1 or 2 attack situation.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Expired Root CA in certdata.txt

2019-07-15 Thread Jakob Bohm via dev-security-policy

As Mozilla has stopped caring about code signatures, e-mails are much
more relevant for checking old certificates as of a known date:

Most e-mail systems provide the reader with a locally verified record of
when exactly the mail contents reached a trusted mail server and/or a
POP3 client.  Thus validating e-mail signatures as of that date is
perfectly sensible, either via a code improvement in Thunderbird or via
a knowledgeable user comparing the expiry time in error messages to the
mail server timestamp visible via "view source".
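The check described above — validating a signature as of the trusted receipt time rather than as of "now" — can be sketched as follows. This is a hypothetical helper, assuming the Received-header date and the signing certificate's validity bounds have already been extracted; real S/MIME validation would of course also check the signature and chain.

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def valid_at_receipt(received_header_date: str,
                     cert_not_before: datetime,
                     cert_not_after: datetime) -> bool:
    """True if the trusted mail-server receipt time falls inside the signing
    certificate's validity window, i.e. the certificate was valid when the
    mail reached the trusted server, even if it has since expired."""
    received = parsedate_to_datetime(received_header_date)
    return cert_not_before <= received <= cert_not_after
```

A mail client could run exactly this comparison instead of rejecting every signature from a now-expired certificate.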

Some CAs also offer, as a paid service, anonymous timestamp signatures for
use on signed documents; these should be technically compatible with use
on e-mail signatures or entire e-mails, though that procedure isn't very
common.

I don't know if gMail implements these features in their web and app
interfaces, but they certainly could if they so wanted, and with
Google's habit of keeping stored data, the features could probably be
retroactively applied to mails received long before the feature was
implemented.

On 15/07/2019 03:11, Samuel Pinder wrote:

The way I understand it is, generally speaking, Root CAs may be kept in a
root store for as long as the root key material is not compromised in any
way. In practice Root CA certificates are removed at the operator's request
when they believe it is no longer needed, or the root store operator
believes it should be removed due to the key material being vulnerable to
brute forcing, or some other policy reason (e.g. violation or misuse). Of
course it is possible for renewed Root CA certificates to replace older
ones as long as they are signed with the same key material- allowing
everything else chained to it still be valid (I've seen this happen once
with the "GlobalSign Root CA" originally published back in 1998.)
While keeping expired Root CAs may not be useful for web server certificates
chaining up to an expired CA certificate, it is useful for codesigning
certificates. Timestamped codesigned objects can continue to work after
their certificate has expired as the "timestamp" shows that a certificate
was valid *at the time it performed the signature*. Have a look here:
https://serverfault.com/questions/878919/what-happens-to-code-sign-certificates-when-when-root-ca-expires
Samuel Pinder

On Mon, Jul 15, 2019 at 1:32 AM Vincent Lours via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On Monday, 15 July 2019 04:41:12 UTC+10, Ryan Sleevi  wrote:

Thanks for mentioning this here.

Could you explain why you see it as an issue? RFC 5280 defines a trust
anchor as a subject and a public key. Everything else is optional, and

the

delivery of a trust anchor as a certificate does not necessarily imply

the

constraints of that certificate, including expiration, should apply.



Hi Ryan,

Thanks for your message.

First of all, I never said that this was an issue. I was just reporting some
expired Root CAs, as I was thinking they may impact people.

Secondly, I was not aware of the RFC 5280 defining a trust anchor as a
subject and a public key.
However, if you refer to the same RFC, it constrains the use of a public key
in the "4.1.2.5. Validity" section:

   "When the issuer will not be able to maintain status information until
the notAfter date (including when the notAfter date is
99991231235959Z), the issuer MUST ensure that no valid certification
path exists for the certificate after maintenance of status information
is terminated.  This may be accomplished by expiration or
revocation of all CA certificates containing the public key used to
verify the signature on the certificate and discontinuing use of the
public key used to verify the signature on the certificate as a trust
anchor."

In the case of certdata.txt, the file includes public CA certificates, so an
expired certificate means that the key cannot be used anymore.

I'm still not presenting this as an issue, but as a suggestion to
update/remove those expired public keys from your certdata.txt.





Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Logotype extensions

2019-06-18 Thread Jakob Bohm via dev-security-policy
On 14/06/2019 18:54, Ryan Sleevi wrote:
> On Fri, Jun 14, 2019 at 4:12 PM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> In such a case, there are two obvious solutions:
>>
>> A. Trademark owner (prompted by applicant) provides CA with an official
>> permission letter stating that Applicant is explicitly licensed to
>> mark the EV certificate for a specific list of SANs and and Subject
>> DNs with their specific trademark (This requires the CA to do some
>> validation of that letter, similar to what is done for domain
>> letters).
> 
> 
> This process has been forbidden since August 2018, as it is fundamentally
> insecure, especially as practiced by a number of CAs. The Legal Opinion
> Letter (LOL) has also been discussed at length with respect to a number of
> problematic validations that have occurred, due to CAs failing to exercise
> due diligence or their obligations under the NetSec requirements to
> adequately secure and authenticate the parties involved in validating such
> letters.
> 

Well, that is unfortunate for the case where it is not a straight parent-
child relationship (e.g. a trademark owned by a foundation and licensed to 
the company).  But OK, in that case the option is gone, and what follows 
below is moot:

> 
> Letter needs to be reissued for end-of-period cert
>> renewals, but not for unchanged early reissue where the cause is not
>> applicant loss of rights to items.  For example, the if the Heartbleed
>> incident had occurred mid-validity, the web server security teams
>> could get reissued certificates with uncompromised private keys
>> without repeating this time consuming validation step.
> 
> 
> EV certificates require explicit authorization by an authorized
> representative for each and every certificate issued. A key rotation event
> is one to be especially defensive about, as an attacker may be attempting
> to bypass the validation procedures to rotate to an attacker-supplied key.
> This was an intentional design by CAs, in an attempt to provide some value
> over DV and OV certificates by the presumed difficulty in substituting them.
> 

I was considering the trademark as a validated property of the subject 
(similar to e.g. physical address), thus normally subject to the 825 day 
reuse limit.  My wording was intended to require stricter than current 
BR revalidation for renewal within that 825 day limit.


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: Logotype extensions

2019-06-14 Thread Jakob Bohm via dev-security-policy

On 14/06/2019 04:16, Corey Bonnell wrote:

On Thursday, June 13, 2019 at 2:04:48 AM UTC-4, kirkhal...@gmail.com wrote:

On Tuesday, June 11, 2019 at 2:49:31 PM UTC+3, Jeremy Rowley wrote:

We wanted to experiment a bit with logotype extensions and trademarks, but
we heard from the CAB Forum that whether inclusion is allowed is subject to
some interpretation by the browsers.

  


From the BRs section 7.1.2.4

"All other fields and extensions MUST be set in accordance with RFC 5280.
The CA SHALL NOT issue a Certificate that contains a keyUsage flag,
extendedKeyUsage value, Certificate extension, or other data not specified
in section 7.1.2.1, 7.1.2.2, or 7.1.2.3 unless the CA is aware of a reason
for including the data in the Certificate. CAs SHALL NOT issue a Certificate
with: a. Extensions that do not apply in the context of the public Internet
(such as an extendedKeyUsage value for a service that is only valid in the
context of a privately managed network), unless: i. such value falls within
an OID arc for which the Applicant demonstrates ownership, or ii. the
Applicant can otherwise demonstrate the right to assert the data in a public
context; or b. semantics that, if included, will mislead a Relying Party
about the certificate information verified by the CA (such as including
extendedKeyUsage value for a smart card, where the CA is not able to verify
that the corresponding Private Key is confined to such hardware due to
remote issuance)."

  


In this case, the logotype extension would have a trademark included (or
link to a trademark). I think this allowed as:

1.  There is a reason for including the data in the Certificate (to
identify a verified trademark). Although you may disagree about the reason
for needing this information, there is a not small number of people
interested in figuring out how to better use identification information. No
browser would be required to use the information (of course), but it would
give organizations another way to manage certificates and identity
information - one that is better (imo) than org information.
2.  The cert applies in the context of the public Internet.
Trademarks/identity information is already included in the BRs.
3.  The trademark does not fall within an OID arc for which the
Applicant demonstrates ownership (no OID included).
4.  The Applicant can otherwise demonstrate the right to assert the data
in a public context. If we vet ownership of the trademark with the
appropriate office, there's no conflict there.
5.  Semantics that, if included, will not mislead a Relying Party about
the certificate information verified by the CA (such as including
extendedKeyUsage value for a smart card, where the CA is not able to verify
that the corresponding Private Key is confined to such hardware due to
remote issuance). None of these examples are very close to the proposal.

  


What I'm looking for is not a discussion on whether this is a good idea, but
rather  is it currently permitted under the BRs per Mozilla's
interpretation. I'd like to have the "is this a good idea" discussion, but
in a separate thread to avoid conflating permitted action compared to ideal
action.

  


Jeremy


Jeremy is correct - including strongly verified registered trademarks via 
extensions in EV certs is permitted (i.e., not forbidden) by BR Section 7.1.2.4.
  
Confirming registered trademarks (whether logos, word marks, or both in a combined mark) to include in an EV cert would be very easy for a CA.  Here are the steps:


1. Complete EV validation of the Organization
2. Applicant sends CA its USPTO logo or wordmark Registration Number and SVG 
file of logo to include in EV cert
CA validates:
(a) Confirm logo and/or wordmark is registered to Organization in USPTO online 
data base, and
(b) Compare USPTO image with SVG file received to confirm they are the same 
logo.
3. CA inserts (a) name of Trademark office with the logo and/or wordmark 
registration, (b) the Registration Number, and (c) the SVG file in the EV 
certificate to be available to browsers and applications to display if desired.
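The two CA-side checks in the steps above can be sketched as a small hypothetical helper. The function name, fields, and the hash comparison are all illustrative assumptions: in particular, comparing a digest of canonical image bytes stands in for the (likely manual, visual) comparison of the USPTO image with the submitted SVG described in step (b).

```python
import hashlib

def validate_logo_request(applicant_org: str, uspto_registrant: str,
                          uspto_image_sha256: str, submitted_svg: bytes) -> bool:
    """Sketch of the CA validation steps described above:
    (a) the mark is registered to the EV-validated organization, and
    (b) the submitted SVG is the same image as the registered one
        (modeled here as a hash comparison of canonical image bytes).
    """
    registered_to_applicant = (
        applicant_org.casefold() == uspto_registrant.casefold()
    )
    same_image = hashlib.sha256(submitted_svg).hexdigest() == uspto_image_sha256
    return registered_to_applicant and same_image
```

Only if both checks pass would the CA insert the trademark office name, registration number, and SVG into the EV certificate.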

Adding validated logos to EV certificates has the benefit of allowing browsers and apps to choose 
to display the logo (with registered word mark, if desired) in the UI, and would solve the concern 
that some have expressed that users don't always recognize the corporate name of a familiar brand 
when it's displayed in the current EV UI.  For example, consider the EV website of the food chain 
"Subway" - www.subway.com.  The current EV UI shows "Franchise World Headquarters, 
LLC [US]" which is correct but not very friendly for users.

What if instead a browser or app displayed the verified trademark and/or word 
mark owned by Subway?  See these two records from the US Patent and Trademark 
Office:

http://tmsearch.uspto.gov/bin/showfield?f=doc=4804:6juyuh.2.20

http://tmsearch.uspto.gov/bin/showfield?f=doc=4804:6juyuh.2.14

Adding strongly verified marks to EV 

Re: Certinomis Issues

2019-05-17 Thread Jakob Bohm via dev-security-policy
On 17/05/2019 07:21, Jakob Bohm wrote:
> On 17/05/2019 01:39, Wayne Thayer wrote:
>> On Thu, May 16, 2019 at 4:23 PM Wayne Thayer  wrote:
>>
>> I will soon file a bug requesting removal of the “Certinomis - Root CA”
>>> from NSS.
>>>
>>
>> This is https://bugzilla.mozilla.org/show_bug.cgi?id=1552374
>>
> 
> To more accurately assess the impact of distrust, maybe someone with
> better crt.sh skills than me should produce a list of current
> certificates filtered as follows:
> 
> - Sort by O= (organization), thus grouping together certificates that
>   were issued to the same organization (for example, there are many
>   issued to divisions of LA POSTE).
> - Exclude certificates that expire on or before 2019-08-31, as those
>   will be unaffected by a September distrust.
> - Exclude certificates issued after 2019-05-17 (today), as Certinomis
>   should be aware of the likely distrust by tonight.
> 

To clarify, this is intended as an improvement to the statistics Andrew 
Ayer posted at https://bugzilla.mozilla.org/show_bug.cgi?id=1552374#c1 .

I am posting it in the thread to increase the chance someone with the 
skills will see it and run the query.

I expect it to show a surprisingly small number of certificate holders, 
indicating the real impact of revocation to be much smaller than one 
would expect from the raw count of 1381 certificates.

This in turn could inform the mitigations or other proactive steps to 
reduce relying party (user) impact of distrust, if distrust is accepted 
by Kathleen.


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: Certinomis Issues

2019-05-16 Thread Jakob Bohm via dev-security-policy

On 17/05/2019 01:39, Wayne Thayer wrote:

On Thu, May 16, 2019 at 4:23 PM Wayne Thayer  wrote:

I will soon file a bug requesting removal of the “Certinomis - Root CA”

from NSS.



This is https://bugzilla.mozilla.org/show_bug.cgi?id=1552374



To more accurately assess the impact of distrust, maybe someone with
better crt.sh skills than me should produce a list of current
certificates filtered as follows:

- Sort by O= (organization), thus grouping together certificates that
 were issued to the same organization (for example, there are many
 issued to divisions of LA POSTE).
- Exclude certificates that expire on or before 2019-08-31, as those
 will be unaffected by a September distrust.
- Exclude certificates issued after 2019-05-17 (today), as Certinomis
 should be aware of the likely distrust by tonight.
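The filtering described above can be expressed as a small sketch over already-exported certificate records; the real query would run against crt.sh itself, and the `impact_report` helper and its field names (`org`, `not_after`, `issued`) are assumptions for illustration.

```python
from collections import defaultdict
from datetime import date

def impact_report(certs,
                  distrust_date=date(2019, 8, 31),
                  issued_cutoff=date(2019, 5, 17)):
    """Group certificates by subject O=, applying the filters above:
    drop certs expiring on or before the distrust date (unaffected anyway),
    and drop certs issued after the CA should have known of the distrust.
    Each cert is a dict with 'org', 'not_after', 'issued' (datetime.date).
    Returns {org: count} of certificates actually affected.
    """
    by_org = defaultdict(int)
    for c in certs:
        if c["not_after"] <= distrust_date:   # expires before distrust
            continue
        if c["issued"] > issued_cutoff:       # issued after the cutoff
            continue
        by_org[c["org"]] += 1
    return dict(by_org)
```

Grouping by organization is the key step: many certificates issued to divisions of one holder count as one affected party, not hundreds.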




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Certinomis Issues

2019-05-09 Thread Jakob Bohm via dev-security-policy

On 10/05/2019 02:22, Wayne Thayer wrote:

Thank you for this response Francois. I have added it to the issues list
[1]. Because the response is not structured the same as the issues list, I
did not attempt to associate parts of the response with specific issues. I
added the complete response to the bottom of the page.

On Thu, May 9, 2019 at 9:27 AM fchassery--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


...

...

>

In response to the email from Franck that you mention, Gerv responded [1]
by quoting the plan he had approved and stating "This seems to be very
different to the plan you implemented." By cross-signing Startcom's old
roots, Certinomis did assist Startcom in circumventing the remediation
plan, and by proposing one plan then implementing a different one,
Certinomis did so without Mozilla's consent.



As can be seen from your [3] link, Certinomis cross-signed StartCom's
NEW supposedly remediated 2017 hierarchy, not the old root.

However it was still wrong.


Startcom misissued a number of certificates (e.g. [3]) under that
cross-signing relationship that Certinomis is responsible for as the
Mozilla program member.

By cross-signing Startcom's roots, Certinomis also took responsibility for
Startcom's qualified audit.

I will also add this information to the issues list.

- Wayne

[1] https://wiki.mozilla.org/CA/Certinomis_Issues
[2]
https://groups.google.com/d/msg/mozilla.dev.security.policy/RJHPWUd93xE/lyAX9Wz_AQAJ
[3] https://crt.sh/?opt=cablint=160150786




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Policy 2.7 Proposal: Exclude Policy Certification Authorities from EKU Requirement

2019-05-09 Thread Jakob Bohm via dev-security-policy

On 10/05/2019 05:25, Ryan Sleevi wrote:

On Thu, May 9, 2019 at 10:44 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On 09/05/2019 16:35, Ryan Sleevi wrote:

Given that the remark is that such a desire is common, perhaps you can
provide some external references documenting how one might go about
configuring such a set-up, particularly in the context of TLS trust?
Similarly, I'm not aware of any system that supports binding S/MIME
identities to a particularly CA (effectively, CA pinning) - perhaps you

can

provide documentation and reference for systems that perform this?

Thanks for helping me understand how this 'common' scenario is actually
implemented, especially given that the underlying codebases do not

support

such distinctions.



My description is based on readily available information from the
following sources, that you should also have access to:



It looks like your links to external references may have gotten stripped,
as I didn't happen to receive any.

As it relates to the topic at hand, the system you described is simply that
of internal CAs, and does not demonstrate a need to use publicly trusted
CAs. Further, going back to your previous message, to which I was replying
to make sure I did not misunderstand, given that you stated it was common,
it seemed we established that such scenarios in that message, and further
expanded upon in this, already have the capability for enterprise
management.

I wanted to make sure I did my best to understand, so that we can have
productive engagement on substance, specifically around whether there is a
technical necessity for the use of non-Root CAs to be capable of issuance
under multiple different trust purposes. It does not seem as if there's
been any external references to establish a technical necessity, so it does
not seem like the policy needs to be modified, based on the available
evidence.



There were no links, only descriptions of obvious facts that you 
willfully ignore in an effort to troll the community.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Policy 2.7 Proposal: Exclude Policy Certification Authorities from EKU Requirement

2019-05-09 Thread Jakob Bohm via dev-security-policy
On 09/05/2019 16:35, Ryan Sleevi wrote:
> On Wed, May 8, 2019 at 10:36 PM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> [ Note, I am arguing a neutral position on the specific proposal ]
>>
>> The common purpose of having an internally secured (managed or on-site)
>> CA in a public hierarchy is to have end certificates which are
>> simultaneously:
>>
> 
> Despite my years of close experience with the implementation and details of
> the certificate verification engines within Google Chrome and Android,
> Mozilla Firefox, Apple iOS and macOS, and Microsoft Windows, including
> extensive work with Enterprise PKIs, I must admit, I have never heard of
> the scenario you're describing or actually being supported.
> 
> Given that the remark is that such a desire is common, perhaps you can
> provide some external references documenting how one might go about
> configuring such a set-up, particularly in the context of TLS trust?
> Similarly, I'm not aware of any system that supports binding S/MIME
> identities to a particularly CA (effectively, CA pinning) - perhaps you can
> provide documentation and reference for systems that perform this?
> 
> Thanks for helping me understand how this 'common' scenario is actually
> implemented, especially given that the underlying codebases do not support
> such distinctions.
> 

My description is based on readily available information from the 
following sources, that you should also have access to:

1. The high level documentation and feature lists for enterprise PKI 
  suites (which are completely different from the software suites 
  designed for running public CA companies, because templates, 
  validation methods and policies tend to be completely different).

2. The user and configuration interfaces for common software that needs 
  to hold certificates (web servers, mail clients etc.).  Those tend to 
  be harder to use if different relying parties need different 
  certificates for the same certificate subject.

  For example it is difficult to make mail clients sign/encrypt all 
  mails if you need different personal certificates for the same e-mail 
  account depending on who the mail is sent to, while a setting to do 
  the same thing every time is typically a builtin option.  Now imagine 
  the certificate and private key being embedded in an employee ID card,
  while company workstations don't have unlocked spare ports for 
  plugging in two different card readers (one for the employee ID card 
  containing the access certificate that keeps the machine logged in, 
  another for a USB token from a public CA).

  Similarly, it can get very tricky to make a web server use different 
  certificate settings for different visitor categories accessing the 
  same DNS name.

3. The details of publicly trusted company SubCAs that emerge from time 
  to time in compliance cases.  The details revealed tend to only make 
  sense in setups like the ones described.

4. The actual number of companies needing this is an unknown that I 
  make no claims about, it's not my proposal.

5. Maybe look at how Google employees and internal servers used 
  certificates before the formation of GTS as a public CA.  I doubt 
  all your internal operational certificates chained to your publicly 
  trusted SubCAs, and I doubt that Google sensitive systems accepted 
  purported "Google authorized" certificates issued by the 3rd party 
  public CA that signed Google's SubCA.
   [ I am not asking you to reveal the information, just think about 
  it when considering the needs of companies that are never going to 
  become root program members ].




Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: Policy 2.7 Proposal: Exclude Policy Certification Authorities from EKU Requirement

2019-05-08 Thread Jakob Bohm via dev-security-policy
On 08/05/2019 16:05, Ryan Sleevi wrote:
> On Wed, May 8, 2019 at 6:42 AM Fotis Loukos  wrote:
> 
>> ...
> ...
> 
>> The scheme I'm proposing is the following:
>>
>> Org CA (serverAuth, emailProtection, and possibly others such as
>> clientAuth)
>>\- Org SSL CA (serverAuth and possibly clientAuth)
>>  \- End-entity cert
>>\- Org Mail CA (emailProtection and possibly others such as clientAuth)
>>  \- End-entity cert
>>
>> The organization can deploy the "Org CA" as a trusted CA in its internal
>> systems.
>>
> 
> Thanks. This clarifies what you meant, and is demonstrably something we
> should not be encouraging.
> 
> As you note from this example, the organization can deploy "Org CA" as a
> trusted CA in its internal systems. There's no reason or benefit to use a
> publicly trusted CA hierarchy for this, since you've established the
> precondition that they can already manage their own enterprise CAs.
> 
> This example - what you term PCA - is what others might call a "Managed
> CA", and there's no need to cross-certify this managed CA with the public
> hierarchy, if the situation is as you describe.
> 
> Of course, if the organization cannot effectively deploy "Org CA", thus
> incentivizing the use of public trust, then there is also no deployment
> headache for the organization; they're relying on the public trust (of the
> root) to confer it to their various applications.
> 

[ Note, I am arguing a neutral position on the specific proposal ]

The common purpose of having an internally secured (managed or on-site) 
CA in a public hierarchy is to have end certificates which are 
simultaneously:

- Trusted by the worldwide public for external communications (public 
 servers, e-mails to outside parties, signing contracts etc.).
- Trusted at a higher level by internal systems and people, based on 
 the certainty that no one outside the company security organization 
 can issue these certificates.

Examples of typical uses for such certificates:

- An extranet portal web server used by customers (WebPKI trust) and 
 traveling mobile devices (company trust checked strictly by company 
 installed apps).

- User certificates used for e-mail and system login.  E-mail 
 signatures and encryption trusted by public (S/MIME PKI) and more 
 strictly checked by internal functions (company trust).  System 
 login using the same certificate is also limited to company trust.

In both cases, the user/system using the certificate does not 
distinguish internal/external use and trust models, the distinction 
is by relying parties only.

One example of a specific danger avoided by having such public+private 
trusted end certificates and hierarchies is the somewhat common 
"CEO fraud", where an external attacker (with an easily obtained 
external e-mail address and certificate) contacts the accounting 
department pretending to be a higher ranking company officer.

By having the real officers use company-issued certificates for their 
e-mails, accounting would automatically get an onscreen notification 
that this is an "OUTSIDE E-MAIL", and refuse the payment.


> 
>> Under the current scheme, the "Org SSL CA" and the "Org Mail CA" must be
>> issued either by the Root, or by other CAs that chain up to the Root as
>> the least common denominator. The organization would have to either
>> trust the Root as an internally trusted CA, which would mean that they
>> would also place trust in other certificates issued by the same CA (CA
>> as an organization issuing certificates) to different organizations, or
>> deploy both CAs and duplicate controls, if possible (not all software
>> supports this). Of course, they could deploy just a single CA depending
>> on the usage. This adds more management cost, and may lead to other
>> problems. For example, what if a service needs to authenticate users
>> using certificates from both the "Org SSL CA" and the "Org Mail CA"
>> (perfectly valid since both can issue certs having the clientAuth EKU)?
>>

Whether it is difficult for a publicly trusted company PKI to have 
different roots for different uses is not something I would know, as 
it obviously depends on the common limitations of current enterprise 
CA (not public CA) software packages.

However it could be argued that such companies should already have 
a second hierarchy for similar certificates that should not be 
publicly visible, leading to the following more useful minimal 
hierarchy:

Public root CA - R1
  \- Public intermediary CA - G2  (optional) [CT logged]
\- Org publicly trusted root - PCA2  [CT logged]
  \- Org public trusted intermediary - PY2019  [CT logged]
\- Org end cert with public trust  [CT logged if server EKU]

Org root CA - OC1
  +- Cross signature of Org public trusted intermediary - PY2019
  \- Org internal only intermediary - IY2019
\- Org internal only end cert without public trust

If this is done, it becomes a question if PCA2 and PY2019 should be 
duplicated for 

Re: Unretrievable CPS documents listed in CCADB

2019-05-03 Thread Jakob Bohm via dev-security-policy

The issue of identifying the proper CPS for older certificates raises
the important overall question of how this should be managed on an
industry wide basis:

  Note: All examples use numerically invalid values, such as OIDs
beginning with "5." or non-existent dates in the Gregorian calendar.

Some CCADB future policy ideas:

Option 1: The policy extension in each end cert could contain a specific
  OID for each policy version, preferably in a systematic way.  For
  example, if the OID the DV policy of a specific CA sub-hierarchy was
  5.4.3.2.1  Then the policy OID for policy release N (consisting of
  CPS version N and the CP version it references) could be 5.4.3.2.1.N .

Software running on the CCADB or in services like crt.sh could and
  should scan this data periodically to provide useful data and check
  for attempts to fraudulently rewrite history.

For this option, CCADB manual entry data would specify the latest
  OID and a URL formatting pattern to get any previous policy version by
  its number.

   TODO: This may or may not interfere with other PKI or root store
  policies that require or presume a static OID for the lifetime of the
  issuing CA.  It also wouldn't work for older policies that were not
  mapped in this way.

   TODO: This would not reflect subsequent changes in revocation policy
  affecting already issued certificates.
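Monitoring software (on the CCADB, or in services like crt.sh) could recover the policy release number from a certificate's policy OID mechanically. A minimal sketch, using the deliberately invalid example base OID from above; the function name is hypothetical:

```python
from typing import Optional

def policy_version(base_oid: str, cert_policy_oid: str) -> Optional[int]:
    """Return release N if cert_policy_oid equals base_oid + '.N', else None."""
    prefix = base_oid + "."
    if not cert_policy_oid.startswith(prefix):
        return None
    suffix = cert_policy_oid[len(prefix):]
    return int(suffix) if suffix.isdigit() else None

# With the (invalid, illustrative) base OID 5.4.3.2.1, policy release 7
# would appear in certificates as 5.4.3.2.1.7.
```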


Option 2: At any time, the latest CPS document for each overall policy
  should contain, in a Mozilla or BR specified standard section number,
  a standard formatted table of the issuance date range covered by each
  previous CPS version (Assuming a hypothetical rule that a CPS must
  explicitly reference the exact applicable CP, if any).  Something
  like:

55.66.77.88 Past versions of this document:
  +------+------------+------------------+---------------------------+
  | ver  | eff date   | document SHA-512 | Notes                     |
  +------+------------+------------------+---------------------------+
  | 1.0  | 1996-02-30 | xx...xx          | VeryOld hierarchy founded |
  | 1.1  | 1997-02-30 | xx...xx          | Timestamping CAs added    |
  | ...  |            |                  |                           |
  | 25.2 | 2019-04-31 | xx...xx          | Annual review, no changes |
  +------+------------+------------------+---------------------------+

Software running on the CCADB or in services like crt.sh could and
  should scan those tables periodically to provide useful data and check
  for attempts to fraudulently rewrite history.

For this option, CCADB manual entry data would specify a fixed URL
  for the latest version and a URL formatting pattern to get any
  previous version by its number.  For example
https://repository.example.com/repo/mailCPS-latest.pdf
https://repository.example.com/repo/mailCPS-%V.pdf
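A sketch of how CCADB-side tooling might combine the two pieces of manual-entry data, assuming the hypothetical `%V` placeholder convention and the SHA-512 column from the table in this option:

```python
import hashlib

def version_url(pattern: str, version: str) -> str:
    """Expand the hypothetical %V placeholder in the CCADB URL pattern."""
    return pattern.replace("%V", version)

def matches_history_table(document: bytes, recorded_sha512_hex: str) -> bool:
    """Check a fetched past CPS against the hash recorded in the history
    table, to detect attempts to retroactively rewrite history."""
    return hashlib.sha512(document).hexdigest() == recorded_sha512_hex.lower()

# version_url("https://repository.example.com/repo/mailCPS-%V.pdf", "25.2")
# yields the URL of version 25.2 of the document.
```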

TODO: How to specify that a CA (root or intermediary) has been
  moved to another policy category, e.g. from "Digicert active S/MIME
  roots CPS" to "Digicert revocation-only S/MIME CAs policy" or
  "Digicert archived mail signature CAs policy".

TODO: How to specify that some past policy versions used a different
  file format (such as text/plain or text/html) for the canonical
  binding document form.

Option 3: Upon each policy change or renewal, the CA shall notify the
  root programs by appending information similar to the table in option
  2, plus a permanent URL in an appropriate CCADB table, visible to the
  public.  [ This is very similar to current policy ]

CCADB scripts would automatically extract the latest versions
  for the old single-version CSV files.  CCADB security settings should
  prevent CAs from retroactively rewriting history, and corrections
  need to be confirmed by the (CA) root program administrators.

Option 4: The repositories on each CA website must include a specific
  micro-format to allow programmatically extracting information similar
  to the table in option 2, including the URLs.

Software running on the CCADB or in services like crt.sh could and
  should scan those pages periodically to provide useful data and check
  for attempts to fraudulently rewrite history.

For this option, CCADB manual entry data would specify a URL for
  the properly formatted repository page, for example
https://repository.example.com/repo/mailCPS/history/index3.html
  (the example.com CA business uses the filenames index.html and
  index2.html for previous systematic schemes, all the page forms
  are probably generated from the same backend data).


On 03/05/2019 17:36, Ben Wilson wrote:

I'm against having to continually update the exact URL of the CP and CPS in the 
CCADB.  It's pretty easy to find the current CP and CPS from a legal 
repository.  Plus, if we point to an exact one in the CCADB, it might not be 
the one that is applicable to a given certificate 

Re: Certinomis Issues

2019-05-01 Thread Jakob Bohm via dev-security-policy

On 01/05/2019 22:29, mono.r...@gmail.com wrote:

2017 assessment report
LSTI didn't issue to Certinomis any "audit attestation" for the browsers in 2017. The 
document Wayne references is a "Conformity Assessment Report" for the eIDAS regulation.


I had a look at the 2017 report, and unless I misread, it implies conformity to ETSI 
EN 319 401 ("Est vérifiée également la conformité aux normes: EN 319 401" — i.e. 
conformity to the standard EN 319 401 is also verified), whereas EN 
319 401 states, "The present document is aiming to meet the general 
requirements to provide trust and confidence in electronic
transactions including, amongst others, applicable requirements from Regulation (EU) 
No 910/2014 [i.2] and those from CA/Browser Forum [i.4].", so I'm not sure how 
that squares with saying it wasn't an audit taking CA/BF regulations into account?



But does EN 319 401, as it existed in 2016/2017, incorporate a clause to 
apply all "future" updates to the CAB/F regulations or otherwise cover 
all BRs applicable to the 2016/2017 timespan?


Because otherwise EN 319 401 compliance only implied compliance with
the subset of the BRs directly included in EN 319 401 or documents
incorporated by reference into EN 319 401 (the above quote is a
statement of intent to include the BR requirements that existed when
EN 319 401 was written).

That said, Mozilla policy at the time may have explicitly stated that an
EN 319 401 audit is/was sufficient for Mozilla inclusion purposes.


Enjoy

Jakob


Re: Policy 2.7 Proposal: Require EKUs in End-Entity Certificates

2019-04-16 Thread Jakob Bohm via dev-security-policy

On 16/04/2019 08:56, Tadahiko Ito wrote:

On Tuesday, April 2, 2019 at 9:36:06 AM UTC+9, Brian Smith wrote:


I agree the requirements are already clear. The problem is not the clarity
of the requirements. Anybody can define a new EKU because EKUs are listed
in the certificate by OIDs, and anybody can make up an EKU. A standard
isn't required for a new OID. Further, not agreeing on a specific EKU OID
for a particular kind of usage is poor practice, and we should discourage
that poor practice.



It is good that anyone can make an OID, so we do not need to violate policy.

However, I have the following concerns with increasing private OIDs in the world.
- I think an OID should be either a CA's private OID or a public OID, because in 
the case where a CA goes out of business and that business is taken over by 
another CA, we would not want those two CAs using the same OID for different usages.
- On the other hand, CAs' private OIDs will reduce interoperability, which seems 
to be problematic.
- A web browser might just ignore private OIDs, but I am not sure about other 
certificate verification applications that are used for certificates with such 
private EKU OIDs.

Overall, I think we should have some kind of public OIDs, at least for widely 
used purposes.

I believe if it were used for the internet, we could write an Internet-Draft and 
ask for OIDs in the RFC 3280 EKU repo.
#I am planning to try that.



The common way to create an OID is to get an OID assigned to your (CA)
business, either through a national standards organization or by getting
an "enterprise ID" from IANA.

Then append self-chosen suffixes to subdivide your new (near infinite)
OID name space.  For example, if your national standards organization
assigned your CA the OID "9.99.999." (not actually a valid OID btw),
you could subdivide like this (fun example).

9.99.999..1  - Certificate policies
9.99.999..1.1 - Your test certificates policy
9.99.999..1.1.2019.3.12 - March 12, 2019 version of that policy
9.99.999..1.2 - Your EV policy
9.99.999..2  - EKUs
9.99.999..2.1 - CA internal Database backup signatures
9.99.999..2.2 - login certificates for the secure room door lock
9.99.999..2.3 - Gateway certificates for custom protocol X
9.99.999..3  - Custom DN elements
9.99.999..3.1 - Paygrade
9.99.999..4  - Certificate/CMS/CRL/OCSP extensions
9.99.999..4.1 - Date and location of photo ID validation
9.99.999..5  - Algorithms
9.99.999..5.1 - Caesar encryption.
9.99.999..5.2 - CRC8 hash
9.99.999..9 - East company division
9.99.999..9.99.999 - This is getting silly.

Etc.

A different CA would have (by the hierarchical nature of the OID system)
a different assigned OID such as 9.88.999. thus not overlapping.

Thus no risk of conflicting uses unless someone breaks the basic OID
rules.  The actual risk (as illustrated by EV) is getting too many
different OIDs for the same thing.
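The no-overlap property comes from how OID arcs are assigned, not from the encoding, but for completeness here is a small DER encoder for dotted OIDs (standard base-128 arc encoding, first two arcs packed into one byte). The example OID is a real one, id-kp-serverAuth, since the `9.99.999.` samples above are deliberately invalid:

```python
def encode_oid(dotted: str) -> bytes:
    """DER-encode a dotted OID (tag 0x06; short-form length, so the
    encoded body must be under 128 bytes).

    The first two arcs are packed into the single value 40*arc1 + arc2;
    every arc is then written base-128, high bit set on all but its
    last byte.
    """
    arcs = [int(a) for a in dotted.split(".")]
    body = bytearray()
    for value in [40 * arcs[0] + arcs[1]] + arcs[2:]:
        chunk = [value & 0x7F]
        value >>= 7
        while value:
            chunk.append((value & 0x7F) | 0x80)
            value >>= 7
        body.extend(reversed(chunk))
    return bytes([0x06, len(body)]) + bytes(body)

# id-kp-serverAuth (1.3.6.1.5.5.7.3.1) encodes to
# 06 08 2b 06 01 05 05 07 03 01.
```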



Enjoy

Jakob


Re: Arabtec Holding public key? [Weird Digicert issued cert]

2019-04-15 Thread Jakob Bohm via dev-security-policy

According to Jeremy (see below), that was not the situation.

On 15/04/2019 14:09, Man Ho wrote:

I don't think that it's trivial for less-skilled user to obtain the CSR
of "DigiCert Global Root G2" certificate and posting it in the request
of another certificate, right?


On 15-Apr-19 6:57 PM, Jakob Bohm via dev-security-policy wrote:

Thanks for the explanation.

Is it possible that a significant percentage of less-skilled users
simply pasted in the wrong certificates by mistake, then wondered why
their new certificates never worked?

Pasting in the wrong certificate from an installed certificate chain or
semi-related support page doesn't seem an unlikely user error with that
design.

On 12/04/2019 18:56, Jeremy Rowley wrote:

I don't mind filling in details.

We have a system that permits creation of certificates without a CSR
that works by extracting the key from an existing cert, validating
the domain/org information, and creating a new certificate based on
the contents of the old certificate. The system was supposed to do a
handshake with a server hosting the existing certificate as a form of
checking control over the private key, but that was never
implemented, slated for a phase 2 that never came. We've since
disabled that system, although we didn't file any incident report
(for the reasons discussed so far).





Enjoy

Jakob


Re: Arabtec Holding public key? [Weird Digicert issued cert]

2019-04-15 Thread Jakob Bohm via dev-security-policy

Thanks for the explanation.

Is it possible that a significant percentage of less-skilled users
simply pasted in the wrong certificates by mistake, then wondered why
their new certificates never worked?

Pasting in the wrong certificate from an installed certificate chain or
semi-related support page doesn't seem an unlikely user error with that
design.

On 12/04/2019 18:56, Jeremy Rowley wrote:

I don't mind filling in details.

We have a system that permits creation of certificates without a CSR that works 
by extracting the key from an existing cert, validating the domain/org 
information, and creating a new certificate based on the contents of the old 
certificate. The system was supposed to do a handshake with a server hosting 
the existing certificate as a form of checking control over the private key, 
but that was never implemented, slated for a phase 2 that never came. We've 
since disabled that system, although we didn't file any incident report (for 
the reasons discussed so far).

-Original Message-
From: dev-security-policy  On 
Behalf Of Wayne Thayer via dev-security-policy
Sent: Friday, April 12, 2019 10:39 AM
To: Jakob Bohm 
Cc: mozilla-dev-security-policy 
Subject: Re: Arabtec Holding public key? [Weird Digicert issued cert]

It's not clear that there is anything for DigiCert to respond to. Are we 
asserting that the existence of this Arabtec certificate is proof that DigiCert 
violated section 3.2.1 of their CPS?

- Wayne

On Thu, Apr 11, 2019 at 6:57 PM Jakob Bohm via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:


On 11/04/2019 04:47, Santhan Raj wrote:

On Wednesday, April 10, 2019 at 5:53:45 PM UTC-7, Corey Bonnell wrote:

On Wednesday, April 10, 2019 at 7:41:33 PM UTC-4, Nick Lamb wrote:

(Resending after I typo'd the ML address)

At the risk of further embarrassing myself in the same week, while
working further on mimicking Firefox trust decisions I found this
pre-certificate for Arabtec Holding PJSC:

https://crt.sh/?id=926433948

Now there's nothing especially strange about this certificate,
except that its RSA public key is shared with several other
certificates



https://crt.sh/?spkisha256=8bb593a93be1d0e8a822bb887c547890c3e706aad2dab76254f97fb36b82fc26


... such as the DigiCert Global Root G2:

https://crt.sh/?caid=5885


I would like to understand what happened here. Maybe I have once
again made a terrible mistake, but if not surely this means either
that the Issuing authority was fooled into issuing for a key the
subscriber doesn't actually have or worse, this Arabtec Holding
outfit has the private keys for DigiCert's Global Root G2

Nick.


AFAIK there's no requirement in the BRs or Mozilla Root Policy for CAs
to actually verify that the Applicant actually is in possession of the
corresponding private key for public keys included in CSRs (i.e.,
check the signature on the CSR), so the most likely explanation is
that the CA in question did not check the signature on the
Applicant-submitted CSR and summarily embedded the supplied public key
in the certificate (assuming Digicert's CA infrastructure wasn't
compromised, but I think that's highly unlikely).


A very similar situation was brought up on the list before, but with
WoSign as the issuing CA:
https://groups.google.com/d/msg/mozilla.dev.security.policy/zECd9J3KBW8/OlK44lmGCAAJ


While not a BR requirement, the CA's CPS does stipulate validating
possession of private key in section 3.2.1 (looking at the change
history, it appears this stipulation existed during the cert
issuance). So something else must have happened here.


Except for the Arabtec cert, the other certs look like cross-signs for
the Digicert root.




Why still no response from Digicert?  Has this been reported to them
directly?




Enjoy

Jakob


Re: Arabtec Holding public key? [Weird Digicert issued cert]

2019-04-11 Thread Jakob Bohm via dev-security-policy

On 11/04/2019 04:47, Santhan Raj wrote:

On Wednesday, April 10, 2019 at 5:53:45 PM UTC-7, Corey Bonnell wrote:

On Wednesday, April 10, 2019 at 7:41:33 PM UTC-4, Nick Lamb wrote:

(Resending after I typo'd the ML address)

At the risk of further embarrassing myself in the same week, while
working further on mimicking Firefox trust decisions I found this
pre-certificate for Arabtec Holding PJSC:

https://crt.sh/?id=926433948

Now there's nothing especially strange about this certificate, except
that its RSA public key is shared with several other certificates

https://crt.sh/?spkisha256=8bb593a93be1d0e8a822bb887c547890c3e706aad2dab76254f97fb36b82fc26

... such as the DigiCert Global Root G2:

https://crt.sh/?caid=5885


I would like to understand what happened here. Maybe I have once again
made a terrible mistake, but if not surely this means either that the
Issuing authority was fooled into issuing for a key the subscriber
doesn't actually have or worse, this Arabtec Holding outfit has the
private keys for DigiCert's Global Root G2

Nick.


AFAIK there's no requirement in the BRs or Mozilla Root Policy for CAs to 
actually verify that the Applicant actually is in possession of the 
corresponding private key for public keys included in CSRs (i.e., check the 
signature on the CSR), so the most likely explanation is that the CA in 
question did not check the signature on the Applicant-submitted CSR and 
summarily embedded the supplied public key in the certificate (assuming 
Digicert's CA infrastructure wasn't compromised, but I think that's highly 
unlikely).

A very similar situation was brought up on the list before, but with WoSign as 
the issuing CA: 
https://groups.google.com/d/msg/mozilla.dev.security.policy/zECd9J3KBW8/OlK44lmGCAAJ



While not a BR requirement, the CA's CPS does stipulate validating possession 
of private key in section 3.2.1 (looking at the change history, it appears this 
stipulation existed during the cert issuance). So something else must have 
happened here.

Except for the Arabtec cert, the other certs look like cross-signs for the 
Digicert root.



Why still no response from Digicert?  Has this been reported to them
directly?



Enjoy

Jakob


Re: Policy 2.7 Proposal: Clarify Section 5.1 ECDSA Curve-Hash Requirements

2019-04-04 Thread Jakob Bohm via dev-security-policy

On 04/04/2019 02:22, Wayne Thayer wrote:

A number of ECC certificates that fail to meet the requirements of policy
section 5.1 were recently reported [1]. There was a lack of awareness that
Mozilla policy is more strict than the BRs in this regard. I've attempted
to address that by adding this to the list of "known places where this
policy takes precedence over the Baseline Requirements" in section 2.3 [2].

This proposal is to clarify the 'ECDSA' language in section 5.1. That
language was introduced in version 2.4 of our policy [3]. The description
of this issue on GitHub [4] states "Section 5.1's language seems ambiguous
because it doesn't clarify whether the allowed curve-hash pair is related
to the CA key or the Subject Key." For example, does the policy permit a
P-384 key in an intermediate CA certificate to sign a P-256 key in an
end-entity certificate with SHA-256, SHA-384, or not at all? My limited
understanding of the intent is that the key and signature algorithm in each
certificate must match - e.g. it permits a P-384 key in an intermediate CA
certificate to sign a P-256 key in an end-entity certificate with SHA-256 -
but I could be wrong about that and would appreciate everyone's input on
what this is supposed to mean.

If my understanding of the desired policy is correct, then I propose the
following clarification:



This is cryptographically wrong and insecure.

The common requirement for all DSA-style algorithms is that each
private key is used with exactly one hash algorithm, of a size
matching the (sub)group used.  No other hash algorithm shall be
signed for the lifetime of the private key, even if that is not
expressed in the X.509 data structure.  This technical security
requirement applies for the lifetime of the private key.

Thus the permissible combinations under the current Mozilla policy
should be:

P-256 signing using SHA-256
P-384 signing using SHA-384

While the BRs also allow:

P-521 signing using SHA-512

In all 3 cases, the signed certificate could use any permitted
public key algorithm, EKU, name etc. subject to the general
policy restrictions on those fields.  For example a P-384+SHA-384
CA key can sign an P-256+SHA-256 cert.  Signing a stronger cert at
the next level is less useful, except to cross sign new roots with
old roots.
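A linting sketch of the pairing rule argued for here; the function and constant names are hypothetical. Only the issuer key's curve matters, because the hash is bound to the signing key, not the subject key:

```python
# Curve of the *signing* key -> the single hash that key may ever sign.
MOZILLA_PAIRS = {"P-256": "sha256", "P-384": "sha384"}
BR_ONLY_PAIRS = {"P-521": "sha512"}  # allowed by the BRs, not Mozilla policy

def signature_hash_allowed(issuer_curve: str, sig_hash: str,
                           br_only_ok: bool = False) -> bool:
    """The subject key is deliberately ignored: a P-384 CA key may sign a
    P-256 EE cert, but the signature hash must still be SHA-384."""
    allowed = dict(MOZILLA_PAIRS, **(BR_ONLY_PAIRS if br_only_ok else {}))
    return allowed.get(issuer_curve) == sig_hash
```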

This is of course completely different for RSA, because PKCS#1 RSA
incorporates the hash identifier into the signature in such a way that a
signature on one hash isn't also a valid signature on a completely
different hash that happens to have the same bit pattern.
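The RSA point can be made concrete: in PKCS#1 v1.5 the blob that gets signed is a DER `DigestInfo` naming the hash algorithm, so a signature over a SHA-256 digest can never verify as a signature over some other hash. A stdlib sketch, using the well-known prefix constants from RFC 8017:

```python
import hashlib

# Well-known DER AlgorithmIdentifier prefixes from RFC 8017 (PKCS#1 v2.2).
DIGESTINFO_PREFIX = {
    "sha256": bytes.fromhex("3031300d060960864801650304020105000420"),
    "sha384": bytes.fromhex("3041300d060960864801650304020205000430"),
    "sha512": bytes.fromhex("3051300d060960864801650304020305000440"),
}

def digest_info(hash_name: str, message: bytes) -> bytes:
    """The exact byte string PKCS#1 v1.5 RSA-signs: the hash algorithm's
    OID is part of the signed data, so the signature cannot be
    reinterpreted as covering a different hash with the same bit pattern."""
    return DIGESTINFO_PREFIX[hash_name] + hashlib.new(hash_name, message).digest()
```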



Root certificates in our root program, and any certificate which
chains up to them, MUST use only algorithms and key sizes from the following
set:
[...]

- ECDSA keys using one of the following curve-hash pairs:
   - P‐256 signed with SHA-256
   - P‐384 signed with SHA-384


The only change is the addition of the word "signed" to signify that the
pairing represents the combination of the key and signature in a given
certificate.

An enumeration of the permitted combinations has also been suggested, but
it's not clear to me how that solves the confusion that currently exists.
It would be great if someone who prefers this approach would make a
proposal.

This is https://github.com/mozilla/pkipolicy/issues/170

I will greatly appreciate everyone's input on both the intent of the
current policy, and how best to clarify it.

- Wayne

[1]
https://groups.google.com/d/msg/mozilla.dev.security.policy/mCKvUmYUMb0/sqZVnFvKBwAJ
[2]
https://github.com/mozilla/pkipolicy/commit/3e38142acd28b152eca263e7528fac940efb20e2
[3] https://github.com/mozilla/pkipolicy/issues/5
[4] https://github.com/mozilla/pkipolicy/issues/170




Enjoy

Jakob


Re: Apple: Non-Compliant Serial Numbers

2019-04-01 Thread Jakob Bohm via dev-security-policy
On 30/03/2019 22:16, certification_author...@apple.com wrote:
> On March 30, Apple submitted an update to the original incident report 
> (https://bugzilla.mozilla.org/show_bug.cgi?id=1533655), which is reposted 
> below.
> ___
> 
> We've been working our plan to revoke impacted certificates. Thus far over 
> 500,000 certificates have been revoked since the issue was identified and 
> 54,853 remain (file attached with remaining certificates [in the Bugzilla 
> post]). Our plan will result in all impacted certificates being revoked.
> 
> Our approach to resolving this incident has been to strike a balance between 
> a compliance incident with low associated security risk and impact to 
> critical services. We have established a timeline to address the remaining 
> certificates that minimizes service impact and allows standard QA and change 
> control processes to ensure uptime is not affected.
> 
> As part of the remediation plan, a number of certificates will be migrated to 
> an internal, enterprise PKI, which will take more time.
> 
> Based on these factors, it is expected that most certificates will be revoked 
> by April 30 with less than 2% extending until July 15.
> 
> Another update will be provided next week.
> 

For the benefit of the community (including possible future creation of 
policies for mass revocation scenarios), could you detail:

1. How many of the 54,853 certificates are issued to Apple owned and 
  operated servers and services and how many not.

2. What kinds of practical issues are delaying the replacement of 
  certificates on any such Apple operated servers and services, 
  perhaps with approximate percentages.

For example is it:

2a. Security lockdown requiring specific authorized persons to oversee 
  the certificate change in person.

2b. Security lockdown requiring security staff to physically travel to 
  remote locations to authorize the change, one location at a time.

2c. Security procedures requiring some authorized persons to authorize 
  the changes one certificate at a time, with those persons now 
  inundated with a much larger than usual number of such requests 
  per day/week.

2d. Non-security procedures requiring specific people to check the 
  changes for mistakes.  Those people now being inundated with a much 
  larger than usual number of such requests per day/week.

2e. Non-security procedures requiring the changes to go through 
  automated regression tests.  The regression testing computers now 
  being inundated with a much larger than usual number of such 
  requests per day/week.

2f. Non-security procedures requiring the changes to be run on other 
  computers for a certain number of weeks before deployment on some 
  computers.

2g. Certificate checking procedures requiring certificates to remain 
  valid for a certain period of time after their last actual use.

2h. Servers managed by teams that are busy with unrelated tasks at 
  this time.

2o. Obscure servers that are rarely touched, causing practical problems 
  locating the teams responsible.

2p. Anything else.


Enjoy

Jakob


Re: Policy 2.7 Proposal: Clarify Meaning of "Technically Constrained"

2019-03-29 Thread Jakob Bohm via dev-security-policy
On 28/03/2019 21:52, Wayne Thayer wrote:
> Our current Root Store policy assigns two different meanings to the term
> "technically constrained":
> * in sections 1.1 and 3.1, it means 'limited by EKU'
> * in section 5.3 it means 'limited by EKU and name constraints'
> 
> The BRs already define a "Technically Constrained Subordinate CA
> Certificate" as:
> 
> A Subordinate CA certificate which uses a combination of Extended Key Usage
>> settings and Name Constraint settings to limit the scope within which the
>> Subordinate CA Certificate may issue Subscriber or additional Subordinate
>> CA Certificates.
>>
> 
> I propose aligning Mozilla policy with this definition by leaving
> "technically constrained" in section 5.3, and changing "technically
> constrained" in sections 1.1 and 3.1 to "technically capable of issuing"
> (language we already use in section 3.1.2). Here are the proposed changes:
> 
> https://github.com/mozilla/pkipolicy/commit/91fe7abdc5548b4d9a56f429e04975560163ce3c
> 
> This is https://github.com/mozilla/pkipolicy/issues/159
> 
> I will appreciate everyone's comments on this proposal. In particular,
> please consider if the change from "Such technical constraints could
> consist of either:" to "Intermediate certificates that are not considered
> to be technically capable will contain either:" will cause confusion.
> 

To further reduce confusion, perhaps change the terminology in Mozilla 
policy to "Technically Constrained to Subscriber", meaning technically 
constrained to only be capable of issuing for a fully validated 
subscriber identity (validated as if it were some hypothetical kind of 
wildcard EE cert).

This of course remains applicable to all the kinds of identities 
recognized and regulated by the Mozilla root program, which currently 
happen to be server domain, EV organization, and e-mail address 
identities.

I realize that the BR meaning may be intended to be so too, but many 
discussions over the years have indicated confusion.



Enjoy

Jakob


Re: Applicability of SHA-1 Policy to Timestamping CAs

2019-03-25 Thread Jakob Bohm via dev-security-policy
On 25/03/2019 23:42, Wayne Thayer wrote:
> My general sense is that we should be doing more to discourage the use of
> SHA-1 rather than less. I've just filed an issue [1] to consider a ban on
> SHA-1 S/MIME certificates in the future.
> 
> On Mon, Mar 25, 2019 at 10:54 AM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>>
>> As for myself and my company, we switched to a non-Symantec CA for these
>> services before the general SHA-1 deprecation and thus the CA we use can
>> continue to update relevant intermediary CAs using the exception to
>> extend the lifetime of historic issuing CAs.  However it would probably
>> be more secure (less danger to users) if CAs routinely issued
>> sequentially named new issuing CAs for these purposes at regular
>> intervals (perhaps annually), however this is against current Mozilla
>> Policy if the root is still in the Mozilla program (as an anchor for
>> SHA2 WebPKI or e-mail certs).
>>
>>
> I do acknowledge the legacy issue that Jakob points out, but given that it
> hasn't come up before, I question if it is a problem that we need to
> address. I would be interested to hear from others who have a need to issue
> new SHA-1 subordinate CA certificates for uses beyond the scope of the BRs.
> We could consider a loosening of the section 5.1.1 requirements on
> intermediates, but I am concerned about creating loopholes and about
> contradicting the BRs (which explicitly ban SHA-1 OCSP signing certificates
> in section 7.1.3).
> 
> - Wayne
> 
> [1] https://github.com/mozilla/pkipolicy/issues/178
> 

The situation has resurfaced due to recent developments affecting the 
original workarounds.

I will have to remind everyone that when SHA-1 was deprecated, Symantec 
handled this legacy issue by formally withdrawing a few of their many 
old (historically Microsoft trusted) roots from the Mozilla root 
program, allowing those roots to continue to run as "SHA-1-forever" 
roots completely beyond all "modern" policies.

As Digicert winds down the legacy parts of Symantec operations, Windows 
developers that didn't leave Symantec early will be hunting for 
alternatives among the CAs whose SHA-1 roots were trusted by the 
affected MS software versions.  A number of those CAs don't have such a 
stockpile of legacy roots that could be removed from the modern PKI 
ecosystem without affecting the validity of current SHA-2 certificates.

For example GlobalSign, another large CA, only has one root trusted by 
legacy SHA-1 systems, their R1 root.  That root is unfortunately also 
their forward compatibility root that provides trust to modern WebPKI 
certificates via cross-signing of later GlobalSign roots.  This means 
that anything GlobalSign does in the SHA-1 compatibility space is 
constrained by CAB/F, CASC and Mozilla policies, such as the Mozilla 
restriction to not cut new issuing compatibility CAs and the CASC 
restriction to stop all SHA-1 code signing support in 2021.

Creating new SHA-1-only roots (outside the modern PKI) for this job is 
not viable, as the roots need to be in the historic versions of the MS 
root store as bundled by affected systems.  For some code, the roots 
even need to be among the few that got a special kernel mode cross-cert 
from Microsoft.  Those legacy root stores were completely dominated by 
roots that were bought up by Symantec.

Raw data:

The full historic list of roots with kernel mode MS cross certs [Apologies if 
root transfers have sent some to different companies than indicated]

Trusted until 2023 (in alphabetical order by brand):

[GoDaddy] C=US, O=The Go Daddy Group, Inc., OU=Go Daddy Class 2 Certification 
Authority

[GoDaddy] C=US, O=Starfield Technologies, Inc., OU=Starfield Class 2 
Certification Authority

[Sectigo] C=SE, O=AddTrust AB, OU=AddTrust External TTP Network, CN=AddTrust 
External CA Root

[Sectigo] C=US, ST=UT, L=Salt Lake City, O=The USERTRUST Network, 
OU=http://www.usertrust.com, CN=UTN-USERFirst-Object



Trusted until 2021 Not Digicert/Symantec (in alphabetical order by brand):

[EnTrust] O=Entrust.net, OU=www.entrust.net/CPS_2048 incorp. by ref. (limits 
liab.), OU=(c) 1999 Entrust.net Limited, CN=Entrust.net Certification Authority 
(2048)

[GlobalSign] C=BE, O=GlobalSign nv-sa, OU=Root CA, CN=GlobalSign Root CA

[GoDaddy] C=US, ST=Arizona, L=Scottsdale, O=GoDaddy.com, Inc., CN=Go Daddy Root 
Certificate Authority - G2

[GoDaddy] C=US, ST=Arizona, L=Scottsdale, O=Starfield Technologies, Inc., 
CN=Starfield Root Certificate Authority - G2

[NetLock] C=HU, L=Budapest, O=NetLock Kft., OU=Tanúsítványkiadók (Certification 
Services), CN=NetLock Arany (Class Gold) Főtanúsítvány

[NetLock] C=HU, L=Budapest, O=NetLock Kft., OU=Tanúsítványkiadók (Certification 
Services), CN=NetLock Platina (Class Platinum) Főtanúsítv

Re: GRCA Incident: BR Compliance and Document Signing Certificates

2019-03-25 Thread Jakob Bohm via dev-security-policy

On 25/03/2019 22:29, Matthew Hardeman wrote:

On Mon, Mar 25, 2019 at 3:03 PM Ryan Hurst via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


While it may be true that the certificates in question do not contain
SANs, unfortunately, the certificates may still be trusted for SSL since
they do not have EKUs.

For an example see "The most dangerous code in the world: validating SSL
certificates in non-browser software" which is available at
https://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-client-bugs.html

What you will see is that hostname verification is one of the most common
areas where applications have a problem getting it right. Oftentimes they
silently skip hostname verification, or use libraries that provide options
to disable hostname verification, which is either off by default or turned
off for testing and never enabled in production.

One of the few checks you can count on being right with any level of
predictability in my experience is the server EKU check where absence is
interpreted as an entitlement.



My ultimate intent was to try to formulate a way in which GRCA could
provide certificates for the applications that they're having to support
for their clients today without having to essentially plan to be
non-compliant for a multi-year period.

It sounds like there's one or more relying-party applications that perform
strict validation of EKU if provided, but would appear not to have a single
standard EKU that they want to see (except perhaps the AnyPurpose.)

I'm confused as to why these certificates, which seem to be utilized in
applications outside the usual WebPKI scope, need to be issued in a trust
hierarchy that chains up to a root in the Mozilla store.  It would seem
like the easiest path forward would be to have the necessary applications
include a new trust anchor and issue these certificates outside the context
of the broader WebPKI.

In essence, if there are applications in which these GRCA end-entity
certificates are being utilized where the Mozilla trust store is utilized
as a trust anchor set and for which the validation logic is clearly quite
different from the modern WebPKI validation logic and where that validation
logic effectively requires non-compliance with Mozilla root store policy,
is this even a use case that the program and/or community want to support?



I wonder (but can't be sure) if the GRCA requirements analysis is simply
incomplete.  Specifically, I see no claim they have actually found a
client tool having all these properties at the same time:

1. That tool fails to accept certificates containing more EKUs than that
  tool needs (example, the tool rejects certs with exampleRandomEKUoid,
  but accepts certs without it).  Yet it accepts certs with no EKU
  extension.  The rejection happens even if the EKU extension is not
  marked critical.

2. It is absolutely necessary for the object/document signing EE certs
  to work with that tool.

3. The tool cannot be easily fixed/upgraded to remove the problem.

If there is no such problematic tool in the target environment, GRCA
could (like other CAs in the Mozilla root program) make a list of needed
specific EKU oids and include them all in their certificate template.
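The tool behavior doubted in points 1-3 can be modeled in a few lines. This is a sketch of a purely hypothetical client, not any real validation library; the OID list and function names are illustrative:

```python
# Hypothetical strict client from points 1-3: it rejects any EKU OID it
# does not recognize, yet treats a completely absent EKU extension as
# "any purpose" -- the combination whose real-world existence is doubted.
KNOWN_EKUS = {"1.3.6.1.5.5.7.3.3"}  # id-kp-codeSigning, as an example

def strict_tool_accepts(cert_ekus) -> bool:
    """cert_ekus is None when the certificate has no EKU extension,
    otherwise a list of dotted OID strings from the extension."""
    if cert_ekus is None:
        return True          # no EKU extension: accepted (any purpose)
    return set(cert_ekus) <= KNOWN_EKUS  # any unknown OID: rejected

assert strict_tool_accepts(None)                   # no extension: OK
assert strict_tool_accepts(["1.3.6.1.5.5.7.3.3"])  # known EKU only: OK
assert not strict_tool_accepts(                    # extra OID: rejected
    ["1.3.6.1.5.5.7.3.3", "1.2.3.4"])
```

If no deployed tool behaves like `strict_tool_accepts`, GRCA can safely enumerate the specific EKU OIDs it needs and include them all.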



Enjoy

Jakob


Re: Applicability of SHA-1 Policy to Timestamping CAs

2019-03-25 Thread Jakob Bohm via dev-security-policy
On 23/03/2019 02:03, Wayne Thayer wrote:
> On Fri, Mar 22, 2019 at 6:54 PM Peter Bowen  wrote:
> 
>>
>>
>> On Fri, Mar 22, 2019 at 11:51 AM Wayne Thayer via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>>> I've been asked if the section 5.1.1 restrictions on SHA-1 issuance apply
>>> to timestamping CAs. Specifically, does Mozilla policy apply to the
>>> issuance of a SHA-1 CA certificate asserting only the timestamping EKU and
>>> chaining to a root in our program? Because this certificate is not in
>>> scope
>>> for our policy as defined in section 1.1, I do not believe that this would
>>> be a violation of the policy. And because the CA would be in control of
>>> the
>>> entire contents of the certificate, I also do not believe that this action
>>> would create an unacceptable risk.
>>>
>>> I would appreciate everyone's input on this interpretation of our policy.
>>>
>>
>> Do you have any information about the use case behind this request?  Are
>> there software packages that support a SHA-2 family hash for the issuing CA
>> certificate for the signing certificate but do not support SHA-2 family
>> hashes for the timestamping CA certificate?
>>
> 
> I was simply asked if our policy does or does not permit this, so I can
> only speculate that the use case involves code signing that targets an
> older version of Windows. If the person who asked the question would like
> to send me specifics, I'd be happy to relay them to the list.
> 

As a matter of general information (I happen to have investigated MS 
"AuthentiCode" code signing in some detail):

1. Some parts of some Windows versions will only accept certificate 
  chains using the RSA-SHA1 (PKCS#1 v1.x) signature types.  Those 
  generally chain to a root of similar vintage, which may or may not 
  still be issuing SHA-2 intermediary certificates under Mozilla 
  Policy.

2. Some parts of some Windows versions will only accept EE certs using 
  RSA-SHA1, but will accept RSA-SHA2 higher in the certificate chain.

3. Recent Windows versions will accept RSA-SHA2 signatures, but may or 
  may not accept RSA-SHA1 signatures.

4. Most parts of Windows versions that accept RSA-SHA2 signatures allow 
  dual signature configurations where items are signed with both RSA-SHA2 
  and RSA-SHA1 signatures in such a way that older versions will see 
  and accept the RSA-SHA1 signature while newer versions will see and 
  check the RSA-SHA2 signature (or both signatures).

5. All post-1996 MS code signing systems expect and check the presence of 
  countersignatures by a timestamping authority with long validity period 
  (many decades).  These signatures verify that the signature made by the 
  EE certificate existed on or before a certain time within the (1 to 5 
  year) validity period of the EE cert itself.  Thus barring an existential 
  forgery of the timestamping signature, most other aspects need only 
  resist second pre-image style attacks on the EE signature.

Type 1 and 2 subsystems are still somewhat widespread as official 
attempts to backport RSA-SHA2 support have failed or never been made.

Thus there is an ongoing need for certificate subscribers to obtain and 
use both types of certificates and the corresponding timestamp 
countersignature services.

The ongoing shutdown of old Symantec infrastructure has left a lot of 
subscribers to these certificates looking for replacement CAs, thus 
making it interesting for CAs that were included in the relevant MS root 
program back in the day to begin or restart providing those services to 
former Symantec subscribers.

As for myself and my company, we switched to a non-Symantec CA for these 
services before the general SHA-1 deprecation and thus the CA we use can 
continue to update relevant intermediary CAs using the exception to 
extend the lifetime of historic issuing CAs.  However, it would probably 
be more secure (less danger to users) if CAs routinely issued 
sequentially named new issuing CAs for these purposes at regular 
intervals (perhaps annually); that practice is against current Mozilla 
Policy if the root is still in the Mozilla program (as an anchor for 
SHA2 WebPKI or e-mail certs).


Enjoy

Jakob


Re: CFCA certificate with invalid domain

2019-03-18 Thread Jakob Bohm via dev-security-policy

On 18/03/2019 02:05, Nick Lamb wrote:

On Fri, 15 Mar 2019 19:41:58 -0400
Jonathan Rudenberg via dev-security-policy
 wrote:


I've noted this on a similar bug and asked for details:
https://bugzilla.mozilla.org/show_bug.cgi?id=1524733


I can't say that this pattern gives me any confidence that the CA
(CFCA) does CAA checks which are required by the BRs.

I mean, how do you do a CAA check for a name that can't even exist? If
you had the technology to run this check, and one possible outcome is
"name can't even exist" why would you choose to respond to that by
issuing anyway, rather than immediately halting issuance because
something clearly went badly wrong? So I end up thinking probably CFCA
does not actually check names with CAA before issuing, at least it does
not check the names actually issued.



Technically, the name could exist, if (for some bad reason) ICANN were to
create a .con TLD (which would be a major invitation to phishing).

As "not found" is a permissive CAA check result, CAA checking may be
perfectly fine in this case.

Domain control validation however obviously failed, as no one controls
the non-existent domain, and thus no one could have proven control of
that domain.
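The distinction between the two checks can be sketched with a toy RFC 8659-style CAA tree climb over a stub zone table (real DNS resolution omitted; all names and the dict-based lookup are illustrative):

```python
def caa_permits(domain: str, issuer: str, records: dict) -> bool:
    """RFC 8659-style CAA climb on a stub 'zone' dict.  A name with no
    CAA record set anywhere up the tree -- including a name under a
    nonexistent TLD like 'con' -- yields no relevant record set, which
    is a permissive result: CAA does not forbid issuance."""
    labels = domain.rstrip(".").split(".")
    for i in range(len(labels)):
        name = ".".join(labels[i:])
        caa = records.get(name)
        if caa:  # the first node with a non-empty CAA set is authoritative
            return issuer in caa
    return True  # no relevant record set found anywhere: permissive

zones = {"example.com": {"good-ca.example"}}
assert caa_permits("www.example.com", "good-ca.example", zones)
assert not caa_permits("www.example.com", "other-ca.example", zones)
# Name under a nonexistent TLD: CAA is permissive, even though domain
# control validation for such a name must necessarily fail.
assert caa_permits("www.something.con", "any-ca.example", zones)
```

So a conforming CAA check can pass for the nonexistent name; it is the separate domain control validation step that cannot possibly have succeeded.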


Enjoy

Jakob


Re: A modest proposal for a better BR 7.1

2019-03-12 Thread Jakob Bohm via dev-security-policy
On 09/03/2019 03:43, Matthew Hardeman wrote:
> I know this isn't the place to bring a BR ballot, but I'm not presently a
> participant there.
> 
> I present alternative language along with notes and rationale which, I put
> forth, would have resulted in a far better outcome for the ecosystem than
> the issues which have arisen from the present BR 7.1 subsequent to ballot
> 164.
> 
> I humbly propose that this would have been a far better starting point, for
> reasons I discuss in notes below.
> 
> Effective as of Month Day, Year, CAs shall generate certificate serial
> numbers as herein specified:
> 
> 
> 1. The ASN.1 signed integer encoded form of the certificate serial
> number value must be represented as not less than 9 bytes and not more 
> than
> 20 bytes.  [Note 1]
> 2. The hexadecimal value of the first byte of the certificate serial
> number shall be 0x75.  [Note 2]
> 3. The consecutive 64 bits immediately following the first byte of the
> encoded serial number shall be the first 64 bits of output of an AES-128
> random session key generation operation, said operation having been seeded
> within random data to within its design requirements. [Note 3]
> 4. The remaining bytes of the encoded serial number (the 10th through
> 20th bytes of the encoded serial number), to the extent any are desired,
> may be populated with any values. [Note 4]
> 
> Notes / Rationale:
> 
> Note 1.  The first bullet point sets out a structure which necessarily
> requires that the encoded form of the serial number for all cases be at
> least 9 bytes in length.  As many CAs would have been able to immediately
> see that their values, while random, don't reach 9 bytes, each CA in that
> case would have had an easy hint that further work to assess compliance
> with this BR change would be necessary and would definitely result in
> changes.  I believe that would have triggered the necessary investigations
> and remediations.  To the extent that it did not do so, the CAs which
> ignored the change would be quickly identifiable as a certificate with an 8
> byte serial number encoding would not have been valid after the effective
> date.
> 
> Note 2.  A fixed value was chosen for the first byte for a couple of
> reasons.  First, by virtue of not having a value of 1 in the highest order
> bit, it means that ASN.1 integer encoding issues pertaining to sign are
> mooted.  Secondarily, with each certificate issuance subsequent to the
> effective date of the proposal, a CA which has not updated their systems to
> accommodate this ballot but does use random number generation to populate
> the certificate serial has a probability of 127/128 of revealing that they
> have not implemented the changes specified in this ballot.
> 
> Note 3.  CAs and their software vendors are quite familiar with
> cryptographic primitives, cryptographic keys, key generation, etc.  Rather
> than using ambiguous definitions of randomness or bits of entropy or output
> of a CSPRNG, the administration of a CA and their software vendors should
> be able to be relied upon to understand the demands of symmetric key
> generation in actual practice.  By choosing to specify a symmetric block
> cipher type and key size in common use, the odds of an appropriate
> algorithm being selected from among the large body of appropriate
> implementations of such algorithms greatly reduces odds of low quality
> "random" data for the serial number.
> 
> Note 4.  Note 4 makes clear that plenty of space remains for the CA to
> populate other information, be it further random data or particular data of
> interest to the CA, such as sequence numbers, date/time, etc.
> 
> Further notes / rationale:
> 
> In supporting systems whose databases may support only a 64 bit serial
> number in database storage, etc, it is noteworthy that the serial number
> rules I specified here only refer to the encoded form which occurs in the
> certificate itself, not any internal representation in an issuance
> database.  Because the first byte is hard coded to 0x75 in my proposal,
> this doesn't need to be reflected in a legacy system database, it can just
> be implemented as part of the certificate encoding process.
> 
> Strategically, certificates which would conform to the proposal I've laid
> out here would obviously and facially be different from any previously
> deployed system which routinely utilized 8 byte encodings, meaning that
> every CA previously generating 8 byte serials would have an obvious signal
> that they needed to dig into their serial number generation methodologies.
> 
> By tying the generation of high quality random data to fill the serial
> number to algorithms and procedures already well known to CAs and to their
> vendors, auditors, etc, my proposal enhances the odds that the required
> amount of random unpredictable bits actually be backed by a mechanism
> appropriate for the use of cryptography.
> 
> If anyone thinks any of 
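
The encoding rules quoted above can be sketched as follows (illustrative only; the `secrets` CSPRNG stands in for the proposal's "AES-128 random session key generation" wording):

```python
import secrets

def proposed_serial(extra: bytes = b"") -> bytes:
    """Encoded serial per the quoted proposal: fixed first byte 0x75,
    then 64 bits from a CSPRNG, then up to 11 freely chosen bytes
    (sequence numbers, timestamps, more randomness, ...)."""
    if len(extra) > 11:  # 9 mandatory bytes + 11 optional = 20-byte cap
        raise ValueError("encoded serial would exceed 20 bytes")
    return bytes([0x75]) + secrets.token_bytes(8) + extra

s = proposed_serial()
assert len(s) == 9 and s[0] == 0x75
assert s[0] & 0x80 == 0  # high bit clear: ASN.1 INTEGER stays positive
assert len(proposed_serial(b"\x01" * 11)) == 20
```

Because the leading 0x75 guarantees a positive encoding, the full 64 random bits that follow all count toward unpredictability, and any legacy 8-byte serial stands out immediately as non-conforming.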

Re: The current and future role of national CAs in the root program

2019-03-08 Thread Jakob Bohm via dev-security-policy

On 08/03/2019 06:27, Peter Bowen wrote:

...

Mozilla has specifically chosen to not distinguish between "government
CAs", "national CAs", "commercial CAs", "global CAs", etc.  The same rules
apply to every CA in the program.  Therefore, the "national or other
affiliation" is not something that is relevant to the end user.

These have all been discussed before and do not appear to be relevant to
any current conversation.



Many (not me) in the recent discussion of a certain CA have called
for this to be changed one way or another.  This is the only thing
that is new.

As I wrote earlier, there were a lot of general policy ideas and
questions mixed into the discussion of that specific case, and my
post was an attempt to summarize those questions and ideas raised by
others.

Maybe the ultimate result will be no change, maybe not.  The
discussion certainly has been raised by a lot of people.


Enjoy

Jakob


Re: The current and future role of national CAs in the root program

2019-03-07 Thread Jakob Bohm via dev-security-policy

On 07/03/2019 23:02, Ryan Sleevi wrote:

Do you believe there is new information or insight you’re providing from
the last time this was discussed and decided?

For example:
https://groups.google.com/forum/m/#!searchin/mozilla.dev.security.policy/Government$20CAs/mozilla.dev.security.policy/JP1gk7atwjg

https://groups.google.com/forum/m/#!searchin/mozilla.dev.security.policy/Government$20CAs/mozilla.dev.security.policy/tr_PDVsZ6-k

https://groups.google.com/forum/m/#!searchin/mozilla.dev.security.policy/Government$20CAs/mozilla.dev.security.policy/qpwFbcRfBmk

I included the search query in the URL, so that you can examine for
yourself what new insight or information is being provided. I may have
missed some salient point in your message, but I didn’t see any new insight
or information that warranted revisiting such discussion.

In the spirit of
https://www.mozilla.org/en-US/about/forums/etiquette/ , it may be best to
let sleeping dogs lie here, rather than continuing this thread. However, if
you feel there has been some significant new information that’s been
overlooked, perhaps you can clearly and succinctly highlight that new
information.



I was stating that the very specific discussion that recently
unfolded (and which I promised not to mention by name in this thread)
has contained very many opinions on the topic.  In fact, the majority of
posts by others have circled around either the entropy issue or this very
issue of what criteria and procedures should be used for trusting
national CAs, and whether those criteria should be changed.

Your own posts on Feb 28, 2019 13:54 UTC and Mar 4, 2019 16:31 UTC were
among those posts, as were posts by hackurx, Alex Gaynor, nadim, Wayne 
Thayer, Kristian Fiskerstrand and Matthew Hardeman.


I took care not to state what decisions should be made, merely to
summarize the issues in a clear and seemingly non-controversial way,
trying to be inclusive of the opinions stated by all sides.  If there
are additional points on the topic that I forgot or that may arise later
in the specific discussion, they can and should be added such that there
will be a useful basis for discussion of whatever should or should not
be done long term, once the specific single case has been handled.

I did not wake this sleeping dog, it was barking and yanking its chain
all week.


Enjoy

Jakob


The current and future role of national CAs in the root program

2019-03-07 Thread Jakob Bohm via dev-security-policy
Currently the Mozilla root program contains a large number of roots that 
are apparently single-nation CA programs serving their local community 
almost exclusively, including by providing certificates that they can 
use to serve content with the rest of the world.

For purposes of this, I define a national CA as a CA that has publicly 
self-declared that it serves a single geographic community almost 
exclusively, with that area generally corresponding to national borders 
of a country or territory.

As highlighted by the discussion, this raises some common concerns for 
such CAs:

1. Due to the technical way Mozilla products handle the root program 
  data, each national CA is trusted to issue certificates for anyone 
  anywhere in the world despite them not having any self-declared 
  interest to do so.  This constitutes an unintentional security risk 
  as highlighted years ago by the 2011 DigiNotar (NL) incident.

2. For a variety of reasons, the existence of all these globally trusted 
  national CAs has made establishment of such national CAs a matter of 
  pride for governments, regardless of whether they currently have such CAs.

3. There is a legitimate concern that any national CA (government run or 
  not) may be used by that government as a means to project force in a 
  manner inconsistent with being trusted outside that country (as 
  reflected in current Mozilla policy), but consistent with a general 
  view of the rights of nations (as expressed in the UN charter and 
  ancient traditions).

4. Some of the greatest nations on Earth have had their official 
  national CAs rejected by the root program because of #1 or #3, 
  including the US federal bridge CA and China's CNNIC.

This in turn leads to some practical issues:

5. Should the root program policies provide rules that enforce the 
  self-declared scope restrictions on a CA.  For example if a CA 
  has declared that it only intends to issue for entities in the 
  Netherlands, should certificates for entities beyond that be 
  considered as misissuance incidents for that reason alone 
  (DigiNotar involved misissuance in a much more literal sense).

6. How should rules for the meaning of such geographical intent be 
  mapped for things like IP address certificates ?  For example 
  should the rules use the geography indicated in NRO address space 
  assignments to national ISPs?  Or perhaps some information provided 
  by ISPs themselves?  (Commercial IP-to-country databases have a too 
  high error rate for certificate policy use).

7. How should rules for the meaning of such geographical intent be 
  mapped for certificates for domains under gTLDs such as visit-
  countryname.org or countryname-government.com ?

8. Should Mozilla champion a specification for adding such geographic 
  restrictions to CA cert name constraints in a manner that is both 
  backward compatible with other clients and adaptive to the ongoing 
  movement/reassignment of name spaces to/between nations.

9. Should Mozilla attempt to enforce such intent in its clients (Firefox 
  etc.) once the technical data exists?

10. The root trust data provided in the Firefox user interface does not 
  clearly indicate the national or other affiliation of the trusted 
  roots, such that concerned users may make informed decisions 
  accordingly.   Ditto for the root program dumps provided to other 
  users of the Mozilla root program data (inside and outside the Mozilla 
  product family).  For example, few users outside Scandinavia would 
  know that "Sonera" is really a national CA for the countries in which 
  Telia-Sonera is the incumbent Telco (Finland, Sweden and Åland).


This overall issue was touched repeatedly in the thread, especially 
point 3 above, but the earliest I could find was in Message ID 
 
posted on Fri, 22 Feb 2019 23:45:39 UTC by "cooperq"

On 07/03/2019 18:59, Jakob Bohm wrote:
> This thread is intended to be a catalog of general issues that come/came
> up at various points in the DarkMatter discussions, but which are not 
> about DarkMatter specifically.
> 
> Each response in this thread should have a subject line of the single 
> issue it discusses and should not mention DarkMatter except to mention 
> the Timestamp, message-id and Author of the message in which it came up.
> 
> Further discussion of each issue should be in response to that issue.
> 
> Each new such issue should be a response directly to this introductory 
> post, and I will make a few such subject posts myself.
> 
> Once again, no further mentions of Darkmatter in this thread are
> allowed, keep those in the actual Darkmatter threads.
> 



Enjoy

Jakob

EJBCA defaulting to 63 bit serial numbers

2019-03-07 Thread Jakob Bohm via dev-security-policy
In the course of the other discussion it was revealed that EJBCA by PrimeKey 
has apparently:

1. Made serial numbers with 63 bits of entropy the default, which is 
  not in compliance with the BRs for globally trusted CAs and SubCAs.

2. Misled CAs to believe this setting actually provided 64 bits of 
  entropy.

3. Discouraged CAs from changing that default.
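The entropy loss is easy to demonstrate: clearing the sign bit of an 8-byte value (to keep the ASN.1 INTEGER positive) halves the space to 2^63. This is a minimal sketch of the general pitfall, not EJBCA's actual code:

```python
import secrets

def naive_positive_serial() -> int:
    """Draw 8 random bytes, then clear the sign bit so the 8-byte
    ASN.1 INTEGER encodes as positive -- this style of scheme yields
    only 63 bits of entropy, not 64."""
    n = int.from_bytes(secrets.token_bytes(8), "big")
    return n & ~(1 << 63)  # top bit forced to 0: only 2**63 outcomes

def compliant_serial() -> int:
    """Prepend a zero byte instead of discarding a bit: all 64 random
    bits survive and the 9-byte encoding is still positive."""
    return int.from_bytes(b"\x00" + secrets.token_bytes(8), "big")

assert 0 <= naive_positive_serial() < 2**63  # 63-bit space
assert 0 <= compliant_serial() < 2**64       # full 64-bit space
```

The fix costs one extra encoded byte; the naive variant silently loses half the required unpredictability.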

This raises 3 derived concerns:

4. Any CA using the EJBCA platform needs to manually check if they 
  have patched EJBCA to comply with the BR entropy requirement despite 
  EJBCA's publisher (PrimeKey) telling them otherwise.
   Maybe this should be added to the next quarterly mail from Mozilla to
  the CAs.

5. Is it good for the CA community that EJBCA seems to be the only 
  generally available software suite for large CAs to use?

6. Should the CA and root program community be more active in ensuring 
  compliance by critical CA infrastructure providers such as EJBCA and 
  the companies providing global OCSP network hosting.


The above issue first came up in Message ID 

posted on Mon, 25 Feb 2019 08:39:07 UTC by Scott Rea, and subsequently 
led to a number of replies, including at least one reply from Mike 
Kushner from EJBCA and a discovery that Google Trust Services was 
also hit with this issue to the tune of 100K non-compliant certificates.

On 07/03/2019 18:59, Jakob Bohm wrote:
> This thread is intended to be a catalog of general issues that come/came
> up at various points in the DarkMatter discussions, but which are not 
> about DarkMatter specifically.
> 
> Each response in this thread should have a subject line of the single 
> issue it discusses and should not mention DarkMatter except to mention 
> the Timestamp, message-id and Author of the message in which it came up.
> 
> Further discussion of each issue should be in response to that issue.
> 
> Each new such issue should be a response directly to this introductory 
> post, and I will make a few such subject posts myself.
> 
> Once again, no further mentions of Darkmatter in this thread are
> allowed, keep those in the actual Darkmatter threads.
> 


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


General issues that came up in the DarkMatter discussion(s)

2019-03-07 Thread Jakob Bohm via dev-security-policy

This thread is intended to be a catalog of general issues that come/came
up at various points in the DarkMatter discussions, but which are not 
about DarkMatter specifically.


Each response in this thread should have a subject line of the single 
issue it discusses and should not mention DarkMatter except to mention 
the Timestamp, message-id and Author of the message in which it came up.


Further discussion of each issue should be in response to that issue.

Each new such issue should be a response directly to this introductory 
post, and I will make a few such subject posts myself.


Once again, no further mentions of Darkmatter in this thread are
allowed, keep those in the actual Darkmatter threads.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DarkMatter Concerns

2019-03-05 Thread Jakob Bohm via dev-security-policy

On 05/03/2019 16:11, Benjamin Gabriel wrote:

Message Body (2 of 2)
[... continued ..]

Dear Wayne


> ...


Yours sincerely,

Benjamin Gabriel
General Counsel
DarkMatter Group





As an outside member of this community (not employed by Mozilla or any 
public CA), I would like to state the following (which is not official

by my company affiliation and cannot possibly be official on behalf of
Mozilla):

1. First of all, thank you for finally directly posting Darkmatter's
  response to the public allegations.  Many people seemingly
  inexperienced in security and policy have posted rumors and opinions
  to the discussion, while others have mentioned that Darkmatter had
  made some kind of response outside this public discussion thread.

2. It is the nature of every government CA or government sponsored CA
  in the world that the current structure of the Mozilla program can
  be easily abused in the manner speculated, and that any such abuse
  would not be admitted, but rather hidden with the full skill and
  efficiency of whatever spy agency orders such abuse.  One of the
  most notorious such cases occurred when a private company running a
  CA for the Dutch Government had a massive security failure, allowing
  someone to obtain MitM certificates for use against certain Iranian
  people.
   I have previously proposed a technical countermeasure to limit this
  risk, but it didn't seem popular.

3. The chosen name of your CA "Dark matter" unfortunately is
  associated in most English language contexts with either an obscure
  astronomical phenomenon or as a general term for any sinister and
  evil topic.  This unfortunate choice of name may have helped spread
  and enhance the public rumor that you are an organization similar
  to the US NSA or its contractors.  After all, "Dark matter is evil"
  is a headline more likely to sell newspapers than "UAE company with a
  very boring name helped UAE government attack someone".  However I
  understand that as a long established company, it is probably too late
  to change your name.

4. The United States itself has been operating a government CA (The
  federal bridge CA) for a long time, and Mozilla doesn't trust it.
  In fact, when it was discovered that Symantec had used one of their
  Mozilla-trusted CAs to sign the US federal bridge CA, this was one
  of the many problems that led to Mozilla distrusting Symantec,
  even though they were the oldest and biggest CA in the root
  program.

5. While Darkmatter LLC itself may have been unaware of the discussions
  that lead to the wording of the 64 bit serial entropy requirements,
  it remains open how QuoVadis was not aware of that discussion and
  did not independently discover that you were issuing with only 63
  bits under their authority.

6. Fairly recently, a private Chinese CA (WoSign) posted many partially
  untrue statements in their defense.  The fact that their posts were
  not 100% true, and sometimes very untrue, has led to a situation
  where some on this list routinely compare any misstatements by a
  criticized CA to the consistent dishonesty that led to the permanent
  distrust of WoSign and its subsidiary StartCom.  This means that you
  should be extra careful not to misstate details like the ones caught
  by Jonathan Rudenberg in his response at 16:12 UTC today.

7. Your very public statement ended with a standard text claiming that
  the message should be treated as confidential.  This is a common
  cause of ridicule and the reason that my own postings in this forum
  use a completely different such text than my private e-mail
  communication.  As a lawyer you should be able to draft such a text
  much better than my own feeble attempt.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Public CA:certs with unregistered FQDN mis-issuance

2019-03-01 Thread Jakob Bohm via dev-security-policy

On 28/02/2019 17:48, lcchen.ci...@gmail.com wrote:

1. How your CA first became aware of the problem (e.g. via a problem report 
submitted to your Problem Reporting Mechanism, a discussion in 
mozilla.dev.security.policy, a Bugzilla bug, or internal self-audit), and the 
time and date.

  Ans:
One of our staff members in the PKI group was taking samples of the issued 
certificates from the crt.sh database and found a mis-issued certificate which 
has a test (unregistered) FQDN on February 15, 2019 at 1:55 (UTC). He then 
notified us (Public CA) of the incident. We decided to conduct an initial 
investigation and found another mis-issued certificate, which also has a test 
FQDN, on February 15, 2019 at 2:30 (UTC).

2. A timeline of the actions your CA took in response. A timeline is a 
date-and-time-stamped sequence of all relevant events. This may include events 
before the incident was reported, such as when a particular requirement became 
applicable, or a document changed, or a bug was introduced, or an audit was 
done.

  Ans:
Timeline (all times are UTC):
15 February 2019 1:55: Find a mis-issued certificate with a FQDN 
www.raotest.com.tw
15 February 2019 1:59: Revoke the first mis-issued certificate
15 February 2019 2:07:  Public CA starts an action plan and initial 
investigation
15 February 2019 2:17:  Further certificate issuing suspended
15 February 2019 2:30: Find another mis-issued certificate with a FQDN 
publicca.rao.com.tw
15 February 2019 4:10: Initial investigation completed
15 February 2019 4:25: Certificate issuing restarted
15 February 2019 4:40: Hold an investigation meeting

3. Whether your CA has stopped, or has not yet stopped, issuing certificates 
with the problem. A statement that you have will be considered a pledge to the 
community; a statement that you have not requires an explanation.

Ans:
Yes, we had stopped issuing certificates before we investigated and revoked any 
certificate with an unregistered FQDN. We have asked our CA system vendor to 
include an automatic FQDN-checking function in the hotfix, which will be 
released no later than 30 March 2019, to avoid making the same mistakes.
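A pre-issuance FQDN check of the kind described might look like the sketch below. This is a hypothetical illustration, not Public CA's actual implementation: the `registrable` and `is_registered` helpers stand in for a Public Suffix List library and a registry/DNS lookup, and are injected so the sketch stays self-contained.

```python
def gate_issuance(sans, registrable, is_registered):
    """Refuse to issue when any SAN's registrable domain is not
    actually registered.  `registrable(fqdn)` maps an FQDN to its
    registrable domain (in production, via a Public Suffix List
    library); `is_registered(domain)` consults DNS/WHOIS."""
    bad = [s for s in sans if not is_registered(registrable(s))]
    if bad:
        raise ValueError("unregistered FQDNs: %s" % ", ".join(bad))
    return True
```

A gate along these lines would have blocked both reported certificates, since the test domains were never registered.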

4. A summary of the problematic certificates. For each problem: number of 
certs, and the date the first and last certs with that problem were issued.

Ans:
Number of certs: 2
  First cert: issued on Nov 12, 2018 at 11:53:02 (UTC)
  Last cert: issued on Jan 29, 2019 at 06:43:59 (UTC)
  
5. The complete certificate data for the problematic certificates. The recommended way to provide this is to ensure each certificate is logged to CT and then list the fingerprints or crt.sh IDs, either in the report or as an attached spreadsheet, with one list per distinct problem.


Ans:
https://crt.sh/?id=1153958924 with FQDN www.raotest.com.tw
https://crt.sh/?id=940459864 with FQDN publicca.rao.com.tw

6. Explanation about how and why the mistakes were made or bugs introduced, and 
how they avoided detection until now.

Ans:
For the certificate with FQDN www.raotest.com.tw:
One of our RAOs intended to take a screenshot of the certificate application 
process for training material in the test environment, but she was not aware 
that she was in the production environment. After the certificate was issued, 
the RAO forgot to revoke it.

For the certificate with FQDN publicca.rao.com.tw:
The same as the former cause, but the mis-issued certificate had been revoked 
immediately without notifying Public CA.

The mistakes had not been detected (even by our internal self-audit) because 
the number of mis-issued certificates is so small. The RAO involved in this 
incident has been verbally warned and needs to undergo a retraining process in 
accordance with our employment contract.

7. List of steps your CA is taking to resolve the situation and ensure such 
issuance will not be repeated in the future, accompanied with a timeline of 
when your CA expects to accomplish these things.

Ans:
To avoid making the same mistakes, the following new steps will be introduced:

Step 1. Implementation of a two-stage manual verification by different RAOs. 
Effective 26/02/2019.
  
Step 2. Implementation of an automatic FQDN-checking function prior to issuing certificates. Effective 30/03/2019.


Step 3. Implementation of a scheduling program to periodically and 
automatically check the newly-issued certificates from our repository. 
Effective 30/05/2019.




How come your Public CA wasn't routinely using the blessed automatic
methods listed in the BRs?  Manual domain ownership or control
validation is supposed to be an exception, not the routine standard
procedure for a general public CA (a private Sub-CA technically
constrained by the parent CA to only issue for domains already
validated as belonging to the Sub-CA owner may rely exclusively on
their internal processes to authorize certificates for any FQDNs
under their own domains, because a public CA can legitimately
validate the customer higher level domain control to 

Re: Possible DigiCert in-addr.arpa Mis-issuance

2019-03-01 Thread Jakob Bohm via dev-security-policy
On 01/03/2019 01:04, Matthew Hardeman wrote:
> In addition to the GDPR concerns over WHOIS and RDAP data, reliance upon
> these data sources has a crucial differentiation from other domain
> validation methods.
> 
> Specifically, the WHOIS/RDAP data sources are entirely "off-path" with
> respect to how a browser will locate and access a given site.  To my way of
> thinking, this renders these mechanisms functionally inferior to an
> "on-path" mechanism, such as reliances upon demonstrated change control
> over an authoritative DNS record or even demonstration content change
> control over a website.
> 
> Since domain validation is, in theory, about validating that the party to
> whom a certificate is to be issued has demonstrated control over the
> subject of the desired name(s) or the name space of the desired name(s), it
> seems clear that "off-path" validation is less valuable as a security
> measure.
> 

And there, you completely misunderstood the purpose.  The purpose of 
domain validation is to determine if the certificate application is 
(directly or indirectly) authorized by the domain registrant. 
 Technical control (via the various automated methods) is a proxy to 
this information, but not as close to the truth as WHOIS validations 
(provided either is done securely).

The panicked mishandling of GDPR concerns by ICANN (something that 
continues in the current process to create a "permanent" solution) has 
essentially crippled all useful purposes of the WHOIS/RDAP database, 
including making domain ownership (as opposed to mere control) 
validation nearly impossible, except for a few national 
TLDs that continue to provide open WHOIS.

I sincerely hope that ICANN cleans up its misunderstandings and 
reopens WHOIS, at least for domain owners that request this (it 
currently can't be done because registrars are wary of implementing 
the temporary specification of how to do that, as a new spec is in 
the works).

Therefore WHOIS validation should remain valid, but only if the 
actual registry/registrar provides authoritative computer readable 
data for the domain in question.  (Thus having to do OCR or similar 
of a picture of text is not acceptable, and neither are manually 
entered entries).

In particular "substitute WHOIS" lookups via manual steps that 
don't result in the validation computers directly reading the data 
from the registrar/registry server should not be allowed.

There are many ways this can be done technically, ranging from a 
straightforward WHOIS lookup from multiple network vantage points 
to a process where a special web browser is used to authenticate 
through CAPTCHAs etc. until the validation system automatically 
captures and parses the "known correct" textual web response 
from a known registrar/registry url.

This in turn means that IV, OV and EV certs will often need to use 
other means of associating the domain control with the certificate 
applicant entity.  For example one or more challenge values/pins 
can be securely provided to the real world entity validated, then 
used in the protocol for validating domain control.  But this still 
only validates domain control, not legitimacy of domain authority.


> Although I'm aware that the BRs bless a number of methods, it's also clear
> that methods have been excluded by the Mozilla root program before.  Is it
> time to consider further winnowing down the accepted methods?
> 
> On Thu, Feb 28, 2019 at 5:43 AM Ryan Sleevi via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> On Thu, Feb 28, 2019 at 6:21 AM Nick Lamb via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>>> On Thu, 28 Feb 2019 05:52:14 +
>>> Jeremy Rowley via dev-security-policy
>>>  wrote:
>>>
>>> Hi Jeremy,
>>>
 4. The validation agent specified the approval scope as id-addr.arpa
>>>
>>> I assume this is a typo by you not the agent, for in-addr.arpa ?
>>>
>>> Meanwhile, and without prejudice to the report itself once made:
>>>
 2. The system marked the WHOIS as unavailable for automated parsing
 (generally, this happens if we are being throttled or the WHOIS info
 is behind a CAPTCHA), which allows a validation agent to manually
 upload a WHOIS document
>>>
>>> This is a potentially large hole in issuance checks based on WHOIS.
>>>
>>> Operationally the approach taken ("We can't get it to work, press on")
>>> makes sense, but if we take a step back there's obvious potential for
>>> nasty security surprises like this one.
>>>
>>> There has to be something we can do here, I will spitball something in
>>> a next paragraph just to have something to start with, but to me if it
>>> turns out we can't improve on basically "sometimes it doesn't work so
>>> we just shrug and move on" we need to start thinking about deprecating
>>> this approach altogether. Not just for DigiCert, for everybody.
>>>
>>> - Spitball: What if the CA/B went to the registries, at least the big
>>>

Re: T-Systems invalid SANs

2019-02-27 Thread Jakob Bohm via dev-security-policy

On 27/02/2019 09:54, michel.lebihan2...@gmail.com wrote:

I also found that certificates that were issued very recently have duplicate 
SANs:
https://crt.sh/?id=1231853308&opt=cablint,x509lint,zlint
https://crt.sh/?id=1226557113&opt=cablint,x509lint,zlint
https://crt.sh/?id=1225737388&opt=cablint,x509lint,zlint



Are duplicate SANs forbidden by any standard? (it's obviously
wasteful, but RFC3280 seems to implicitly allow it).
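RFC 3280 (and its successor RFC 5280) defines subjectAltName as a SEQUENCE of GeneralName and does not explicitly forbid repeated entries, which is presumably why linters only warn about them. The check itself is trivial; a sketch of what such a duplicate check could do, comparing DNS names case-insensitively as RFC 4343 requires:

```python
def find_duplicate_sans(sans):
    """Return each dnsName value that appears more than once,
    normalised to lowercase without a trailing dot."""
    seen, dups = set(), []
    for name in sans:
        key = name.lower().rstrip(".")
        if key in seen and key not in dups:
            dups.append(key)
        seen.add(key)
    return dups
```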

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: CA ownership checking [DarkMatter Concerns]

2019-02-27 Thread Jakob Bohm via dev-security-policy
On 27/02/2019 01:31, Matthew Hardeman wrote:
> I'd like to take a moment to point out that determination of the beneficial
> ownership of business of various sorts (including CAs) can, in quite a
> number of jurisdictions, be difficult to impossible (short of initiating
> adverse legal proceedings) to determine.
> 
> What does this mean for Mozilla's trusted root program or any other root
> program for that matter?  I submit that it means that anyone rarely knows
> to a certainty the nature and extent of ownership and control over a given
> business to a high degree of confidence.  This is especially true when you
> start divorcing equity interest from right of control.  (Famous example,
> Zuckerberg's overall ownership of Facebook is noted at less than 30% of the
> company, yet he ultimately has personal control of more than 70% of voting
> rights over the company, the end result is that he ultimately can control
> the company and its operations in virtually any respect.)
> 
> A number of jurisdictions allow for creating of trusts, etc, for which the
> ownership and control information is not made public.  Several of those, in
> turn, can each be owners of an otherwise normal looking LLC in an innocuous
> jurisdiction elsewhere, each holding say, 10% equity and voting rights.
> Say there are 6 of those.  Well, all six of them can ultimately be proxies
> for the same hidden partner or entity.  And that partner/entity would
> secretly be in full control.  Without insider help, it would be very
> difficult to determine who that hidden party is.
> 

While the ability to adversely extract such information for random 
companies is indeed limited by various concerns (including the privacy 
of charity activists and small business owners), the ability to get 
this information willingly as audited facts is very common, and is 
something that (non-technical) auditors are well accustomed to doing.

Thus root programs could easily request that information about 
beneficial ownership etc. be included as fully audited facts in future 
audit summary letters.  Subject to an appropriate launch date so CAs and 
auditors can include the needed billable work in their WebTrust audit 
pricing, and actually perform this work (typically by sending relevant 
people from their business auditing sister operations to do this properly).

> Having said all of this, I do have a point relevant to the current case.
> Any entity already operating a WebPKI trusted root signed SubCA should be
> presumed to have all the access to the professionals and capital needed to
> create a new CA operation with cleverly obscured ownership and corporate
> governance.  You probably can not "fix" this via any mechanism.
> 
> In a sense, that DarkMatter isn't trying to create a new CA out of the
> blue, operated and controlled by them or their ultimate ownership but
> rather is being transparent about who they are is interesting.
> 
> One presumes they would expect to get caught at misissuance.  The record of
> noncompliance and misissuance bugs created, investigated, and resolved one
> way or another demonstrates quite clearly that over the history of the
> program a non-compliant CA has never been more likely to get caught and
> dealt with than they are today.
> 

- - - -

> I believe the root programs should require a list of human names with
> verifiable identities and corresponding signed declarations of all
> management and technical staff with privileged access to keys or ability to
> process signing transactions outside the normal flow.  Each of those people
> should agree to a life-long ban from trusted CAs should they be shown to
> take intentional action to produce certificates which would violate the
> rules, lead to MITM, etc.  Those people should get a free pass if they
> whistle blow immediately upon being forced, or ideally immediately
> beforehand as they hand privilege and control to someone else.
> 
> While it is unreasonable to expect to be able to track beneficial
> ownership, formal commitments from the entity and the individuals involved
> in day to day management and operations would lead to a strong assertion of
> accountable individuals whose cooperation would be required in order to
> create/provide a bad certificate.  And those individuals could have "skin
> in the game" -- the threat of never again being able to work for any CA
> that wants to remain in the trusted root programs.
> 

This proposal is disastrously bad and needs to be publicly dispelled 
by the root program administrators as quickly as possible for the 
following reasons:

1. Punishing all CA operations experts that worked at a failed CA by 
  making them essentially unemployable for the failures of their 
  (former) boss is unjust in principle.

2. If there is even a hint that such a policy may be imposed, every 
  key holding CA employee at every CA will be under duress (by the 
  root programs) to hide all problems at their CA and defend every 
  bad decision 

Re: Possible DigiCert in-addr.arpa Mis-issuance

2019-02-27 Thread Jakob Bohm via dev-security-policy

On 27/02/2019 00:10, Matthew Hardeman wrote:

Is it even proper to have a SAN dnsName in in-addr.arpa ever?

While in-addr.arpa IS a real DNS hierarchy under the .arpa TLD, it rarely
has anything other than PTR and NS records defined.



While there is no current use, and the test below was obviously
somewhat contrived (and seems to have triggered a different issue),
one cannot rule out the possibility of a need appearing in the
future.

One hypothetical use would be to secure BGP traffic, as certificates
with IpAddress SANs are less commonly supported.



Here this was clearly achieved by creating a CNAME record for
69.168.110.79.in-addr.arpa pointed to cynthia.re.

I've never seen any software or documentation anywhere attempting to
utilize a reverse-IP formatted in-addr.arpa address as though it were a
normal host name for resolution.  I wonder whether this isn't a case that
should just be treated as an invalid domain for purposes of SAN dnsName
(like .local).
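For reference, the reverse-lookup owner name used throughout this report follows RFC 1035's reversed-label convention under in-addr.arpa, and can be derived with the Python standard library (illustration only):

```python
import ipaddress

# Reverse-DNS owner name for the IP in the report:
# octets reversed, suffixed with in-addr.arpa.
addr = ipaddress.ip_address("79.110.168.69")
print(addr.reverse_pointer)   # 69.168.110.79.in-addr.arpa
```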

On Tue, Feb 26, 2019 at 1:05 PM Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Thanks Cynthia. We are investigating and will report back shortly.

From: dev-security-policy 
on behalf of Cynthia Revström via dev-security-policy <
dev-security-policy@lists.mozilla.org>
Sent: Tuesday, February 26, 2019 12:02:20 PM
To: dev-security-policy@lists.mozilla.org
Cc: b...@benjojo.co.uk
Subject: Possible DigiCert in-addr.arpa Mis-issuance

Hello dev.security.policy


Apologies if I have made any mistakes in how I post, this is my first
time posting here. Anyway:


I have managed to issue a certificate with a FQDN in the SAN that I do
not have control of via Digicert.


The precert is here: https://crt.sh/?id=1231411316

SHA256: 651B68C520492A44A5E99A1D6C99099573E8B53DEDBC69166F60685863B390D1


I have notified Digicert who responded back with a generic response
followed by the certificate being revoked through OCSP. However I
believe that this should be investigated more widely, since this cert was
issued by me adding 69.168.110.79.in-addr.arpa to my SAN, a DNS area
that I do control through reverse DNS.


When I verified 5.168.110.79.in-addr.arpa (same subdomain), I noticed
that the whole of in-addr.arpa became validated on my account, instead
of just my small section of it (168.110.79.in-addr.arpa at best).


To test if digicert had just in fact mis-validated a FQDN, I tested with
the reverse DNS address of 192.168.1.1, and it worked and Digicert
issued me a certificate with 1.1.168.192.in-addr.arpa on it.


Is there anything else dev.security.policy needs to do with this? This
seems like a clear case of mis-issuance. It's also not clear if
in-addr.arpa should even be issuable.


I would like to take a moment to thank Ben Cartwright-Cox and igloo5
in pointing out this violation.





Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DarkMatter Concerns

2019-02-25 Thread Jakob Bohm via dev-security-policy

On 25/02/2019 11:42, Scott Rea wrote:

G’day Paul,

I cannot speak for other CAs, I can only surmise what another CA that is as 
risk intolerant as we are might do. For us, we will collision test since there 
is some probability of a collision and the test is the only way to completely 
mitigate that risk.
There is a limitation in our current platform that sets the serialNumber 
bit-size globally, however we expect a future release will allow this to be 
adjusted per CA. Once that is available, we can use any of the good suggestions 
you have made below to adjust all our Public Trust offerings to move to larger 
entropy on serialNumber determination.

However, the following is the wording from Section 7.1 of the latest Baseline 
Requirements:
“Effective September 30, 2016, CAs SHALL generate non-sequential Certificate 
serial numbers greater than zero (0) containing at least 64 bits of output from 
a CSPRNG.”

Unless we are misreading this, it does not say that serialNumbers must have 
64-bit entropy as output from a CSPRNG, which appears to be the point you and 
others are making. If that was the intention, then perhaps the BRs should be 
updated accordingly?

We don’t necessarily love our current situation in respect to entropy in 
serialNumbers, we would love to be able to apply some of the solutions you have 
outlined, and we expect to be able to do that in the future. However we still 
assert that for now, our current implementation of EJBCA is still technically 
compliant with the BRs Section 7.1 as they are written. Once an update for 
migration to larger entropy serialNumbers is available for the platform, we 
will make the adjustment to remove any potential further issues.

Regards,
  



I believe the commonly accepted interpretation of the BR requirement is
to ensure that for each new certificate generated, there are at least
2**64 possible serial numbers and no way for anyone involved to predict
or influence which one will be used.

This rule exists to prevent cryptographic attacks on the algorithms used
for signing the certificates (such as RSA+SHA256), and achieves this
protection because of the location of the serial number in the
certificate structure.

If all the serial numbers are strictly in the range 0x0000000000000001
to 0x7FFFFFFFFFFFFFFF then there is not enough protection of the signing
algorithm against these attacks.
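The reason implementations are tempted to clear the top bit is DER's encoding of INTEGER: a value whose high bit is set needs an extra leading 0x00 octet to remain non-negative. A minimal sketch of that encoding rule (short-form lengths only, for illustration):

```python
def der_integer(n: int) -> bytes:
    """Minimal DER encoding of a non-negative INTEGER.  A leading
    0x00 octet is added whenever the top bit of the first content
    octet is set, so the value is not misread as negative.
    Avoiding that extra octet by clearing bit 63 of an 8-octet
    serial is exactly what costs one bit of entropy."""
    body = n.to_bytes((n.bit_length() + 8) // 8, "big")
    return bytes([0x02, len(body)]) + body
```

So a full-entropy 64-bit serial occupies 9 content octets, while the 63-bit variant fits in 8.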


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Firefox Revocation Documentation

2019-02-20 Thread Jakob Bohm via dev-security-policy

On 20/02/2019 00:40, Wayne Thayer wrote:

I have replaced some outdated information on Mozilla's wiki about
revocation checking [1] [2] with a new page on the wiki describing how
Firefox currently performs revocation checking on TLS certificates:

https://wiki.mozilla.org/CA/Revocation_Checking_in_Firefox

It also includes a brief description of our plans for CRLite.

Please respond if you have any questions or comments about the information
on this page. I hope it is useful, and I plan to add more details in the
future.

- Wayne

[1] https://wiki.mozilla.org/index.php?title=CA:RevocationPlan&redirect=no
[2]
https://wiki.mozilla.org/index.php?title=CA:ImprovingRevocation&redirect=no



Nice write up.  Some minor issues:

1. Because generating the CRLite data will take some non-zero time,
  the time stamp used by the client to check if a certificate is too
  new for the CRLite data should be based on when the Mozilla server
  requested the underlying CT data rather than when the data was
  passed to the client.

2. While you mention the ability of attackers to omit OCSP stapling
  in spoofed responses, you forget the additional problem that there
  are still server software packages without upstream stapling support.

3. Don't forget Thunderbird (technically no longer a primary Mozilla
  product, but still a major use of Mozilla certificate infrastructure).
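Point 1 amounts to a simple freshness guard on the client side. The sketch below is illustrative only: the names and the merge-delay margin are assumptions, not Mozilla's actual parameters.

```python
from datetime import timedelta

def crlite_can_answer(cert_not_before, ct_fetch_time,
                      merge_delay=timedelta(hours=24)):
    """A CRLite filter can only speak for certificates that were
    already visible in the CT data it was built from, so the
    cut-off must key off when the server queried the CT logs
    (ct_fetch_time), not when the filter reached the client.
    Certificates newer than the cut-off fall back to OCSP/CRLs."""
    return cert_not_before <= ct_fetch_time - merge_delay
```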


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Blog: Why Does Mozilla Maintain Our Own Root Certificate Store?

2019-02-18 Thread Jakob Bohm via dev-security-policy
On 19/02/2019 04:04, Ryan Sleevi wrote:
> On Mon, Feb 18, 2019 at 4:59 PM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> On 14/02/2019 23:31, Wayne Thayer wrote:
>>> This may be of interest:
>>>
>>>
>> https://blog.mozilla.org/security/2019/02/14/why-does-mozilla-maintain-our-own-root-certificate-store/
>>>
>>
>> Nice write up.
>>
>> Two points only briefly covered:
>>
>> - Because many open source OS distributions use the Mozilla root store,
>>replacing the Mozilla root store by relying on the OS root store would
>>cut off its own feet.
>>
>> - Some participants in the community actively refuse to support use of
>>the Mozilla root store in other open source initiatives.
>>
> 
> It’s not productive or helpful to continue engaging like this.

I was merely suggesting that whatever view Thayer (not you) has on 
these two points was missing from the blog post.  Nothing more, 
nothing less.

The first point is yet another reason why Mozilla provides its own 
root store rather than relying on the OS one (the main theme of 
Thayer's blog post).

The second point is something closely related that has no rationale 
in the FAQ, only a description of the situation.  The blog post 
actually mentions that those downstreams exist, so it would be 
relevant to mention any policy on this.

And I would highly appreciate if you stopped with your incessant 
bullying and trolling (and not just of me), which has been going on 
for years now.



Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: Blog: Why Does Mozilla Maintain Our Own Root Certificate Store?

2019-02-18 Thread Jakob Bohm via dev-security-policy

On 14/02/2019 23:31, Wayne Thayer wrote:

This may be of interest:

https://blog.mozilla.org/security/2019/02/14/why-does-mozilla-maintain-our-own-root-certificate-store/



Nice write up.

Two points only briefly covered:

- Because many open source OS distributions use the Mozilla root store,
 replacing the Mozilla root store by relying on the OS root store would
 cut off its own feet.

- Some participants in the community actively refuse to support use of
 the Mozilla root store in other open source initiatives.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Certificate issued with OU > 64

2019-02-18 Thread Jakob Bohm via dev-security-policy
On 15/02/2019 19:33, Ryan Sleevi wrote:
> On Fri, Feb 15, 2019 at 12:01 PM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> Indeed, the report states that the bug was in the pre-issuance checking
>> software.
>>
> 
> I believe you may have misread the report. I do not have the same
> impression - what is stated is a failure of the control, not the linting.
> 
> 

The report refers to the pre-checking code in question as the "filter".

As the certificate hasn't been signed yet, pre-checking code needs to 
either generate a dummy certificate (signed with a dummy untrusted key) 
as input to a general cert-linter, or check the elements of the intended 
certificate in a disembodied form.
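As a hypothetical sketch of both variants (the function names and the use of the Python `cryptography` package are my choices, not anything a CA product actually exposes; the 64-octet bound is RFC 5280's ub-organizational-unit-name):

```python
# Sketch of pre-issuance checking: either lint the intended contents in
# disembodied form, or self-sign them with a throwaway key so a
# DER-based linter can inspect a complete certificate.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

UB_ORG_UNIT = 64  # RFC 5280 upper bound for organizationalUnitName

def lint_subject(subject: x509.Name) -> list[str]:
    """Check intended contents directly, before anything is signed."""
    return [
        f"OU exceeds {UB_ORG_UNIT} characters: {attr.value!r}"
        for attr in subject.get_attributes_for_oid(
            NameOID.ORGANIZATIONAL_UNIT_NAME)
        if len(attr.value) > UB_ORG_UNIT
    ]

def dummy_signed(subject: x509.Name) -> x509.Certificate:
    """Self-sign with an ephemeral untrusted key for DER-based linters."""
    key = ec.generate_private_key(ec.SECP256R1())
    now = datetime.datetime.now(datetime.timezone.utc)
    return (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(subject)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=1))
        .sign(key, hashes.SHA256())
    )
```

Either result can then be fed to a general-purpose linter; the point is only that no CA key is involved until the checks pass.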

>> One good community lesson from this is that it is prudent to use a
>> different brand and implementation of the software that does post-
>> issuance checking than the software doing pre-issuance checking, as a
>> bug in the latter can be quickly detected by the former due to not
>> having the same set of bugs.
> 
> 
> While I can understand the position here - and robustness is always good -
> I do not believe this is wise or prudent advice, especially from this
> incident. While it is true that post-issuance may catch things pre-issuance
> misses, the value in post-issuance is both as a means of ensuring that
> internal records are consistent (i.e. that the set of pre-issuance
> certificates match what was issued - ensuring no rogue or unauthorized
> certificates) as well as detecting if preissuance was not run.
> 
> There's significant more risk for writing a second implementation "just
> because" - and far greater value in ensuring robustness. Certainly,
> post-issuance is far less important than pre-issuance.
> 

Of course.  I was not suggesting that approach (although it can be a 
high reliability technique).  I was merely suggesting to obtain a 
cert linter from a different vendor than the main CA suite.  And by 
all means run multiple checkers that purport to check the same 
things.

Also note that pre-issuance checking is not the same as checking CT 
pre-certs.  Pre-issuance checks need to happen before signing the 
pre-certs.


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: Certificate issued with OU > 64

2019-02-15 Thread Jakob Bohm via dev-security-policy

On 15/02/2019 15:21, Ryan Sleevi wrote:

(Sending from the right e-mail)

On Fri, Feb 15, 2019 at 8:05 AM info--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Hi, this is the incident report:

1.  How your CA first became aware of the problem (e.g. via a problem
report submitted to your Problem Reporting Mechanism, a discussion in
mozilla.dev.security.policy, a Bugzilla bug, or internal self-audit), and
the time and date.

We have controls to detect any misissuance before and after the issuance
of the certificate. The certificate was issued at 11:52, detected in the
following minute, and revoked at 12:07

2.  A timeline of the actions your CA took in response. A timeline is
a date-and-time-stamped sequence of all relevant events. This may include
events before the incident was reported, such as when a particular
requirement became applicable, or a document changed, or a bug was
introduced, or an audit was done.

Feb 14th 11:52 -> the certificate was issued
Feb 14th 11:53 -> the misissuance was detected
Feb 14th 12:07 -> the certificate was revoked
Feb 14th 13:28 -> reported the incident to our PKI software manufacturer
Feb 14th 15:24 -> received the answer from the manufacturer. They tell us
that there’s a bug in the preventive filter with the OU, and that they have
a hotfix to solve it.
Feb 14th 17:21 -> Izenpe reports to mozilla.dev.security.policy list

3.  Whether your CA has stopped, or has not yet stopped, issuing
certificates with the problem. A statement that you have will be considered
a pledge to the community; a statement that you have not requires an
explanation.

We’ll do a dual manual check until we have the hotfix correctly applied

4.  A summary of the problematic certificates. For each problem:
number of certs, and the date the first and last certs with that problem
were issued.

There’s just one certificate affected

5.  The complete certificate data for the problematic certificates.
The recommended way to provide this is to ensure each certificate is logged
to CT and then list the fingerprints or crt.sh IDs, either in the report or
as an attached spreadsheet, with one list per distinct problem.

https://crt.sh/?id=1202714390

6.  Explanation about how and why the mistakes were made or bugs
introduced, and how they avoided detection until now.

It was a bug in the filter of the PKI software

7.  List of steps your CA is taking to resolve the situation and
ensure such issuance will not be repeated in the future, accompanied with a
timeline of when your CA expects to accomplish these things.

We hope to have the product hotfix applied by March 3rd



Thanks for providing the additional details. However, I largely don't think
this meets the bar for the necessary level of detail - with the exception
of the timestamps and the remark your vendor had a bug, we've not really
learned anything about what went wrong or how to effectively prevent it.

Please work with your PKI software manufacturer, if necessary, to provide a
more substantive incident report - understanding when the bug was
introduced, what the bug is and how it functions, and how it's being
hotfixed, are all areas that are key to understanding how we can better
apply this knowledge across the CA ecosystem.

If you're not sure what a 'necessary' level of detail looks like, please
consider something like
https://groups.google.com/d/msg/mozilla.dev.security.policy/vl5eq0PoJxY/W1D4oZ__BwAJ
,
listed as the "Good Practice" examples in the
https://wiki.mozilla.org/CA/Responding_To_An_Incident

Consider that it was stated that controls exist before and after issuance,
and yet it was still issued - this suggests the controls before issuance
are faulty.
Consider that we don't have details about what preventive filters exist
or are configured, how they're supposed to work, how they failed to work,
and how the hotfix is meant to correct these issues.



Indeed, the report states that the bug was in the pre-issuance checking
software.

One good community lesson from this is that it is prudent to use a
different brand and implementation of the software that does post-
issuance checking than the software doing pre-issuance checking, as a
bug in the latter can be quickly detected by the former due to not
having the same set of bugs.


I do want to acknowledge that it *is* good that y'all proactively detected
post-issuance and filed a report - this is absolutely what CAs should be
doing, and I'm glad to see Izenpe doing this. I similarly don't want to
discourage reporting as information comes in - I think that the details
provided are a good example of an 'interim' report in which the CA
continues to provide updates and transparency to the community while they
gather information. However, I don't think it represents a good 'final'
report - there's nothing of substance here other than "We found a problem
and fixed it". The goal of these is to understand the problem in
substantial technical 

Re: P-384 and ecdsa-with-SHA512: is it allowed?

2019-02-11 Thread Jakob Bohm via dev-security-policy
On 10/02/2019 02:55, Corey Bonnell wrote:
> Hello,
> Section 5.1 of the Mozilla Root Store Policy 
> (https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/)
>  specifies the allowed set of key and signature algorithms for roots and 
> certificates that chain to roots in the Mozilla Root Store. Specifically, the 
> following hash algorithms and ECDSA hash/curve pairs are allowed:
> 
> • Digest algorithms: SHA-1 (see below), SHA-256, SHA-384, or SHA-512.
> • P‐256 with SHA-256
> • P‐384 with SHA-384
> 
> Given this, if an End-Entity certificate were signed using a subordinate CA’s 
> P-384 key with ecdsa-with-SHA512 as the signature algorithm (which would be 
> reflected in the End-Entity certificate's signatureAlgorithm field), would 
> this violate Mozilla policy? As I understand it, an ECDSA signing operation 
> with a P-384 key using SHA-512 would be equivalent to using SHA-384 (due to 
> the truncation that occurs), so I am unsure if this would violate the 
> specification above (although the signatureAlgorithm field value would be 
> misleading). I believe the same situation exists if a P-256 key is used for a 
> signing operation with SHA-384.
> 
> Any insight into whether this is allowed or prohibited would be appreciated.
> 


Using the same DSA or ECDSA key with more than one hash algorithm 
violates the cryptographic design of DSA/ECDSA, because those don't 
include a hash identifier in the signature calculation.  It's 
insecure to even accept such signatures, as it would make the 
signature checking code vulnerable to 2nd pre-image attacks on the 
hash algorithm not used by the actual signer to generate 
signatures.  It would also be vulnerable to cross-hash pre-image 
attacks that are otherwise not considered weaknesses in the hash 
algorithms.

Furthermore the FIPS essentially (if not explicitly) require using 
a shortened 384-bit variant of SHA-512 as input to P-384 ECDSA, 
and the only approved such shortened version is, in fact, SHA-384.

Using the same P-384 ECDSA key pair with both SHA-384 and 
SHA-3-384 might be within some readings of the FIPS, but would 
still be vulnerable to the issue above (imagine a pre-image 
weakness being found in either hash algorithm, all signatures 
with such a key would then become suspect).
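The truncation mentioned above can be observed directly. A small sketch with the Python `cryptography` package (an illustration of the mechanics, not an endorsement of mixing hashes): ECDSA truncates the digest to the bit length of the curve order, so a P-384 key accepts SHA-512 input, yet the result is not interchangeable with a SHA-384 signature, since SHA-384 uses different initial values and is not a prefix of SHA-512.

```python
# Sketch: P-384 ECDSA will sign with SHA-512 (the 512-bit digest is
# truncated to its leftmost 384 bits), but verification is bound to the
# hash actually used -- SHA-384 output differs from truncated SHA-512.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP384R1())
msg = b"example tbsCertificate bytes"

sig = key.sign(msg, ec.ECDSA(hashes.SHA512()))
key.public_key().verify(sig, msg, ec.ECDSA(hashes.SHA512()))  # succeeds

try:
    key.public_key().verify(sig, msg, ec.ECDSA(hashes.SHA384()))
except InvalidSignature:
    pass  # different digest, hence an invalid signature
```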


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: GoDaddy Underscore Revocation Disclosure

2019-02-08 Thread Jakob Bohm via dev-security-policy

On 09/02/2019 01:36, Santhan Raj wrote:

On Friday, February 8, 2019 at 4:09:32 PM UTC-8, Joanna Fox wrote:

I agree on the surface this bug appears to be the same, but the root cause is 
different. The issue for bug 1462844 was a specific status not counting as 
active when it was. To mitigate this issue, we updated the query to include the 
missing status. However, we are also in the process of streamlining the data 
structures to simplify these types of queries.

For the underscore certificates, these were non-active, not even considered as 
provisioned since they were not delivered to a customer and not publicly used 
for any encryption. These certificates were effectively abandoned by our system.


Is the term "certificate" accurate in this case? Assuming you embed SCTs within 
the EE cert, what you have is technically a pre-cert that was abandoned (not meant to be 
submitted to CT). Right? I ask because both the cert you linked are pre-certs, and I 
understand signing a pre-cert is intent to issue and is treated the same way, but still 
wanted to clarify.

Or by non-active certificate, are you actually referring to a fully signed EE 
that was just not delivered to the customer?



And in either case, the only reasonable sequences of events within
expectations (and possibly within requirements) would have been:

Good Scenario A:
1. Pre-certificate is logged in CT.
2. Matching certificate is signed by CA key.
3. Signed certificate is logged in CT.
4. At this point the customer _might_ retrieve the certificate from CT
  without knowledge of CA.
5. Thus the certificate is in reality active, despite any records the CA
  may have of not delivering it directly to the customer.

Good Scenario B:
1. Pre-certificate is logged in CT.
2. CA decides (for any reason) not to actually sign the actual
  certificate.
3. The serial number listed in the CT pre-certificate is formally
  revoked in CRL and OCSP.  This is done once the decision not to
  sign is final.
4. If possible, label these revocations differently in CRL and OCSP
  responses, to indicate to independent CT auditors that the EE was
  never signed and therefore not logged to CT.

Scenario B should be very rare, as the process from pre-cert logging to
final cert logging should typically be automatic or occur within a
single root key ceremony. Basically, something unusual would have to
happen at the very last moment, such as an incoming report that a
relied-upon external system (Global DNS, Internet routing, reliable
identity database etc.) may have been compromised during the vetting.

In neither case would a CT-logged serial number be left in a
not-active but not-revoked state beyond a relevant procedural
delay.

As pointed out in other recent cases, CA software must allow
revoking a certificate without making it publicly valid first, in
case Scenario B happens.
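For reference, whether a given entry is a pre-certificate (as in the crt.sh links discussed above) or a final certificate can be checked mechanically by looking for the critical CT poison extension (OID 1.3.6.1.4.1.11129.2.4.3, RFC 6962). A sketch, assuming the Python `cryptography` package; the self-signed certificate here is only a toy to exercise the check:

```python
# Sketch: distinguish a CT pre-certificate from a final certificate by
# the presence of the critical "poison" extension defined in RFC 6962.
import datetime
from cryptography import x509
from cryptography.x509.oid import ExtensionOID, NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def is_precert(cert: x509.Certificate) -> bool:
    try:
        cert.extensions.get_extension_for_oid(ExtensionOID.PRECERT_POISON)
        return True
    except x509.ExtensionNotFound:
        return False

# Toy self-signed pre-certificate, just to exercise the check:
key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.test")])
now = datetime.datetime.now(datetime.timezone.utc)
precert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=1))
    .add_extension(x509.PrecertPoison(), critical=True)
    .sign(key, hashes.SHA256())
)
```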



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: AW: AW: Incident Report DFN-PKI: Non-IDNA2003 encoded international domain names

2019-01-25 Thread Jakob Bohm via dev-security-policy

On 25/01/2019 19:23, Buschart, Rufus wrote:

Hello Jakob!


-Ursprüngliche Nachricht-
Von: dev-security-policy  Im 
Auftrag von Jakob Bohm via dev-security-policy
Gesendet: Freitag, 25. Januar 2019 18:47

Example, if the subscriber fills out the human readable order form like
this:
www.example.com
chat.example.com
example.com
detteerenprøve.example.com
www.example.net
example.net
*.eXample.com
*.examPle.nEt
192.0.2.3
the best choice is probably CN=*.example.com which is one of the SANs and is a 
wildcard covering the first SAN (www.example.com).
The BRs do not require a specific choice among the 9 SANs that would go in the 
certificate (all of which must of cause be validated).
The user entered U-label detteerenprøve.example.com must of cause be converted 
to A-label xn--detteerenprve-lnb.example.com
before checking and encoding.


If a CA receives such a list and creates the CSR for the customer (how does the 
CA do this without access to the customer's private key?), they of course have to 
perform an IDNA translation from U-label to A-label. And as we have learned, the 
BRGs (indirectly) enforce the use of IDNA2003. But if the CA receives a filled-in 
CSR, they don't perform (not even indirectly) an IDNA translation and have no 
obligation to check whether the entries are valid IDNA2003 A-labels.



I was not suggesting that the CA creates the CSR.

I was simply suggesting the common practice of the CA using the CSR
mostly as a "proof of possession" of the private key, but not as a
precise specification of the desired certificate contents.

For example, a CSR may (due to old tools or human error) contain
additional subject DN elements (such as eMailAddress).  Of course the
CSR can't be completely different from the requested certificate, as
that would be open to a man-in-the-middle attack where an attacker
submits the victim's CSR with a completely unrelated order.
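A minimal sketch of that practice, assuming the Python `cryptography` package (names here are illustrative): the CA verifies the CSR's self-signature as proof of possession and takes only the public key from it, while the subject and SANs come from the vetted order.

```python
# Sketch: use the CSR only as proof of possession of the private key;
# stray subject attributes (e.g. eMailAddress from an old tool) are
# ignored when building the actual certificate contents.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

subscriber_key = ec.generate_private_key(ec.SECP256R1())
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        # legacy attribute a CA would typically drop, not copy
        x509.NameAttribute(NameOID.EMAIL_ADDRESS, "admin@example.com"),
        x509.NameAttribute(NameOID.COMMON_NAME, "example.com"),
    ]))
    .sign(subscriber_key, hashes.SHA256())
)

assert csr.is_signature_valid            # proof of possession
certified_key = csr.public_key()         # the only field taken from the CSR
```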



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: AW: Incident Report DFN-PKI: Non-IDNA2003 encoded international domain names

2019-01-25 Thread Jakob Bohm via dev-security-policy
On 25/01/2019 16:06, Tim Hollebeek wrote:
> 
>> On 2019-01-24 20:19, Tim Hollebeek wrote:
>>> I think the assertion that the commonName has anything to do with what
>>> the user would type and expect to see is unsupported by any of the
>>> relevant standards, and as Rob noted, having it be different from the
>>> SAN strings is not in compliance with the Baseline Requirements.
>>
>> The BRs do not say anything about it.
> 
> Rob already quoted it: "If present, this field MUST contain a single IP
> address
> or Fully-Qualified Domain Name that is one of the values contained in the
> Certificate's subjectAltName extension".
> 
> The only reason it's allowed at all is because certain legacy software
> implementations would choke if it were missing.
> 
>>> Requiring translation to a U-label by the CA adds a lot of additional
>>> complexity with no benefit.
>>
>> I have no idea what is so complex about that. When generating the
>> certificate, it's really just calling a function. On the other hand, when
> reading
>> a certificate you have to guess what they did.
> 
> Given that it has no benefit at all, any complexity is too much.  As I
> mentioned
> above, its existence is merely a workaround for a bug in obsolete software.
> 

Not so much a bug as a previous specification.  In other words, 
backwards compatibility with old systems and software predating the full 
migration to SAN-only checking in modern clients.

As with all backwards compatibility, it is about interoperating with 
systems that followed the common standards and practices of their time, as 
they were.

While the Python library mentioned apparently failed to match any valid 
IDNA SAN value to any actual IDNA host name, the much more common case 
is software that will process the A-label as an ASCII string and compare 
it to either the CN or the list of SAN values.

One such example is GNU Wget, which is sometimes used to bootstrap the 
downloading of updated client software (by typing/pasting a known URL).
GNU Wget was notably slow in implementing SAN support, doing only the 
matching of host name to CN.  For some compile options, SAN checking of 
host names was implemented as recently as the year 2014.

For cases this old, the pragmatic approach is putting the A-label in CN 
(if the subscriber's chosen "most important to match" name is a DNSname) 
or putting the IP address in CN (if that most important name is an 
IPname).

In either case, I suspect (but have no data) that using the lower case 
(ASCII lower case) name form will be the most compatible choice.  If the 
subscriber does not specify which name is most important, choose the 
first DNS/IP name in the subscriber input, unless a later name is a 
wildcard that also matches that first name.  Ideally, the subscriber 
would indicate all desired choices directly in the CSR, but in practice, 
most subscribers will use broken scripts to generate the CSR, not a 
carefully crafted filled in template.

Example, if the subscriber fills out the human readable order form like 
this:
   www.example.com
   chat.example.com
   example.com
   detteerenprøve.example.com
   www.example.net
   example.net
   *.eXample.com
   *.examPle.nEt
   192.0.2.3
the best choice is probably CN=*.example.com which is one of the SANs 
and is a wildcard covering the first SAN (www.example.com).  The BRs 
do not require a specific choice among the 9 SANs that would go in the 
certificate (all of which must of course be validated).  The user entered 
U-label detteerenprøve.example.com must of course be converted to A-label 
xn--detteerenprve-lnb.example.com before checking and encoding.

This way, out of the correct uses of the certificate, only the 
example.net names and the IP address would not be accepted by older software.

Clients of course usually check only CN or only SAN; it's the CA that 
deals with the complexity of trying to please everyone without harming 
anyone.
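A hypothetical sketch of that selection rule (my reading of the paragraphs above, with illustrative function names; Python's built-in "idna" codec implements IDNA2003 ToASCII, the encoding the BRs indirectly require):

```python
# Sketch: normalise each ordered name to a lowercase A-label, then pick
# the CN as a wildcard SAN covering the first name, falling back to the
# first name itself.
def to_a_label(name: str) -> str:
    name = name.lower()  # ASCII case-fold; the codec keeps ASCII labels as-is
    if name.startswith("*."):
        return "*." + to_a_label(name[2:])  # wildcard label kept literally
    return name.encode("idna").decode("ascii")  # IDNA2003 ToASCII per label

def choose_cn(names: list[str]) -> str:
    names = [to_a_label(n) for n in names]
    first = names[0]
    for n in names:
        if n.startswith("*.") and first.endswith(n[1:]):
            return n  # wildcard covering the first SAN
    return first

order = ["www.example.com", "chat.example.com", "example.com",
         "detteerenprøve.example.com", "www.example.net", "example.net",
         "*.eXample.com", "*.examPle.nEt", "192.0.2.3"]
```

With the order above this yields CN=*.example.com, matching the conclusion in the text.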


>> And if it's really to complex, just remove the CN, or is that too complex
> too?
> 
> See above.
> 
>>> What users type and see are issues that are best left to Application
>>> Software Suppliers (browsers).
>>
>> So you're saying all the other software that deals with certificates
> should
>> instead add complexity?
> 
> What they actually do is to ignore this obsolete field, and process the
> subjectAltNames.  There's no additional complexity for them because
> they already are doing the conversion of IDN names.
> 
> -Tim
> 


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: Odp.: Odp.: Odp.: 46 Certificates issued with BR violations (KIR)

2019-01-21 Thread Jakob Bohm via dev-security-policy

On 18/01/2019 19:21, piotr.grabow...@kir.pl wrote:

W dniu piątek, 18 stycznia 2019 18:44:23 UTC+1 użytkownik Jakob Bohm napisał:

On 17/01/2019 21:12, Wayne Thayer wrote:

Hello Piotr,

On Thu, Jan 17, 2019 at 6:23 AM Grabowski Piotr 
wrote:


Hello Wayne,



I am very sorry for the  delay. Please find below our answers to Ryan's
questions. Regarding the question why we didn't report this misissuance
of this 1 certificate as separate incident in my opinion it is still the
same incident. I can create new incident if you want me to. Taking into
account my email and Ryan's response I assumed it is not required to
create separate incident for  misissuance of this 1 certificate.


I am not so concerned about creating a new bug and/or a new email thread.

My concern is that you did not report the new misissuance even though you
appear to have known about it. Why was it not reported as part of the
existing incident, and what is being done to ensure that future
misissuances - either in relation to existing or new incidents - are
promptly reported?

So our comments in blue:



I don't think it's reasonable to push the problem to your CA software
vendor.

- We are not pushing the problem to CA software vendor. I have just tried
to explain how it happened.



If Verizon does not provide this support, what steps will your CA take?

- We are almost sure that Verizon will provide at least policy field
validation for maximum field size, which should be sufficient to eliminate
the last gap in our policy templates, the gap which in turn led to the
misissuance of this certificate. If Verizon does not provide this feature,
we will consider using another UniCERT component - the ARM plug-in - which
analyzes requests, but this means custom development for us. It would be a
big change in many processes, and a challenge to preserve the current
security state as well, so this should be an absolute last resort.


If you know what those steps are, is there reason not to take them now? If
you do not know what those steps are, when will you know?

The main reason why we are not taking these steps now (changing processes
and custom development) is our firm conviction that the cost and the risk of
implementing them is much, much higher than the risk of waiting for that
feature to be delivered by the vendor.  Just to recall, the only remaining
gap is one that, in the worst case, can produce a specific certificate
field longer than the RFC specifies. Of course we are practicing due care
and have put in place as many counter-measures as we can (procedures,
labels above the fields).



Your software is producing invalid and improper certificates. The
responsibility for that ultimately falls on the CA, and understanding what
steps the CA is taking to prevent that is critical. It appears that the
steps, today, rely on human factors. As the past year of incident reports
have shown, relying solely on human factors is not a reasonable practice.

-I agree entirely with you, that's why we will keep exerting pressure on
Verizon to deliver:

o   Policy field size validation – in our opinion it is a simple change request 
and should be delivered ASAP.

o   native x509lint or zlint feature




When can we expect an update from you on Verizon's response to your
request? If I was the customer, I would expect a prompt response from
Verizon.



Additional questions:

- Is this the same CA software that was used on Verizon's own CA or
   SubCA, which had serious problems some time ago?

- Who else (besides KIR) is still using this software to run a public
   CA?


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Hello Jakob,
Could you give me the link to the incident with Verizon software you're talking 
about?



There were a number of Verizon incident related messages on this
list/newsgroup from November 2016 to January 2017, all in connection
with Digicert taking over and cleaning up the Verizon CAs.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

