Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-18 Thread Jakob Bohm via dev-security-policy


This is part of why I'm dismissive of the solution; not because it isn't
technically workable, but because it's practically non-viable when
considering the overall set of ecosystem concerns. These sorts of
considerations all factor into the decision making and recommendation for
potential remediation for CAs: trying to consider all of the interests, but
also all of the limitations.



#2 Non-robust CRL implementations may lead to security issues

Relying parties using applications that don't do a proper revocation check do have a security risk. This security risk is not introduced by our proposal but is inherent to not implementing core functionality of public key infrastructures. The new solution rewards relying parties that properly follow the standards. Indeed, a relying party that already has a robust CRL (or OCSP check) implementation would not have to bear any additional security risk. They would also benefit from the fact that our solution can be implemented very quickly, and so they would quickly mitigate the security issue. On the other hand, relying parties with a non-functional revocation mechanism will have to take corrective measures. And we can consider it a fair balancing of work and risk when the workload and risk fall on the actors with poor current implementations.




It can be noted that the new solution gives some responsibility to the relying parties, whereas the original solution gives responsibility to the subscribers, who did nothing wrong in the first place. The new solution can be considered fairer in this regard.



I think this greatly misunderstands the priority of constituencies here, in
terms of what represents a "fair" solution. The shuffling of responsibility
to relying parties is, ostensibly, an anti-goal we try to minimize. As I
explained above, the practical impact for server operators is not actually
decreased, but increased, and so the overall "risk" here is now greater
than the "current" solution.

With respect to revocation and threat models, I'm not sure this thread is
the best medium to get into it, but "core functionality of public key
infrastructure" would arguably be a misstatement. Revocation functionality
reflected in X.509/RFC 5280 is tied to a number of inherent assumptions
about particular PKI designs and capabilities (e.g. the connectivity of
clients to an X.500/LDAP directory, the failure tolerances of clients
respective to the security goals, the number and type of certificates
issued), and it's not, in fact, a generalized solution. PKI is, at best, a
series of tools to be adjusted for different situations, and those tools
are not inherently right, or their absence not inherently wrong.

As it relates to the set of clients you're considering, again, this is the
difference between the "real world" of implementations and the idealized
world of the 90s that presumed a directory would co-exist with the PKI (and
serve an inherent function). The Directory doesn't exist, nor do PKI
clients broadly implement HTTP in the first place, let alone LDAP, and so
presumptions about the availability of external data sources to handle this
don't actually hold up. This is where the current approach tries to be
both practical and pragmatic, by working in a manner that complements the
existing design constraints, rather than conflicts with them.



However, this solution seems to rely not on PKI clients having fail-hard
CRL/OCSP handling, but on them having a way to enforce SubCA
revocations.  Such mechanisms include Mozilla OneCRL, the static CRL
support in OpenSSL, and the "Untrusted Certificates" feature in MS CryptoAPI.
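As an illustration of the kind of check such mechanisms perform, here is a minimal sketch using the Python 'cryptography' library. The issuer name and serial number are hypothetical, and this is not how OneCRL or CryptoAPI are actually implemented; it only shows a fail-hard lookup of a SubCA serial against a static CRL:

```python
# Hypothetical sketch: enforce a SubCA revocation against a static CRL.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

issuer_key = ec.generate_private_key(ec.SECP256R1())
issuer_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root")])
revoked_subca_serial = 0x1234  # hypothetical serial of a revoked SubCA

now = datetime.datetime(2020, 11, 18)
revoked_entry = (x509.RevokedCertificateBuilder()
                 .serial_number(revoked_subca_serial)
                 .revocation_date(now)
                 .build())
crl = (x509.CertificateRevocationListBuilder()
       .issuer_name(issuer_name)
       .last_update(now)
       .next_update(now + datetime.timedelta(days=30))
       .add_revoked_certificate(revoked_entry)
       .sign(issuer_key, hashes.SHA256()))

def subca_is_revoked(serial: int) -> bool:
    """Fail-hard check: treat the SubCA as revoked if its serial is listed."""
    return crl.get_revoked_certificate_by_serial_number(serial) is not None
```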


...



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-17 Thread Jakob Bohm via dev-security-policy
icate hierarchies, a better awareness and
documentation during CA ceremonies, and a plan for agility for all
certificates beneath those hierarchies. In effect, it proposes an
alternative that specifically seeks to avoid those benefits, but
unfortunately, does so without achieving the same security properties. It's
interesting that "financial strain" would be highlighted as a goal to
avoid, because that framing also seems to argue that "compliance should be
cheap" which... isn't necessarily aligned with users' security needs.



Only tangentially related to this discussion.  This entire situation
arose out of an inconsistency between policy requirements/best practices
for the OCSP EKU and for any other EKU OID.  CAs that applied the
general EKU chaining policy/best practice to SubCAs authorized to issue
their own OCSP-signing end-certs would inadvertently violate the letter
of the OCSP policy requirements and cause a potential for misuse by
authorized 3rd party SubCAs.


I realize this is almost entirely critical, and I hope it's taken as
critical of the proposal, not of the investment or interest in this space.
I think this sort of analysis is exactly the kind of analysis we can and
should hope that any and all CAs can and should be able to do. I think if a
CA had done such a fantastic write-up, as you have here, for a very
specific situation, it would have reflected very positively on the CA,
regardless of whether it was accepted as a mitigation. We should expect CAs
to be the very best of us, to hire the very best, and to be the very best
at explaining. Unfortunately, as your post highlights, CAs have largely
optimized for reducing "financial strain", and thus, haven't really tried
to be the best or have the best. And I think, for users, that's unfortunate.




Enjoy

Jakob


Re: Policy 2.7.1: MRSP Issue #186: Requirement to Disclose Self-signed Certificates

2020-11-12 Thread Jakob Bohm via dev-security-policy

On 2020-11-12 05:15, Ben Wilson wrote:

Here is an attempt to address the comments received thus far. In Github,
here is a markup:

https://github.com/BenWilson-Mozilla/pkipolicy/commit/ee19ee89c6101c3a6943956b91574826e34c4932

This sentence would be deleted: "These requirements include all
cross-certificates which chain to a certificate that is included in
Mozilla’s CA Certificate Program."

And the following would be added:

"A certificate is deemed to directly or transitively chain to a CA
certificate included in Mozilla’s CA Certificate Program if:

(1)   the certificate’s Issuer Distinguished Name matches (according to the
name-matching algorithm specified in RFC 5280, section 7.1) the Subject
Distinguished Name in a CA certificate or intermediate certificate that is
in scope according to section 1.1 of this Policy, and

(2)   the certificate is signed with a Private Key whose corresponding
Public Key is encoded in the SubjectPublicKeyInfo of that CA certificate or
intermediate certificate.
Thus, these requirements also apply to so-called reissued/doppelganger CA
certificates (roots and intermediates) and to cross-certificates."
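The two conditions in the proposed text can be illustrated with a short sketch, using the Python 'cryptography' library. Note that the simple Name equality below approximates, but is stricter than, the RFC 5280 section 7.1 name-matching algorithm, and the certificates constructed are hypothetical:

```python
# Sketch of the proposed two-condition "chains to" test:
# (1) Issuer DN matches the in-scope CA cert's Subject DN, and
# (2) the signature verifies against that CA cert's SubjectPublicKeyInfo key.
import datetime
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

def chains_to(cert: x509.Certificate, ca_cert: x509.Certificate) -> bool:
    if cert.issuer != ca_cert.subject:          # condition (1), simplified
        return False
    try:                                         # condition (2)
        ca_cert.public_key().verify(
            cert.signature,
            cert.tbs_certificate_bytes,
            ec.ECDSA(cert.signature_hash_algorithm))
        return True
    except InvalidSignature:
        return False

def self_signed(key, name, days):
    start = datetime.datetime(2020, 11, 12)
    return (x509.CertificateBuilder()
            .subject_name(name).issuer_name(name)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(start)
            .not_valid_after(start + datetime.timedelta(days=days))
            .sign(key, hashes.SHA256()))

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root")])
root = self_signed(key, name, 365)
doppel = self_signed(key, name, 730)  # reissued root: same key pair, new serial
print(chains_to(doppel, root))        # the doppelganger satisfies both conditions
```

Under this definition the reissued/doppelganger root "chains to" the included root, which is exactly why it would be in scope for disclosure.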

I think it is important not to lose sight of the main reason for this
proposed change-- there has been confusion about whether re-issued root CA
certificates need to be disclosed in the CCADB.

I look forward to your additional comments and suggestions.

Thank you,

Ben


On Mon, Nov 2, 2020 at 11:14 AM Corey Bonnell via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


As an alternate proposal, I suggest replacing the third paragraph of
section 5.3, which currently reads:

"These requirements include all cross-certificates which chain to a
certificate that is included in Mozilla’s CA Certificate Program."

with:

"A certificate is considered to directly or transitively chain to a
certificate included in Mozilla’s CA Certificate Program if there is a CA
or Intermediate certificate in scope (as defined in section 1.1 of this
Policy) where both of the following are true:
1)  The certificate’s Issuer Distinguished Name matches (according to
the name-matching algorithm specified in RFC 5280, section 7.1) the Subject
Distinguished Name of the certificate in scope, and
2)  The certificate is signed with a Private Key whose corresponding
Public Key is encoded in the SubjectPublicKeyInfo of the certificate in
scope."

This proposal better defines the meaning of chaining to certificates
included in the Mozilla CA program and covers the various scenarios that
have caused issues historically concerning cross-certificates and
self-signed certificates.

Thanks,
Corey

On Wednesday, October 28, 2020 at 8:25:50 PM UTC-4, Ben Wilson wrote:

Issue #186 in Github <https://github.com/mozilla/pkipolicy/issues/186>
deals with the disclosure of CA certificates that directly or transitively
chain up to an already-trusted, Mozilla-included root. A common scenario
for the situation discussed in Issue #186 is when a CA creates a second (or
third or fourth) root certificate with the same key pair as the root that
is already in the Mozilla Root Store. This problem exists at the
intermediate-CA-certificate level, too, where a self-signed
intermediate/subordinate CA certificate is created and not reported.

Public disclosure of such certificates is already required by section 5.3
of the MRSP, which reads, "All certificates that are capable of being used
to issue new certificates, and which directly or transitively chain to a
certificate included in Mozilla’s CA Certificate Program, MUST be operated
in accordance with this policy and MUST either be technically constrained
or be publicly disclosed and audited."

There have been several instances where a CA operator has not disclosed a
CA certificate under the erroneous belief that because it is self-signed it
cannot be trusted in a certificate chain beneath the already-trusted,
Mozilla-included CA. This erroneous assumption is further discussed in Issue
#186 <https://github.com/mozilla/pkipolicy/issues/186>.

The third paragraph of MRSP section 5.3 currently reads, "These
requirements include all cross-certificates which chain to a certificate
that is included in Mozilla’s CA Certificate Program."

I recommend that we change that paragraph to read as follows:

"These requirements include all cross-certificates *and self-signed
certificates (e.g. "Issuer" DN is equivalent to "Subject" DN and public key
is signed by the private key) that* chain to a CA certificate that is
included in Mozilla’s CA Certificate Program*, and CAs must disclose such
CA certificates in the CCADB*.

I welcome your recommendations on how we can make this language even more
clear."



How would that phrasing cover doppelgangers of intermediary SubCAs under 
an included root CA?




Enjoy

Jakob

Re: Policy 2.7.1: MRSP Issue #192: Require information about auditor qualifications in the audit report

2020-11-06 Thread Jakob Bohm via dev-security-policy
o the current document are published by the
auditor organization (e.g. E) as part of their advertising and as part
of their contractual obligations to the audited CAs.

An example of such a statement could be:

-- Begin example document --
Statement of qualifications of WebTrust auditor Jack F. Honest Esq.
(JAH2):

Jack Fictional Honest graduated with a CPA degree from Harvard Law
School in 1975, grade average B+, and worked as a classified documents
security inspector for the USAF, reaching the rank of Colonel in 1985.
Jack retired honorably from the Air Force in 1990 to work for Deloitte
auditing, and is now a senior partner in Deloitte's Northern California
office.  Jack also holds a Masters degree in Cryptology from MIT (1998)
and a Bachelors degree in computer software, also from MIT (2009).  Jack
was one of the original authors of the IETF public key certificate
standard (PKIX, RFC 5280).
-- End of example document --

As previously mentioned, all these statements, if published and not
hidden on the roster website, would be verified for truth by the roster-
keeping organization (CPA Canada for WebTrust, some European
organization for eIDAS), so relying parties can rely on that information
being true.  Thus Mozilla could trust that an audit signed by JAH2 and
published on the WebTrust roster was actually signed by a WebTrust-
qualified auditor with these qualifications, and not by some other
WebTrust auditor who may never have passed the requirements to join the
WebTrust program, nor by one of the few named auditors that Mozilla has
publicly stated they won't accept audits from.




Enjoy

Jakob


Re: Policy 2.7.1: MRSP Issue #153: Cradle-to-Grave Contiguous Audits

2020-11-06 Thread Jakob Bohm via dev-security-policy

On 2020-11-05 22:43, Tim Hollebeek wrote:

So, I'd like to drill down a bit more into one of the cases you discussed.
Let's assume the following:

1. The CAO [*] may or may not have requested removal of the CAC, but removal
has not been completed.  The CAC is still trusted by at least one public
root program.

2. The CAO has destroyed the CAK for that CAC.

The question we've been discussing internally is whether destruction alone
should be sufficient to get you out of audits, and we're very skeptical
that's desirable.

The problem is that destruction of the CAK does not prevent issuance by
subCAs, so issuance is still possible.  There is also the potential
possibility of undisclosed subCAs or cross relationships to consider,
especially since some of these cases are likely to be shutdown scenarios for
legacy, poorly managed hierarchies.  Removal may be occurring *precisely*
because there are doubts about the history, provenance, or scope of previous
operations and audits.

We're basically questioning whether there are any scenarios where allowing
someone to escape audits just because they destroyed the key is likely to
lead to good outcomes as opposed to bad ones.  If there aren't reasonable
scenarios where it is necessary to be able to remove CACs from audit scope
through key destruction while they are still trusted by Mozilla, it's
probably best to require audits as long as the CACs are in scope for
Mozilla.

Alternatively, if there really are cases where this needs to be done, it
would be wise to craft language that limits this exception to those
scenarios.



I believe that destruction of the Root CA Key should only end audit
requirements for the corresponding Root CA itself, not for any of its
still trusted SubCAs.

One plausible (but hypothetical) sequence of events is this:

1. Begin Root ceremony with Auditors present.

1.1 Create Root CA Key pair
1.2 Sign Root CA SelfCert
1.3 Create 5 SubCA Key pairs
1.4 Sign 5 SubCA pre-certificates
1.5 Request CT Log entries for the 5 SubCA pre-certificates
1.6 Sign 5 SubCA certificates with embedded CTs
1.7 Sign, but do not publish a set of post-dated CRLs for various 
contingencies
1.8 Sign, but do not publish a set of post-dated revocation OCSP 
responses for those contingencies
1.9 Sign, but do not yet publish, a set of post-dated non-revocation 
OCSP responses confirming that the SubCAs have not been revoked on each 
date during their validity.

1.10 Destroy Root CA Key pair.

2. Initiate audited storage of the unreleased CRL and OCSP signatures.

3. End Root ceremony, end root CAC audit period.

4. Release the public audit report of this ceremony; this ends the ordinary 
audits required for the Root CA Cert.  However, audit reports confirming 
that only the correct contingency and continuation OCSP/CRL signatures were 
released from storage remain technically needed.


5. Maintain revocation servers that publish the prepared CRLs and OCSP 
answers according to their embedded dates.  Feed their publication queue

from audited batch releases from the storage.

6. Operate the 5 SubCAs under appropriate security and audit schemes 
detailed in CP/CPS document pairs.


7. Apply for inclusion in the Mozilla root program.
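The signing steps of the ceremony above can be sketched in miniature with the Python 'cryptography' library. This is an assumed, simplified illustration only: a real ceremony would use an HSM rather than an in-memory key, and the names and dates are hypothetical.

```python
# Steps 1.1, 1.2, 1.7 and 1.10 in miniature: create the root key,
# self-sign the root cert, pre-sign a post-dated contingency CRL,
# then discard the key so no further root signatures are possible.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

root_key = ec.generate_private_key(ec.SECP256R1())           # 1.1
root_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Ceremony Root")])
start = datetime.datetime(2020, 11, 6)
root_cert = (x509.CertificateBuilder()                        # 1.2
             .subject_name(root_name).issuer_name(root_name)
             .public_key(root_key.public_key())
             .serial_number(x509.random_serial_number())
             .not_valid_before(start)
             .not_valid_after(start + datetime.timedelta(days=3650))
             .sign(root_key, hashes.SHA256()))

# 1.7: sign, but do not publish, a post-dated contingency CRL.
future = start + datetime.timedelta(days=365)
contingency_crl = (x509.CertificateRevocationListBuilder()
                   .issuer_name(root_name)
                   .last_update(future)
                   .next_update(future + datetime.timedelta(days=90))
                   .sign(root_key, hashes.SHA256()))

del root_key  # 1.10: with the key destroyed, only pre-signed artifacts remain
```

The pre-signed CRL remains verifiable against the root certificate even though the key no longer exists, which is the whole point of the scenario.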


In the above hypothetical scenario, there would be no way for the 
CAO to misissue new SubCAs or otherwise misuse the root CA Key Pair, but 
the usual risks associated with the 5 SubCA operations would remain.


Also the CAO would have no way to increase the set of top level SubCAs
or issue revocation statements in any yet-to-be-invented data formats,
even if doing so would be legitimate or even required by the root
programs.

Thus the hypothetical scenario could land the CAO in an impossible 
situation if root program requirements or common CA protocols change in 
a way that requires even one additional signature by the root CA Key Pair.



Enjoy

Jakob


Re: Policy 2.7.1: MRSP Issue #186: Requirement to Disclose Self-signed Certificates

2020-11-02 Thread Jakob Bohm via dev-security-policy

On 2020-10-30 18:45, Ryan Sleevi wrote:

On Fri, Oct 30, 2020 at 12:38 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On 2020-10-30 16:29, Rob Stradling wrote:

Perhaps add: "And also include any other certificates sharing the same
private/public key pairs as certificates already included in the
requirements."  (this covers the situation you mentioned where a
self-signed certificate shares the key pair of a certificate that chains
to an included root).




Already rephrased to the following in my Friday post, as you actually 
quote below.


Perhaps add: "And also include any other certificates sharing the same
private/public key pairs as CA certificates already included in the
requirements."  (this covers the situation Rob mentioned where a
self-signed certificate shares the key pair of a certificate that chains
to an included root).


Jakob,

I agree that that would cover that situation, but your proposed language

goes way, way too far.


Any private CA could cross-certify a publicly-trusted root CA.  How

would the publicly-trusted CA Operator discover such a cross-certificate?
Why would such a cross-certificate be of interest to Mozilla anyway?  Would
it really be fair for non-disclosure of such a cross-certificate to be
considered a policy violation?



I agree with Rob that, while the intent is not inherently problematic, I
think the language proposed by Jakob is problematic, and might not be
desirable.



See above (and below) rephrasing to limit to reusing the private key 
from a CA certificate.





How would my wording include that converse situation (a CA not subject
to the Mozilla policy using their own private key to cross sign a CA
subject to the Mozilla policy)?

I do notice though that my wording accidentally included the case where
the private key of an end-entity cert is used as the key of a private
CA, because I wrote "as certificates" instead of "as CA certificates".



Because "as certificates already included in the requirements" is ambiguous
when coupled with "any other certificates". Rob's example here, of a
privately-signed cross-certificate *is* an "any other certificate", and the
CA who was cross-signed is a CA "already included in the requirements"

I think this intent to restate existing policy falls in the normal trap of
"trying to say the same thing two different ways in policy results in two
different interpretations / two different policies"

Taking a step back, this is the general problem with "Are CA
(Organizations) subject to audits/requirements, are CA Certificates, or are
private keys", and that's seen an incredible amount of useful discussion
here on m.d.s.p. that we don't and shouldn't relitigate here. I believe
your intent is "The CA (Organization) participating in the Mozilla Root
Store shall disclose every Certificate that shares a CA Key Pair with a CA
Certificate subject to these requirements", and that lands squarely on this
complex topic.



This is precisely what I am trying to state in a concise manner, to
avoid overbloating policy with wordy sentences like the one you just
used.


A different way to achieve your goal, and to slightly tweak Ben's proposal
(since it appears many CAs do not understand how RFC 5280 is specified) is
to take a slight reword:

"""
These requirements include all cross-certificates that chain to a CA
certificate that is included in Mozilla’s CA Certificate Program, as well
as all certificates (e.g. including self-signed certificates and non-CA
certificates) issued by the CA that share the same CA Key Pair. CAs must
disclose such certificates in the CCADB.
"""



The words "issued by the CA" are problematic.  I realize that you are 
trying to limit the scope to certificates generated by the 
CA-organization, but as written it could be misconstrued as 
"certificates issued by the CA certificate that share the same CA Key Pair".


Proposed better wording of your text:

These requirements include all cross-certificates that chain to a CA
certificate that is included in Mozilla's CA Certificate Program, as
well as all certificates (e.g. including self-signed certificates and
non-CA certificates) created by the CA organization that share the same
CA Key Pair as any CA certificate or cross-certificate that chains to an
included CA certificate.

The intent of my final words is to also cover reuse of keys belonging
to SubCA certificates.
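The "share the same CA Key Pair" condition in the wording above can be checked mechanically by comparing SubjectPublicKeyInfo encodings. A minimal sketch using the Python 'cryptography' library, with hypothetical certificate names:

```python
# Detect certificates sharing a CA Key Pair by comparing SPKI DER bytes.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

def spki_der(cert: x509.Certificate) -> bytes:
    return cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)

def share_key_pair(a: x509.Certificate, b: x509.Certificate) -> bool:
    return spki_der(a) == spki_der(b)

def make_self_signed(key, cn: str) -> x509.Certificate:
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])
    start = datetime.datetime(2020, 11, 2)
    return (x509.CertificateBuilder()
            .subject_name(name).issuer_name(name)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(start)
            .not_valid_after(start + datetime.timedelta(days=365))
            .sign(key, hashes.SHA256()))

shared_key = ec.generate_private_key(ec.SECP256R1())
included_root = make_self_signed(shared_key, "Included Root")
doppelganger = make_self_signed(shared_key, "Reissued Root")  # in scope per the wording
unrelated = make_self_signed(ec.generate_private_key(ec.SECP256R1()), "Other CA")
```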


However, this might be easier with a GitHub PR, as I think Wayne used to
do, to try to make sure the language is precise in context. It tries to
close the loophole Rob is pointing out about "who issued", while also
trying to address what you seem to be concerned about (re: key reuse). To
Ben's original challenge, it tries to avoid "chain to", since CAs
apparently are confused.

Re: Policy 2.7.1: MRSP Issue #186: Requirement to Disclose Self-signed Certificates

2020-10-30 Thread Jakob Bohm via dev-security-policy

On 2020-10-30 16:29, Rob Stradling wrote:

Perhaps add: "And also include any other certificates sharing the same
private/public key pairs as certificates already included in the
requirements."  (this covers the situation you mentioned where a
self-signed certificate shares the key pair of a certificate that chains
to an included root).


Jakob,

I agree that that would cover that situation, but your proposed language goes 
way, way too far.

Any private CA could cross-certify a publicly-trusted root CA.  How would the 
publicly-trusted CA Operator discover such a cross-certificate?  Why would such 
a cross-certificate be of interest to Mozilla anyway?  Would it really be fair 
for non-disclosure of such a cross-certificate to be considered a policy 
violation?


How would my wording include that converse situation (a CA not subject
to the Mozilla policy using their own private key to cross sign a CA
subject to the Mozilla policy)?

I do notice though that my wording accidentally included the case where
the private key of an end-entity cert is used as the key of a private
CA, because I wrote "as certificates" instead of "as CA certificates".




____
From: Jakob Bohm via dev-security-policy 
Sent: 29 October 2020 14:57
To: mozilla-dev-security-pol...@lists.mozilla.org 

Subject: Re: Policy 2.7.1: MRSP Issue #186: Requirement to Disclose Self-signed 
Certificates


On 2020-10-29 01:25, Ben Wilson wrote:

Issue #186 in Github <https://github.com/mozilla/pkipolicy/issues/186>
deals with the disclosure of CA certificates that directly or transitively
chain up to an already-trusted, Mozilla-included root. A common scenario
for the situation discussed in Issue #186 is when a CA creates a second (or
third or fourth) root certificate with the same key pair as the root that
is already in the Mozilla Root Store. This problem exists at the
intermediate-CA-certificate level, too, where a self-signed
intermediate/subordinate CA certificate is created and not reported.

Public disclosure of such certificates is already required by section 5.3
of the MRSP, which reads, "All certificates that are capable of being used
to issue new certificates, and which directly or transitively chain to a
certificate included in Mozilla’s CA Certificate Program, MUST be operated
in accordance with this policy and MUST either be technically constrained
or be publicly disclosed and audited."

There have been several instances where a CA operator has not disclosed a
CA certificate under the erroneous belief that because it is self-signed it
cannot be trusted in a certificate chain beneath the already-trusted,
Mozilla-included CA. This erroneous assumption is further discussed in Issue
#186 <https://github.com/mozilla/pkipolicy/issues/186>.

The third paragraph of MRSP section 5.3 currently reads, " These
requirements include all cross-certificates which chain to a certificate
that is included in Mozilla’s CA Certificate Program."

I recommend that we change that paragraph to read as follows:

"These requirements include all cross-certificates *and self-signed
certificates (e.g. "Issuer" DN is equivalent to "Subject" DN and public key
is signed by the private key) that* chain to a CA certificate that is
included in Mozilla’s CA Certificate Program*, and CAs must disclose such
CA certificates in the CCADB*.

I welcome your recommendations on how we can make this language even more
clear.



Perhaps add: "And also include any other certificates sharing the same
private/public key pairs as certificates already included in the
requirements."  (this covers the situation you mentioned where a
self-signed certificate shares the key pair of a certificate that chains
to an included root).




Enjoy

Jakob


Re: TLS certificates for ECIES keys

2020-10-30 Thread Jakob Bohm via dev-security-policy

On 2020-10-30 01:50, Matthew Hardeman wrote:

On Thu, Oct 29, 2020 at 6:30 PM Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

The way I read Jacob's description of the process, the subscriber is

"misusing" the certificate because they're not going to present it to TLS
clients to validate the identity of a TLS server, but instead they (the
subscriber) presents the certificate to Apple (and other OS vendors?) when
they know (or should reasonably be expected to know) that the certificate
is
not going to be used for TLS server identity verification -- specifically,
it's instead going to be presented to Prio clients for use in some sort of
odd processor identity parallel-verification dance.



To my knowledge, caching/storing a leaf certificate isn't misuse.  While
they appear to be presenting it in some manner other than via a TLS
session, I don't believe there's any prohibition against such a thing.
Would it cure the concern if they also actually ran a TLS server that does
effectively nothing at the host name presented in the SAN dnsName?




Certainly, whatever's going on with the certificate, it most definitely
*isn't* TLS, and so absent an EKU that accurately describes that other
behaviour,
I can't see how it doesn't count as "misuse", and since the subscriber has
presented the certificate for that purpose, it seems reasonable to describe
it as "misuse by the subscriber".



Not all distribution of a leaf certificate is "use", let alone "misuse".
There are applications that pin the certificate rather than the key.  Is that
misuse?
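The distinction between certificate pinning and key pinning can be shown concretely. A sketch with the Python 'cryptography' library, using a hypothetical renewed certificate that keeps the same key pair: the certificate pin changes across renewal, while the SPKI (key) pin does not.

```python
# Certificate pin vs. SPKI key pin across a renewal with the same key pair.
import datetime
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

def make_cert(key, cn, days):
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])
    start = datetime.datetime(2020, 10, 30)
    return (x509.CertificateBuilder()
            .subject_name(name).issuer_name(name)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(start)
            .not_valid_after(start + datetime.timedelta(days=days))
            .sign(key, hashes.SHA256()))

key = ec.generate_private_key(ec.SECP256R1())
old_cert = make_cert(key, "example.test", 90)
new_cert = make_cert(key, "example.test", 180)  # renewed, same key pair

def cert_pin(cert):
    # Pins the exact certificate bytes (breaks on every reissuance).
    return cert.fingerprint(hashes.SHA256())

def key_pin(cert):
    # Pins only the SubjectPublicKeyInfo (survives reissuance with same key).
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)
    return hashlib.sha256(spki).digest()
```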




Although misuse is problematic, the concerns around agility are probably
more concerning, IMO.  There's already more than enough examples where
someone has done something "clever" with the WebPKI, only to have it come
back and bite everyone *else* in the arse down the track -- we don't need
to
add another candidate at this stage of the game.  On that basis alone, I
think it's worthwhile to try and squash this thing before it gets any more
traction.



My question is by what rule do you squash this thing that doesn't also
cover a future similar use by a third-party relying party that makes
"additional" use of some subscriber's certificate.




Given that Apple is issuing another certificate for each processor anyway,
I
don't understand why they don't just embed the processor's SPKI directly in
that certificate, rather than a hash of the SPKI.  P-256 public keys (in
compressed form) are only one octet longer than a SHA-256 hash.  But
presumably there's a good reason for not doing that, and this isn't the
relevant forum for discussing such things anyway.



Presumably this is so that the data processors can choose a key for the
encryption of their data shards and bind it to a DNS name demonstrated to
be under the data processor's control via a standard CA issuance process,
without abstracting the whole thing away to certificates controlled by
Apple and/or Google, and to demonstrate that the fractional data shard
holder's domain was externally validated by a party that isn't Apple or
Google.

People scrape and analyze other parties' leaf certificates all the time.
What those third parties do with those certificates (if anything) is up to
those third parties.

If a third party can do things which causes a subscriber's certificate to
be revokable for misuse without having derived or acquired the private key,
I hesitate to call that ridiculous, but it is probably unsustainable.
Extending upon that, if the mere fact that the subscriber and the author of
the relying party validation agent are part of the same corporate hierarchy
changes the decision for the same set of circumstances, that's suspect.



Cryptographically, I think the concern is this:

In this scheme, the authenticated server B will use the corresponding
private key in a mathematically complex protocol that is neither TLS nor
CMS.  It is conceivable that said protocol may have a weakness that
allows a clever opponent M to exchange traffic with B in order to 
discover B's private key.


Thus using a WebPKI "Server Authentication" certificate to bind a public
key used by party A to identify party B in the "prio" protocol risks
creating a family of certificates with easily compromised keys.

Thus it makes sense for the involved CAs (such as Let's Encrypt) to 
issue these certificates with a unique EKU other than the generic 
"Server Authentication" traditionally associated with TLS.
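What such a unique EKU would look like can be sketched with the Python 'cryptography' library. The private EKU OID below is made up for illustration; the point is that a relying party can distinguish it from the generic id-kp-serverAuth.

```python
# Issue a cert with a hypothetical dedicated EKU instead of serverAuth,
# and check whether it asserts TLS server authentication.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID, ObjectIdentifier

PRIO_EKU = ObjectIdentifier("1.3.6.1.4.1.99999.1")  # hypothetical private OID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "processor.example")])
start = datetime.datetime(2020, 10, 30)
cert = (x509.CertificateBuilder()
        .subject_name(name).issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(start)
        .not_valid_after(start + datetime.timedelta(days=90))
        .add_extension(x509.ExtendedKeyUsage([PRIO_EKU]), critical=False)
        .sign(key, hashes.SHA256()))

def allows_server_auth(c: x509.Certificate) -> bool:
    eku = c.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
    return ExtendedKeyUsageOID.SERVER_AUTH in eku
```

A TLS client honoring EKU would then reject this certificate for server authentication, while the "prio" verifier could require the dedicated OID.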



Enjoy

Jakob


Re: TLS certificates for ECIES keys

2020-10-29 Thread Jakob Bohm via dev-security-policy
in
use for this purpose.

So, mdsp folks and root programs: Can a CA or a Subscriber participate in
the above system without violating the relevant requirements?

Thanks,
Jacob




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7.1: MRSP Issue #186: Requirement to Disclose Self-signed Certificates

2020-10-29 Thread Jakob Bohm via dev-security-policy

On 2020-10-29 01:25, Ben Wilson wrote:

Issue #186 in Github <https://github.com/mozilla/pkipolicy/issues/186>
deals with the disclosure of CA certificates that directly or transitively
chain up to an already-trusted, Mozilla-included root. A common scenario
for the situation discussed in Issue #186 is when a CA creates a second (or
third or fourth) root certificate with the same key pair as the root that
is already in the Mozilla Root Store. This problem exists at the
intermediate-CA-certificate level, too, where a self-signed
intermediate/subordinate CA certificate is created and not reported.

Public disclosure of such certificates is already required by section 5.3
of the MRSP, which reads, "All certificates that are capable of being used
to issue new certificates, and which directly or transitively chain to a
certificate included in Mozilla’s CA Certificate Program, MUST be operated
in accordance with this policy and MUST either be technically constrained
or be publicly disclosed and audited."

There have been several instances where a CA operator has not disclosed a
CA certificate under the erroneous belief that because it is self-signed it
cannot be trusted in a certificate chain beneath the already-trusted,
Mozilla-included CA. This erroneous assumption is further discussed in Issue
#186 <https://github.com/mozilla/pkipolicy/issues/186>.

The third paragraph of MRSP section 5.3 currently reads, " These
requirements include all cross-certificates which chain to a certificate
that is included in Mozilla’s CA Certificate Program."

I recommend that we change that paragraph to read as follows:

"These requirements include all cross-certificates *and self-signed
certificates (e.g. "Issuer" DN is equivalent to "Subject" DN and public key
is signed by the private key) that* chain to a CA certificate that is
included in Mozilla’s CA Certificate Program*, and CAs must disclose such
CA certificates in the CCADB*.

I welcome your recommendations on how we can make this language even more
clear.



Perhaps add: "And also include any other certificates sharing the same
private/public key pairs as certificates already included in the
requirements."  (This covers the situation you mentioned where a
self-signed certificate shares the key pair of a certificate that chains
to an included root.)
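The combined rule (disclose self-signed certificates, plus anything sharing a key pair with an already-covered certificate) can be sketched over a toy inventory; the tuple layout and SPKI hash strings below are illustrative, not real CCADB fields:

```python
def must_disclose(certs, disclosed_spkis):
    """Return the certs that are self-signed (issuer DN equals subject DN)
    or that share a public key (same SPKI hash) with a certificate already
    subject to the disclosure requirements."""
    flagged = []
    for subject, issuer, spki in certs:
        if issuer == subject or spki in disclosed_spkis:
            flagged.append((subject, issuer, spki))
    return flagged

certs = [
    ("CN=Root A", "CN=Root A", "spki1"),  # self-signed -> must disclose
    ("CN=Root B", "CN=Other",  "spki2"),  # shares key with a disclosed cert
    ("CN=Root C", "CN=Other",  "spki3"),  # neither condition applies
]
flagged = must_disclose(certs, disclosed_spkis={"spki2"})
assert [c[0] for c in flagged] == ["CN=Root A", "CN=Root B"]
```

Keying the check on the SPKI hash rather than the certificate itself is what catches the "same key pair, different self-signed wrapper" case that prompted the policy clarification.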



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: EJBCA performs incorrect calculation of validities

2020-10-28 Thread Jakob Bohm via dev-security-policy

On 2020-10-28 20:54, Ryan Sleevi wrote:

On Wed, Oct 28, 2020 at 10:50 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


This aspect of RFC5280 section 4.1.2.5 is quite unusual in computing,
where the ends of intervals are typically encoded such that subtracting
the interval ends (as pure numbers) yields the interval length.




>= notBefore, <= notAfter is as classic as "< size - 1" in
0-indexed for-loops (i.e. a size of 1 indicates there's one element - at
index 0), or "last - first" needs to add +1 if counting elements in a
pointer range.



0-indexed for-loops typically use "< size" or "<= size - 1", 
illustrating how hard it is to get such cases right.





As a data point, the reference CA code in the OpenSSL library,
version 1.1.0



Generates a whole bunch of completely invalid badness that is completely
non-compliant, and is hardly a "reference" CA (c.f. the long time to switch
to utf8only for DNs as just one of the basic examples)


The "ca" app inside the openssl command line tool is officially
defined as "reference code only", not intended as an actual
production CA implementation, although many people have found it
useful as a component of low-volume production CA implementations.

This "reference" or "sample" code and its sample configuration contains
comments as to what choices are compatible with older Mozilla code,
including that using UTF-8 strings in DNs would cause older Mozilla code
to fail.



So this seems another detail where the old IETF working group made

things unnecessarily complicated for everybody.



https://www.youtube.com/watch?v=HMqZ2PPOLik

https://tools.ietf.org/rfcdiff?url2=draft-ietf-pkix-new-part1-01.txt dated
2000. This is 2020.

Where does that change come from?
https://www.itu.int/rec/T-REC-X.509-23-S/en (aka ITU's X.509), which in
2000, stated "TA indicates the period of validity of the certificate, and
consists of two dates, the first and last on which the certificate is
valid."

Does that mean this was a change in 2000? Nope.  It's _always been there_,
as far back as ITU-T's X.509 (11/88) -
https://www.itu.int/rec/T-REC-X.509-198811-S/en



ITU's X.509 (10/2012) doesn't seem to contain the sentence quoted and
seems to be completely silent as to the inclusive/exclusive
interpretation of the ends of the validity interval.  And this doesn't 
seem to change in corrigendum 2 from 04/2016.



It helps to do research before casting aspersions or proposing to
reinterpret meanings that are older than some members here.



The overarching problem is that some people love to do language
lawyering with the exact meanings of specifications, and strictly
enforcing every minute detail, such as the fact that RFC5280 from 2008
insists that these two ASN.1 fields should be interpreted as
"inclusive", while 398 days and 1 second is technically more than 398
days.
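For concreteness, here is the arithmetic under the inclusive reading of RFC 5280; a sketch of the calculation only, not any CA's actual issuance code:

```python
from datetime import datetime, timedelta

# RFC 5280 treats both notBefore and notAfter as *inclusive*, so the
# validity period is (notAfter - notBefore) plus one second.
not_before = datetime(2020, 10, 1, 12, 0, 0)

# Naive calculation: simply adding 398 days gives 398 days + 1 second
# of inclusive validity, exceeding the BR limit by one second.
naive_not_after = not_before + timedelta(days=398)

# Correct calculation under the inclusive reading: subtract one second.
inclusive_not_after = not_before + timedelta(days=398, seconds=-1)

one_sec = timedelta(seconds=1)
assert (naive_not_after - not_before) + one_sec > timedelta(days=398)
assert (inclusive_not_after - not_before) + one_sec == timedelta(days=398)
```

The entire EJBCA incident reduces to which of those two lines a CA's code implements.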


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: EJBCA performs incorrect calculation of validities

2020-10-28 Thread Jakob Bohm via dev-security-policy

On 2020-10-28 11:55, Mike Kushner wrote:

Hi all,

We were alerted to the fact that EJBCA does not calculate certificate and OCSP validities 
in accordance with RFC 5280, which has been a requirement since BR 1.7.1. The word 
"inclusive" was not caught, meaning that a certificate/response issued by EJBCA 
will have a validity one second longer than intended by the RFC.

This will only cause an incident for certificates of a validity of exactly 398 
days - any certificates with shorter validities are still within the 
requirements.

This has been fixed in the coming EJBCA 7.4.3, and all PrimeKey customers were 
alerted a week ago and recommended to review their certificate profiles and 
responder settings to be within thresholds.

While investigating this we noticed that several non-EJBCA CAs seem to issue 
certificates with the same non-RFC-compliant validity calculation (but still 
within the 398 day limit), so as a professional courtesy we would like to alert 
other vendors to review their implementations and lessen the chance of any 
misissuance.



Is there any response from the Mozilla NSS team as to the correct
implementation of this detail in all related NSS code (including, but not
limited to, the client-side code interpreting the validity data in
received certificates, OCSP responses, etc.)?

This aspect of RFC5280 section 4.1.2.5 is quite unusual in computing,
where the ends of intervals are typically encoded such that subtracting
the interval ends (as pure numbers) yields the interval length.

As a data point, the reference CA code in the OpenSSL library,
version 1.1.0 also treats the "Not after" time as exclusive when
generating certificates and the OpenSSL client code treats both
timestamps as exclusive when validating certificates.

So this seems another detail where the old IETF working group made
things unnecessarily complicated for everybody.

From a policy perspective, if enough code out there has the same
interpretation as old EJBCA versions, maybe it would make more sense
for the policy bodies to override RFC5280.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: PEM of root certs in Mozilla's root store

2020-10-19 Thread Jakob Bohm via dev-security-policy

On 2020-10-17 01:38, Ryan Sleevi wrote:

On Fri, Oct 16, 2020 at 5:27 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


RFC4180 section 3 explicitly warns that there are other variants and
specifications of the CSV format, and thus the full generalizations in
RFC4180 should not be exploited to their extremes.



You're referring to this section, correct?

"""
Interoperability considerations:
   Due to lack of a single specification, there are considerable
   differences among implementations.  Implementors should "be
   conservative in what you do, be liberal in what you accept from
   others" (RFC 793 [8]) when processing CSV files.  An attempt at a
   common definition can be found in Section 2.

   Implementations deciding not to use the optional "header"
   parameter must make their own decision as to whether the header is
   absent or present.
"""



Splitting the input at newlines before parsing for quotes and commas is
a pretty common implementation strategy as illustrated by my examples of
common tools that actually do so.



This would appear to be at fundamental odds with "be liberal in what you
accept from others" and, more specifically, ignoring the remark that
Section 2 is an admirable effort at a "common" definition, which is so
called as it minimizes such interoperability differences.

As your original statement was the file produced was "not CSV", I believe
that's been thoroughly dispelled by highlighting that, indeed, it does
conform to the grammar set forward in RFC 4180, and is consistent with the
IANA mime registration for CSV.

Although you also raised concern that naive and ill-informed attempts at
CSV parsing, which of course fail to parse the grammar of RFC 4180, there
are thankfully alternatives for each of those concerns. With awk, you have
FPAT. With Perl, you have Text::CSV. Of course, as the cut command is too
primitive to handle a proper grammar, there are plenty of equally
reasonable alternatives to this, and which take far less time than the
concerns and misstatements raised on this thread, which I believe we can,
thankfully, end.



Please stop trolling; the section you quoted clearly states that section 
2 of the RFC was only an *attempt* at a common definition.


Ideally, a CSV parser should be liberal in what it accepts, but a CSV 
producer (which is the subject of this thread) should be conservative in 
what it provides, thus avoiding the least implemented/tested aspects 
of the generalized grammar in that section 2.

Putting line feeds inside CSV fields is like using bang paths in an
RFC822 To: field.  Theoretically permitted and implemented by the most
complete e-mail parsing libraries in the world, but not something that
one can expect to work for a global audience.
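The gap between a grammar-aware parser and the common split-at-newlines strategy can be shown with Python's standard csv module:

```python
import csv
import io

# A record whose second field contains an embedded newline inside quotes,
# as RFC 4180 permits.
data = 'name,pem\nroot1,"line1\nline2"\n'

# A grammar-aware parser yields one header row plus one data record.
rows = list(csv.reader(io.StringIO(data)))
assert rows[1] == ["root1", "line1\nline2"]

# The common strategy of splitting at newlines first tears that record
# into two fragments before any quote handling can occur.
naive = data.splitlines()
assert len(naive) == 3  # header + two broken halves of one record
```

This is exactly why a conservative producer avoids embedded newlines: both behaviors are out there in deployed tools.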



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: PEM of root certs in Mozilla's root store

2020-10-16 Thread Jakob Bohm via dev-security-policy

On 2020-10-16 14:11, Ryan Sleevi wrote:

On Thu, Oct 15, 2020 at 7:44 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On 2020-10-15 11:57, Ryan Sleevi wrote:

On Thu, Oct 15, 2020 at 1:14 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


For example, embedded new lines are discussed in 2.6 and the ABNF

therein.




The one difference from RFC4180 is that CR and LF are not part of the
alternatives for the inner part of "escaped".



Again, it would do a lot of benefit for everyone if you would be more
precise here.

For example, it seems clear and unambiguous that what you just stated is
factually wrong, because:

escaped = DQUOTE *(TEXTDATA / COMMA / CR / LF / 2DQUOTE) DQUOTE



I was stating the *difference* from RFC4180 being precisely that
"simple, traditional CSV" doesn't accept the CR and LF alternatives in
that syntax production.



Ah, that would explain my confusion: you’re using “CSV” in a manner
different than what is widely understood and standardized. The complaint
about newlines would be as technically accurate and relevant as a complaint
that “simple, traditional certificates should parse as JSON” or that
“simple, traditional HTTP should be delivered over port 23”; which is to
say, it seems like this concern is not relevant.

As the CSVs comply with RFC 4180, which is widely recognized as what “CSV”
means, I think Jakob’s concern here can be disregarded. Any implementation
having trouble with the CSVs produced is confused about what a CSV is, and
thus not a CSV parser.



RFC4180 section 3 explicitly warns that there are other variants and 
specifications of the CSV format, and thus the full generalizations in 
RFC4180 should not be exploited to their extremes.


Splitting the input at newlines before parsing for quotes and commas is
a pretty common implementation strategy as illustrated by my examples of
common tools that actually do so.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Sectigo to Be Acquired by GI Partners

2020-10-16 Thread Jakob Bohm via dev-security-policy

On 2020-10-16 12:33, Rob Stradling wrote:

...clarification of what meaning was intended.


Merely this...

"Hi Ryan.  Tim Callan posted a reply to your questions last week, but his message 
has not yet appeared on the list.  Is it stuck in a moderation queue?"



The part needing clarification started with:

> In addition to the questions posted by Wayne, I think it'd be useful
> to confirm:
> ...

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Sectigo to Be Acquired by GI Partners

2020-10-15 Thread Jakob Bohm via dev-security-policy

On 2020-10-15 16:46, Rob Stradling wrote:

Hi Jacob.  I don't believe that this list mandates any particular posting style 
[https://en.wikipedia.org/wiki/Posting_style].

Although interleaved/inline posting is my preferred style, I'm stuck using 
Outlook365 as my mail client these days.  (Sadly, Thunderbird's usability 
worsened dramatically for me after Sectigo moved corporate email to Office365 a 
few years ago).  So this is the situation I find myself in...

"This widespread policy in business communication made bottom and inline posting so 
unknown among most users that some of the most popular email programs no longer support 
the traditional posting style. For example, Microsoft Outlook, AOL, and Yahoo! make it 
difficult or impossible to indicate which part of a message is the quoted original or do 
not let users insert comments between parts of the original."
[https://en.wikipedia.org/wiki/Posting_style#Quoting_support_in_popular_mail_clients]



I realized that the problem was caused by broken client software, and
was pointing out that in this case it had led to a specific lack of
clarity, and was asking for clarification of what meaning was intended.




From: dev-security-policy  on behalf 
of Jakob Bohm via dev-security-policy 
Sent: 12 October 2020 22:41
To: mozilla-dev-security-pol...@lists.mozilla.org 

Subject: Re: Sectigo to Be Acquired by GI Partners

Hi Rob,

The e-mail you quote below seems to be inadvertently "confirming" some
suspicions that someone else posed as questions. I think the group as a
whole would love to have actual specific answers to those original
questions.

Remember to always add an extra layer of ">" indents for each level of
message quoting, so as to not misattribute text.

On 2020-10-12 10:43, Rob Stradling wrote:

Hi Ryan.  Tim Callan posted a reply to your questions last week, but his 
message has not yet appeared on the list.  Is it stuck in a moderation queue?


From: dev-security-policy  on behalf 
of Ryan Sleevi via dev-security-policy 
Sent: 03 October 2020 22:16
To: Ben Wilson 
Cc: mozilla-dev-security-policy 
Subject: Re: Sectigo to Be Acquired by GI Partners


In a recent incident report [1], a representative of Sectigo noted:

The carve out from Comodo Group was a tough time for us. We had twenty

years’ worth of completely intertwined systems that had to be disentangled
ASAP, a vast hairball of legacy code to deal with, and a skeleton crew of
employees that numbered well under half of what we needed to operate in any
reasonable fashion.



This referred to the previous split [2] of the Comodo CA business from the
rest of Comodo businesses, and rebranding as Sectigo.

In addition to the questions posted by Wayne, I think it'd be useful to
confirm:

1. Is it expected that there will be similar system and/or infrastructure
migrations as part of this? Sectigo's foresight of "no effect on its
operations" leaves it a bit ambiguous whether this is meant as "practical"
effect (e.g. requiring a change of CP/CS or effective policies) or whether
this is meant as no "operational" impact (e.g. things will change, but
there's no disruption anticipated). It'd be useful to frame this response
in terms of any anticipated changes at all (from mundane, like updating the
logos on the website, to significant, such as any procedure/equipment
changes), rather than observed effects.

2. Is there a risk that such an acquisition might further reduce the crew
of employees to an even smaller number? Perhaps not immediately, but over
time, say the next two years, such as "eliminating redundancies" or
"streamlining operations"? I recognize that there's an opportunity such an
acquisition might allow for greater investment and/or scale, and so don't
want to presume the negative, but it would be good to get a clear
commitment as to that, similar to other acquisitions in the past (e.g.
Symantec CA operations by DigiCert)

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1648717#c21
[2]
https://groups.google.com/g/mozilla.dev.security.policy/c/AvGlsb4BAZo/m/p_qpnU9FBQAJ

On Thu, Oct 1, 2020 at 4:55 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


   As announced previously by Rob Stradling, there is an agreement for
private investment firm GI Partners, out of San Francisco, CA, to acquire
Sectigo. Press release:
https://sectigo.com/resource-library/sectigo-to-be-acquired-by-gi-partners
.


I am treating this as a change of legal ownership covered by section 8.1
<
https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#81-change-in-legal-ownership



of the Mozilla Root Store Policy, which states:


If the receiving or acquiring company is new to the Mozilla root program,
it must demonstrate compliance with the entirety of this policy and there
MUST be a pu

Re: PEM of root certs in Mozilla's root store

2020-10-15 Thread Jakob Bohm via dev-security-policy

On 2020-10-15 11:57, Ryan Sleevi wrote:

On Thu, Oct 15, 2020 at 1:14 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


For example, embedded new lines are discussed in 2.6 and the ABNF

therein.




The one difference from RFC4180 is that CR and LF are not part of the
alternatives for the inner part of "escaped".



Again, it would do a lot of benefit for everyone if you would be more
precise here.

For example, it seems clear and unambiguous that what you just stated is
factually wrong, because:

escaped = DQUOTE *(TEXTDATA / COMMA / CR / LF / 2DQUOTE) DQUOTE



I was stating the *difference* from RFC4180 being precisely that
"simple, traditional CSV" doesn't accept the CR and LF alternatives in
that syntax production.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: PEM of root certs in Mozilla's root store

2020-10-14 Thread Jakob Bohm via dev-security-policy

On 2020-10-15 04:52, Ryan Sleevi wrote:

On Wed, Oct 14, 2020 at 7:31 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Only the CSV form now contains CSV artifacts.  And it isn't really CSV
either (even if Microsoft Excel handles it).



Hi Jakob,

Could you be more precise here? Embedded new lines within quoted values is
well-defined for CSV.

Realizing that there are unfortunately many interpretations of what
“well-defined for CSV” here, perhaps you can frame your concerns in terms
set out in
https://tools.ietf.org/html/rfc4180 . This at least helps make sure we can
understand and are in the same understanding.

For example, embedded new lines are discussed in 2.6 and the ABNF therein.



The one difference from RFC4180 is that CR and LF are not part of the
alternatives for the inner part of "escaped".




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: PEM of root certs in Mozilla's root store

2020-10-14 Thread Jakob Bohm via dev-security-policy

On 2020-10-15 00:16, Kathleen Wilson wrote:
The text version has been updated to have each line limited to 64 
characters.


Text:

https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMTxt?TrustBitsInclude=Websites 



https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMTxt?TrustBitsInclude=Email 



CSV:
https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMCSV?TrustBitsInclude=Websites 



https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMCSV?TrustBitsInclude=Email 




1. The reports that contain /only/ PEM data.  I argue that the
traditional format of concatenated PEM files (as used by e.g. the
openssl command line tool) without CSV embellishments would be
preferable, and that the reports in the latest post by Kathleen
lacked the PEM line wrapping while still containing CSV
artifacts. 


Jakob, please explain what you mean by "still containing CSV artifacts". 
Does the new version of the text report have the problem?




Only the CSV form now contains CSV artifacts.  And it isn't really CSV
either (even if Microsoft Excel handles it).




2. The reports that contain other data in CSV format.  ...


Opening the CSV version of the reports in Excel and Numbers works fine 
for me. So I don't understand what the problem is with having this 
report be a direct extract of the PEM data that the CCADB has for each 
root certificate.




However, it might also be reasonable that if these concerns aren't easily
addressable, perhaps not offering the feed is another option, since it
seems a lot of work for something that should be and is naturally
discouraged.


It's not a lot of work, just a matter of understanding what is most 
useful to folks.


I think it would be good to have downstreams using current data, e.g. 
data published directly via the CCADB. I think that providing the data 
in an easily consumable format is better than having folks extract the 
data from certdata.txt.


Thanks,
Kathleen




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: PEM of root certs in Mozilla's root store

2020-10-14 Thread Jakob Bohm via dev-security-policy
coISjlUxFF6tdpg6jg8gbLL8bvZkSM/SAFwdakFKq0fcfPJVD0dBm
pAPrMMhe5cG3nCYsS4No41XQEMIwRHNaqbYE6gZj3LJgqcQKH0XZi/caulAGgq7YN6D
6IUtdQis4CwPAxaUWktWBiP7Zme8a7ileb2R6jWDA+wWFjbw2Y3npuRVDM30pQcakjJ
yfKl2qUMI/cjDpwyVV5xnIQFUZot/eZOKjRa3spAN2cMVCFVd9oKDMyXroDclDZK9D7
ONhMeU+SsTjoF7Nuucpw4i9A5O4kKPnf+dQIBA6OCAUQwggFAMBIGA1UdEwEB/wQIMA
YBAf8CAQwwPAYDVR0fBDUwMzAxoC+gLYYraHR0cDovL2NybC5jaGFtYmVyc2lnbi5vc
mcvY2hhbWJlcnNyb290LmNybDAdBgNVHQ4EFgQU45T1sU3p26EpW1eLTXYGduHRooow
DgYDVR0PAQH/BAQDAgEGMBEGCWCGSAGG+EIBAQQEAwIABzAnBgNVHREEIDAegRxjaGF
tYmVyc3Jvb3RAY2hhbWJlcnNpZ24ub3JnMCcGA1UdEgQgMB6BHGNoYW1iZXJzcm9vdE
BjaGFtYmVyc2lnbi5vcmcwWAYDVR0gBFEwTzBNBgsrBgEEAYGHLgoDATA+MDwGCCsGA
QUFBwIBFjBodHRwOi8vY3BzLmNoYW1iZXJzaWduLm9yZy9jcHMvY2hhbWJlcnNyb290
Lmh0bWwwDQYJKoZIhvcNAQEFBQADggEBAAxBl8IahsAifJ/7kPMa0QOx7xP5IV8EnNr
JpY0nbJaHkb5BkAFyk+cefV/2icZdp0AJPaxJRUXcLo0waLIJuvvDL8y6C98/d3tGfT
oSJI6WjzwFCm/SlCgdbQzALogi1djPHRPH8EjX1wWnz8dHnjs8NMiAT9QUu/wNUPf6s
+xCX6ndbcj0dc97wXImsQEcXCz9ek60AcUFV7nnPKoF2YjpB0ZBzu9Bga5Y34OirsrX
dx/nADydb47kMgkdTXg0eDQ8lJsm7U9xxhl6vSAiSFr+S30Dt+dYvsYyTnQeaN2oaFu
zPu5ifdmA6Ap1erfutGWaIZDgqtCYvDi1czyL+Nw="
"AC Camerfirma, S.A.","AC Camerfirma S.A.","","Chambers of Commerce
 Root - 2008","00A3DA427EA4B1AEDA","063E4AFAC491DFD332F3089B8542E94
617D893D7FE944E10A7937EE29D9693C0","849AD3279D9B805A288339468C41774
4AC1CE2758A6E283A446685384D5D6CD2","2008.08.01","2038.07.31","RSA 4
096 bits","SHA1WithRSA","Websites;Email","","","1.3.6.1.4.1.17326.1
0.14.2.1.2","https://bugzilla.mozilla.org/show_bug.cgi?id=406968",;
NSS 3.12.9","Firefox 4.0","https://server3ok.camerfirma.com","https
://server3.camerfirma.com","https://server3rv.camerfirma.com","","h
ttp://www.camerfirma.com","Spain","","https://www.camerfirma.com/pu
blico/DocumentosWeb/politicas/CPS_eidas_EN_1.2.12.pdf","https://www
..csqa.it/getattachment/Sicurezza-ICT/Documenti/Attestazione-di-Audi
t-secondo-i-requisiti-ETSI/2020-03-CSQA-Attestation-CAMERFIRMA-rev-
2-signed.pdf.aspx?lang=it-IT","https://www.csqa.it/getattachment/Si
curezza-ICT/Documenti/Attestazione-di-Audit-secondo-i-requisiti-ETS
I/2020-03-CSQA-Attestation-CAMERFIRMA-rev-2-signed.pdf.aspx?lang=it
-IT","https://www.csqa.it/getattachment/Sicurezza-ICT/Documenti/Att
estazione-di-Audit-secondo-i-requisiti-ETSI/2020-03-CSQA-Attestatio
n-CAMERFIRMA-rev-2-signed.pdf.aspx?lang=it-IT","CSQA Certificazioni
 srl","ETSI EN 319 411","2020.03.05","MIIHTzCCBTegAwIBAgIJAKPaQn6ks
a7aMA0GCSqGSIb3DQEBBQUAMIGuMQswCQYDVQQGEwJFVTFDMEEGA1UEBxM6TWFkcmlk
IChzZWUgY3VycmVudCBhZGRyZXNzIGF0IHd3dy5jYW1lcmZpcm1hLmNvbS9hZGRyZXN
zKTESMBAGA1UEBRMJQTgyNzQzMjg3MRswGQYDVQQKExJBQyBDYW1lcmZpcm1hIFMuQS
4xKTAnBgNVBAMTIENoYW1iZXJzIG9mIENvbW1lcmNlIFJvb3QgLSAyMDA4MB4XDTA4M
DgwMTEyMjk1MFoXDTM4MDczMTEyMjk1MFowga4xCzAJBgNVBAYTAkVVMUMwQQYDVQQH
EzpNYWRyaWQgKHNlZSBjdXJyZW50IGFkZHJlc3MgYXQgd3d3LmNhbWVyZmlybWEuY29
tL2FkZHJlc3MpMRIwEAYDVQQFEwlBODI3NDMyODcxGzAZBgNVBAoTEkFDIENhbWVyZm
lybWEgUy5BLjEpMCcGA1UEAxMgQ2hhbWJlcnMgb2YgQ29tbWVyY2UgUm9vdCAtIDIwM
DgwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCvAMtwNyuAWko6bHiUfaN/
Gh/2NdW928sNRHI+JrKQUrpjOyhYb6WzbZSm891kDFX29ufyIiKAXuFixrYp4YFs8r/
lfTJqVKAyGVn+H4vXPWCGhSRv4xGzdz4gljUha7MI2XAuZPeEklPWDrCQiorjh40G07
2QDuKZoRuGDtqaCrsLYVAGUvGef3bsyw/QHg3PmTA9HMRFEFis1tPo1+XqxQEHd9ZR5
gN/ikilTWh1uem8nk4ZcfUyS5xtYBkL+8ydddy/Js2Pk3g5eXNeJQ7KXOt3EgfLZEFH
cpOrUMPrCXZkNNI5t3YRCQ12RcSprj1qr7V9ZS+UWBDsXHyvfuK2GNnQm05aSd+pZgv
MPMZ4fKecHePOjlO+Bd5gD2vlGts/4+EhySnB8esHnFIbAURRPHsl18TlUlRdJQfKFi
C4reRB7noI/plvg6aRArBsNlVq5331lubKgdaX8ZSD6e2wsWsSaR6s+12pxZjptFtYe
r49okQ6Y1nUCyXeG0+95QGezdIp1Z8XGQpvvwyQ0wlf2eOKNcx5Wk0ZN5K3xMGtr/R5
JJqyAQuxr1yW84Ay+1w9mPGgP0revq+ULtlVmhduYJ1jbLhjya6BXBg14JC7vjxPNyK
5fuvPnnchpj04gftI2jE9K+OJ9dC1vX7gUMQSibMjmhAxhduub+84Mxh2EQIDAQABo4
IBbDCCAWgwEgYDVR0TAQH/BAgwBgEB/wIBDDAdBgNVHQ4EFgQU+SSsD7K1+HnA+mCIG
8TZTQKeFxkwgeMGA1UdIwSB2zCB2IAU+SSsD7K1+HnA+mCIG8TZTQKeFxmhgbSkgbEw
ga4xCzAJBgNVBAYTAkVVMUMwQQYDVQQHEzpNYWRyaWQgKHNlZSBjdXJyZW50IGFkZHJ
lc3MgYXQgd3d3LmNhbWVyZmlybWEuY29tL2FkZHJlc3MpMRIwEAYDVQQFEwlBODI3ND
MyODcxGzAZBgNVBAoTEkFDIENhbWVyZmlybWEgUy5BLjEpMCcGA1UEAxMgQ2hhbWJlc
nMgb2YgQ29tbWVyY2UgUm9vdCAtIDIwMDiCCQCj2kJ+pLGu2jAOBgNVHQ8BAf8EBAMC
AQYwPQYDVR0gBDYwNDAyBgRVHSAAMCowKAYIKwYBBQUHAgEWHGh0dHA6Ly9wb2xpY3k
uY2FtZXJmaXJtYS5jb20wDQYJKoZIhvcNAQEFBQADggIBAJASryI1wqM58C7e6bXpeH
xIvj99RZJe6dqxGfwWPJ+0W2aeaufDuV2I6A+tzyMP3iU6XsxPpcG1Lawk0lgH3qLPa
YRgM+gQDROpI9CF5Y57pp49chNyM/WqfcZjHwj0/gF/JM8rLFQJ3uIrbZLGOU8W6jx+
ekbURWpGqOt1glanq6B8aBMz9p0w8G8nOSQjKpD9kCk18pPfNKXG9/jvjA9iSnyu0/V
U+I22mlaHFoI6M6taIgj3grrqLuBHmrS1RaMFO9ncLkVAO+rcf+g769HsJtg1pDDFOq
xXnrN2pSB7+R5KBWIBpih1YJeSDW4+TTdDDZIVnBgizVGZoCkaPF+KMjNbMMeJL0eYD
6MDxvbxrN8y8NmBGuScvfaAFPDRLLmF9dijscilIeUcE5fuDr3fKanvNFNb0+RqE4QG
tjICxFKuItLcsiFCGtpA8CnJ7AoMXOLQusxI0zcKzBIKinmwPQN/aUv0NCB9szTqjkt
k9T79syNnFQ0EuPAtwQlRPLJsFfClI9eDdOTlLsn+mCdCxqvGnrDQWzilm1DefhiYtU
U79nm06PcaewaD+9CL2rvHvRirCG88gGtAPxkZumWK5r7VXNM21+9AUiRgOGcEMeyP8
4LG3rlV8zsxkVrctQgVrXYlCg17LofiDKYGvCYQbTed7N14jHyAxfDZd0jQ"



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: PEM of root certs in Mozilla's root store

2020-10-13 Thread Jakob Bohm via dev-security-policy

On 2020-10-12 20:50, Kathleen Wilson wrote:

On 10/7/20 1:09 PM, Jakob Bohm wrote:
Please note that at least the first CSV download is not really a CSV 
file, as there are line feeds within each "PEM" value, and only one 
column.  It would probably be more useful as a simple concatenated PEM 
file, as used by various software packages as a root store input format.





Here's updated reports...

Text:

https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMTxt?TrustBitsInclude=Websites 



https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMTxt?TrustBitsInclude=Email 


These two are bad multi-pem files, as each certificate lacks the
required line wrapping.

The useful multi-pem format would keep the line wrapping from the
old report, but not insert stray quotes and commas.



CSV:
https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMCSV?TrustBitsInclude=Websites 



https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMCSV?TrustBitsInclude=Email 



These two are like the old bad multi-pem reports, with the stray
internal line feeds after/before the - marker lines, but without the
internal linefeeds in the Base64 data.

Useful actual-csv reports like IncludedCACertificateWithPEMReport.csv
would have to omit the marker lines and their linefeeds as well as the
internal linefeeds in the Base64 data in order to keep each CSV record
entirely on one line.
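As a concrete illustration (a hedged sketch, not the actual report layout — the header name and the exact quoting of the PEM column are assumptions), Python's csv module does handle quoted fields containing embedded newlines, so even the current report can be converted into a plain concatenated PEM file:

```python
import csv
import io

# Toy input mimicking the report shape described above: a single quoted
# "PEM" column whose value spans several physical lines.
raw = '''"PEM Info"
"'-----BEGIN CERTIFICATE-----
MIIBexample
-----END CERTIFICATE-----'"
'''

# csv keeps each quoted field as one logical value even when it
# contains embedded newlines, so every record stays one row.
rows = list(csv.reader(io.StringIO(raw)))
header, records = rows[0], rows[1:]

# Strip any stray surrounding apostrophes (if present) and re-emit as a
# simple concatenated PEM file usable as a root-store input format.
pems = [r[0].strip().strip("'") for r in records]
concatenated = "\n".join(pems) + "\n"
print(concatenated)
```

A GUI spreadsheet tends to choke on such files, which is presumably why a non-GUI tool is needed in practice.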






If the Text reports look good, I'll add them to the wiki page, 
https://wiki.mozilla.org/CA/Included_Certificates



Thanks,
Kathleen




Enjoy

Jakob


Re: Sectigo to Be Acquired by GI Partners

2020-10-12 Thread Jakob Bohm via dev-security-policy

Hi Rob,

The e-mail you quote below seems to be inadvertently "confirming" some
suspicions that someone else posed as questions. I think the group as a
whole would love to have actual specific answers to those original
questions.

Remember to always add an extra layer of ">" indents for each level of
message quoting, so as to not misattribute text.

On 2020-10-12 10:43, Rob Stradling wrote:

Hi Ryan.  Tim Callan posted a reply to your questions last week, but his 
message has not yet appeared on the list.  Is it stuck in a moderation queue?


From: dev-security-policy  on behalf 
of Ryan Sleevi via dev-security-policy 
Sent: 03 October 2020 22:16
To: Ben Wilson 
Cc: mozilla-dev-security-policy 
Subject: Re: Sectigo to Be Acquired by GI Partners


In a recent incident report [1], a representative of Sectigo noted:

The carve out from Comodo Group was a tough time for us. We had twenty

years’ worth of completely intertwined systems that had to be disentangled
ASAP, a vast hairball of legacy code to deal with, and a skeleton crew of
employees that numbered well under half of what we needed to operate in any
reasonable fashion.



This referred to the previous split [2] of the Comodo CA business from the
rest of Comodo businesses, and rebranding as Sectigo.

In addition to the questions posted by Wayne, I think it'd be useful to
confirm:

1. Is it expected that there will be similar system and/or infrastructure
migrations as part of this? Sectigo's foresight of "no effect on its
operations" leaves it a bit ambiguous whether this is meant as "practical"
effect (e.g. requiring a change of CP/CS or effective policies) or whether
this is meant as no "operational" impact (e.g. things will change, but
there's no disruption anticipated). It'd be useful to frame this response
in terms of any anticipated changes at all (from mundane, like updating the
logos on the website, to significant, such as any procedure/equipment
changes), rather than observed effects.

2. Is there a risk that such an acquisition might further reduce the crew
of employees to an even smaller number? Perhaps not immediately, but over
time, say the next two years, such as "eliminating redundancies" or
"streamlining operations"? I recognize that there's an opportunity such an
acquisition might allow for greater investment and/or scale, and so don't
want to presume the negative, but it would be good to get a clear
commitment as to that, similar to other acquisitions in the past (e.g.
Symantec CA operations by DigiCert)

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1648717#c21
[2]
https://groups.google.com/g/mozilla.dev.security.policy/c/AvGlsb4BAZo/m/p_qpnU9FBQAJ

On Thu, Oct 1, 2020 at 4:55 PM Ben Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


  As announced previously by Rob Stradling, there is an agreement for
private investment firm GI Partners, out of San Francisco, CA, to acquire
Sectigo. Press release:
https://sectigo.com/resource-library/sectigo-to-be-acquired-by-gi-partners
.


I am treating this as a change of legal ownership covered by section 8.1
<
https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#81-change-in-legal-ownership



of the Mozilla Root Store Policy, which states:


If the receiving or acquiring company is new to the Mozilla root program,
it must demonstrate compliance with the entirety of this policy and there
MUST be a public discussion regarding their admittance to the root

program,

which Mozilla must resolve with a positive conclusion in order for the
affected certificate(s) to remain in the root program.


In order to comply with policy, I hereby formally announce the commencement
of a 3-week discussion period for this change in legal ownership of Sectigo
by requesting thoughtful and constructive feedback from the community.

Sectigo has already stated that it foresees no effect on its operations due
to this ownership change, and I believe that the acquisition announced by
Sectigo and GI Partners is compliant with Mozilla policy.

Thanks,

Ben






Enjoy

Jakob


Re: PEM of root certs in Mozilla's root store

2020-10-07 Thread Jakob Bohm via dev-security-policy

On 2020-10-06 23:47, Kathleen Wilson wrote:

All,

I've been asked to publish Mozilla's root store in a way that is easy to 
consume by downstreams, so I have added the following to 
https://wiki.mozilla.org/CA/Included_Certificates


CCADB Data Usage Terms
<https://www.ccadb.org/rootstores/usage#ccadb-data-usage-terms>

PEM of Root Certificates in Mozilla's Root Store with the Websites 
(TLS/SSL) Trust Bit Enabled (CSV)
<https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEM?TrustBitsInclude=Websites> 



PEM of Root Certificates in Mozilla's Root Store with the Email (S/MIME) 
Trust Bit Enabled (CSV)
<https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEM?TrustBitsInclude=Email> 




Please let me know if you have feedback or recommendations about this.



Please note that at least the first CSV download is not really a CSV 
file, as there are line feeds within each "PEM" value, and only one 
column.  It would probably be more useful as a simple concatenated PEM 
file, as used by various software packages as a root store input format.


I have also noted that at least one downstream root store (Debian) takes
all Mozilla-trusted certificates and labels them as simply 
"mozilla/cert-public-name", even though more useful naming can be 
extracted from the last (most complete) report, after finding a non-gui 
tool that can actually parse CSV files with embedded newlines in string 
values.





Enjoy

Jakob


Re: Temporary WebTrust Seal for COVID Issues

2020-08-24 Thread Jakob Bohm via dev-security-policy

On 2020-08-20 20:34, Ben Wilson wrote:

All,

Some CAs have inquired about Mozilla's acceptance of WebTrust's temporary,
6-month seal related to COVID19 issues.
See
https://www.cpacanada.ca/en/business-and-accounting-resources/audit-and-assurance/overview-of-webtrust-services

According to that WebTrust webpage, the temporary seal will be offered only
in situations that meet the following criteria:

- The practitioner report has been qualified,
- The qualification is directly related to government-imposed COVID-19
scope restrictions only and is disclosed in the practitioner report, and
- There are no qualifications due to control deficiencies in the period.

It also states, "When a temporary seal has been granted, it is expected
that a practitioner will be able to perform the procedures that could not
be completed initially which gave rise to the scope limitation before the
temporary seal expires. Where the practitioner is able to perform such
procedures and is able to issue subsequently an unqualified report for the
CA, the unqualified report could then be submitted to CPA Canada to obtain
the traditional seal."

For purposes of obtaining a timely audit, it appears that such a timely
filed report would satisfy Mozilla Policy 3.1.3's annual audit filing
requirements (
https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#313-audit-parameters)
and therefore it would not be a "delay".  For context see
https://wiki.mozilla.org/CA/Audit_Statements#Audit_Delay
and https://wiki.mozilla.org/CA/Audit_Statements#WebTrust_Audits.

So as further guidance on the above page, I am proposing clarification that
the Temporary WebTrust Seal for COVID-19-related qualified reports does not
require the CA to file an Incident Report, but rather that we will create a
CA Compliance bug in Bugzilla simply to track the expiration of the
temporary seal.



As a relying party (end user of Mozilla products that use the root
store), I appreciate this, however I have a suggested simplification:

Simply mark the (early) expiry date of the temporary seal in the CCADB,
such that the usual audit-renewal procedures will trigger at the
appropriate time.

This obviously presumes that there are CCADB fields to mark an audit
report as having a shorter-than-one-year validity; public discussions
in this group have frequently mentioned 3-month audits in specific cases.

Besides, as this crisis is expected to last closer to a full year than
6 months, one must wonder if auditors would have to inspect CA
facilities wearing full disposable hazmat suits to avoid transporting
the virus between redundant backup CA offices that have been kept
separate to ensure CA operations continue even if every person at one
office becomes critically ill.


Thanks,
Ben Wilson
Mozilla Root Store Manager




Enjoy

Jakob


Re: [FORGED] Re: How Certificates are Verified by Firefox

2019-12-09 Thread Jakob Bohm via dev-security-policy

On 2019-12-09 11:44, Ben Laurie wrote:

On Wed, 4 Dec 2019 at 22:13, Ryan Sleevi  wrote:


Yes, I am one of the ones who actively disputes the notion that AIA
considered harmful.

I'm (pleasantly) surprised that any CA would be opposed to AIA (i.e.
supportive of "considered harmful", since it's inherently what gives them
the flexibility to make their many design mistakes in their PKI and still
have certificates work. The only way "considered harmful" would work is if
we actively remove the flexibility afforded CAs in this realm, which I'm
highly supportive of, but which definitely encourages more distinctive PKIs
(i.e. more explicitly reducing the use of Web PKI in non-Web cases)

Of course, AIA is also valuable in helping browsers push the web forward,
so I can see why "considered harmful" is useful, especially in that it
helps further the notion that root certificates are a thing of value (and
whose value should increase with age). AIA is one of the key tools to
helping prevent that, which we know is key to ensuring a more flexible, and
agile, ecosystem.

The flaw, of course, in a "considered harmful", is the notion that there's
One Chain or One Right Chain. That's not the world we have, nor have we
ever. The notion that there's One Right Chain for a TLS server to send
presumes there's One Right Set of CA Trust Anchors. And while that's
definitely a world we could pursue, I think we know from the past history
of CA incidents, there's incredible benefit to users to being able to
respond to CA security incidents differently, to remove trust in
deprecated/insecure things differently, and to set policies differently.
And so we can't expect servers to know the Right Chain because there isn't
One Right Chain, and AIA (or intermediate preloading with rapid updates)
can help address that.



It would be a whole lot more efficient and private if the servers did the
chasing.



It would also greatly help if:

1. More clients (especially non-browser client such as libraries used by
  IoT devices) supported treating the received "chain" more like a pool
  of potential intermediaries and accepted any acceptable combination of
  received certs and locally trusted roots.  This would allow servers to
  send an appropriate collection of intermediaries for different client
  needs.  Clients that detect different levels of trust (such as the
  Qualys checkers and EV clients) also need to choose the best of the
  offered set, as the alternative certs are obviously for use by other
  clients.  In particular, clients should not panic and block on the
  presence of any "bad" certificates in the pool, if a valid chain can
  be assembled without that certificate.
   This is already in the CMS/PKCS#7 spec and apparently also in the TLS
  1.3 spec, but remains seemingly optional when TLS 1.2 or older is
  negotiated.
   I recently became aware of at least one IoT-focused TLS library that
  doesn't support the "pool" interpretation due to lack of a memory-
  efficient chain building algorithm.

2. Certain CAs made it a lot easier to get the recommended-to-send list
  of certificates ("chain") for server operators to configure.  Some
  CAs make server operators manually chase down links to each cert in
  the list, and some don't send out pre-emptive notification of changes
  to paying subscribers.

3. Certain TLS libraries didn't refuse to provide server software
  vendors with working stapling code, especially the code to collect
  and cache OCSP responses.

P.S. One commonly vilified server brand actually does use AIA to build
  the server chain.
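The "pool" interpretation in point 1 can be sketched as a simple chain-building loop (a hedged illustration only: certificates are toy records matched by subject/issuer name, whereas real code would match by key identifiers and verify signatures):

```python
# Treat the received certificates as an unordered pool and assemble any
# valid path from the leaf to a locally trusted root.  Extra or even
# "bad" pool members are simply ignored rather than causing a failure.

def build_chain(leaf, pool, trusted_roots):
    """Return a leaf..last-intermediate chain if one exists, else None."""
    chain, current = [leaf], leaf
    remaining = list(pool)
    while True:
        if current["issuer"] in trusted_roots:
            return chain  # reached a locally trusted anchor
        # Pick any pool member that could have issued `current`.
        nxt = next((c for c in remaining
                    if c["subject"] == current["issuer"]), None)
        if nxt is None:
            return None   # no path to a trusted root from this pool
        remaining.remove(nxt)  # consume it to avoid loops
        chain.append(nxt)
        current = nxt

leaf = {"subject": "server.example", "issuer": "Intermediate B"}
pool = [
    {"subject": "Intermediate A", "issuer": "Old Root"},   # alternative path
    {"subject": "Intermediate B", "issuer": "New Root"},
]
chain = build_chain(leaf, pool, trusted_roots={"New Root"})
```

A client trusting only "Old Root" would simply fail to assemble this particular path and try another, rather than blocking on the unused pool member.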



Enjoy

Jakob


Re: Certificate OU= fields with missing O= field

2019-11-01 Thread Jakob Bohm via dev-security-policy
SL".

BR v1.6.6 § 7.1.4.2.2i has guidance on usage of the OU field: "The
CA SHALL implement a process that prevents an OU attribute from
including a name, DBA, tradename, trademark, address, location, or
other text that refers to a specific natural person or Legal Entity
unless the CA has verified this information in accordance with
Section 3.2 and the Certificate also contains
subject:organizationName, subject:givenName, subject:surname,
subject:localityName, and subject:countryName attributes, also
verified in accordance with Section 3.2.2.1."

As the organizationName and other related attributes are not set in
many of those certificates, even though e.g. "COMODO SSL Unified
Communications" is a very strong reference to Sectigo's ssl branding
& business, I believe the referenced certificate is not issued in
line with the BR.

Is the above interpretation of BR section 7.1.4.2.2i correct?


That OU clearly doesn't have anything to do with the subject that was
validated, so I also consider that a misissue.


Kurt


A roughly-equivalent Censys.io query, excluding a couple other unambiguous "domain 
validated" OU values: "not _exists_:
parsed.subject.organization and _exists_:
parsed.subject.organizational_unit and not
parsed.subject.organizational_unit: "Domain Control Validated" and not
parsed.subject.organizational_unit: "Domain Validated Only" and not
parsed.subject.organizational_unit: "Domain Validated" and
validation.nss.valid: true" returns 17k hits.

IMO the "Hosted by .Com" certs fail 7.1.4.2.2i - the URL of a web host is definitely 
"text that refers to a specific ... Legal Entity".


Certificate also contains subject:organizationName,
subject:givenName, subject:surname, subject:localityName, and
subject:countryName attributes, also verified in accordance with Section 
3.2.2.1.


I'm pretty sure this isn't what the BRs intended, but this appears to forbid 
issuance with a meaningful subject:organizationalUnitName unless all of the 
above attributes are populated. EVG §9.2.9 forbids including those attributes 
in the first place. Am I reading this wrong, or was this an oversight in the 
BRs?



Enjoy

Jakob


Re: Firefox removes UI for site identity

2019-10-23 Thread Jakob Bohm via dev-security-policy

On 23/10/2019 01:49, Matt Palmer wrote:

On Tue, Oct 22, 2019 at 03:35:52PM -0700, Kirk Hall via dev-security-policy 
wrote:

I also have a question for Mozilla on the removal of the EV UI.


This is a mischaracterisation.  The EV UI has not been removed, it has been
moved to a new location.



It was moved entirely off screen, and replaced with very subtle
differences in the contents of a pop-up.


Enjoy

Jakob


Re: CAs cross-signing roots whose subjects don't comply with the BRs

2019-10-08 Thread Jakob Bohm via dev-security-policy

On 08/10/2019 13:41, Corey Bonnell wrote:

On Monday, October 7, 2019 at 10:52:36 AM UTC-4, Ryan Sleevi wrote:

I'm curious how folks feel about the following practice:

Imagine a CA, "Foo", that creates a new Root Certificate ("Root 1"). They
create this Root Certificate after the effective date of the Baseline
Requirements, but prior to Root Programs consistently requiring compliance
with the Baseline Requirements (i.e. between 2012 and 2014). This Root
Certificate does not comply with the BRs' rules on Subject: namely, it
omits the Country field.

...

> ...


Given that there is discussion about mandating the use of ISO3166 or other 
databases for location information, the profile of the subjectDN may change 
such that future cross-signs cannot be done without running afoul of policy.

With this issue and Ryan’s scenario in mind, I think there may need to be some 
sort of grandfathering allowed for roots so that cross-signs can be issued 
without running afoul of policy. What I’m less certain on, is to what extent 
this grandfathering clause would allow for non-compliance of the current 
policies, as that is a very slippery slope and hinders progress in creating a 
saner webPKI certificate profile. For the CA that Ryan brings up, I’m less 
inclined to allow for a “grandfathering” as the root certificate in question 
was originally mis-issued. But for a root certificate that was issued in 
compliance with the policy at the time but now no longer has a compliant 
subjectDN, perhaps a carve-out in Mozilla Policy to allow for a cross-sign 
(using the now non-compliant subjectDN) is warranted.



Please note the situation explained in the first paragraph of Ryan's
scenario: The (hypothetical) Root 1 without a C element may have been
issued before Browser Policy made BR compliance mandatory.  In other
words, BR non-compliance may not have been actual non-compliance at
that time.




Enjoy

Jakob


Re: CAs cross-signing roots whose subjects don't comply with the BRs

2019-10-07 Thread Jakob Bohm via dev-security-policy
On 07/10/2019 17:35, Ryan Sleevi wrote:
> On Mon, Oct 7, 2019 at 11:26 AM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> On 07/10/2019 16:52, Ryan Sleevi wrote:
>>> I'm curious how folks feel about the following practice:
>>>
>>> Imagine a CA, "Foo", that creates a new Root Certificate ("Root 1"). They
>>> create this Root Certificate after the effective date of the Baseline
>>> Requirements, but prior to Root Programs consistently requiring
>> compliance
>>> with the Baseline Requirements (i.e. between 2012 and 2014). This Root
>>> Certificate does not comply with the BRs' rules on Subject: namely, it
>>> omits the Country field.
>>
>> Clarification needed: Does it omit Country from the DN of the root 1
>> itself, from the DN of intermediary CA certs and/or from the DN of End
>> Entity certs?
>>
> 
> It's as I stated: The Subject of the Root Certificate omits the Country
> field.

You were unclear if Root 1 omitted the C element from its own name
(a BR requirement for new roots), or from various aspects of the
issuance from root 1 (also BR requirements).

It is now clear that the potential BR violation is only in the DN of
Root 1 itself, and for the purpose of this hypothetical, we can assume
that all other aspects of Root 1 operation are BR compliant.

> 
> 
>>>
>>> Later, in 2019, Foo takes their existing Root Certificate ("Root 2"),
>>> included within Mozilla products, and cross-signs the Subject. This now
>>> creates a cross-signed certificate, "Root 1 signed-by Root 2", which has
>> a
>>> Subject field that does not comport with the Baseline Requirements.
>>
>> Nit: Signs the Subject => Signs Root 1
>>
> 
> Perhaps it would be helpful if you were clearer about what you believe you
> were correcting.
> 

A minor typo (nit) in your original post.  You wrote "signs the
Subject" instead of "signs Root 1".


> I thought I was very precise here, so it's useful to understand your
> confusion:
> 
> Root 2, a root included in Mozilla products, cross-signs Root 1, a root
> which omits the Country field from the Subject.
> 
> This creates a certificate, whose issuer is Root 2 (a Root included in
> Mozilla Products), and whose Subject is Root 1. The Subject of Root 1 does
> not meet the BRs requirements on Subjects for intermediate/root
> certificates: namely, the certificate issued by Root 2 omits the C, because
> Root 1 omits the C.
> 

This is now clear after the clarification that C was only omitted in the
DN of Root 1 itself.


Enjoy

Jakob


Re: CAs cross-signing roots whose subjects don't comply with the BRs

2019-10-07 Thread Jakob Bohm via dev-security-policy

On 07/10/2019 16:52, Ryan Sleevi wrote:

I'm curious how folks feel about the following practice:

Imagine a CA, "Foo", that creates a new Root Certificate ("Root 1"). They
create this Root Certificate after the effective date of the Baseline
Requirements, but prior to Root Programs consistently requiring compliance
with the Baseline Requirements (i.e. between 2012 and 2014). This Root
Certificate does not comply with the BRs' rules on Subject: namely, it
omits the Country field.


Clarification needed: Does it omit Country from the DN of the root 1
itself, from the DN of intermediary CA certs and/or from the DN of End
Entity certs?

Also is the omission limited to historic certs issued before some date,
or also in new certs issued in 2019 (not counting the cross cert below).



Later, in 2019, Foo takes their existing Root Certificate ("Root 2"),
included within Mozilla products, and cross-signs the Subject. This now
creates a cross-signed certificate, "Root 1 signed-by Root 2", which has a
Subject field that does not comport with the Baseline Requirements.


Nit: Signs the Subject => Signs Root 1



To me, this seems like a clear-cut violation of the Baseline Requirements,
and "Foo" could have pursued an alternative hierarchy to avoid needing to
cross-sign. However, I thought it interesting to solicit others' feedback
on this situation, before opening the CA incident for Foo.




Enjoy

Jakob


Re: DigiCert OCSP services returns 1 byte

2019-09-17 Thread Jakob Bohm via dev-security-policy
ual certificate to CT logs and await acknowledgement.
 15. Finally send the actual certificate to the subscriber.

My alternative 4 paragraph proposal (rest was rationale) was to accept
the existence of delays, aim for eventual consistency, maximize overlap
between the operations by allowing the initial CT submission to happen
before all revocation servers respond "good" and the delivery to
subscriber to also happen during the wait for rollout.

As for my alleged use of magic numbers, this was the result of a logical
deduction:  When revoking a misissued certificate that may have produced
stored signatures (TLS isn't everything), the revocation data should
state that the certificate was never valid and any such stored
signatures are thus invalid.  Within the limitations of existing OCSP
and CRL formats, this is most effectively done by backdating the
revocation time in responses/CRLs to before the certificate would
otherwise have become valid.  Cryptographically stating that due to any
number of failure scenarios (see the incidents in this thread) the
actual certificate was never issued, could be similarly stated by
stating an even earlier revocation of the hypothetical certificate.
Trying to think forward, I proposed leaving a reserved margin between
the two revocation timestamp ranges.  A more clumsy alternative would
be to insert a special extension in the revocation data, with a
resulting OID allocation etc.
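The backdating scheme argued for above could be sketched roughly like this (a hedged illustration; the specific margin values are assumptions of mine, not anything taken from the BRs or from existing CA software):

```python
from datetime import datetime, timedelta, timezone

# Encode "never valid" and "never actually issued" within the existing
# OCSP/CRL revocationTime field by backdating it before notBefore,
# leaving a reserved gap between the two ranges so CT log watchers can
# distinguish the cases.  Margin values are illustrative only.
NEVER_VALID_MARGIN = timedelta(seconds=1)
NEVER_ISSUED_MARGIN = timedelta(hours=2)

def revocation_time(not_before, case):
    if case == "revoked":          # ordinary revocation: time of the event
        return datetime.now(timezone.utc)
    if case == "never_valid":      # misissued: predate the validity period
        return not_before - NEVER_VALID_MARGIN
    if case == "never_issued":     # precert exists, cert never released
        return not_before - NEVER_ISSUED_MARGIN
    raise ValueError(case)

nb = datetime(2019, 9, 1, tzinfo=timezone.utc)
assert revocation_time(nb, "never_valid") < nb
assert revocation_time(nb, "never_issued") < revocation_time(nb, "never_valid")
```

The clumsy alternative mentioned (a dedicated extension) would carry the same three-way distinction explicitly instead of overloading the timestamp.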



> - Wayne
> 
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1551390
> 
> 
>> Clearly that's impossible, which leads to the question: Which of these
>> two conflicting requirements should a CA ignore in order to be as
>> un-non-compliant as possible?  Which leads me to BR 7.1.2.5:
>> 'For purposes of clarification, a Precertificate, as described in RFC
>>  6962 – Certificate Transparency, shall not be considered to be a
>>  “certificate” subject to the requirements of RFC 5280'
>>
>> Since the first mention of "certificates" in the OCSP Protocol Overview
>> (RFC6960 section 2) cross-references RFC5280, I believe that this 'shall
>> not be considered to be a "certificate"' declaration can be assumed to
>> extend to the OCSP requirements too.  And therefore, the balance tilts
>> in favour of implementing 'SHOULD NOT respond "good"' and ignoring 'MUST
>> respond "good"'.
>>
>> I can't say I like this conclusion, but nonetheless it is the conclusion
>> that my reading of the BRs forces me to reach.  I realize that what the
>> BRs actually say may not reflect precisely what was intended by
>> CABForum; nonetheless, CAs are measured by what the BRs actually say.
>>
>> IDEAS FOR FIXING IT:
>>
>> Long-term:
>> - In CT v2 (6962-bis), precertificates are not X.509 certificates,
>> which removes Schrödinger from the equation.  :-)
>>
>> Short-term:
>> - I think BR 7.1.2.5, as written, is decidedly unhelpful and should
>> be revised to have a much smaller scope.  Surely only the serial number
>> uniqueness requirement (RFC5280 section 4.1.2.2) needs to be relaxed,
>> not the entirety of RFC5280?
>> - I would also like to see BR 4.9.10 revised to say something roughly
>> along these lines:
>> 'If the OCSP responder receives a status request for a serial number
>>  that has not been allocated by the CA, then the responder SHOULD NOT
>>  respond with a "good" status.'
>>
>> P.S. Full disclosure: Sectigo currently provides an (unsigned)
>> "unauthorized" OCSP response when a precert exists but the corresponding
>> cert doesn't, but in all honesty I'm not currently persuaded that an
>> Incident Report is warranted.
>>
>> --
>> Rob Stradling
>> Senior Research & Development Scientist
>> Email: r...@sectigo.com
>>
>>


Enjoy

Jakob


Re: DigiCert OCSP services returns 1 byte

2019-09-16 Thread Jakob Bohm via dev-security-policy
the CA
  building explicitly becomes a policy violation after just a few days,
  because it is then explicitly required to be published in CT and/or
  revoked.


Practicalities:
 - CA systems (in-house or standard) will need to implement error
  handling logic to handle the various scenarios that can trigger
  the "revoke without issuing" case and the "issued-but-retroactively-
  revoked" case.  Implementations need to be robustly tested in
  simulated environments with simulated failures and timeouts, thus
  final testing would probably take weeks or months just triggering
  the real timeouts, then requesting a bug fix.

 - If a CA revokes a PreCertificate serial with a revocation time
  indicating the never-issued case, it is a policy violation incident
  for that CA to have actually signed the certificate, even if the
  certificate never left the building.

 - If a CA revokes a serial with a revocation time indicating the
  issued-then-revoked case, it can be expected to have the actual
  certificate available in all but a few cases, and to submit that
  actual certificate to CT in all but a few of the remaining cases.

 - If a CA signs a certificate, but then immediately loses it (perhaps
  a disk or server failed while trying to save it), it would be one of
  the issued-then-revoked cases that cannot submit the signed actual
  certificate to CT.

 - In practice the never-signed revocation case can use a larger
  time delta (such as 2 hours) while remaining compliant.

 - CT log watching systems can distinguish the two revocation cases,
  and report them differently.

 - The required transactional integrity may or may not use a database
  engine as an implementation technique.  The important thing is to
  securely store the fact (and data) of passing certain crucial moments
  in the issuance process such that they are not lost if a consumable
  component (such as a COTS disk system or COTS server) fails.

 - A system failure during actual certificate signing needs to be
  detected and handled within the 24 hour deadline.  But such failures
  are typically detected within the hour, thus during any business hours
  signing ceremony.  Also quickly enough for other servers in a fast CA
  cluster to automatically do the needed revocations or alert a 24/7
  operator team.
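The two revocation cases above can be told apart mechanically by a CT-log watching system.  A minimal Python sketch, assuming the (hypothetical, not BR-mandated) convention that a revocation time at the Unix epoch marks the never-issued case:

```python
from datetime import datetime, timezone

# Hypothetical sentinel: a revocation time at the Unix epoch flags a
# serial whose final certificate was never actually issued.
NEVER_ISSUED_SENTINEL = datetime(1970, 1, 1, tzinfo=timezone.utc)

def classify_revocation(revocation_time: datetime) -> str:
    """Distinguish the two revocation cases a CT-log watcher might see.

    An epoch revocation time signals "precertificate logged, final
    certificate never issued"; any later time signals an ordinary
    issued-then-revoked certificate.
    """
    if revocation_time == NEVER_ISSUED_SENTINEL:
        return "never-issued"
    return "issued-then-revoked"

print(classify_revocation(NEVER_ISSUED_SENTINEL))                        # never-issued
print(classify_revocation(datetime(2019, 9, 16, tzinfo=timezone.utc)))   # issued-then-revoked
```

A watcher built this way can report the two cases differently, as suggested above, without needing any out-of-band data from the CA.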



Enjoy

Jakob


Re: Question about the issuance of OCSP Responder Certificates by technically constrained CAs

2019-09-04 Thread Jakob Bohm via dev-security-policy
On 04/09/2019 17:14, Ryan Sleevi wrote:
> On Wed, Sep 4, 2019 at 11:06 AM Ben Wilson  wrote:
> 
>> I thought that the EKU "id-kp-OCSPSigning" was for the OCSP responder
>> certificate itself (not the CA that issues the OCSP responder certificate).
>> I don't think I've encountered a problem before, but I guess it would
>> depend
>> on the implementation?
> 
> 
> Correct. Mozilla does not require the EKU chaining, in technical
> implementation or in policy. The aforementioned comments, however, indicate
> CAs have reported that Microsoft does. That is, the assertion is that
> Microsoft requires that issuing CAs bear an overlapping set of EKUs that
> align with their issued certificates, whether subordinate CAs, end-entity,
> or OCSP responders. Mozilla requires the same thing with respect to
> id-kp-serverAuth, but the Mozilla code has a special carve-out for
> id-kp-OCSPSigning that both doesn't require it on intermediate CAs, but
> also allows it to be present, precisely because of the presumed Microsoft
> requirement.
> 

This Microsoft requirement is highly unfortunate: unless there is an 
explicit RFC permission allowing OCSP clients to reject responses signed 
by delegated OCSP responder certificates with CA:TRUE, while still 
accepting responses signed directly by issuing CA certificates (which 
also have CA:TRUE), clients will be required by the RFCs to accept bogus 
OCSP responses signed by SubCA certificates that carry the 
id-kp-OCSPSigning EKU for Microsoft compatibility.

This is especially bad if the SubCA is controlled by an entity other 
than its direct parent CA.
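To illustrate the gap: a minimal Python sketch of the RFC 6960 section 4.2.2.2 delegation check, written over plain dicts with hypothetical fields ("subject", "issuer", "ekus", "is_ca").  Note the check has no term that penalizes CA:TRUE on the signer:

```python
OCSP_SIGNING = "1.3.6.1.5.5.7.3.9"  # id-kp-OCSPSigning OID

def may_sign_ocsp_for(signer: dict, issuer_name: str) -> bool:
    """RFC 6960 section 4.2.2.2 authorization check, sketched.

    A response signer is accepted if it IS the issuing CA itself, or if
    it is a certificate issued BY that CA carrying id-kp-OCSPSigning.
    Nothing here lets a client reject signers merely for having CA:TRUE.
    """
    if signer["subject"] == issuer_name:
        return True  # the CA signing its own responses directly
    return signer["issuer"] == issuer_name and OCSP_SIGNING in signer["ekus"]

# A SubCA given id-kp-OCSPSigning for Microsoft-compatibility reasons is,
# per the check above, a valid delegated responder for its sibling certs:
subca = {"subject": "SubCA", "issuer": "Root",
         "ekus": [OCSP_SIGNING], "is_ca": True}
print(may_sign_ocsp_for(subca, "Root"))  # True -- the hazard described above
```

The dict fields and names are illustrative only; a real client would perform this check on parsed X.509 structures and would also verify the response signature itself.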


Enjoy

Jakob


Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” for Some Precertificates

2019-09-02 Thread Jakob Bohm via dev-security-policy
On 03/09/2019 00:54, Ryan Sleevi wrote:
> On Mon, Sep 2, 2019 at 2:14 PM Alex Cohn via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> On Mon, Sep 2, 2019 at 12:42 PM Jakob Bohm via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>>> If an OCSP server supports returning (or always returns) properties of
>>> the actual cert, such as the CT proofs, then it really cannot do its
>>> usual "good" responses until the process of retrieving CT proofs and
>>> creating the final TBScertificate (and possibly signing it) has been
>>> completed.
>>>
>>> Thus as a practical matter, treating a sign-CT-sign-CT in-process state
>>> as "unknown serial, may issue in future" may often be the only practical
>>> solution.
>>>
>>
>> Waiting until CT submission of the final certificate is complete to return
>> "good" OCSP responses is definitely wrong. OCSP should return "good" at the
>> moment the final certificate is issued, which means in practice that there
>> might be a "good" OCSP response that doesn't contain SCTs yet.
>>
>> I don't know if any log does this, but RFC6962 allows logs to check for
>> certificate revocation before issuing a SCT; if the OCSP responder doesn't
>> return "good" the CA might never get the needed SCTs?
> 
> 
> Correct. Which is why I recommend CAs ignore Jakob Bohm’s advice here, as
> it can lead to a host of complications for CAs, Subscribers, and Relying
> Parties.
> 

I was not advising either course of action; I was trying to explore 
conflicting requirements between the BRs, PKIX and the CT spec, which 
have apparently led to confusion as to what OCSP should return for 
soon-to-be-issued and not-yet-issued certificates.

This particular CT requirement contradicted a common Google practice 
across its services, which made me suspect it might be a specification 
oversight rather than intentional.


> 
>>
>> Also, if a CA is signing a precert, getting SCTs for that precert, then
>> embedding the SCTs in the final cert, they are already satisfying the
>> browsers' CT requirements. It would not be necessary for them to
>> additionally embed SCTs for the final cert in their OCSP responses.
>>
>> Now depending on interpretations, I am unsure if returning "revoked" for
>>> the general case of "unknown serial, may issue in future" would violate
>>> the ban on unrevoking certificates.
>>>
>>
>> RFC6960 section 2.2 documents a technique for indicating "unknown serial,
>> may issue in future" that involves returning "revoked" with a revocation
>> date of 1970-1-1 and a reason of certificateHold. I don't know if this
>> technique is used anywhere in practice - IIRC it requires the OCSP signing
>> key to be online and able to sign responses for arbitrary serial numbers in
>> real time.
> 
> 
> The BRs explicitly prohibit this.
> 
> You cannot unrevoke or suspend.
> 

That was my interpretation too.

> (Are any CAs even using the OCSP SCT delivery option? I haven't come across
>> this technique in the wild)
> 
> 
> Yes, several are.
> 


Enjoy

Jakob


Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” for Some Precertificates

2019-09-02 Thread Jakob Bohm via dev-security-policy

On 02/09/2019 20:13, Alex Cohn wrote:

On Mon, Sep 2, 2019 at 12:42 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


If an OCSP server supports returning (or always returns) properties of
the actual cert, such as the CT proofs, then it really cannot do its
usual "good" responses until the process of retrieving CT proofs and
creating the final TBScertificate (and possibly signing it) has been
completed.

Thus as a practical matter, treating a sign-CT-sign-CT in-process state
as "unknown serial, may issue in future" may often be the only practical
solution.



Waiting until CT submission of the final certificate is complete to return
"good" OCSP responses is definitely wrong. OCSP should return "good" at the
moment the final certificate is issued, which means in practice that there
might be a "good" OCSP response that doesn't contain SCTs yet.

I don't know if any log does this, but RFC6962 allows logs to check for
certificate revocation before issuing a SCT; if the OCSP responder doesn't
return "good" the CA might never get the needed SCTs?



This seems to be an unfortunate aspect of the CT spec that wasn't
thought through properly.  In particular, it could cause unnecessary
delay if a CA updates its OCSP servers using "eventual consistency"
principles.  Maybe it should be fixed in the next update of the spec.

The BRs have the following requirements that are best satisfied by
delayed update of OCSP servers:

BR 4.10.2: The OCSP servers must be up 24x7.  Thus rolling out updates
to different servers at different times would be a typical best practice.

BR 4.9.10: OCSP responses need only be updated every 4 days (roughly
twice a week).  This could only satisfy the CT requirement if issued
certificates were somehow withheld from the subscriber for up to 4 days.




Also, if a CA is signing a precert, getting SCTs for that precert, then
embedding the SCTs in the final cert, they are already satisfying the
browsers' CT requirements. It would not be necessary for them to
additionally embed SCTs for the final cert in their OCSP responses.

Now depending on interpretations, I am unsure if returning "revoked" for

the general case of "unknown serial, may issue in future" would violate
the ban on unrevoking certificates.



RFC6960 section 2.2 documents a technique for indicating "unknown serial,
may issue in future" that involves returning "revoked" with a revocation
date of 1970-1-1 and a reason of certificateHold. I don't know if this
technique is used anywhere in practice - IIRC it requires the OCSP signing
key to be online and able to sign responses for arbitrary serial numbers in
real time.



The question was whether this technique would violate the BRs, as
those generally prohibit the use of "certificateHold".
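For reference, the RFC 6960 section 2.2 "non-issued" answer consists of just three singleResponse fields.  A sketch (a plain dict, not a DER-encoded response; and, as noted in the thread, the BRs' certificateHold ban likely rules this technique out for publicly trusted CAs):

```python
from datetime import datetime, timezone

def response_for_unissued_serial() -> dict:
    """Fields RFC 6960 section 2.2 suggests for a serial the CA has never
    issued: status "revoked", revocationTime 1970-01-01 (the Unix epoch),
    reason certificateHold.  Sketch only -- a real responder would encode
    this into a signed, DER-encoded OCSPResponse, which also requires the
    signing key to be online for arbitrary serials.
    """
    return {
        "certStatus": "revoked",
        "revocationTime": datetime(1970, 1, 1, tzinfo=timezone.utc),
        "revocationReason": "certificateHold",
    }

print(response_for_unissued_serial()["certStatus"])  # revoked
```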


(Are any CAs even using the OCSP SCT delivery option? I haven't come across
this technique in the wild)

Alex




Enjoy

Jakob


Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” for Some Precertificates

2019-09-02 Thread Jakob Bohm via dev-security-policy
If an OCSP server supports returning (or always returns) properties of 
the actual cert, such as the CT proofs, then it really cannot do its 
usual "good" responses until the process of retrieving CT proofs and 
creating the final TBScertificate (and possibly signing it) has been 
completed.

Thus as a practical matter, treating a sign-CT-sign-CT in-process state 
as "unknown serial, may issue in future" may often be the only practical 
solution.

Now depending on interpretations, I am unsure if returning "revoked" for 
the general case of "unknown serial, may issue in future" would violate 
the ban on unrevoking certificates.

On 31/08/2019 17:07, Jeremy Rowley wrote:
> Obviously I think good is the best answer based on my previous posts. A 
> precert is still a cert. But I can see how people could disagree with me.
> 
> From: dev-security-policy  on 
> behalf of Jeremy Rowley via dev-security-policy 
> 
> Sent: Saturday, August 31, 2019 9:05:24 AM
> To: Tomas Gustavsson ; 
> mozilla-dev-security-pol...@lists.mozilla.org 
> 
> Subject: Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” 
> for Some Precertificates
> 
> I dont recall the cab forum ever contemplating or discussing  ocsp for 
> precertificates. The requirement to provide responses is pretty clear, but 
> what that response should be is a little confusing imo.
> ...


Enjoy

Jakob


Re: 2019.08.28 Let’s Encrypt OCSP Responder Returned “Unauthorized” for Some Precertificates

2019-08-30 Thread Jakob Bohm via dev-security-policy
On 30/08/2019 01:36, Jacob Hoffman-Andrews wrote:
> Also filed at https://bugzilla.mozilla.org/show_bug.cgi?id=1577652
> 
> On 2019.08.28 we read Apple’s bug report at 
> https://bugzilla.mozilla.org/show_bug.cgi?id=1577014 about DigiCert’s OCSP 
> responder returning incorrect results for a precertificate. This prompted us 
> to run our own investigation. We found in an initial review that for 35 of 
> our precertificates, we were serving incorrect OCSP results (“unauthorized” 
> instead of “good”). Like DigiCert, this happened when a precertificate was 
> issued, but the corresponding certificate was not issued due to an error.
> 
> We’re taking these additional steps to ensure a robust fix:
>- For each precertificate issued according to our audit logs, verify that 
> we are serving a corresponding OCSP response (if the precertificate is 
> currently valid).
>- Configure alerting for the conditions that create this problem, so we 
> can fix any instances that arise in the short term.
>- Deploy a code change to Boulder to ensure that we serve OCSP even if an 
> error occurs after precertificate issuance.
> 

For CAs affected by OCSP misbehavior in this particular scenario (Pre-
Cert issued and submitted to CT, actual cert not issued), they should 
take a look at those error cases and subdivide them into:

1. No intent to actually issue the actual cert.  Best handling is to 
  treat as revoked in all revocation protocols and logs, but with audit 
  and incident systems reporting that the cert wasn't actually issued.
  ( *Most common case* ).

2. Intent to actually issue later, if/when something happens that just 
  takes longer than usual.  Here it makes sense for OCSP and other such 
  mechanisms to return "good", due to the ban on reporting "certificate 
  hold" or otherwise exiting a revoked state.  It of course remains 
  possible to later revoke such a half-issued cert if there is a reason 
  to do so.

3. Intent to issue in a few minutes, somehow OCSP was queried during the 
  short processing delay between CT submission and actual issuing of 
  cert with embedded CT proofs.  Because inserting the CT proofs in the 
  OCSP responses probably awaits the same technical condition as the 
  cert issuing, it makes sense to return "unknown" for those few 
  minutes, as when delivery of the cert status to the OCSP servers is 
  itself delayed by a few minutes (up to whatever limit policies set 
  for updating revocation servers with knowledge of new certs).

Scenario 2 can be subdivided in two sub-cases for compliance purposes:

2A: Pre-cert (and thus cert) has a "valid from" date equal or near the 
  CT submission time.  Here the CT logged pre-cert provides proof that 
  this is not backdating, even though cert usage won't start until 
  later.

2B: Pre-cert (and thus cert) has a "valid from" date closer to the 
  intended issuing date for the final cert.  Here the CT logged pre-cert 
  provides proof that certain issuance decisions happened earlier, thus 
  affecting when some of the validity time limits are reached.

Note that for some cert types (such as certs for S/MIME SubCAs), it is 
trivially possible for a subscriber to use the validity before the cert 
actually exists, while in other cases it is not possible, except for the 
difficulty in proving that the cert doesn't exist.
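The proposed handling of the three cases can be summarized as a small status-decision sketch (Python, illustrative only; this mapping is the proposal above, not settled BR policy):

```python
from enum import Enum

class PrecertOutcome(Enum):
    WILL_NOT_ISSUE = 1   # case 1: no intent to issue the final cert
    ISSUE_DELAYED = 2    # case 2: issuance will complete later
    ISSUING_NOW = 3      # case 3: final cert being assembled right now

def ocsp_status(outcome: PrecertOutcome) -> str:
    """Map the three precertificate error cases to an OCSP answer,
    per the proposal above.
    """
    if outcome is PrecertOutcome.WILL_NOT_ISSUE:
        return "revoked"    # treat as revoked everywhere, flag in audit logs
    if outcome is PrecertOutcome.ISSUE_DELAYED:
        return "good"       # certificateHold is banned, so answer "good"
    return "unknown"        # brief window before CT proofs are available

print(ocsp_status(PrecertOutcome.WILL_NOT_ISSUE))  # revoked
```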



Enjoy

Jakob


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-29 Thread Jakob Bohm via dev-security-policy
On 29/08/2019 19:47, Nick Lamb wrote:
> On Thu, 29 Aug 2019 17:05:43 +0200
> Jakob Bohm via dev-security-policy
>  wrote:
> 
>> The example given a few messages above was a different jurisdiction
>> than those two easily duped company registries.
> 
> I see. Perhaps Vienna, Austria has a truly exemplary registry when it
> comes to such things. Do you have evidence of that? I probably can't
> read it even if you do.
> 

I have no specific knowledge, but Austrians probably do, drawing on 
empirical data from their life experience and press coverage of all 
kinds of business- and fraud-related mischief: Did fraudsters get away 
with registering misleading company names?  Were the registrations 
revoked by the authorities soon after discovery?  Was there a serious 
police effort to prosecute?

Same for any other country as known by its citizens.

> But Firefox isn't a Viennese product, it's available all over the
> world. If only some handful of exemplary registries contain trustworthy
> information, you're going to either need to persuade the CAs to stop
> issuing for all other jurisdictions, or accept that it isn't actually
> helpful in general.
> 

The point is you keep bringing up examples from exactly two countries in 
a world with more than 100 countries.

The usefulness of knowing that a Mozilla-accepted and regularly audited 
CA has confirmed the connection to match a government record in a 
country ties directly to the trust that people can reasonably attribute 
to that part of that government.  This in turn is approximately the same 
trust applicable to those government records being reflected in other 
parts of official life, such as phone books, building permits, business 
certificates posted in offices, etc. etc.

In either case there is some residual risk of fraud, as always.


>> You keep making the logic error of concluding from a few examples to
>> the general.
> 
> The IRA's threat to Margaret Thatcher applies:
> 
> We only have to be lucky once. You will have to be lucky always.
> 
> Crooks don't need to care about whether their crime is "generally"
> possible, they don't intend to commit a "general" crime, they're going
> to commit a specific crime.
> 

Almost any anti-crime effort has some probability of success and 
failure.  If a measure would stop the IRA's attacks on Thatcher 99.5% 
of the time, they would, on average, have had to try roughly 140 times 
before reaching even a 50/50 chance of getting lucky.  If another 
measure takes away another 80% of their chance, she would be at 99.9% 
per attempt, and so on.  At some point her chances became good enough 
to actually retire and die of old age, having accumulated even more 
enemies.

One of the measures known to have saved her at least once was a number 
of barriers forcing an IRA rocket attack to fly just far enough to miss.  
Certainly not a perfect measure, but clearly better than nothing.


>> A user can draw conclusions from their knowledge of the legal climate
>> in a jurisdiction, such as how easy it is to register fraudulent
>> untraceable business names there, and how quickly such fraudulent
>> business registrations are shut down by the legal teams of high
>> profile companies such as MasterCard Inc.
> 
> Do you mean knowledge here, or beliefs? Because it seems to me users
> would rely on their beliefs, that may have no relationship whatsoever
> to the facts.
> 

Of course they would use their imperfect knowledge (beliefs) about the 
country they live and survive in.  Knowing what kinds of official 
paperwork to trust is a basic life skill in any society with common 
literacy (illiterates wouldn't be able to read what an official 
document says, nor the words in a browser UI).

>> That opinion still is lacking in strong evidence of anything but spot
>> failures under specific, detectable circumstances.
> 
> We only have to be lucky once.
> 

When fighting a wave of similar crimes committed many times, reducing 
the number of times the criminals get lucky is a win, even if that crime 
is murder instead of theft.

>> Except that any event allowing a crook to hijack http urls to a
>> domain is generally sufficient for that crook to instantly get and
>> use a corresponding DV certificate.
> 
> If the crook hijacks the actual servers, game is over anyway,
> regardless of what type of certificate is used.
> 

Hijacking the authorized server is obviously game over.

Hijacking DNS or IP routing in any repeatable manner can be used to get 
a DV cert and then, a bit later, to present it to victim browsers.  Of 
course, if the hijacking capability happens not to include the view 
from any of the DV-issuing CAs, then that one-two punch fails.

> Domain owners can set CAA (now that it's actually enforced) to deny
> crooks the opportunity from an IP hijack. More sophisticat

Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-29 Thread Jakob Bohm via dev-security-policy
b pages these days) then this assures me it actually links to
> what was intended.
> 
> This is enough to bootstrap WebAuthn (unphishable second factor
> credentials) and similar technologies, to safeguard authentication
> cookies and sandbox active code inside an eTLD+1 or narrower. All very
> useful even though the user isn't aware of them directly.
> 
> For end users it means bookmarks they keep and links they follow from
> outside actually lead where they should, and not somewhere else as
> would trivially happen without this verification.
> 

Except that any event allowing a crook to hijack http urls to a domain 
is generally sufficient for that crook to instantly get and use a 
corresponding DV certificate.

>> But the information that the owner of
>> somebank.eu is a incorporated company from Germany officially called
>> "Somebank AG" is more valuable. Maybe some people don't care and
>> enter their account data happily at s0m1b4nk.xyz, maybe most people
>> do. We don't know and we probably can't know how many people stopped
>> and thought if they are actually at the correct website because the
>> green bar was missing. But I am certain that it was more than zero.
> 
> Why are you certain of this? Just gut feeling?
> 

There is a plausible reason why it can happen, and a sufficiently large 
volume of global web traffic that anything possible will have most 
likely happened.

The surprise nature of this change has given researchers insufficient 
time to do large scale measurements before you remove the object to be 
measured.

>> Why not for example always open a small overlay with information when
>> someone starts entering data in a password field? Something like "You
>> are entering a password at web.page. You visited this page 5 times
>> before, first on August 4th 2019. We don't know anything about the
>> owner" or for EV "You are entering a password at web.page. You
>> visited this page 5 times before, first on August 4th 2019. This
>> server is run by "WebPage GmbH" from Vienna, Austria [fancy flag
>> picture]".

Anything inside the viewing frame belongs to the website and can be 
trivially spoofed with identically looking HTML.  Any indication of the 
trustworthiness of the content source needs to be in the screen areas 
exclusively belonging to the browser, browser plugins and other trusted 
software.

This is the basic inside/outside barrier studied by philosophers and 
psychiatrists for more than a century, and instinctive in most human 
beings.

> 
> This server is run by "Authorised Web Site" from London, UK [Union
> flag].
> 
> Sounds legitimate.
> 
> Remember, the British government doesn't care that Authorised Web Site
> is a stupid name for a company, that its named officers are the
> characters in Toy Story, that its claimed offices are a building site,
> nor even that it has never filed (and never will file) any business
> accounts. They collected their registration fee and that's all they
> ever cared about.
> 

Yes, I think you have repeatedly used the failures of the UK and US 
company registries as a reason to dismiss all other governments.



Enjoy

Jakob


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-27 Thread Jakob Bohm via dev-security-policy
On 27/08/2019 08:03, Peter Gutmann wrote:
> Jakob Bohm via dev-security-policy  
> writes:
> 
>> <https://www.typewritten.net/writer/ev-phishing/> and
>> <https://stripe.ian.sh/> both took advantage of weaknesses in two
>> government registries
> 
> They weren't "weaknesses in government registries", they were registries
> working as designed, and as intended.  The fact that they don't work in
> they way EV wishes they did is a flaw in EV, not a problem with the
> registries.
> 

"Working as designed" doesn't mean "working as it should".

The confusion that could be created online by getting EV certificates 
matching those company registrations was almost the same as what the 
registrations could create directly in the offline world.


>> Both demonstrations caused the researchers real name and identity to become
>> part of the CA record, which was hand waved away by claiming that could
>> have been avoided by criminal means.
> 
> It wasn't "wished away", it's avoided without too much trouble by criminals,
> see my earlier screenshot of just one of numerous black-market sites where
> you can buy fraudulent EV certs from registered companies.  Again, EV may
> wish this wasn't the case, but that's not how the real world works.
> 

The screenshots you showed were for EV code signing certificates, not 
EV TLS certificates.  They seem related to a report from a few years 
ago that spurred work to check the veracity of those screenshots and 
to create appropriate countermeasures.

>> A 12-year-old study involving an equally outdated browser.
> 
> So you've published a more recent peer-reviewed academic study that
> refutes the earlier work?  Could you send us the reference?
> 

These two studies are outdated because they study the effects in a 
different overall situation (they were both made when the TLS EV concept 
had not yet been globally deployed).  They are thus based on entirely 
different facts (measured and unmeasured) than the situation in 2019.

Very early in this thread someone quoted from a very recent study 
published at USENIX, comparing the prevalence of malicious sites with 
different types of certificates.  The only response was platitudes, 
such as emphasizing that a small number was nonzero.

Someone is trying very hard to create a fait accompli without going 
through proper debate and voting in relevant organizations such as 
the CAB/F.  So when challenged, they play very dirty, using every 
rhetorical trick they can find to overpower criticism of the action.



Enjoy

Jakob


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-26 Thread Jakob Bohm via dev-security-policy
On 26/08/2019 21:49, Jonathan Rudenberg wrote:
> On Mon, Aug 26, 2019, at 15:01, Jakob Bohm via dev-security-policy wrote:
>> <https://www.typewritten.net/writer/ev-phishing/> and
>> <https://stripe.ian.sh/> both took advantage of weaknesses in two
>> government registries to create actual dummy companies with misleading
>> names, then trying to get EV certs for those (with mixed success, as at
>> least some CAs rejected or revoked the certs despite the government
>> failures).
> 
> There were no "weaknesses" or "government failures" here, everything was 
> operating exactly as designed.
> 

The weakness is that those two government registries don't prevent 
conflicting or obviously bad registrations, not even by retroactively 
aborting the registration within a few business days.

Even without the Internet, this constitutes an obvious avenue for fraud.

> 
>> At least the first of those demonstrations involved a no
>> longer trusted CA (Symantec).
> 
> This doesn't appear to be relevant. The process followed was compliant with 
> the EVGLs, and Symantec was picked because they were one of the most popular 
> CAs at the time.
> 

Symantec was distrusted for sloppy operation.  That document version 
(which we have since been informed was not the final version) claimed 
that the only other CA tried did in fact reject the cert application, 
indicating that issuing it may not have followed "best current 
practice" at the time.  The revised link posted tonight reverses this 
information.

> 
>> Both demonstrations caused the
>> researchers real name and identity to become part of the CA record,
>> which was hand waved away by claiming that could have been avoided by
>> criminal means.
> 
> It's not handwaving to make the assertion that a fraudster would be willing 
> to commit fraud while committing fraud. Can you explain why you think this 
> argument is flawed?
> 

The EVG requires the CA to attempt to verify the personal identity 
information.  Stating without evidence that this verification is easily 
defrauded is hand waving it away.

> 
>> Studies quoted by Tom Ritter on 24/08/2019:
>>
>>>
>>> "By dividing these users into three groups, our controlled study
>>> measured both the effect of extended validation certificates that
>>> appear only at legitimate sites and the effect of reading a help file
>>> about security features in Internet Explorer 7. Across all groups, we
>>> found that picture-in-picture attacks showing a fake browser window
>>> were as effective as the best other phishing technique, the homograph
>>> attack. Extended validation did not help users identify either
>>> attack."
>>>
>>> https://www.adambarth.com/papers/2007/jackson-simon-tan-barth.pdf
>>>
>>
>> A 12-year-old study involving an equally outdated browser.
> 
> Can you explain why you believe the age this study is disqualifying? What 
> components of the study do you believe are no longer valid due to their age? 
> Are you aware of subsequent studies showing different results?
> 

IE7 may have had a bad UI that has since been changed.  Twelve years 
ago, there had not been any big outreach campaigns telling users to look 
for the green bar, nor a 10-year build-up of user expectation that it 
would be there for such sites.


> 
>>> "Our results showed that the identity indicators used in the
>>> unmodified FF3 browser did not influence decision-making for the
>>> participants in our study in terms of user trust in a web site. These
>>> new identity indicators were ineffective because none of the
>>> participants even noticed their existence."
>>>
>>> http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.543.2117&rep=rep1&type=pdf
>>>
>>
>> An undated(!) study involving highly outdated browsers.  No indication
>> this was ever in a peer reviewed journal.
> 
> This is a peer-reviewed paper that was published in the proceedings of 
> ESORICS 2008: 13th European Symposium on Research in Computer Security, 
> Málaga, Spain, October 6-8, 2008. Dates are actually (unfortunately) uncommon 
> on CS papers unless the publication metadata/frontmatter is intact.
> 

The link posted on Saturday did not in any way provide that publication 
data; attempting to remove the "type=pdf" parameter from the link just 
provided a 404 rather than the expected metadata page or link, which is 
probably a failure of the citeseerx software.

Once again, the study is more than 10 years old, not reflecting the 
public consciousness after years of outreach and user experience.


> 
>>> DV is sufficient. Why pay for something you don't need?

Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-26 Thread Jakob Bohm via dev-security-policy

On 24/08/2019 05:55, Tom Ritter wrote:

On Fri, 23 Aug 2019 at 22:53, Daniel Marschall via dev-security-policy
 wrote:


Am Freitag, 23. August 2019 00:50:35 UTC+2 schrieb Ronald Crane:

On 8/22/2019 1:43 PM, kirkhalloregon--- via dev-security-policy wrote:

Whatever the merits of EV (and perhaps there are some -- I'm not
convinced either way) this data is negligible evidence of them. A DV
cert is sufficient for phishing, so there's no reason for a phisher to
obtain an EV cert, hence very few phishing sites use them, hence EV
sites are (at present) mostly not phishing sites.


Can you prove that your assumption "very few phishing sites use EV (only) because 
DV is sufficient" is correct?


As before, the first email in the thread references the studies performed.


The (obviously outdated) studies quoted below were NOT referenced by the
first message in this thread.  The first message only referenced two
highly unpersuasive demonstrations of the mischief possible in
controlled experiments.

<https://www.typewritten.net/writer/ev-phishing/> and
<https://stripe.ian.sh/> both took advantage of weaknesses in two
government registries to create actual dummy companies with misleading
names, then tried to get EV certs for those (with mixed success, as at
least some CAs rejected or revoked the certs despite the government
failures).  At least the first of those demonstrations involved a no
longer trusted CA (Symantec).  Both demonstrations caused the
researchers' real names and identities to become part of the CA record,
which was hand-waved away by claiming that could have been avoided by
criminal means.


Studies quoted by Tom Ritter on 24/08/2019:



"By dividing these users into three groups, our controlled study
measured both the effect of extended validation certificates that
appear only at legitimate sites and the effect of reading a help file
about security features in Internet Explorer 7. Across all groups, we
found that picture-in-picture attacks showing a fake browser window
were as effective as the best other phishing technique, the homograph
attack. Extended validation did not help users identify either
attack."

https://www.adambarth.com/papers/2007/jackson-simon-tan-barth.pdf



A 12-year-old study involving an equally outdated browser.


"Our results showed that the identity indicators used in the
unmodified FF3 browser did not influence decision-making for the
participants in our study in terms of user trust in a web site. These
new identity indicators were ineffective because none of the
participants even noticed their existence."

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.543.2117&rep=rep1&type=pdf



An undated(!) study involving highly outdated browsers.  No indication
this was ever in a peer reviewed journal.


DV is sufficient. Why pay for something you don't need?



An unproven claim, especially in studies from before free DV (obtainable
without traceable credit card payments) became the norm.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Jurisdiction of incorporation validation issue

2019-08-23 Thread Jakob Bohm via dev-security-policy
ed each sub-entity by an EAN number (as in the 13 digit 
number system for product barcodes!), which was subsequently changed to 
many of the larger entities instead getting numbers from the companies 
registry (currently up to 8 digits, with older registrants having 
shorter numbers).  However, there is an online database for mapping 
numbers in both systems to entity names (but not the other way!), and 
of course the full searchability of the companies database.


> 
>> 3. The fact that a government data source lists the incorporation
>>locality of a company, doesn't mean that this locality detail is
>>actually a relevant part of the jurisdictionOfIncorporation.  This
>>essentially depends on whether the rules in that country ensure uniqueness of
>>both the company number and company name at a higher jurisdiction
>>level (national or state) to the same degree as at the lower level.
>> For example, in the US the company name "Stripe" is not unique
>>nationwide.
> 
> Right - this depends on where the formation/registration occurs. That's
> captured in the EV guidelines.
> 

Unfortunately, there is no consistent mapping between the general words 
of the EVG and the variable practice of various governments.

Again for C=DK, there is an old tradition that incorporation paperwork 
states the county of incorporation, even though for many decades now the 
registration has actually been done in country-level computer systems that 
capture the text of that paperwork.  Thus someone reading the wording of 
company bylaws would assume all companies are registered and incorporated 
at the county level, because the bylaws will usually not even mention the 
country (or the registration number, as the initial bylaws must be 
submitted to get a number).





Enjoy

Jakob


Re: Jurisdiction of incorporation validation issue

2019-08-23 Thread Jakob Bohm via dev-security-policy

On 23/08/2019 04:29, Jeremy Rowley wrote:

I posted this tonight: https://bugzilla.mozilla.org/show_bug.cgi?id=1576013. It's sort of 
an extension of the "some-state" issue, but with the incorporation information 
of an EV cert.  The tl;dr of the bug is that sometimes the information isn't perfect 
because of user entry issues.

What I was hoping to do is have the system automatically populate the 
jurisdiction information based on the incorporation information. For example, 
if you use the Delaware secretary of state as the source, then the system 
should auto-populate Delaware as the State and US as the jurisdiction. And it 
does...with some.

However, you do have jurisdictions like Germany that consolidate incorporation 
information to www.handelsregister.de, so you 
can't actually tell which area is the incorporation jurisdiction until you do a 
search. Thus, the fields allow some user input. That user input is what hurts.
In the end, we're implementing an address check that verifies the 
locality/state/country combination.

The more interesting part (in my opinion) is how to find and address these 
certs. Right now, every time we have an issue or whenever a guideline changes 
we write a lot of code, pull a lot of certs, and spend a lot of time reviewing. 
Instead of doing this every time, we're going to develop a tool that will run 
automatically every time we change a validation rule to find everything else 
that will fail the new update rules. In essence, we're building unit tests on the 
data. What I like about this approach is it ends up building a system that lets 
us see how all the rule changes interplay, since sometimes they may interact in 
weird ways. It'll also let us more easily measure the impact of changes on the system. 
Anyway, I like the idea. Thought I'd share it here to get feedback and 
suggestions for improvement. Still in spec phase, but I can share more info as 
it gets developed.
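The "unit tests on the data" idea can be sketched as a small rule engine that re-audits already-issued certificate records whenever a rule changes. This is a minimal illustration only — the record layout, the sample rule, and the corpus are all hypothetical, not any CA's actual tooling:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class CertRecord:
    serial: str
    subject: dict  # e.g. {"C": "US", "ST": "Delaware", "L": "Dover"}

# A rule inspects one record and returns an error message, or None if it passes.
Rule = Callable[[CertRecord], Optional[str]]

def state_must_be_full_name(cert: CertRecord) -> Optional[str]:
    # Hypothetical rule: flag ST values that look like two-letter abbreviations.
    st = cert.subject.get("ST", "")
    if st and len(st) <= 2:
        return f"ST looks abbreviated: {st!r}"
    return None

RULES: list[Rule] = [state_must_be_full_name]

def audit(certs: list[CertRecord]) -> dict[str, list[str]]:
    """Map serial number -> rule failures, for every non-compliant record."""
    failures: dict[str, list[str]] = {}
    for cert in certs:
        errs = [err for rule in RULES if (err := rule(cert)) is not None]
        if errs:
            failures[cert.serial] = errs
    return failures

corpus = [
    CertRecord("01", {"C": "US", "ST": "Delaware"}),
    CertRecord("02", {"C": "US", "ST": "DE"}),
]
print(audit(corpus))  # {'02': ["ST looks abbreviated: 'DE'"]}
```

When a guideline changes, adding the corresponding rule to `RULES` and re-running `audit` over the full corpus yields exactly the set of certificates affected by the change — which is the impact-measurement property described above.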

Thanks for listening.



Additional issues seen at some CAs (not necessarily Digicert):

1. I believe the BRs and/or underlying technical standards are not very
  clear on whether the ST field should be a full name ("California") or
  an abbreviation ("CA").

2. The fact that a country has subdivisions listed in the general ISO
  standard for country codes doesn't mean that those are always part of
  the jurisdiction of incorporation and/or address.

3. The fact that a government data source lists the incorporation
  locality of a company, doesn't mean that this locality detail is
  actually a relevant part of the jurisdictionOfIncorporation.  This
  essentially depends on whether the rules in that country ensure uniqueness of
  both the company number and company name at a higher jurisdiction
  level (national or state) to the same degree as at the lower level.
   For example, in the US the company name "Stripe" is not unique
  nationwide.

In practice this means that validation specialists need to draw up
various common facts for each country served by a CA, and keep those up 
to date.


As a non-expert citizen, I believe the proper details for my own country 
(C=DK) are:


1. ST= should be Greenland or Grønland (Greenland self-governing
  territory aka .gl), Faeroe Islands or Færøerne (Faeroe Islands self-
  governing territory aka .fo) or omitted (main country, under the
  central government).  Other territories were lost more than 100 years
  ago and can't occur in current certificate subjects.

2. Company numbers for the main country are numbers from the online CVR
  database; they are the same as VAT numbers except: no leading DK, not
  all companies have a VAT registration, and not all VAT registrations
  are companies (some are actually the social security numbers of the
  owners of sole proprietorships).  Other private organizations are
  often listed in CVR too.

3. Government institutions at all levels have numbers from a database
  used for electronic billing in OIO UBL XML formats.  Some of those
  numbers are CVR numbers (like for companies), some are SE numbers and
  some are EAN/GLN numbers.

4. Postal codes are 4 digits (leading 0 only occurs in some special
  cases, DK prefix is added on international physical mail, but is not
  actually part of the postal code).

5. The new code number types added in EV 1.7.0 require additional
  research on how they officially map to Danish public administration.

But a CA validation team should research this further to set up proper
templates and scripts for validating EV/OV/IV applicants claiming C=DK.
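To illustrate what such a validation template might look like in code, here is a sketch of shape checks for C=DK company numbers and postal codes, following points 2 and 4 above. The modulus-11 weights are the commonly documented check for CVR numbers, but treat them as an assumption to be verified against the official CVR documentation; the sample numbers in the test are synthetic:

```python
import re

# Commonly documented modulus-11 weights for the 8-digit CVR number
# (assumption -- a validation team should verify against official CVR docs).
CVR_WEIGHTS = (2, 7, 6, 5, 4, 3, 2, 1)

def looks_like_dk_cvr(value: str) -> bool:
    """Shape + check-digit test for a Danish CVR number.

    Strips an optional 'DK' VAT prefix first, since CVR numbers match
    VAT numbers without the leading DK (see point 2 above).
    """
    digits = value.upper().removeprefix("DK")
    if not re.fullmatch(r"\d{8}", digits):
        return False
    return sum(int(d) * w for d, w in zip(digits, CVR_WEIGHTS)) % 11 == 0

def looks_like_dk_postal_code(value: str) -> bool:
    # 4 digits; a 'DK-' prefix appears on international physical mail but
    # is not part of the postal code itself (see point 4 above).
    return re.fullmatch(r"\d{4}", value.removeprefix("DK-")) is not None
```

Checks like these only validate shape; a validation specialist would still have to confirm the number against the live CVR database.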


Enjoy

Jakob

Re: CA handling of contact information when reporting problems

2019-08-19 Thread Jakob Bohm via dev-security-policy

On 20/08/2019 03:15, Corey Bonnell wrote:

On Monday, August 19, 2019 at 10:26:06 AM UTC-4, Mathew Hodson wrote:

Tom Wassenberg on Twitter reported an experience he had with Sectigo
when reporting a compromised private key.

https://twitter.com/tomwas54/status/1162114413148725248
https://twitter.com/tomwas54/status/1162114465065840640
https://twitter.com/tomwas54/status/1162114495017299976

"So a few weeks ago, I came across a private key used for a TLS
certificate, posted online. These should never be public (hence the
"private"), and every trusted CA is obliged to revoke any certificate
they issued when they become aware its private key is compromised.

"So when I informed the issuing CA (@SectigoHQ) about this, they
promptly revoked the cert. Two weeks later however, I receive an angry
email from the company using the cert (cc'd to their lawyer), blaming
me for a disruption in the services they provide.

"The company explicitly mentioned @SectigoHQ "was so kind" to give
them my contact info! It was a complete surprise for me that
@SectigoHQ would do this without my consent. Especially seeing how the
info was used to badger me."

If these situations were common, it could create a chilling effect on
problem reporting that would hurt the WebPKI ecosystem. Are specific
procedures and handling of contact information in these situations
covered by the BRs or Mozilla policy?


Many CAs disclose the reporter's name and email address as part of their 
response to item 1 of the incident report template [1]. So this information is 
already publicly available if the Subscriber were so inclined to look for it.

Section 9.6.3 of the BRs list the provisions that must be included in the 
Subscriber Agreement that every Applicant must agree to. Notably, one of them 
is protection of the private key. The Subscriber in this case materially 
violated the Subscriber Agreement by disclosing their private key, so I don't 
think they have much footing to go badgering others for problems that they 
brought on themselves.

Thanks,
Corey

[1] https://wiki.mozilla.org/CA/Responding_To_An_Incident#Incident_Report



The question was whether this is appropriate behavior when the incident is
not at the CA but at a subscriber.  This is typically different from
the case of security researchers filing systemic CA issues, for which
they typically want public recognition.

Specifically, the question is one of whistleblower protection for the
reporter (in the general sense of whistleblower protection, not that of
any single national or other whistleblower protection legal regime).

On the other hand, there is the question of subscribers having a right
to face their accuser when there might be a question of trust or
subjectivity (example: someone with trusted subscriber private key
access maliciously sending it to the CA to cause revocation for failure
to protect said key).

The situation would get much more complicated when the report is one
claiming that a subscriber violates a subjective rule, such as malicious
cert use or name ownership conflicts.



Enjoy

Jakob


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-17 Thread Jakob Bohm via dev-security-policy
e. So, at least for me, the UI
change is very negative (except if you color the pad lock in a different
color, that would be OK for me). We cannot say that all users don't care
about the EV indicator. For some users like me, it is important.



Note: Google, as a major proponent of this change, has (almost?) no
website EV certs.


(3) Also, I wanted to ask, if you want to remove the UI indicator,

because

you think that EV certificates give the feeling of false security, then
please tell me: What is the alternative? Removing the UI bling without
giving any alternative solution is just wrong in my opinion. Yes, there
might be a tiny amount of phishing sites that use EV certificates, but

the

EV indicator bar is still better than just nothing. AntiPhishing filters
are not a good alternative because they only protect when the harm is
already done to some users.



Enjoy

Jakob


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-16 Thread Jakob Bohm via dev-security-policy

On 17/08/2019 03:15, Peter Gutmann wrote:

Corey Bonnell via dev-security-policy  
writes:


the effectiveness of the EV UI treatment is predicated on whether or not the
user can memorize which websites always use EV certificates *and* no longer
proceed with using the website if the EV treatment isn't shown. That's a huge
cognitive overhead for everyday web browsing


In any case things like Perspectives and Certificate Patrol already do this
for you, with no overhead for the user, and it's not dependent on whether the
cert is EV or not.  They're great add-ons for detecting sudden cert changes.

Like EV certs though, they have no effect on phishing.  They do very
effectively detect MITM, but for most users it's phishing that's the real
killer.



Your legendary dislike for all things X.509 is showing.  You are
constantly arguing that because they are not perfect, they are useless,
while ignoring any and all improvements since your original write-ups.

You really should look at the long term agendas at work here and
reconsider what you may be inadvertently supporting.


Enjoy

Jakob


Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-14 Thread Jakob Bohm via dev-security-policy

On 14/08/2019 18:18, Peter Bowen wrote:

On Tue, Aug 13, 2019 at 4:24 AM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


A policy of switching from positive to negative indicators of security
differences is no justification to switch to NO indication.  And it
certainly doesn't help user understanding of any indicator to
arbitrarily change it with 3 days of no meaningful discussion.

The only thing that was insecure with Firefox EV has been that the
original EV indicator only displayed the O= and C= field without enough
context (ST, L).  The change fixes nothing, but instead removes the direct
indication of
the validation strength (low-effort DV vs. EV) AND removes the one piece
of essential context that was previously there (country).

If something should be done, it would be to merge the requirements for
EV and OV with an appropriate transition period to cause the distinction
to disappear (so at least 2 years from new issuance policy).  UI
indication should continue to distinguish between properly validated OV
and the mere "enable encryption with no real checks" DV certificates.



I have to admit that I'm a little confused by this whole discussion.  While
I've been involved with PKI for a while, I've never been clear on the
problem(s) that need to be solved that drove the browser UIs and creation
of EV certificates.


EV was originally an initiative to make the CAs properly vet OV
certificates, and to mark those CAs that had done a proper job.
EV issuing CAs were permitted to still sell the sloppily validated
OV certs to compete against the CAs that hadn't yet cleaned up their
act.

This was before the BRs took effect, meaning that the bar for issuing OV
certs was very low.

To heavy-handedly pressure the bad CAs to get in line, Firefox 
simultaneously started to display exaggerated and untruthful warnings 
for OV certificates, essentially telling users they were merely DV 
certificates.


So the intended long term benefit would be that less reliable CAs would
exit the market, making the certificate information displayed more
reliable for users.

The intended short term benefit would be to prevent users from believing
unvalidated certificate information from CAs that didn't check things 
properly.


As BRs and audits for OV certs have been ramped up, the difference 
between OV and EV has become less significant, while the difference 
between DV and OV has massively increased.


Thus blurring the line between OV and EV could now be justified, but 
blurring the line between DV and EV can not.






On thing I've found really useful in working on user experience is to
discuss things using problem & solution statements that show the before and
after.  For example, "It used to take 10 minutes for the fire sprinklers to
activate after sensing excessive heat in our building.  With the new
sprinkler heads we installed they will activate within 15 seconds of
detecting heat above 200ºC, which will enable fire suppression long before
it spreads."



It used to be easy for fraudsters to get an OV certificate with untrue 
company information from smaller CAs.  By only displaying company 
information for more strictly checked EV certificates, it now becomes 
much more difficult for fraudsters to pretend to be someone else, making 
fewer users fall for such scams.


Displaying an overly truncated form of the company information, combined 
with genuine high-trust companies (banks, credit card companies) often 
using obscure subsidiary names instead of their user trusted company 
names for their EV certs has greatly reduced this benefit.





If we assume for a minute that Firefox had no certificate information
anywhere in the UI (no subject info, no issuer info, no way to view chains,
etc), what user experience problem would you be solving by adding
information about certificates to the UI?


This hasn't been the case since before Mozilla was founded.

But let's assume we started from there; the benefit would be to tell
users when they were dealing with the company they know from the
physical world versus someone quite unlike them.

Making this visible with as few (maybe 0) extra user actions increases
the likelihood that users will spot the problem when there is one.



Enjoy

Jakob


Re: Fwd: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-13 Thread Jakob Bohm via dev-security-policy

DO NOT SHIP THIS.  Revert the change immediately and request a CVE
number for the nightlies with this change included.

That Chrome does something harmful is not surprising, and is no
justification for a supposedly independent browser to do the same.

A policy of switching from positive to negative indicators of security
differences is no justification to switch to NO indication.  And it
certainly doesn't help user understanding of any indicator to
arbitrarily change it with 3 days of no meaningful discussion.

The only thing that was insecure with Firefox EV has been that the
original EV indicator only displayed the O= and C= field without enough
context (ST, L).  This was used to create tons of uninformed debate
in order to later present that noise as "extensive discusison [SIC] in
the security community about the usefulness of EV certificates".

The change fixes nothing, but instead removes the direct indication of
the validation strength (low-effort DV vs. EV) AND removes the one piece
of essential context that was previously there (country).

If something should be done, it would be to merge the requirements for
EV and OV with an appropriate transition period to cause the distinction
to disappear (so at least 2 years from new issuance policy).  UI
indication should continue to distinguish between properly validated OV
and the mere "enable encryption with no real checks" DV certificates.

On 12/08/2019 20:30, Wayne Thayer wrote:

Mozilla has announced that we plan to relocate the EV UI in Firefox 70,
which is expected to be released on 22-October. Details below.

If the before and after images are stripped from the email, you can view
them here:

Before:
https://lh4.googleusercontent.com/pSX4OAbkPCu2mhBfeleKKe842DgW28-xAIlRjhtBlwFdTzNhtNE7R43nqBS1xifTuB0L8LO979yhpPpLUIOtDdfJd3UwBmdxFBl7eyX_JihYi7FqP-2LQ5xw4FFvQk2bEObdKQ9F

After:
https://lh5.googleusercontent.com/kL-WUskmTnKh4vepfU3cSID_ooTXNo9BvBOmIGR1RPvAN7PGkuPFLsSMdN0VOqsVb3sAjTsszn_3LjRf4Q8eoHtkrNWWmmxOo3jBRoEJV--XJndcXiCeTTAmE4MuEfGy8RdY_h5u

- Wayne

-- Forwarded message -
From: Johann Hofmann 
Date: Mon, Aug 12, 2019 at 1:05 AM
Subject: Intent to Ship: Move Extended Validation Information out of the
URL bar
To: Firefox Dev 
Cc: dev-platform , Wayne Thayer <
wtha...@mozilla.com>


In desktop Firefox 70, we intend to remove Extended Validation (EV)
indicators from the identity block (the left hand side of the URL bar which
is used to display security / privacy information). We will add additional
EV information to the identity panel instead, effectively reducing the
exposure of EV information to users while keeping it easily accessible.

Before:


After:


The effectiveness of EV has been called into question numerous times over
the last few years; there are serious doubts whether users notice the
absence of positive security indicators, and proofs of concept have been pitting
EV against domains <https://www.typewritten.net/writer/ev-phishing/> for
phishing.

More recently, it has been shown <https://stripe.ian.sh/> that EV
certificates with colliding entity names can be generated by choosing a
different jurisdiction. 18 months have passed since then and no changes
that address this problem have been identified.

The Chrome team recently removed EV indicators from the URL bar in Canary
and announced their intent to ship this change in Chrome 77
<https://groups.google.com/a/chromium.org/forum/#!topic/security-dev/h1bTcoTpfeI>.
Safari is also no longer showing the EV entity name instead of the domain
name in their URL bar, distinguishing EV only by the green color. Edge is
also no longer showing the EV entity name in their URL bar.



On our side a pref for this
(security.identityblock.show_extended_validation) was added in bug 1572389
<https://bugzilla.mozilla.org/show_bug.cgi?id=1572389> (thanks :evilpie for
working on it!). We're planning to flip this pref to false in bug 1572936
<https://bugzilla.mozilla.org/show_bug.cgi?id=1572936>.

Please let us know if you have any questions or concerns,

Wayne & Johann




Enjoy

Jakob


Re: How to use Cross Certificates to support Root rollover

2019-08-05 Thread Jakob Bohm via dev-security-policy
t stores update (which, in the case 
of browsers, can be quite fast), your customer could chose to begin omitting 
that intermediate, and rely on intermediate preloading (Firefox) or AIA 
(everyone else). In this model, the AIA for your 'issuing intermediate' would 
point to a URL that contained your cross-signed intermediate, which would then 
allow them to build the path to the legacy root. Clients with your new root 
would build and prefer the shorter path, because they'd have a trust anchor 
matching that (root, key) combination, while legacy clients could still build 
the legacy path.

Can you explain the logic in your last statement above about AIA?  What is the 
Issuing Intermediate in this example – is it the one that is signed by the new 
root but has an AIA pointing to the cross certificate?  Interesting – I never 
thought about including a certIssuer link in an AIA of a CA that was signed by a 
root.

  


Your last statement (building the shorter path) doesn’t seem to be how Chrome 
does it, or at least that is not how the Chrome certificate viewer displays the 
chain, as discussed above (unless you’re assuming the not-before dates are 
adjusted to be identical)
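The path-preference behaviour being debated here can be modelled with a toy sketch: a client picks the shortest candidate chain that terminates in one of its trust anchors. This is purely illustrative — the chain contents are hypothetical labels, and real path building also verifies signatures, validity periods, and name/policy constraints:

```python
def select_chain(candidate_chains, trusted_roots):
    """Pick the shortest candidate chain ending in a trusted root, or None."""
    usable = [chain for chain in candidate_chains if chain[-1] in trusted_roots]
    return min(usable, key=len) if usable else None

chains = [
    # Short path, anchored at the new root.
    ["leaf", "issuing-intermediate", "new-root"],
    # Long path through the cross-signed intermediate, anchored at the old root.
    ["leaf", "issuing-intermediate", "cross-signed-intermediate", "old-root"],
]

# An updated client trusting both roots prefers the shorter path:
print(select_chain(chains, {"new-root", "old-root"})[-1])  # new-root
# A legacy client trusting only the old root falls back to the cross-sign:
print(select_chain(chains, {"old-root"})[-1])  # old-root
```

Whether a given client actually prefers the shorter path like this, or simply displays whichever chain the server sent, is exactly the behavioural difference discussed above.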

  


Yea, this is the hard part.  We’re assuming the web server operator understands 
this and is capable of making the tradeoffs you outlined above.  Generally they 
want the CA to tell them how to install the certificate.

  


When we change the roots under which we issue SSL certificates, we need to say: 
If you’re not sure if you need the complete interoperability of the old root, 
then install the cross certificate. Even if we say the old root has support for 
a few legacy platforms that are no longer in meaningful use so you should be OK 
without them (Android 1-2, FF 3.0 and earlier, Mozilla 1-2, Safari 1-3, etc.) 
they are likely to install the cross certificate just to be safe.   While we 
can provide them the pros and cons, they don’t want to think about this and 
just want to install their certificate and move forward without impacting any 
current or possible visitors.

  


I’m curious how Google would handle this.  At what point will you start using the 
Google "GTS Root R1” created in 2016 with a cross certificate back to your 
current Root?  It uses 4K and SHA 384 vs. 2K and SHA-1 in your current root, so 
there seem to be clear advantages for using it.


Do you view these as meaningful issues?  Do you know of any CAs that have
taken this approach?

  


Definitely! I don't want to sound dismissive of these issues, but I do want to 
suggest it's good if we as an industry start tackling these a bit head-on. I'm 
particularly keen to understand more about how and when we can 'sunset' roots. For 
example, if the desire is to introduce a new root in order to transition to 
stronger cryptography, I'd like to understand more about how and when clients get 
the 'strong' chain or the 'weaker' chain and how that selection may change over 
time. I'm sympathetic to 4K roots - while I'd rather we were in a world where 2K 
roots were viable because we were rotating roots more frequently (hence the 
above), 4K roots may make sense given the pragmatic realities that these end up 
being used much longer than anticipated. If that's the case, though, it's 
reasonable to think we'd retire roots <4K, and it's reasonable to think we 
don't need multiple 4K roots. That's why I wanted to flesh out these 
considerations and have that discussion, because I'm not sure that just allowing 
folks to select '2K vs 4K' for a particular CA really helps move the needle far 
enough in user benefit (it does, somewhat, but not as much as 'all 4K', for 
example)


My understanding is that both Symantec / DigiCert and Sectigo have pursued 
paths like this, and they can speak to it in more detail. ISRG / Let's Encrypt 
pursued something similar-but-different, which had the functional goal of 
reducing their dependency on the IdenTrust root in favor of the ISRG root.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Comodo password exposed in GitHub allowed access to internal Comodo files

2019-07-29 Thread Jakob Bohm via dev-security-policy

On 28/07/2019 00:41, Nick Lamb wrote:

On Sun, 28 Jul 2019 00:06:38 +0200
Ángel via dev-security-policy wrote:


A set of credentials mistakenly exposed in a public GitHub repository
owned by a Comodo software developer allowed access to internal Comodo
documents stored in OneDrive and SharePoint:

https://techcrunch.com/2019/07/27/comodo-password-access-data/


It doesn't seem that it affected the certificate issuance system, but
it's an ugly security incident nevertheless.


What was once the Comodo CA is named Sectigo these days, so conveniently
for us this makes it possible to simply ask whether the incident
affected Sectigo at all:

- Does Sectigo in practice share systems with Comodo such that this
   account would have access to Sectigo internal materials ?



Alternative problem scenario (and thus additional question):

- Did the Comodo systems or data compromised include sensitive
 information about Sectigo systems or operations, such as yet-to-be
 fixed security issues?

This could of course be an effect of this information being present in
files that remained in Comodo's possession under a promise of secrecy or
deletion after the two operations split.


In passing it's probably a good time to remind all programme
participants that Multi-factor Authentication as well as being
mandatory for some elements of the CA function itself (BR 6.5.1), is a
best practice for any security sensitive business like yours to be using
across ordinary business functions in 2019. Don't let embarrassing
incidents like this happen to you.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Nation State MITM CA's ?

2019-07-20 Thread Jakob Bohm via dev-security-policy

On 21/07/2019 03:21, My1 wrote:

Hello everyone,

I am new here but also want to share my opinion about some posts here. I know 
it's a lot of text, but I hope it's not too bad.

Am Freitag, 19. Juli 2019 23:42:47 UTC+2 schrieb dav...@gmail.com:

Wouldn't it be easier to just decree that HTTPS is illegal and block all 
outbound 443 (only plain-text readable comms are allowed)?  Then you would not 
have the decrypt-encrypt/decrypt-encrypt slowdown from the MITM.


if you want to block half the internet or more, which is on the way to 
becoming HTTPS-only, go ahead.


If you don't want to make everyone install a certificate:
Issue a double-wildcard certificate (*.*) that can impersonate any site, load 
it on a BlueCoat system, and sell it to a repressive regime:
https://www.dailydot.com/news/blue-coat-syria-iran-spying-software/

Both scenarios end up in the same place: Nobody trusts encryption/SSL or CAs 
anymore.


As far as I remember, certs only allow one wildcard, and only as the very 
left-most label, so they would need at least *.tld and *.common-sub.tld for all 
eTLDs (*.jp doesn't cover *.co.jp).

Also, you say that this is for not wanting everyone to install a certificate. 
Which trusted CA in their right mind would actually do this? CT is becoming FAR 
bigger as time goes on, and if any CA is caught issuing wildcards for 
public suffixes, that CA would be dead instantly.

Also, browsers could just drop certificates with *.(public-suffix) entirely and 
not trust them, no matter the source.



I believe this is either done, or easy to add.
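
The single-wildcard rules described above (one wildcard, only as the entire 
left-most label, matching exactly one label, and refusing public-suffix 
patterns) can be sketched roughly as follows; the tiny PUBLIC_SUFFIXES set is 
an illustrative stand-in for the real Public Suffix List:

```python
# Stand-in for the Mozilla Public Suffix List (illustrative entries only).
PUBLIC_SUFFIXES = {"jp", "co.jp", "com", "uk", "co.uk"}

def wildcard_matches(pattern: str, hostname: str) -> bool:
    """True if certificate name `pattern` covers `hostname` under
    RFC 6125-style rules: at most one wildcard, only as the whole
    left-most label, matching exactly one label; public-suffix
    patterns like *.jp are refused outright."""
    pattern = pattern.lower().rstrip(".")
    hostname = hostname.lower().rstrip(".")
    labels = pattern.split(".")
    if "*" not in pattern:
        return pattern == hostname            # exact match only
    if labels[0] != "*" or "*" in ".".join(labels[1:]):
        return False                          # wildcard must be whole first label
    if ".".join(labels[1:]) in PUBLIC_SUFFIXES:
        return False                          # refuse *.jp, *.co.uk, etc.
    host_labels = hostname.split(".")
    # the wildcard stands for exactly one label, so label counts must match
    return len(host_labels) == len(labels) and host_labels[1:] == labels[1:]
```

So *.jp cannot cover example.co.jp (two labels would have to match the single 
wildcard), and a *.co.jp pattern is rejected before matching is even attempted.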




On Friday, July 19, 2019 at 1:27:17 PM UTC-7, Jakob Bohm wrote:

On 19/07/2019 21:13, andrey...@gmail.com wrote:

I am confused. Since when Mozilla is under obligation to provide customized 
solutions for corporate MITM? IMHO, corporations, if needed, can hire someone 
else to develop their own forks of Chrome/Firefox to do snooping on HTTPS 
connections.

In regular browsers, developed by community effort and with public funds, ALL 
MiTM certificates should be just permanently banned, no?



As others (and I) have mentioned, MitM is also how many ordinary
antivirus programs protect users from attacks.  The hard part is
how to distinguish between malicious and user-helping systems.


I fully agree that this is hard, but one also needs to be aware that 
antiviruses, while not unhelpful, also pose risks by sitting VERY deep in the 
system. An unmonitored MITM action by one could end in disastrous results; in 
the worst case, bank phishing could occur, since the user cannot verify the 
certificate via the browser.



Hence my breakdown and suggestions below, which seem to agree with yours
in most cases.


One suggestion I think might be helpful is to have the entire data available 
while keeping the HTTPS signature, with a mechanism that allows anti-virus 
software to check content before it is executed/loaded and, if needed, put a 
big "do not execute" flag on certain things like script blocks that are clearly 
malicious. The browser could then check the signature to confirm that no 
content has actually been changed, but remove the flagged content while 
displaying a notification that content has been blocked.

That way anti-viruses could only remove elements but not actually change 
anything.



Am Freitag, 19. Juli 2019 22:23:00 UTC+2 schrieb Jakob Bohm:

As someone actually running a corporate network, I would like to
emphasize that it is essential that such mechanisms try to clearly
distinguish the 5 common cases (listed by decreasing harmfulness).

1. A known malicious actor is intercepting communication (such as the
nation state here discussed).

2. An unknown actor is intercepting communication (hard to identify
safely, but there are meaningful heuristic tests).

3. A local/site/company network firewall is intercepting communications
for well-defined purposes known to the user, such as blocking virus
downloads, blocking surreptitious access to malicious sites or
scanning all outgoing data for known parts of site secrets (for
example the Coca-Cola company could block all HTTPS posts containing
their famous recipe, or a hospital could block posts of patient
records to unauthorized parties).  This case justifies a non-blocking
notification such as a different-color HTTPS icon.

4. An on-device security program, such as a local antivirus, does MitM
for local scanning between the browser and the network.  Mozilla could
work with the AV community to have a way to explicitly recognize the
per machine MitM certs of reputable AV vendors (regardless of
political sanctions against some such companies).  For example,
browsers could provide a common cross-browser cross-platform API for
passing the decoded traffic to local antivirus products, without each
AV-vendor writing (sometimes unreliable) plugins for each browser
brand and version, while also not requiring browser vendors to write
specific code for each AV product.

Re: Nation State MITM CA's ?

2019-07-20 Thread Jakob Bohm via dev-security-policy

On 20/07/2019 09:31, simc...@gmail.com wrote:

I think it must be quickly blacklisted by Google, Mozilla and Microsoft all 
together, because it is known to be a state-scale MITM affecting citizens' 
"real" life.

The purpose of HTTPS is being defeated, and the companies who have tried to 
improve network security for the past decade have to react (yes, the security 
and privacy they work on are political).
If browser editors do blacklist it, citizens will be able to rise against this 
privacy attack.

PS: When a MITM CA is known to operate at a company scale, it is not that 
harmful IMHO, because citizens still have privacy at home.



Note that on a technical level, there is no difference between a (small)
company and a home (employees versus family members).

A small company to consider would be a doctor's or lawyer's office, handling
other people's deepest secrets and under an obligation of secrecy even from
the authorities.

A large home to consider could be 4 generations living together, with
8 to 10 children and 4 spouses for each in each generation, but in
relative poverty.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Nation State MITM CA's ?

2019-07-19 Thread Jakob Bohm via dev-security-policy

On 19/07/2019 21:13, andrey.at.as...@gmail.com wrote:

I am confused. Since when Mozilla is under obligation to provide customized 
solutions for corporate MITM? IMHO, corporations, if needed, can hire someone 
else to develop their own forks of Chrome/Firefox to do snooping on HTTPS 
connections.

In regular browsers, developed by community effort and with public funds, ALL 
MiTM certificates should be just permanently banned, no?



As others (and I) have mentioned, MitM is also how many ordinary
antivirus programs protect users from attacks.  The hard part is
how to distinguish between malicious and user-helping systems.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Nation State MITM CA's ?

2019-07-19 Thread Jakob Bohm via dev-security-policy

On 19/07/2019 16:52, Troy Cauble wrote:

On Thursday, July 18, 2019 at 8:26:43 PM UTC-4, wolfgan...@gmail.com wrote:


Even on corporate hardware I would like at least a notification that this is
happening.



I like the consistency of a reminder in all cases, but this
might lead to corporate policies to use other browsers.



As someone actually running a corporate network, I would like to
emphasize that it is essential that such mechanisms try to clearly
distinguish the 5 common cases (listed by decreasing harmfulness).

1. A known malicious actor is intercepting communication (such as the
  nation state here discussed).

2. An unknown actor is intercepting communication (hard to identify
  safely, but there are meaningful heuristic tests).

3. A local/site/company network firewall is intercepting communications
  for well-defined purposes known to the user, such as blocking virus
  downloads, blocking surreptitious access to malicious sites or
  scanning all outgoing data for known parts of site secrets (for
  example the Coca-Cola company could block all HTTPS posts containing
  their famous recipe, or a hospital could block posts of patient
  records to unauthorized parties).  This case justifies a non-blocking
  notification such as a different-color HTTPS icon.

4. An on-device security program, such as a local antivirus, does MitM
  for local scanning between the browser and the network.  Mozilla could
  work with the AV community to have a way to explicitly recognize the
  per machine MitM certs of reputable AV vendors (regardless of
  political sanctions against some such companies).  For example,
  browsers could provide a common cross-browser cross-platform API for
  passing the decoded traffic to local antivirus products, without each
  AV-vendor writing (sometimes unreliable) plugins for each browser
  brand and version, while also not requiring browser vendors to write
  specific code for each AV product.  Maybe the ICAP protocol used for
  virus scanning in firewalls, but run against 127.0.0.1 / ::1 (RFC3507
  only discusses its use for HTTP filtering, but it was widely used for
  scanning content from mail protocols etc. and a lot less for
  insertion of advertising which is in the RFC).

5. A site, organization or other non-member CA that issues only non-MitM
  certificates according to a user-accepted policy.  Those would
  typically only issue for domains that request this or are otherwise
  closely aligned with the user organization.  Such a CA would
  (obviously) not be bound by Mozilla or CAB/F policies, but may need to
  do some specific token gestures to programmatically clarify their
  harmlessness, such as not issuing certs for browser pinned domains,
  only issue for domains listing them in CAA records or outside public
  DNS or similar.

I am aware of at least one system being overly alarmist about our
internal type 5 situation, making it impossible to distinguish from a
type 1 or 2 attack situation.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Expired Root CA in certdata.txt

2019-07-15 Thread Jakob Bohm via dev-security-policy

As Mozilla has stopped caring about code signatures, e-mails are much
more relevant for checking old certificates as of a known date:

Most e-mail systems provide the reader with a locally verified record of
when exactly the mail contents reached a trusted mail server and/or a
POP3 client.  Thus validating e-mail signatures as of that date is
perfectly sensible, either via a code improvement in Thunderbird or via
a knowledgeable user comparing the expiry time in error messages to the
mail server timestamp visible via "view source".
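
The check described above can be sketched in a few lines, assuming the 
notBefore/notAfter values have already been extracted from the signer's 
certificate; the point is simply to compare against the trusted receipt 
timestamp rather than the moment the user happens to open the mail:

```python
from datetime import datetime, timezone

def valid_at(not_before: datetime, not_after: datetime,
             signing_time: datetime) -> bool:
    """True if the certificate's validity window covered `signing_time`,
    e.g. the Received timestamp a trusted mail server stamped on the
    message, rather than the current wall-clock time."""
    return not_before <= signing_time <= not_after

# A certificate that expired in 2015 still verifies a mail received in 2014:
nb = datetime(2010, 1, 1, tzinfo=timezone.utc)
na = datetime(2015, 1, 1, tzinfo=timezone.utc)
received = datetime(2014, 6, 1, tzinfo=timezone.utc)
assert valid_at(nb, na, received)                        # valid as of receipt
assert not valid_at(nb, na, datetime.now(timezone.utc))  # expired "today"
```

Revocation checking would of course also need to be performed against status 
data from the same era, which is where the timestamp-signature services 
mentioned above come in.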

Some CAs also offer (as a paid service) anonymous timestamp signatures for
use on signed documents; these should be technically compatible with use on
e-mail signatures or entire e-mails, though that procedure isn't very common.

I don't know if Gmail implements these features in its web and app
interfaces, but it certainly could if it wanted to, and with Google's habit
of keeping stored data, the features could probably be applied retroactively
to mails received long before the feature was implemented.

On 15/07/2019 03:11, Samuel Pinder wrote:

The way I understand it is, generally speaking, Root CAs may be kept in a
root store for as long as the root key material is not compromised in any
way. In practice Root CA certificates are removed at the operator's request
when they believe it is no longer needed, or the root store operator
believes it should be removed due to the key material being vulnerable to
brute forcing, or some other policy reason (e.g. violation or misuse). Of
course it is possible for renewed Root CA certificates to replace older
ones as long as they are signed with the same key material- allowing
everything else chained to it still be valid (I've seen this happen once
with the "GlobalSign Root CA" originally published back in 1998.)
While keeping expired Root CAs may not be useful for web server certificates
chaining up to an expired CA certificate, it is useful for codesigning
certificates. Timestamped codesigned objects can continue to work after
their certificate has expired as the "timestamp" shows that a certificate
was valid *at the time it performed the signature*. Have a look here:
https://serverfault.com/questions/878919/what-happens-to-code-sign-certificates-when-when-root-ca-expires
  .
Samuel Pinder

On Mon, Jul 15, 2019 at 1:32 AM Vincent Lours via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On Monday, 15 July 2019 04:41:12 UTC+10, Ryan Sleevi  wrote:

Thanks for mentioning this here.

Could you explain why you see it as an issue? RFC 5280 defines a trust
anchor as a subject and a public key. Everything else is optional, and the
delivery of a trust anchor as a certificate does not necessarily imply the
constraints of that certificate, including expiration, should apply.



Hi Ryan,

Thanks for your message.

First of all, I never said it was an issue. I was just reporting some
expired Root CAs, as I thought they might impact people.

Secondly, I was not aware of RFC 5280 defining a trust anchor as a
subject and a public key.
However, if you refer to the same RFC, it addresses the lifetime of a
public key in the "4.1.2.5. Validity" section:

   "When the issuer will not be able to maintain status information until
   the notAfter date (including when the notAfter date is
   99991231235959Z), the issuer MUST ensure that no valid certification
   path exists for the certificate after maintenance of status information
   is terminated.  This may be accomplished by expiration or revocation of
   all CA certificates containing the public key used to verify the
   signature on the certificate and discontinuing use of the public key
   used to verify the signature on the certificate as a trust anchor."

In the case of "certdata.txt", the file includes public CA certificates,
so an expired certificate means that the key cannot be used anymore.

I'm still not presenting this message as an issue, but as a suggestion to
update/remove those expired public keys from your certdata.txt.





Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Logotype extensions

2019-06-18 Thread Jakob Bohm via dev-security-policy
On 14/06/2019 18:54, Ryan Sleevi wrote:
> On Fri, Jun 14, 2019 at 4:12 PM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> In such a case, there are two obvious solutions:
>>
>> A. Trademark owner (prompted by applicant) provides CA with an official
>> permission letter stating that Applicant is explicitly licensed to
>> mark the EV certificate for a specific list of SANs and Subject
>> DNs with their specific trademark (This requires the CA to do some
>> validation of that letter, similar to what is done for domain
>> letters).
> 
> 
> This process has been forbidden since August 2018, as it is fundamentally
> insecure, especially as practiced by a number of CAs. The Legal Opinion
> Letter (LOL) has also been discussed at length with respect to a number of
> problematic validations that have occurred, due to CAs failing to exercise
> due diligence or their obligations under the NetSec requirements to
> adequately secure and authenticate the parties involved in validating such
> letters.
> 

Well, that is unfortunate for the case where it is not a straight parent-
child relationship (e.g. a trademark owned by a foundation and licensed to 
the company).  But OK, in that case the option is gone, and what follows 
below is moot:

> 
> Letter needs to be reissued for end-of-period cert
>> renewals, but not for unchanged early reissue where the cause is not
>> applicant loss of rights to items.  For example, if the Heartbleed
>> incident had occurred mid-validity, the web server security teams
>> could get reissued certificates with uncompromised private keys
>> without repeating this time consuming validation step.
> 
> 
> EV certificates require explicit authorization by an authorized
> representative for each and every certificate issued. A key rotation event
> is one to be especially defensive about, as an attacker may be attempting
> to bypass the validation procedures to rotate to an attacker-supplied key.
> This was an intentional design by CAs, in an attempt to provide some value
> over DV and OV certificates by the presumed difficulty in substituting them.
> 

I was considering the trademark as a validated property of the subject 
(similar to e.g. physical address), thus normally subject to the 825 day 
reuse limit.  My wording was intended to require stricter than current 
BR revalidation for renewal within that 825 day limit.
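
The 825-day reuse limit mentioned above is just date arithmetic over the
validation date; a trivial sketch (the BR-mandated limit is the only fact
assumed here):

```python
from datetime import date, timedelta

REUSE_LIMIT = timedelta(days=825)  # BR limit on reusing validated subject data

def revalidation_deadline(validated_on: date) -> date:
    """Last day on which previously validated subject information
    (physical address, and under this proposal a trademark) could be
    reused for a new certificate without revalidation."""
    return validated_on + REUSE_LIMIT

# Data validated on 2019-01-01 could be reused until 2021-04-05:
assert revalidation_deadline(date(2019, 1, 1)) == date(2021, 4, 5)
```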


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: Logotype extensions

2019-06-14 Thread Jakob Bohm via dev-security-policy

Adding strongly verified marks to EV certificates would be a great advance, and 
is something that enterprises very much want.  They believe that identity 
information verified by a trusted third party (such as a CA) protects their 
customers and protects their brand.  They would very much like logos to be 
included.

Let's work together cooperatively on this project to resolve any issues.


Hi Kirk,
In the case of Subway that you mentioned, how would the CA go about confirming that the 
Applicant-provided trademark serial number is correctly registered to their Organization? Both 
trademarks you provided list the registrant as "Subway IP Inc." on the corresponding 
USPTO pages, not "Franchise World Headquarters LLC" (which is the O RDN value in the EV 
certificate). Would the CA merely match on address in the case of a mismatch, or is there some 
other method to rectify this name difference?



In such a case, there are two obvious solutions:

A. Trademark owner (prompted by applicant) provides CA with an official
  permission letter stating that Applicant is explicitly licensed to
  mark the EV certificate for a specific list of SANs and Subject
  DNs with their specific trademark (This requires the CA to do some
  validation of that letter, similar to what is done for domain
  letters).  Letter needs to be reissued for end-of-period cert
  renewals, but not for unchanged early reissue where the cause is not
  applicant loss of rights to items.  For example, if the Heartbleed
  incident had occurred mid-validity, the web server security teams
  could get reissued certificates with uncompromised private keys
  without repeating this time consuming validation step.

B. Applicant conglomerate uses the same legal entity for the certificate
  application and trademark registration (not always possible if other
  legal issues take priority for the applicant, such as keeping their
  online webshop legally separate from their core IP assets).


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Certinomis Issues

2019-05-17 Thread Jakob Bohm via dev-security-policy
On 17/05/2019 07:21, Jakob Bohm wrote:
> On 17/05/2019 01:39, Wayne Thayer wrote:
>> On Thu, May 16, 2019 at 4:23 PM Wayne Thayer  wrote:
>>
>> I will soon file a bug requesting removal of the “Certinomis - Root CA”
>>> from NSS.
>>>
>>
>> This is https://bugzilla.mozilla.org/show_bug.cgi?id=1552374
>>
> 
> To more accurately assess the impact of distrust, maybe someone with
> better crt.sh skills than me should produce a list of current
> certificates filtered as follows:
> 
> - Sort by O= (organization), thus grouping together certificates that
>   were issued to the same organization (for example, there are many
>   issued to divisions of LA POSTE).
> - Exclude certificates that expire on or before 2019-08-31, as those
>   will be unaffected by a September distrust.
> - Exclude certificates issued after 2019-05-17 (today), as Certinomis
>   should be aware of the likely distrust by tonight.
> 

To clarify, this is intended as an improvement to the statistics Andrew 
Ayer posted at https://bugzilla.mozilla.org/show_bug.cgi?id=1552374#c1 .

I am posting it in the thread to increase the chance someone with the 
skills will see it and run the query.

I expect it to show a surprisingly small number of certificate holders, 
indicating the real impact of revocation to be much smaller than one 
would expect from the raw count of 1381 certificates.

This in turn could inform the mitigations or other proactive steps to 
reduce relying party (user) impact of distrust, if distrust is accepted 
by Kathleen.


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: Certinomis Issues

2019-05-16 Thread Jakob Bohm via dev-security-policy

On 17/05/2019 01:39, Wayne Thayer wrote:

On Thu, May 16, 2019 at 4:23 PM Wayne Thayer  wrote:

I will soon file a bug requesting removal of the “Certinomis - Root CA”

from NSS.



This is https://bugzilla.mozilla.org/show_bug.cgi?id=1552374



To more accurately assess the impact of distrust, maybe someone with
better crt.sh skills than me should produce a list of current
certificates filtered as follows:

- Sort by O= (organization), thus grouping together certificates that
 were issued to the same organization (for example, there are many
 issued to divisions of LA POSTE).
- Exclude certificates that expire on or before 2019-08-31, as those
 will be unaffected by a September distrust.
- Exclude certificates issued after 2019-05-17 (today), as Certinomis
 should be aware of the likely distrust by tonight.
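
The proposed filtering could be sketched as follows over pre-fetched data. The
field names and rows here are purely illustrative; a real run would query
crt.sh's public PostgreSQL interface and group on the certificate's O= value:

```python
from collections import defaultdict
from datetime import date

# Hypothetical pre-fetched rows standing in for crt.sh query results.
certs = [
    {"org": "LA POSTE",  "not_after": date(2020, 3, 1), "issued": date(2018, 7, 1)},
    {"org": "LA POSTE",  "not_after": date(2019, 6, 1), "issued": date(2018, 1, 1)},
    {"org": "ACME SARL", "not_after": date(2019, 8, 1), "issued": date(2018, 2, 1)},
    {"org": "ACME SARL", "not_after": date(2021, 1, 1), "issued": date(2019, 6, 1)},
]

distrust_cutoff = date(2019, 8, 31)  # certs expiring by then are unaffected
awareness_date = date(2019, 5, 17)   # issuance after this date is excluded

affected = defaultdict(int)
for c in certs:
    if c["not_after"] <= distrust_cutoff:
        continue  # expires before the September distrust anyway
    if c["issued"] > awareness_date:
        continue  # issued after the CA knew distrust was likely
    affected[c["org"]] += 1

# Grouping by organization collapses the raw count to affected holders:
assert dict(affected) == {"LA POSTE": 1}
```

Grouping this way is what would show whether the 1381 raw certificates reduce
to a much smaller set of actual certificate holders.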




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Certinomis Issues

2019-05-09 Thread Jakob Bohm via dev-security-policy

On 10/05/2019 02:22, Wayne Thayer wrote:

Thank you for this response Francois. I have added it to the issues list
[1]. Because the response is not structured the same way as the issues list, I
did not attempt to associate parts of the response with specific issues. I
added the complete response to the bottom of the page.

On Thu, May 9, 2019 at 9:27 AM fchassery--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


...

...

>

In response to the email from Franck that you mention, Gerv responded [1]
by quoting the plan he had approved and stating "This seems to be very
different to the plan you implemented." By cross-signing Startcom's old
roots, Certinomis did assist Startcom in circumventing the remediation
plan, and by proposing one plan then implementing a different one,
Certinomis did so without Mozilla's consent.



As can be seen from your [3] link, Certinomis cross-signed StartCom's
NEW, supposedly remediated, 2017 hierarchy, not the old root.

However, it was still wrong.


Startcom misissued a number of certificates (e.g. [3]) under that
cross-signing relationship that Certinomis is responsible for as the
Mozilla program member.

By cross-signing Startcom's roots, Certinomis also took responsibility for
Startcom's qualified audit.

I will also add this information to the issues list.

- Wayne

[1] https://wiki.mozilla.org/CA/Certinomis_Issues
[2]
https://groups.google.com/d/msg/mozilla.dev.security.policy/RJHPWUd93xE/lyAX9Wz_AQAJ
[3] https://crt.sh/?opt=cablint&id=160150786




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Policy 2.7 Proposal: Exclude Policy Certification Authorities from EKU Requirement

2019-05-09 Thread Jakob Bohm via dev-security-policy

On 10/05/2019 05:25, Ryan Sleevi wrote:

On Thu, May 9, 2019 at 10:44 PM Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


On 09/05/2019 16:35, Ryan Sleevi wrote:

Given that the remark is that such a desire is common, perhaps you can
provide some external references documenting how one might go about
configuring such a set-up, particularly in the context of TLS trust?
Similarly, I'm not aware of any system that supports binding S/MIME
identities to a particular CA (effectively, CA pinning) - perhaps you can
provide documentation and reference for systems that perform this?

Thanks for helping me understand how this 'common' scenario is actually
implemented, especially given that the underlying codebases do not support
such distinctions.



My description is based on readily available information from the
following sources, which you should also have access to:



It looks like your links to external references may have gotten stripped,
as I didn't happen to receive any.

As it relates to the topic at hand, the system you described is simply that
of internal CAs, and does not demonstrate a need to use publicly trusted
CAs. Further, going back to your previous message, to which I was replying to
make sure I did not misunderstand: given that you stated it was common, it
seems we established that the scenarios described in that message, and further
expanded upon in this one, already have the capability for enterprise
management.

I wanted to make sure I did my best to understand, so that we can have
productive engagement on substance, specifically around whether there is a
technical necessity for the use of non-Root CAs to be capable of issuance
under multiple different trust purposes. It does not seem as if there's
been any external references to establish a technical necessity, so it does
not seem like the policy needs to be modified, based on the available
evidence.



There were no links, only descriptions of obvious facts that you 
willfully ignore in an effort to troll the community.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7 Proposal: Exclude Policy Certification Authorities from EKU Requirement

2019-05-09 Thread Jakob Bohm via dev-security-policy
On 09/05/2019 16:35, Ryan Sleevi wrote:
> On Wed, May 8, 2019 at 10:36 PM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> [ Note, I am arguing a neutral position on the specific proposal ]
>>
>> The common purpose of having an internally secured (managed or on-site)
>> CA in a public hierarchy is to have end certificates which are
>> simultaneously:
>>
> 
> Despite my years of close experience with the implementation and details of
> the certificate verification engines within Google Chrome and Android,
> Mozilla Firefox, Apple iOS and macOS, and Microsoft Windows, including
> extensive work with Enterprise PKIs, I must admit, I have never heard of
> the scenario you're describing or actually being supported.
> 
> Given that the remark is that such a desire is common, perhaps you can
> provide some external references documenting how one might go about
> configuring such a set-up, particularly in the context of TLS trust?
> Similarly, I'm not aware of any system that supports binding S/MIME
> identities to a particular CA (effectively, CA pinning) - perhaps you can
> provide documentation and references for systems that perform this?
> 
> Thanks for helping me understand how this 'common' scenario is actually
> implemented, especially given that the underlying codebases do not support
> such distinctions.
> 

My description is based on readily available information from the 
following sources, that you should also have access to:

1. The high level documentation and feature lists for enterprise PKI 
  suites (which are completely different from the software suites 
  designed for running public CA companies, because templates, 
  validation methods and policies tend to be completely different).

2. The user and configuration interfaces for common software that needs 
  to hold certificates (web servers, mail clients etc.).  Those tend to 
  be harder to use if different relying parties need different 
  certificates for the same certificate subject.

  For example it is difficult to make mail clients sign/encrypt all 
  mails if you need different personal certificates for the same e-mail 
  account depending on who the mail is sent to, while a setting to do 
  the same thing every time is typically a built-in option.  Now imagine 
  the certificate and private key being embedded in an employee ID card,
  while company workstations don't have unlocked spare ports for 
  plugging in two different card readers (one for the employee ID card 
  containing the access certificate that keeps the machine logged in, 
  another for a USB token from a public CA).

   Similarly, it can get very tricky to make a web server use different 
  certificate settings for different visitor categories accessing the 
  same DNS name.

3. The details of publicly trusted company SubCAs that emerge from time 
  to time in compliance cases.  The details revealed tend to only make 
  sense in setups like the ones described.

4. The actual number of companies needing this is an unknown that I 
  make no claims about; it's not my proposal.

5. Maybe look at how Google employees and internal servers used 
  certificates before the formation of GTS as a public CA.  I doubt 
  all your internal operational certificates chained to your publicly 
  trusted SubCAs, and I doubt that Google sensitive systems accepted 
  purported "Google authorized" certificates issued by the 3rd party 
  public CA that signed Google's SubCA.
   [ I am not asking you to reveal the information, just think about 
  it when considering the needs of companies that are never going to 
  become root program members ].




Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7 Proposal: Exclude Policy Certification Authorities from EKU Requirement

2019-05-08 Thread Jakob Bohm via dev-security-policy
 end cert with public trust  [CT logged if server EKU]

Org root CA - OC1
  +- Cross signature of Org public trusted intermediary - PY2019
  \- Org internal only intermediary - IY2019
\- Org internal only end cert without public trust

If this is done, it becomes a question if PCA2 and PY2019 should be 
duplicated for e-mail versus server certs, like this:

Public root CA - R1
  +- Public intermediary WebPKI CA - G2  (optional) [CT logged]
  | \- Org publicly trusted server root - PCA2  (optional) [CT logged]
  |   \- Org public trusted server intermediary - PY2019S  [CT logged]
  | \- Org server end cert with public trust  [CT logged]
  \- Public intermediary e-mail CA (optional) - G3  (optional) 
\- Org publicly trusted mail root - PCA3  (optional) 
  \- Org public trusted mail intermediary - PY2019M
\- Org mail end cert with public trust

Org root CA - OC1
  +- Cross signature of Org public trusted server intermediary - PY2019S
  +- Cross signature of Org public trusted mail intermediary - PY2019M
  \- Org internal only intermediary - IY2019
\- Org internal only end cert without public trust

Distributing the internal cross certificates to internal relying 
parties (so they get used in preference to the publicly trusted 
intermediaries) would also depend on the abilities of corporate 
software packages.

>> Does this clarify why having a single "Org CA" would help in deployment
>> in some enterprise environments?
>>
> 
> Yes. Hopefully my response demonstrates why, based on the preconditions,
> there is no necessity for the solution you propose, and if we alter those
> conditions, then alternatives such as currently required are better suited.
> 


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Unretrievable CPS documents listed in CCADB

2019-05-03 Thread Jakob Bohm via dev-security-policy
hat is applicable to a given certificate that was issued prior to the 
most current CPS.  In other words, you should look at when the certificate was 
issued and then figure out which CPS is applicable.

-Original Message-
From: dev-security-policy  On 
Behalf Of Andrew Ayer via dev-security-policy
Sent: Thursday, May 2, 2019 8:16 PM
To: mozilla-dev-security-pol...@lists.mozilla.org
Subject: Re: Unretrievable CPS documents listed in CCADB

On Thu, 2 May 2019 18:53:39 -0700 (PDT)
Corey Bonnell via dev-security-policy
 wrote:


As an aside, I noticed that several URLs listed in CCADB are “Legal
Repository” web page URLs that contain a list of many CP/CPS
documents. My recommendation is to slightly amend CCADB Policy to
require CAs to provide URLs to the specific document in question
rather than a general “Legal Repository” page, where it is left up to
the reader to decide which hyperlink on the page is the correct
document.


+1.  It's often a real hassle to find the CP/CPS for a CA.  Linking
directly to the document would help a lot.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Certinomis Issues

2019-05-01 Thread Jakob Bohm via dev-security-policy

On 01/05/2019 22:29, mono.r...@gmail.com wrote:

2017 assessment report
LSTI didn't issue to Certinomis any "audit attestation" for the browsers in 2017. The 
document Wayne references is a "Conformity Assessment Report" for the eIDAS regulation.


I had a look at the 2017 report, and unless I misread, it implies conformity to ETSI 
EN 319 401 (Est vérifiée également la conformité aux normes: EN 319 401), whereas EN 
319 401 states, "The present document is aiming to meet the general 
requirements to provide trust and confidence in electronic
transactions including, amongst others, applicable requirements from Regulation (EU) 
No 910/2014 [i.2] and those from CA/Browser Forum [i.4].", so I'm not sure how 
that squares with saying it wasn't an audit taking CA/BF regulations into account?



But does EN 319 401, as it existed in 2016/2017 incorporate a clause to 
apply all "future" updates to the CAB/F regulations or otherwise cover 
all BRs applicable to the 2016/2017 timespan?


Because otherwise EN 319 401 compliance only implied compliance with
the subset of the BRs directly included in EN 319 401 or documents
incorporated by reference into EN 319 401 (the above quote is a
statement of intent to include the BR requirements that existed when
EN 319 401 was written).

That said, Mozilla policy at the time may have explicitly stated that an
EN 319 401 audit is/was sufficient for Mozilla inclusion purposes.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7 Proposal: Require EKUs in End-Entity Certificates

2019-04-16 Thread Jakob Bohm via dev-security-policy

On 16/04/2019 08:56, Tadahiko Ito wrote:

On Tuesday, April 2, 2019 at 9:36:06 AM UTC+9, Brian Smith wrote:


I agree the requirements are already clear. The problem is not the clarity
of the requirements. Anybody can define a new EKU because EKUs are listed
in the certificate by OIDs, and anybody can make up an EKU. A standard
isn't required for a new OID. Further, not agreeing on a specific EKU OID
for a particular kind of usage is poor practice, and we should discourage
that poor practice.



It is good that anyone can make an OID, so we do not need to violate policy.

However, I have the following concerns with increasing the number of private
OIDs in the world:
- I think an OID should be either a CA's private OID or a public OID, because
in the case where a CA goes out of business and that business is taken over
by another CA, we would not want those two CAs using the same OID for
different usages.
- On the other hand, CAs' private OIDs will reduce interoperability, which
seems problematic.
- A web browser might just ignore private OIDs, but I am not sure about other
certificate verification applications that are used with certificates
carrying such a private EKU OID.

Overall, I think we should have some kind of public OIDs, at least for
widely used purposes.

I believe if it were used for internet, we can write Internet-Draft, and ask 
OIDs on RFC3280 EKU repo.
#I am planing to try that.



The common way to create an OID is to get an OID assigned to your (CA)
business, either through a national standards organization or by getting
an "enterprise ID" from IANA.

Then append self-chosen suffixes to subdivide your new (near infinite)
OID name space.  For example, if your national standards organization
assigned your CA the OID "9.99.999." (not actually a valid OID btw),
you could subdivide like this (fun example).

9.99.999..1  - Certificate policies
9.99.999..1.1 - Your test certificates policy
9.99.999..1.1.2019.3.12 - March 12, 2019 version of that policy
9.99.999..1.2 - Your EV policy
9.99.999..2  - EKUs
9.99.999..2.1 - CA internal Database backup signatures
9.99.999..2.2 - login certificates for the secure room door lock
9.99.999..2.3 - Gateway certificates for custom protocol X
9.99.999..3  - Custom DN elements
9.99.999..3.1 - Paygrade
9.99.999..4  - Certificate/CMS/CRL/OCSP extensions
9.99.999..4.1 - Date and location of photo ID validation
9.99.999..5  - Algorithms
9.99.999..5.1 - Caesar encryption.
9.99.999..5.2 - CRC8 hash
9.99.999..9 - East company division
9.99.999..9.99.999 - This is getting silly.

Etc.

A different CA would have (by the hierarchical nature of the OID system)
a different assigned OID, such as 9.88.999., thus not overlapping.

Thus no risk of conflicting uses unless someone breaks the basic OID
rules.  The actual risk (as illustrated by EV) is getting too many
different OIDs for the same thing.
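As a minimal sketch of that subdivision: the `1.3.6.1.4.1` arc is the real
IANA private-enterprise arc, while the enterprise number and suffix meanings
below are placeholders, not assigned values:

```python
# The IANA private-enterprise arc; real enterprise numbers are assigned
# under it via a simple (free) application to IANA.
PEN_ARC = "1.3.6.1.4.1"
ENTERPRISE_NUMBER = 99999  # placeholder PEN, not a real assignment

def sub_oid(*suffix):
    """Append self-chosen arcs under the enterprise's own OID space."""
    return ".".join([PEN_ARC, str(ENTERPRISE_NUMBER)] + [str(s) for s in suffix])

# e.g. a private EKU arc (2) with a specific purpose (3) under it:
print(sub_oid(2, 3))  # 1.3.6.1.4.1.99999.2.3
```

Because each assignee controls its own subtree, two organizations can never
collide unless one strays outside its assigned arc.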



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Arabtec Holding public key? [Weird Digicert issued cert]

2019-04-15 Thread Jakob Bohm via dev-security-policy

According to Jeremy (see below), that was not the situation.

On 15/04/2019 14:09, Man Ho wrote:

I don't think that it's trivial for less-skilled user to obtain the CSR
of "DigiCert Global Root G2" certificate and posting it in the request
of another certificate, right?


On 15-Apr-19 6:57 PM, Jakob Bohm via dev-security-policy wrote:

Thanks for the explanation.

Is it possible that a significant percentage of less-skilled users
simply pasted in the wrong certificates by mistake, then wondered why
their new certificates never worked?

Pasting in the wrong certificate from an installed certificate chain or
semi-related support page doesn't seem an unlikely user error with that
design.

On 12/04/2019 18:56, Jeremy Rowley wrote:

I don't mind filling in details.

We have a system that permits creation of certificates without a CSR
that works by extracting the key from an existing cert, validating
the domain/org information, and creating a new certificate based on
the contents of the old certificate. The system was supposed to do a
handshake with a server hosting the existing certificate as a form of
checking control over the private key, but that was never
implemented, slated for a phase 2 that never came. We've since
disabled that system, although we didn't file any incident report
(for the reasons discussed so far).





Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Arabtec Holding public key? [Weird Digicert issued cert]

2019-04-15 Thread Jakob Bohm via dev-security-policy

Thanks for the explanation.

Is it possible that a significant percentage of less-skilled users
simply pasted in the wrong certificates by mistake, then wondered why
their new certificates never worked?

Pasting in the wrong certificate from an installed certificate chain or
semi-related support page doesn't seem an unlikely user error with that
design.

On 12/04/2019 18:56, Jeremy Rowley wrote:

I don't mind filling in details.

We have a system that permits creation of certificates without a CSR that works 
by extracting the key from an existing cert, validating the domain/org 
information, and creating a new certificate based on the contents of the old 
certificate. The system was supposed to do a handshake with a server hosting 
the existing certificate as a form of checking control over the private key, 
but that was never implemented, slated for a phase 2 that never came. We've 
since disabled that system, although we didn't file any incident report (for 
the reasons discussed so far).

-Original Message-
From: dev-security-policy  On 
Behalf Of Wayne Thayer via dev-security-policy
Sent: Friday, April 12, 2019 10:39 AM
To: Jakob Bohm 
Cc: mozilla-dev-security-policy 
Subject: Re: Arabtec Holding public key? [Weird Digicert issued cert]

It's not clear that there is anything for DigiCert to respond to. Are we 
asserting that the existence of this Arabtec certificate is proof that DigiCert 
violated section 3.2.1 of their CPS?

- Wayne

On Thu, Apr 11, 2019 at 6:57 PM Jakob Bohm via dev-security-policy < 
dev-security-policy@lists.mozilla.org> wrote:


On 11/04/2019 04:47, Santhan Raj wrote:

On Wednesday, April 10, 2019 at 5:53:45 PM UTC-7, Corey Bonnell wrote:

On Wednesday, April 10, 2019 at 7:41:33 PM UTC-4, Nick Lamb wrote:

(Resending after I typo'd the ML address)

At the risk of further embarrassing myself in the same week, while
working further on mimicking Firefox trust decisions I found this
pre-certificate for Arabtec Holding PJSC:

https://crt.sh/?id=926433948

Now there's nothing especially strange about this certificate,
except that its RSA public key is shared with several other
certificates



https://crt.sh/?spkisha256=8bb593a93be1d0e8a822bb887c547890c3e706aad2d
ab76254f97fb36b82fc26


... such as the DigiCert Global Root G2:

https://crt.sh/?caid=5885


I would like to understand what happened here. Maybe I have once
again made a terrible mistake, but if not surely this means either
that the Issuing authority was fooled into issuing for a key the
subscriber doesn't actually have or worse, this Arabtec Holding
outfit has the private keys for DigiCert's Global Root G2

Nick.


AFAIK there's no requirement in the BRs or Mozilla Root Policy for
CAs

to actually verify that the Applicant actually is in possession of the
corresponding private key for public keys included in CSRs (i.e.,
check the signature on the CSR), so the most likely explanation is
that the CA in question did not check the signature on the
Applicant-submitted CSR and summarily embedded the supplied public key
in the certificate (assuming Digicert's CA infrastructure wasn't
compromised, but I think that's highly unlikely).


A very similar situation was brought up on the list before, but
with

WoSign as the issuing CA:
https://groups.google.com/d/msg/mozilla.dev.security.policy/zECd9J3KBW
8/OlK44lmGCAAJ




While not a BR requirement, the CA's CPS does stipulate validating

possession of private key in section 3.2.1 (looking at the change
history, it appears this stipulation existed during the cert
issuance). So something else must have happened here.


Except for the Arabtec cert, the other certs looks like cross-sign
for

the Digicert root.




Why still no response from Digicert?  Has this been reported to them
directly?




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Arabtec Holding public key? [Weird Digicert issued cert]

2019-04-11 Thread Jakob Bohm via dev-security-policy

On 11/04/2019 04:47, Santhan Raj wrote:

On Wednesday, April 10, 2019 at 5:53:45 PM UTC-7, Corey Bonnell wrote:

On Wednesday, April 10, 2019 at 7:41:33 PM UTC-4, Nick Lamb wrote:

(Resending after I typo'd the ML address)

At the risk of further embarrassing myself in the same week, while
working further on mimicking Firefox trust decisions I found this
pre-certificate for Arabtec Holding PJSC:

https://crt.sh/?id=926433948

Now there's nothing especially strange about this certificate, except
that its RSA public key is shared with several other certificates

https://crt.sh/?spkisha256=8bb593a93be1d0e8a822bb887c547890c3e706aad2dab76254f97fb36b82fc26

... such as the DigiCert Global Root G2:

https://crt.sh/?caid=5885


I would like to understand what happened here. Maybe I have once again
made a terrible mistake, but if not surely this means either that the
Issuing authority was fooled into issuing for a key the subscriber
doesn't actually have or worse, this Arabtec Holding outfit has the
private keys for DigiCert's Global Root G2

Nick.


AFAIK there's no requirement in the BRs or Mozilla Root Policy for CAs to 
actually verify that the Applicant actually is in possession of the 
corresponding private key for public keys included in CSRs (i.e., check the 
signature on the CSR), so the most likely explanation is that the CA in 
question did not check the signature on the Applicant-submitted CSR and 
summarily embedded the supplied public key in the certificate (assuming 
Digicert's CA infrastructure wasn't compromised, but I think that's highly 
unlikely).

A very similar situation was brought up on the list before, but with WoSign as 
the issuing CA: 
https://groups.google.com/d/msg/mozilla.dev.security.policy/zECd9J3KBW8/OlK44lmGCAAJ



While not a BR requirement, the CA's CPS does stipulate validating possession 
of private key in section 3.2.1 (looking at the change history, it appears this 
stipulation existed during the cert issuance). So something else must have 
happened here.

Except for the Arabtec cert, the other certs looks like cross-sign for the 
Digicert root.



Why still no response from Digicert?  Has this been reported to them
directly?
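For reference, the proof-of-possession check discussed above is just a
verification of the CSR's self-signature. A toy sketch using schoolbook RSA
with tiny primes (illustration only, not real cryptography):

```python
import hashlib

# Toy proof-of-possession check: a CA verifies the CSR's self-signature
# before embedding the requested public key in a certificate.
p, q, e = 61, 53, 17                  # tiny demo primes, n = 3233
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent (Python 3.8+)

def sign(msg, d, n):
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(h, d, n)

def verify(msg, sig, e, n):
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(sig, e, n) == h

csr_body = b"CN=example.test, publicKey=(n, e)"
sig = sign(csr_body, d, n)            # only the key holder can produce this
print(verify(csr_body, sig, e, n))    # True: possession demonstrated
# A pasted-in public key whose private half the requester does not hold
# cannot yield a verifying signature, so a conforming CA would refuse it.
```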



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7 Proposal: Clarify Section 5.1 ECDSA Curve-Hash Requirements

2019-04-04 Thread Jakob Bohm via dev-security-policy

On 04/04/2019 02:22, Wayne Thayer wrote:

A number of ECC certificates that fail to meet the requirements of policy
section 5.1 were recently reported [1]. There was a lack of awareness that
Mozilla policy is more strict than the BRs in this regard. I've attempted
to address that by adding this to the list of "known places where this
policy takes precedence over the Baseline Requirements" in section 2.3 [2].

This proposal is to clarify the 'ECDSA' language in section 5.1. That
language was introduced in version 2.4 of our policy [3]. The description
of this issue on GitHub [4] states "Section 5.1's language seems ambiguous
because it doesn't clarify whether the allowed curve-hash pair is related
to the CA key or the Subject Key." For example, does the policy permit a
P-384 key in an intermediate CA certificate to sign a P-256 key in an
end-entity certificate with SHA-256, SHA-384, or not at all? My limited
understanding of the intent is that the key and signature algorithm in each
certificate must match - e.g. it permits a P-384 key in an intermediate CA
certificate to sign a P-256 key in an end-entity certificate with SHA-256 -
but I could be wrong about that and would appreciate everyone's input on
what this is supposed to mean.

If my understanding of the desired policy is correct, then I propose the
following clarification:



This is cryptographically wrong and insecure.

The common requirement for all DSA-style algorithms is that each
private key is used with exactly one hash algorithm, of a size
matching the (sub)group used.  No other hash algorithm shall be
signed for the lifetime of the private key, even if that is not
expressed in the X.509 data structure.  This technical security
requirement applies for the lifetime of the private key.

Thus the permissible combinations under the current Mozilla policy
should be:

P-256 signing using SHA-256
P-384 signing using SHA-384

While the BRs also allow:

P-521 signing using SHA-512

In all 3 cases, the signed certificate could use any permitted
public key algorithm, EKU, name etc. subject to the general
policy restrictions on those fields.  For example a P-384+SHA-384
CA key can sign a P-256+SHA-256 cert.  Signing a stronger cert at
the next level is less useful, except to cross-sign new roots with
old roots.
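The pairing rule above can be expressed as a simple lookup. Note that it
constrains the issuing key and its signature hash, independent of the
subject key (names here are illustrative, not from any standard):

```python
# Curve-hash pairs permitted by Mozilla policy, per the discussion above;
# the BRs additionally allow P-521 with SHA-512.
MOZILLA_PAIRS = {"P-256": "SHA-256", "P-384": "SHA-384"}
BR_EXTRA_PAIRS = {"P-521": "SHA-512"}

def signature_hash_ok(issuer_curve, hash_alg, allow_br_extras=False):
    """True iff a CA key on `issuer_curve` may sign with `hash_alg`."""
    allowed = dict(MOZILLA_PAIRS)
    if allow_br_extras:
        allowed.update(BR_EXTRA_PAIRS)
    return allowed.get(issuer_curve) == hash_alg

# A P-384 CA key signing a P-256 subject cert must still use SHA-384:
print(signature_hash_ok("P-384", "SHA-384"))  # True
print(signature_hash_ok("P-384", "SHA-256"))  # False
```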

This is of course completely different for RSA, because PKCS#1 RSA
incorporates the hash identifier into the signature in such a way that a
signature on one hash isn't also a valid signature on a completely
different hash that happens to have the same bit pattern.
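That binding is visible in the PKCS#1 v1.5 encoding itself: the value that
gets padded and exponentiated is a DigestInfo structure whose ASN.1 header
names the hash OID. The prefix bytes below are the fixed SHA-256 header from
RFC 8017, section 9.2:

```python
import hashlib

# Fixed DigestInfo header for SHA-256 (RFC 8017, sec. 9.2): an ASN.1
# SEQUENCE naming the hash OID, followed by the 32-byte digest.
SHA256_DIGESTINFO_PREFIX = bytes.fromhex(
    "3031300d060960864801650304020105000420"
)

def pkcs1_digest_info(message: bytes) -> bytes:
    return SHA256_DIGESTINFO_PREFIX + hashlib.sha256(message).digest()

di = pkcs1_digest_info(b"example")
print(len(di))  # 51: 19-byte header + 32-byte digest
```

Because the hash OID sits inside the signed value, a valid signature over a
SHA-256 DigestInfo cannot double as a signature over some other hash.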



Root certificates in our root program, and any certificate which
chains up to them, MUST use only algorithms and key sizes from the following
set:
[...]

- ECDSA keys using one of the following curve-hash pairs:
   - P‐256 signed with SHA-256
   - P‐384 signed with SHA-384


The only change is the addition of the word "signed" to signify that the
pairing represents the combination of the key and signature in a given
certificate.

An enumeration of the permitted combinations has also been suggested, but
it's not clear to me how that solves the confusion that currently exists.
It would be great if someone who prefers this approach would make a
proposal.

This is https://github.com/mozilla/pkipolicy/issues/170

I will greatly appreciate everyone's input on both the intent of the
current policy, and how best to clarify it.

- Wayne

[1]
https://groups.google.com/d/msg/mozilla.dev.security.policy/mCKvUmYUMb0/sqZVnFvKBwAJ
[2]
https://github.com/mozilla/pkipolicy/commit/3e38142acd28b152eca263e7528fac940efb20e2
[3] https://github.com/mozilla/pkipolicy/issues/5
[4] https://github.com/mozilla/pkipolicy/issues/170




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Apple: Non-Compliant Serial Numbers

2019-04-01 Thread Jakob Bohm via dev-security-policy
On 30/03/2019 22:16, certification_author...@apple.com wrote:
> On March 30, Apple submitted an update to the original incident report 
> (https://bugzilla.mozilla.org/show_bug.cgi?id=1533655), which is reposted 
> below.
> ___
> 
> We've been working our plan to revoke impacted certificates. Thus far over 
> 500,000 certificates have been revoked since the issue was identified and 
> 54,853 remain (file attached with remaining certificates [in the Bugzilla 
> post]). Our plan will result in all impacted certificates being revoked.
> 
> Our approach to resolving this incident has been to strike a balance between 
> a compliance incident with low associated security risk and impact to 
> critical services. We have established a timeline to address the remaining 
> certificates that minimizes service impact and allows standard QA and change 
> control processes to ensure uptime is not affected.
> 
> As part of the remediation plan, a number of certificates will be migrated to 
> an internal, enterprise PKI, which will take more time.
> 
> Based on these factors, it is expected that most certificates will be revoked 
> by April 30 with less than 2% extending until July 15.
> 
> Another update will be provided next week.
> 

For the benefit of the community (including possible future creation of 
policies for mass revocation scenarios), could you detail:

1. How many of the 54,853 certificates are issued to Apple owned and 
  operated servers and services and how many not.

2. What kinds of practical issues are delaying the replacement of 
  certificates on any such Apple operated servers and services, 
  perhaps with approximate percentages.

For example is it:

2a. Security lockdown requiring specific authorized persons to oversee 
  the certificate change in person.

2b. Security lockdown requiring security staff to physically travel to 
  remote locations to authorize the change, one location at a time.

2c. Security procedures requiring some authorized persons to authorize 
  the changes one certificate at a time, with those persons now 
  inundated with a much larger than usual number of such requests 
  per day/week.

2d. Non-security procedures requiring specific people to check the 
  changes for mistakes.  Those people now being inundated with a much 
  larger than usual number of such requests per day/week.

2e. Non-security procedures requiring the changes to go through 
  automated regression tests.  The regression testing computers now 
  being inundated with a much larger than usual number of such 
  requests per day/week.

2f. Non-security procedures requiring the changes to be run on other 
  computers for a certain number of weeks before deployment on some 
  computers.

2g. Certificate checking procedures requiring certificates to remain 
  valid for a certain period of time after their last actual use.

2h. Servers managed by teams that are busy with unrelated tasks at 
  this time.

2o. Obscure servers that are rarely touched, causing practical problems 
  locating the teams responsible.

2p. Anything else.


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Policy 2.7 Proposal: Clarify Meaning of "Technically Constrained"

2019-03-29 Thread Jakob Bohm via dev-security-policy
On 28/03/2019 21:52, Wayne Thayer wrote:
> Our current Root Store policy assigns two different meanings to the term
> "technically constrained":
> * in sections 1.1 and 3.1, it means 'limited by EKU'
> * in section 5.3 it means 'limited by EKU and name constraints'
> 
> The BRs already define a "Technically Constrained Subordinate CA
> Certificate" as:
> 
> A Subordinate CA certificate which uses a combination of Extended Key Usage
>> settings and Name Constraint settings to limit the scope within which the
>> Subordinate CA Certificate may issue Subscriber or additional Subordinate
>> CA Certificates.
>>
> 
> I propose aligning Mozilla policy with this definition by leaving
> "technically constrained" in section 5.3, and changing "technically
> constrained" in sections 1.1 and 3.1 to "technically capable of issuing"
> (language we already use in section 3.1.2). Here are the proposed changes:
> 
> https://github.com/mozilla/pkipolicy/commit/91fe7abdc5548b4d9a56f429e04975560163ce3c
> 
> This is https://github.com/mozilla/pkipolicy/issues/159
> 
> I will appreciate everyone's comments on this proposal. In particular,
> please consider if the change from "Such technical constraints could
> consist of either:" to "Intermediate certificates that are not considered
> to be technically capable will contain either:" will cause confusion.
> 

To further reduce confusion, perhaps change the terminology in Mozilla 
policy to "Technically Constrained to Subscriber", meaning technically 
constrained to only be capable of issuing for a fully validated 
subscriber identity (validated as if some hypothetical kind of wildcard 
EE cert).

This of course remains applicable to all the kinds of identities 
recognized and regulated by the Mozilla root program, which currently 
happens to be server domain, EV organization, and e-mail address 
identities.

I realize that the BR meaning may be intended to be so too, but many 
discussions over the years have indicated confusion.



Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: Applicability of SHA-1 Policy to Timestamping CAs

2019-03-25 Thread Jakob Bohm via dev-security-policy
On 25/03/2019 23:42, Wayne Thayer wrote:
> My general sense is that we should be doing more to discourage the use of
> SHA-1 rather than less. I've just filed an issue [1] to consider a ban on
> SHA-1 S/MIME certificates in the future.
> 
> On Mon, Mar 25, 2019 at 10:54 AM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>>
>> As for myself and my company, we switched to a non-Symantec CA for these
>> services before the general SHA-1 deprecation and thus the CA we use can
>> continue to update relevant intermediary CAs using the exception to
>> extend the lifetime of historic issuing CAs.  However it would probably
>> be more secure (less danger to users) if CAs routinely issued
>> sequentially named new issuing CAs for these purposes at regular
>> intervals (perhaps annually), however this is against current Mozilla
>> Policy if the root is still in the Mozilla program (as an anchor for
>> SHA2 WebPKI or e-mail certs).
>>
>>
> I do acknowledge the legacy issue that Jakob points out, but given that it
> hasn't come up before, I question if it is a problem that we need to
> address. I would be interested to hear from others who have a need to issue
> new SHA-1 subordinate CA certificates for uses beyond the scope of the BRs.
> We could consider a loosening of the section 5.1.1 requirements on
> intermediates, but I am concerned about creating loopholes and about
> contradicting the BRs (which explicitly ban SHA-1 OCSP signing certificates
> in section 7.1.3).
> 
> - Wayne
> 
> [1] https://github.com/mozilla/pkipolicy/issues/178
> 

The situation has resurfaced due to recent developments affecting the 
original workarounds.

I will have to remind everyone, that when SHA-1 was deprecated, Symantec 
handled this legacy issue by formally withdrawing a few of their many 
old (historically Microsoft trusted) roots from the Mozilla root 
program, allowing those roots to continue to run as "SHA-1-forever" 
roots completely beyond all "modern" policies.

As Digicert winds down the legacy parts of Symantec operations, Windows 
developers that didn't leave Symantec early will be hunting for 
alternatives among the CAs whose SHA-1 roots were trusted by the 
affected MS software versions.  A number of those CAs don't have such a 
stockpile of legacy roots that could be removed from the modern PKI 
ecosystem without affecting the validity of current SHA-2 certificates.

For example GlobalSign, another large CA, only has one root trusted by 
legacy SHA-1 systems, their R1 root.  That root is unfortunately also 
their forward compatibility root that provides trust to modern WebPKI 
certificates via cross-signing of later GlobalSign roots.  This means 
that anything GlobalSign does in the SHA-1 compatibility space is 
constrained by CAB/F, CASC and Mozilla policies, such as the Mozilla 
restriction to not cut new issuing compatibility CAs and the CASC 
restriction to stop all SHA-1 code signing support in 2021.

Creating new SHA-1-only roots (outside the modern PKI) for this job is 
not viable, as the roots need to be in the historic versions of the MS 
root store as bundled by affected systems.  For some code, the roots 
even need to be among the few that got a special kernel mode cross-cert 
from Microsoft.  Those legacy root stores were completely dominated by 
roots that were bought up by Symantec.

Raw data:

The full historic list of roots with kernel mode MS cross certs [Apologies if 
root transfers have sent some to different companies than indicated]

Trusted until 2023 (in alphabetical order by brand):

[GoDaddy] C=US, O=The Go Daddy Group, Inc., OU=Go Daddy Class 2 Certification 
Authority

[GoDaddy] C=US, O=Starfield Technologies, Inc., OU=Starfield Class 2 
Certification Authority

[Sectigo] C=SE, O=AddTrust AB, OU=AddTrust External TTP Network, CN=AddTrust 
External CA Root

[Sectigo] C=US, ST=UT, L=Salt Lake City, O=The USERTRUST Network, 
OU=http://www.usertrust.com, CN=UTN-USERFirst-Object



Trusted until 2021 Not Digicert/Symantec (in alphabetical order by brand):

[EnTrust] O=Entrust.net, OU=www.entrust.net/CPS_2048 incorp. by ref. (limits 
liab.), OU=(c) 1999 Entrust.net Limited, CN=Entrust.net Certification Authority 
(2048)

[GlobalSign] C=BE, O=GlobalSign nv-sa, OU=Root CA, CN=GlobalSign Root CA

[GoDaddy] C=US, ST=Arizona, L=Scottsdale, O=GoDaddy.com, Inc., CN=Go Daddy Root 
Certificate Authority - G2

[GoDaddy] C=US, ST=Arizona, L=Scottsdale, O=Starfield Technologies, Inc., 
CN=Starfield Root Certificate Authority - G2

[NetLock] C=HU, L=Budapest, O=NetLock Kft., OU=Tanúsítványkiadók (Certification 
Services), CN=NetLock Arany (Class Gold) Főtanúsítvány

[NetLock] C=HU, L=Budapest, O=NetLock Kft., OU=Tanúsítványkiadók (Certification 
Services), CN=NetLock Platina (Class Platinum) Főtanúsítv

Re: GRCA Incident: BR Compliance and Document Signing Certificates

2019-03-25 Thread Jakob Bohm via dev-security-policy

On 25/03/2019 22:29, Matthew Hardeman wrote:

On Mon, Mar 25, 2019 at 3:03 PM Ryan Hurst via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


While it may be true that the certificates in question do not contain
SANs, unfortunately, the certificates may still be trusted for SSL since
they do not have EKUs.

For an example see "The most dangerous code in the world: validating SSL
certificates in non-browser software" which is available at
https://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-client-bugs.html

What you will see is that hostname verification is one of the most common
areas applications have a problem getting right.  Often they silently skip
hostname verification, or use libraries where hostname verification is
either off by default, or turned off for testing and never re-enabled in
production.

One of the few checks you can count on being right with any level of
predictability in my experience is the server EKU check where absence is
interpreted as an entitlement.



My ultimate intent was to try to formulate a way in which GRCA could
provide certificates for the applications that they're having to support
for their clients today without having to essentially plan to be
non-compliant for a multi-year period.

It sounds like there's one or more relying-party applications that perform
strict validation of EKU if provided, but would appear not to have a single
standard EKU that they want to see (except perhaps the AnyPurpose.)

I'm confused as to why these certificates, which seem to be utilized in
applications outside the usual WebPKI scope, need to be issued in a trust
hierarchy that chains up to a root in the Mozilla store.  It would seem
like the easiest path forward would be to have the necessary applications
include a new trust anchor and issue these certificates outside the context
of the broader WebPKI.

In essence, if there are applications in which these GRCA end-entity
certificates are being utilized where the Mozilla trust store is utilized
as a trust anchor set and for which the validation logic is clearly quite
different from the modern WebPKI validation logic and where that validation
logic effectively requires non-compliance with Mozilla root store policy,
is this even a use case that the program and/or community want to support?



I wonder (but can't be sure) if the GRCA requirements analysis is simply
incomplete.  Specifically, I see no claim they have actually found a
client tool having all these properties at the same time:

1. That tool fails to accept certificates containing more EKUs than that
  tool needs (for example, the tool rejects certs with exampleRandomEKUoid,
  but accepts certs without it).  Yet it accepts certs with no EKU
  extension.  The rejection happens even if the EKU extension is not
  marked critical.

2. It is absolutely necessary for the object/document signing EE certs
  to work with that tool.

3. The tool cannot be easily fixed/upgraded to remove the problem.

If there is no such problematic tool in the target environment, GRCA
could (like other CAs in the Mozilla root program) make a list of needed
specific EKU oids and include them all in their certificate template.
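The "absence is an entitlement" behaviour described in point 1 and in the earlier quote can be modelled as a small decision function.  The OIDs and the acceptance rule below are assumptions sketching the behaviour discussed in the thread, not the logic of any particular tool:

```python
# Assumed model of the relying-party EKU check discussed above.
SERVER_AUTH = "1.3.6.1.5.5.7.3.1"   # id-kp-serverAuth
CODE_SIGNING = "1.3.6.1.5.5.7.3.3"  # id-kp-codeSigning
ANY_PURPOSE = "2.5.29.37.0"         # anyExtendedKeyUsage

def accepts_for(eku_oids, required_oid):
    """Return True if a typical strict-but-permissive client would accept
    the certificate for `required_oid`: no EKU extension at all is treated
    as 'entitled to everything'; otherwise the required OID (or
    anyExtendedKeyUsage) must be listed."""
    if eku_oids is None:           # extension absent entirely
        return True
    return required_oid in eku_oids or ANY_PURPOSE in eku_oids
```

Under this model a certificate with no EKU extension is accepted even for TLS server use, which is exactly the risk described in the quoted message.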



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Applicability of SHA-1 Policy to Timestamping CAs

2019-03-25 Thread Jakob Bohm via dev-security-policy
On 23/03/2019 02:03, Wayne Thayer wrote:
> On Fri, Mar 22, 2019 at 6:54 PM Peter Bowen  wrote:
> 
>>
>>
>> On Fri, Mar 22, 2019 at 11:51 AM Wayne Thayer via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>>> I've been asked if the section 5.1.1 restrictions on SHA-1 issuance apply
>>> to timestamping CAs. Specifically, does Mozilla policy apply to the
>>> issuance of a SHA-1 CA certificate asserting only the timestamping EKU and
>>> chaining to a root in our program? Because this certificate is not in
>>> scope
>>> for our policy as defined in section 1.1, I do not believe that this would
>>> be a violation of the policy. And because the CA would be in control of
>>> the
>>> entire contents of the certificate, I also do not believe that this action
>>> would create an unacceptable risk.
>>>
>>> I would appreciate everyone's input on this interpretation of our policy.
>>>
>>
>> Do you have any information about the use case behind this request?  Are
>> there software packages that support a SHA-2 family hash for the issuing CA
>> certificate for the signing certificate but do not support SHA-2 family
>> hashes for the timestamping CA certificate?
>>
> 
> I was simply asked if our policy does or does not permit this, so I can
> only speculate that the use case involves code signing that targets an
> older version of Windows. If the person who asked the question would like
> to send me specifics, I'd be happy to relay them to the list.
> 

As a matter of general information (I happen to have investigated MS 
"AuthentiCode" code signing in some detail):

1. Some parts of some Windows versions will only accept certificate 
  chains using the RSA-SHA1 (PKCS#1 v1.x) signature types.  Those 
  generally chain to a root of similar vintage, which may or may not 
  still be issuing SHA-2 intermediary certificates under Mozilla 
  Policy.

2. Some parts of some Windows versions will only accept EE certs using 
  RSA-SHA1, but will accept RSA-SHA2 higher in the certificate chain.

3. Recent Windows versions will accept RSA-SHA2 signatures, but may or 
  may not accept RSA-SHA1 signatures.

4. Most parts of Windows versions that accept RSA-SHA2 signatures allow 
  dual signature configurations where items are signed with both RSA-SHA2 
  and RSA-SHA1 signatures in such a way that older versions will see 
  and accept the RSA-SHA1 signature while newer versions will see and 
  check the RSA-SHA2 signature (or both signatures).

5. All post-1996 MS code signing systems expect and check the presence of 
  countersignatures by a timestamping authority with long validity period 
  (many decades).  These signatures verify that the signature made by the 
  EE certificate existed on or before a certain time within the (1 to 5 
  year) validity period of the EE cert itself.  Thus barring an existential 
  forgery of the timestamping signature, most other aspects need only 
  resist second pre-image style attacks on the EE signature.
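The trust rule in point 5 can be stated as a small predicate: the code signature remains acceptable as long as the countersigned signing time falls inside the EE certificate's validity window, regardless of the current clock.  Names and structure below are illustrative, not any actual Windows API:

```python
from datetime import datetime

def countersigned_time_valid(countersign_time, not_before, not_after):
    """Point 5 above, as an assumed model: trust hinges on the timestamped
    signing time lying within the EE certificate's (1 to 5 year) validity
    window, not on whether the certificate has since expired."""
    return not_before <= countersign_time <= not_after
```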

Type 1 and 2 subsystems are still somewhat widespread as official 
attempts to backport RSA-SHA2 support have failed or never been made.

Thus there is an ongoing need for certificate subscribers to obtain and 
use both types of certificates and the corresponding timestamp 
countersignature services.

The ongoing shutdown of old Symantec infrastructure has left a lot of 
subscribers to these certificates looking for replacement CAs, thus 
making it interesting for CAs that were included in the relevant MS root 
program back in the day to begin or restart providing those services to 
former Symantec subscribers.

As for myself and my company, we switched to a non-Symantec CA for these 
services before the general SHA-1 deprecation and thus the CA we use can 
continue to update relevant intermediary CAs using the exception to 
extend the lifetime of historic issuing CAs.  However, it would probably 
be more secure (less danger to users) if CAs routinely issued 
sequentially named new issuing CAs for these purposes at regular 
intervals (perhaps annually), even though this is against current Mozilla 
Policy if the root is still in the Mozilla program (as an anchor for 
SHA2 WebPKI or e-mail certs).


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


Re: CFCA certificate with invalid domain

2019-03-18 Thread Jakob Bohm via dev-security-policy

On 18/03/2019 02:05, Nick Lamb wrote:

On Fri, 15 Mar 2019 19:41:58 -0400
Jonathan Rudenberg via dev-security-policy
 wrote:


I've noted this on a similar bug and asked for details:
https://bugzilla.mozilla.org/show_bug.cgi?id=1524733


I can't say that this pattern gives me any confidence that the CA
(CFCA) does CAA checks which are required by the BRs.

I mean, how do you do a CAA check for a name that can't even exist? If
you had the technology to run this check, and one possible outcome is
"name can't even exist" why would you choose to respond to that by
issuing anyway, rather than immediately halting issuance because
something clearly went badly wrong? So I end up thinking probably CFCA
does not actually check names with CAA before issuing, at least it does
not check the names actually issued.



Technically, the name can exist, if (for some bad reason) ICANN were to
create the "con" TLD (which would be a major invitation to phishing).

As "not found" is a permissive CAA check result, CAA checking may be
perfectly fine in this case.

Domain control validation however obviously failed, as no one controls
the non-existent domain, and thus no one could have proven control of
that domain.
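The permissive-on-absence rule can be sketched as a pure function over a CAA record set.  This is a simplified model of RFC 8659 "issue" processing (no tree climbing, no issuewild handling), with illustrative domain names:

```python
def caa_permits(rrset, ca_issue_domain):
    """Decide issuance from a CAA RRset, per the permissive rule noted
    above: an absent or empty RRset (e.g. NXDOMAIN for a non-existent
    name) does not forbid issuance.  `rrset` is a list of (tag, value)
    pairs; only the 'issue' property is modelled here."""
    if not rrset:
        return True   # "not found" is a permissive CAA result
    issue_values = [v.strip() for tag, v in rrset if tag == "issue"]
    if not issue_values:
        return True   # no relevant property present -> permissive
    if ";" in issue_values:
        return False  # 'issue ";"' explicitly forbids all issuance
    return ca_issue_domain in issue_values
```

Note that this only captures the CAA step; as the message says, domain control validation is a separate check that must fail for a non-existent domain.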


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: A modest proposal for a better BR 7.1

2019-03-12 Thread Jakob Bohm via dev-security-policy
 procedures already well known to CAs and to their
> vendors, auditors, etc, my proposal enhances the odds that the required
> amount of random unpredictable bits actually be backed by a mechanism
> appropriate for the use of cryptography.
> 
> If anyone thinks any of this has merit, by all means run with it.  I
> disclaim any proprietary interest (copyright, etc) that I might otherwise
> have had by statute and declare that I'm releasing this to the public
> domain.
> 

This is overspecified.  For example, there is no stated reason to 
force the value 117 (0x75).

Nor is there a stated reason why the "encrypted actual serial" should be 
used in place of genuinely random entropy.  The encryption would only 
provide the entropy of its key (256 bits max) spread over all the serial 
numbers of a CA, typically resulting in less than 1 bit of entropy per 
serial number.

Here is a better, simpler form:


 1. The ASN.1 DER signed integer encoded form of the certificate serial 
   number value must be represented as not less than n+1 bytes and not 
   more than 20 bytes.
 Note: The first encoded byte shall have a value between 0x01 and 
   0x7F to make the serial number a valid DER encoding of a number > 0.

 2. The serial number shall include a contiguous sequence of at least 
   n*8 random bits generated from a fresh invocation of a CSRNG with no 
   filtering of the resulting bits.  These bits shall not be known by 
   any person or system before all other parts of the TbsCertificate 
   have been chosen and committed to the signing process.  The location 
   of these n*8 random bits within the serial number shall be specified 
   in the associated CPS, such that both the CA's auditors and the 
   community can check that the scheme is being followed.

 3. The value n shall be at least 8 for any certificates issued with 
   historic hash algorithms such as SHA-1 (such certificates are not 
   currently BR compliant anyway, but this is relevant for 
   no-longer-trusted CAs intended to be trusted only by legacy systems).

 4. For current hash algorithms, n shall be at least 12 (96 bits of 
   entropy).  A future BR may increase that value.

 5. As a special exception, systems that require signing certificates 
   with the same serial number more than once (such as CT and CA 
   validity adjustments) are not required to change the serial number 
   after initial selection.

 6. As a special exception due to widespread technical failures, 
   certificates issued on or prior to 2019-03-31 UTC may instead use 
   serial numbers consisting only of 63 random bits chosen as per #2, 
   but checked to reject any value that would be encoded to 7 or fewer 
   bytes or be the same as a previously issued serial number.
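A minimal sketch of rules #1, #2 and #4 above, assuming n = 12 (96 random bits) and a layout that places the random bits directly after a non-zero lead byte; the function name and layout are illustrative, not part of the proposal:

```python
import os

def compliant_serial(n=12):
    """Sketch of rules #1, #2 and #4: a positive DER INTEGER of n+1
    octets (well under the 20-octet cap) whose trailing n*8 bits come
    straight from a CSPRNG with no filtering."""
    lead = (os.urandom(1)[0] % 0x7F) + 0x01  # 0x01..0x7F keeps the DER encoding positive
    body = os.urandom(n)                     # the n*8 unfiltered CSRNG bits
    return int.from_bytes(bytes([lead]) + body, "big")
```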

Note 1: The BR requirement that OCSP responders must reject never 
   issued serial numbers implies that any compliant CA must maintain 
   a full database of all certificates ever signed.

Note 2: The following are some (not all possible) compliant schemes (If 
   any schemes on this list are patented, the patent should be noted 
   on that example):

  2A: Generate sequential "internal" serial numbers between 0x0100 
 and 0x7fff and append 96 to 128 CSRNG bits.  Database lookup 
 can use the "internal" part, then compare the random bits to the 
 value of the actually issued serial number (as stored separately 
 in the database or as part of the full certificate).

  2B: Generate sequential "internal" serial numbers between 0 and 
 0x.  Generate 96 to 120 CSRNG bits.  Combine the random 
 bits and an internal cryptographic key to produce a 32 bit value 
 xored onto the internal serial number.  Final serial number is a 
 fixed byte between 0x01 and 0x7f (this might indicate which of 
 multiple signing machines were used), followed by the CSRNG bits 
 then the encrypted internal serial OR the internal serial followed 
 by the CSRNG bits.

  2C: Generate 104 to 152 CSRNG bits.  Choose a random prefix byte 
 between 0x01 and 0x7F, then check if the concatenation collides with 
 a previously issued serial number.  If so, change the random prefix 
 byte until an unused serial number is found; if all 127 prefix bytes 
 result in collision, choose a new set of CSRNG bits.  Provide auditors 
 and root programs with a statistical proof that these retries will 
 not drop the total entropy below 96 bits.

Note 3: An appropriate external auditing scheme is to collect all / most 
  of the issued serial numbers, extract the bit positions that are random 
  according to the CPS, then run statistical tests to check if they do 
  indeed form a plausible output from a CSRNG.
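One such statistical check is the frequency (monobit) test from NIST SP 800-22, applied to the bit positions the CPS declares random; this is a sketch of that one test, not a full audit suite:

```python
import math

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test: for a true CSRNG the
    normalised sum of the bits mapped to +/-1 is approximately N(0,1).
    A very small p-value means the bits are implausible as CSRNG output."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2.0 * n))
```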

Note 4: In addition to external statistical tests, the auditor of the 
  CA shall inspect the actual implementation to ensure it does what the 
  CA says it does.

This too is released to the public domain.

Re: The current and future role of national CAs in the root program

2019-03-08 Thread Jakob Bohm via dev-security-policy

On 08/03/2019 06:27, Peter Bowen wrote:

...

Mozilla has specifically chosen to not distinguish between "government
CAs", "national CAs", "commercial CAs", "global CAs", etc.  The same rules
apply to every CA in the program.  Therefore, the "national or other
affiliation" is not something that is relevant to the end user.

These have all been discussed before and do not appear to be relevant to
any current conversation.



Many (not me) in the recent discussion of a certain CA have called
for this to be changed one way or another.  This is the only thing
that is new.

As I wrote earlier, there were a lot of general policy ideas and
questions mixed into the discussion of that specific case, and my
post was an attempt to summarize those questions and ideas raised by
others.

Maybe the ultimate result will be no change, maybe not.  The
discussion certainly has been raised by a lot of people.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: The current and future role of national CAs in the root program

2019-03-07 Thread Jakob Bohm via dev-security-policy

On 07/03/2019 23:02, Ryan Sleevi wrote:

Do you believe there is new information or insight you’re providing from
the last time this was discussed and decided?

For example:
https://groups.google.com/forum/m/#!searchin/mozilla.dev.security.policy/Government$20CAs/mozilla.dev.security.policy/JP1gk7atwjg

https://groups.google.com/forum/m/#!searchin/mozilla.dev.security.policy/Government$20CAs/mozilla.dev.security.policy/tr_PDVsZ6-k

https://groups.google.com/forum/m/#!searchin/mozilla.dev.security.policy/Government$20CAs/mozilla.dev.security.policy/qpwFbcRfBmk

I included the search query in the URL, so that you can examine for
yourself what new insight or information is being provided. I may have
missed some salient point in your message, but I didn’t see any new insight
or information that warranted revisiting such discussion.

In the spirit of
https://www.mozilla.org/en-US/about/forums/etiquette/ , it may be best to
let sleeping dogs lie here, rather than continuing this thread. However, if
you feel there has been some significant new information that’s been
overlooked, perhaps you can clearly and succinctly highlight that new
information.



I was stating that the very specific discussion that recently
unfolded (and which I promised not to mention by name in this thread)
has contained very many opinions on the topic.  In fact, the majority of
posts by others have centered on either the entropy issue or this very
issue of what criteria and procedures should be used for trusting
national CAs and whether those criteria should be changed.

Your own posts on Feb 28, 2019 13:54 UTC and Mar 4, 2019 16:31 UTC were
among those posts, as were posts by hackurx, Alex Gaynor, nadim, Wayne 
Thayer, Kristian Fiskerstrand and Matthew Hardeman.


I took care not to state what decisions should be made, merely to
summarize the issues in a clear and seemingly non-controversial way,
trying to be inclusive of the opinions stated by all sides.  If there
are additional points on the topic that I forgot or that may arise later
in the specific discussion, they can and should be added such that there
will be a useful basis for discussion of whatever should or should not
be done long term, once the specific single case has been handled.

I did not wake this sleeping dog, it was barking and yanking its chain
all week.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


The current and future role of national CAs in the root program

2019-03-07 Thread Jakob Bohm via dev-security-policy
Currently the Mozilla root program contains a large number of roots that 
are apparently single-nation CA programs serving their local community 
almost exclusively, including by providing certificates that they can 
use to serve content with the rest of the world.

For purposes of this, I define a national CA as a CA that has publicly 
self-declared that it serves a single geographic community almost 
exclusively, with that area generally corresponding to national borders 
of a country or territory.

As highlighted by the discussion, this raises some common concerns for 
such CAs:

1. Due to the technical way Mozilla products handle the root program 
  data, each national CA is trusted to issue certificates for anyone 
  anywhere in the world despite them not having any self-declared 
  interest to do so.  This constitutes an unintentional security risk 
  as highlighted years ago by the 2011 DigiNotar (NL) incident.

2. For a variety of reasons, the existence of all these globally trusted 
  national CAs, has made establishment of such national CAs a matter of 
  pride for governments, regardless if they currently have such CAs.

3. There is a legitimate concern that any national CA (government run or 
  not) may be used by that government as a means to project force in a 
  manner inconsistent with being trusted outside that country (as 
  reflected in current Mozilla policy), but consistent with a general 
  view of the rights of nations (as expressed in the UN charter and 
  ancient traditions).

4. Some of the greatest nations on Earth have had their official 
  national CAs rejected by the root program because of #1 or #3, 
  including the US federal bridge CA and China's CNNIC.

This in turn leads to some practical issues:

5. Should the root program policies provide rules that enforce the 
  self-declared scope restrictions on a CA?  For example, if a CA 
  has declared that it only intends to issue for entities in the 
  Netherlands, should certificates for entities beyond that be 
  considered as misissuance incidents for that reason alone? 
  (DigiNotar involved misissuance in a much more literal sense.)

6. How should rules for the meaning of such geographical intent be 
  mapped for things like IP address certificates ?  For example 
  should the rules use the geography indicated in NRO address space 
  assignments to national ISPs?  Or perhaps some information provided 
  by ISPs themselves?  (Commercial IP-to-country databases have a too 
  high error rate for certificate policy use).

7. How should rules for the meaning of such geographical intent be 
  mapped for certificates for domains under gTLDs such as visit-
  countryname.org or countryname-government.com ?

8. Should Mozilla champion a specification for adding such geographic 
  restrictions to CA cert name constraints in a manner that is both 
  backward compatible with other clients and adaptive to the ongoing 
  movement/reassignment of name spaces to/between nations.

9. Should Mozilla attempt to enforce such intent in its clients (Firefox 
  etc.) once the technical data exists?

10. The root trust data provided in the Firefox user interface does not 
  clearly indicate the national or other affiliation of the trusted 
  roots, such that concerned users may make informed decisions 
  accordingly.   Ditto for the root program dumps provided to other 
  users of the Mozilla root program data (inside and outside the Mozilla 
  product family).  For example, few users outside Scandinavia would 
  know that "Sonera" is really a national CA for the countries in which 
  Telia-Sonera is the incumbent Telco (Finland, Sweden and Åland).


This overall issue was touched repeatedly in the thread, especially 
point 3 above, but the earliest I could find was in Message ID 
 
posted on Fri, 22 Feb 2019 23:45:39 UTC by "cooperq"

On 07/03/2019 18:59, Jakob Bohm wrote:
> This thread is intended to be a catalog of general issues that come/came
> up at various points in the DarkMatter discussions, but which are not 
> about DarkMatter specifically.
> 
> Each response in this thread should have a subject line of the single 
> issue it discusses and should not mention DarkMatter except to mention 
> the Timestamp, message-id and Author of the message in which it came up.
> 
> Further discussion of each issue should be in response to that issue.
> 
> Each new such issue should be a response directly to this introductory 
> post, and I will make a few such subject posts myself.
> 
> Once again, no further mentions of Darkmatter in this thread are
> allowed, keep those in the actual Darkmatter threads.
> 



Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 

EJBCA defaulting to 63 bit serial numbers

2019-03-07 Thread Jakob Bohm via dev-security-policy
In the course of the other discussion it was revealed that EJBCA by PrimeKey 
has apparently:

1. Made serial numbers with 63 bits of entropy the default.  Which is 
  not in compliance with the BRs for globally trusted CAs and SubCAs.

2. Misled CAs into believing this setting actually provided 64 bits of 
  entropy.

3. Discouraged CAs from changing that default.

This raises 3 derived concerns:

4. Any CA using the EJBCA platform needs to manually check if they 
  have patched EJBCA to comply with the BR entropy requirement despite 
  EJBCAs publisher (PrimeKey) telling them otherwise.
   Maybe this should be added to the next quarterly mail from Mozilla to
  the CAs.

5. Is it good for the CA community that EJBCA seems to be the only 
  generally available software suite for large CAs to use?

6. Should the CA and root program community be more active in ensuring 
  compliance by critical CA infrastructure providers such as EJBCA and 
  the companies providing global OCSP network hosting.
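The arithmetic behind point 1 can be illustrated as follows; this is an assumed model of the failure mode (forcing a 64-bit draw to fit a positive 8-byte DER INTEGER), not actual EJBCA source:

```python
import os

def sixty_three_bit_serial():
    """Assumed model of the default discussed above: draw 64 random bits,
    then clear the sign bit so the value always encodes as a positive
    8-byte DER INTEGER.  The output space is 2**63 values, i.e. 63 bits
    of entropy rather than the intended 64."""
    raw = int.from_bytes(os.urandom(8), "big")
    return raw & 0x7FFFFFFFFFFFFFFF
```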


The above issue first came up in Message ID 

posted on Mon, 25 Feb 2019 08:39:07 UTC by Scott Rea, and subsequently 
led to a number of replies, including at least one reply from Mike 
Kushner from EJBCA and a discovery that Google Trust Services was 
also hit with this issue to the tune of 100K non-compliant certificates.

On 07/03/2019 18:59, Jakob Bohm wrote:
> This thread is intended to be a catalog of general issues that come/came
> up at various points in the DarkMatter discussions, but which are not 
> about DarkMatter specifically.
> 
> Each response in this thread should have a subject line of the single 
> issue it discusses and should not mention DarkMatter except to mention 
> the Timestamp, message-id and Author of the message in which it came up.
> 
> Further discussion of each issue should be in response to that issue.
> 
> Each new such issue should be a response directly to this introductory 
> post, and I will make a few such subject posts myself.
> 
> Once again, no further mentions of Darkmatter in this thread are
> allowed, keep those in the actual Darkmatter threads.
> 


Enjoy

Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


General issues that came up in the DarkMatter discussion(s)

2019-03-07 Thread Jakob Bohm via dev-security-policy

This thread is intended to be a catalog of general issues that come/came
up at various points in the DarkMatter discussions, but which are not 
about DarkMatter specifically.


Each response in this thread should have a subject line of the single 
issue it discusses and should not mention DarkMatter except to mention 
the Timestamp, message-id and Author of the message in which it came up.


Further discussion of each issue should be in response to that issue.

Each new such issue should be a response directly to this introductory 
post, and I will make a few such subject posts myself.


Once again, no further mentions of Darkmatter in this thread are
allowed, keep those in the actual Darkmatter threads.


Enjoy

Jakob


Re: DarkMatter Concerns

2019-03-05 Thread Jakob Bohm via dev-security-policy

On 05/03/2019 16:11, Benjamin Gabriel wrote:

Message Body (2 of 2)
[... continued ..]

Dear Wayne


> ...


Yours sincerely,

Benjamin Gabriel
General Counsel
DarkMatter Group





As an outside member of this community (not employed by Mozilla or any 
public CA), I would like to state the following (which is not official

by my company affiliation and cannot possibly be official on behalf of
Mozilla):

1. First of all, thank you for finally directly posting Darkmatter's
  response to the public allegations.  Many people seemingly
  inexperienced in security and policy have posted rumors and opinions
  to the discussion, while others have mentioned that Darkmatter had
  made some kind of response outside this public discussion thread.

2. It is the nature of every government CA or government sponsored CA
  in the world that the current structure of the Mozilla program can
  be easily abused in the manner speculated, and that any such abuse
  would not be admitted, but rather hidden with the full skill and
  efficiency of whatever spy agency orders such abuse.  One of the
  most notorious such cases occurred when a private company running a
  CA for the Dutch Government had a massive security failure, allowing
  someone to obtain MitM certificates for use against certain Iranian
  people.
   I have previously proposed a technical countermeasure to limit this
  risk, but it didn't seem popular.

3. The chosen name of your CA "Dark matter" unfortunately is
  associated in most English language contexts with either an obscure
  astronomical phenomenon or as a general term for any sinister and
  evil topic.  This unfortunate choice of name may have helped spread
  and enhance the public rumor that you are an organization similar
  to the US NSA or its contractors.  After all, "Dark matter is evil"
  is a headline more likely to sell newspapers than "UAE company with a
  very boring name helped UAE government attack someone".  However I
  understand that as a long established company, it is probably too late
  to change your name.

4. The United States itself has been operating a government CA (The
  federal bridge CA) for a long time, and Mozilla doesn't trust it.
  In fact when it was discovered that Symantec had used one of their
  Mozilla trusted CAs to sign the US federal bridge CA, this was one
  of the many problems that led to Mozilla distrusting Symantec,
  even though they were the oldest and biggest CA in the root
  program.

5. While Darkmatter LLC itself may have been unaware of the discussions
  that lead to the wording of the 64 bit serial entropy requirements,
  it remains an open question how QuoVadis was not aware of that discussion
  did not independently discover that you were issuing with only 63
  bits under their authority.

6. Fairly recently, a private Chinese CA (WoSign) posted many partially
  untrue statements in their defense.  The fact that their posts were
  not 100% true, and sometimes very untrue, has led to a situation
  where some on this list routinely compare any misstatements by a
  criticized CA to the consistent dishonesty that led to the permanent
  distrust of WoSign and its subsidiary StartCom.  This means that you
  should be extra careful not to misstate details like the ones caught
  by Jonathan Rudenberg in his response at 16:12 UTC today.

7. Your very public statement ended with a standard text claiming that
  the message should be treated as confidential.  This is a common
  cause of ridicule and the reason that my own postings in this forum
  use a completely different such text than my private e-mail
  communication.  As a lawyer you should be able to draft such a text
  much better than my own feeble attempt.



Enjoy

Jakob


Re: Public CA:certs with unregistered FQDN mis-issuance

2019-03-01 Thread Jakob Bohm via dev-security-policy
 to issue for FQDNs
under the validated domain).

FQDN existence is not a requirement, only validation that the
certificate applicant has the authority to create and change that
FQDN at will.  For example, automated validation that raotest.com.tw is
a registered domain fully controlled by the applicant would be enough
to give that applicant a certificate for www.raotest.com.tw, even
before that applicant creates www.raotest.com.tw.  Some applicants
may even prefer to obtain certificates before making their FQDN live.

Of course, the fact that a registry such as the one for .com.tw can
create any non-existent domain does not count, because such domains
would generally belong to a 3rd party.
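As a toy illustration only (not the actual BR domain validation procedure), the authorization scope described above amounts to a suffix check on already-normalized names:

```python
# Toy sketch -- not the BR validation procedure.  Names are assumed to be
# lowercase A-labels; trailing dots are stripped for comparison.

def authorized_for(fqdn: str, validated_domain: str) -> bool:
    """True if fqdn equals, or is a subdomain of, the validated domain --
    even if the FQDN does not yet exist in DNS."""
    fqdn = fqdn.rstrip(".").lower()
    validated_domain = validated_domain.rstrip(".").lower()
    return fqdn == validated_domain or fqdn.endswith("." + validated_domain)

# Control of raotest.com.tw authorizes www.raotest.com.tw before the
# www record is ever created:
print(authorized_for("www.raotest.com.tw", "raotest.com.tw"))  # True
# A sibling registration under the same registry suffix is not covered:
print(authorized_for("other.com.tw", "raotest.com.tw"))        # False
```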


Enjoy

Jakob


Re: Possible DigiCert in-addr.arpa Mis-issuance

2019-03-01 Thread Jakob Bohm via dev-security-policy
ere, I will spitball something in
>>> a next paragraph just to have something to start with, but to me if it
>>> turns out we can't improve on basically "sometimes it doesn't work so
>>> we just shrug and move on" we need to start thinking about deprecating
>>> this approach altogether. Not just for DigiCert, for everybody.
>>>
>>> - Spitball: What if the CA/B went to the registries, at least the big
>>>ones, and said we need this, strictly for this defined purpose, give
>>>us either reliable WHOIS, or RDAP, or direct database access or
>>>_something_ we can automate to do these checks ? The nature of CA/B
>>>may mean that it's not appropriate to negotiate paying for this
>>>(pressuring suppliers to all agree to offer members the same rates is
>>>just as much a problem as all agreeing what you'll charge customers)
>>>but it should be able to co-ordinate making sure members get access,
>>>and that it isn't opened up to dubious data resellers that the
>>>registries don't want rifling through their database.
>>>
>>
>> Unfortunately, this is not really viable. The CA/Browser Forum maintains
>> relationships with ICANN, as do individual members. While this, on its
>> face, seems reasonable, there are practical, business, and legal concerns
>> that prevent this from being viable. Further, proposals which would require
>> membership in the CA/Browser Forum should, on their face, be rejected - a
>> CA should not have to join the Forum in order to be a CA.
>>
>> I do agree, however, that the use of WHOIS data continues to show
>> problematic incidents - whether it's with OCR issues or manual entry - and
>> suspect a more meaningful solution is to move away from this model
>> entirely. The recently approved methods to the BRs for expressing contact
>> information via the DNS directly is one such approach. The GDPR issues
>> surrounding WHOIS and RDAP have already led it to be compelling in its own
>> right.
>>
>> Most importantly, you are on the right path of questions, though - which is
>> we should examine such incidents systemically and look for improvements,
>> and not convince ourselves that the status quo is the best possible
>> solution :)


Enjoy

Jakob


Re: T-Systems invalid SANs

2019-02-27 Thread Jakob Bohm via dev-security-policy

On 27/02/2019 09:54, michel.lebihan2...@gmail.com wrote:

I also found that certificates that were issued very recently have duplicate 
SANs:
https://crt.sh/?id=1231853308&opt=cablint,x509lint,zlint
https://crt.sh/?id=1226557113&opt=cablint,x509lint,zlint
https://crt.sh/?id=1225737388&opt=cablint,x509lint,zlint



Are duplicate SANs forbidden by any standard? (it's obviously
wasteful, but RFC3280 seems to implicitly allow it).
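Whatever the standards say, duplicates are at least trivial to detect mechanically; a toy sketch over parsed dNSName strings (real linters such as zlint, x509lint and cablint operate on the DER encoding directly):

```python
# Toy duplicate-SAN detector over parsed dNSName strings.

def duplicate_sans(sans: list[str]) -> set[str]:
    """Return the SAN values that appear more than once."""
    seen: set[str] = set()
    dups: set[str] = set()
    for name in sans:
        if name in seen:
            dups.add(name)
        seen.add(name)
    return dups

print(duplicate_sans(["example.com", "www.example.com", "example.com"]))
# -> {'example.com'}
```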

Enjoy

Jakob


Re: CA ownership checking [DarkMatter Concerns]

2019-02-27 Thread Jakob Bohm via dev-security-policy
t that such a policy may be imposed, every 
  key holding CA employee at every CA will be under duress (by the 
  root programs) to hide all problems at their CA and defend every 
  bad decision and mistake of their CA for fear of becoming 
  blacklisted.

3. People with direct key access are obvious targets of unlawful 
  coercion, just like bank employees with keys to the vaults.  We 
  really don't want to publish a hit list of whom criminal gangs 
  (etc.) should target with violence, kidnapping, blackmail etc. 
  when they want to get malicious certificates for use against high 
  value targets.

4. If a CA still practices the "off-site split key secret trustees" 
  way of preventing root key loss, publishing the names of those 
  trustees would defeat the purpose of that security measure.



Enjoy

Jakob


Re: Possible DigiCert in-addr.arpa Mis-issuance

2019-02-27 Thread Jakob Bohm via dev-security-policy

On 27/02/2019 00:10, Matthew Hardeman wrote:

Is it even proper to have a SAN dnsName in in-addr.arpa ever?

While in-addr.arpa IS a real DNS hierarchy under the .arpa TLD, it rarely
has anything other than PTR and NS records defined.



While there is no current use, and the test below was obviously
somewhat contrived (and seems to have triggered a different issue),
one cannot rule out the possibility of a need appearing in the
future.

One hypothetical use would be to secure BGP traffic, as certificates
with IpAddress SANs are less commonly supported.
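As a side note, the in-addr.arpa owner name for an address is purely mechanical; Python's standard library can compute it directly (the addresses below are the ones from this thread):

```python
# The reverse-DNS name added to the SAN is simply the PTR owner name for
# the address, which the stdlib ipaddress module can compute directly.
import ipaddress

addr = ipaddress.ip_address("79.110.168.69")
print(addr.reverse_pointer)            # 69.168.110.79.in-addr.arpa

# Control demonstrated at that name should, at most, authorize the
# enclosing /24 subtree -- strip the host label to obtain it:
subtree = addr.reverse_pointer.split(".", 1)[1]
print(subtree)                         # 168.110.79.in-addr.arpa
```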



Here this was clearly achieved by creating a CNAME record for
69.168.110.79.in-addr.arpa pointed to cynthia.re.

I've never seen any software or documentation anywhere attempting to
utilize a reverse-IP formatted in-addr.arpa address as though it were a
normal host name for resolution.  I wonder whether this isn't a case that
should just be treated as an invalid domain for purposes of SAN dnsName
(like .local).

On Tue, Feb 26, 2019 at 1:05 PM Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Thanks Cynthia. We are investigating and will report back shortly.

From: dev-security-policy 
on behalf of Cynthia Revström via dev-security-policy <
dev-security-policy@lists.mozilla.org>
Sent: Tuesday, February 26, 2019 12:02:20 PM
To: dev-security-policy@lists.mozilla.org
Cc: b...@benjojo.co.uk
Subject: Possible DigiCert in-addr.arpa Mis-issuance

Hello dev.security.policy


Apologies if I have made any mistakes in how I post, this is my first
time posting here. Anyway:


I have managed to issue a certificate with a FQDN in the SAN that I do
not have control of via Digicert.


The precert is here: https://crt.sh/?id=1231411316

SHA256: 651B68C520492A44A5E99A1D6C99099573E8B53DEDBC69166F60685863B390D1


I have notified Digicert who responded back with a generic response
followed by the certificate being revoked through OCSP. However I
believe that this should be wider investigated, since this cert was
issued by me adding 69.168.110.79.in-addr.arpa to my SAN, a DNS area
that I do control through reverse DNS.


When I verified 5.168.110.79.in-addr.arpa (same subdomain), I noticed
that the whole of in-addr.arpa became validated on my account, instead
of just my small section of it (168.110.79.in-addr.arpa at best).


To test if digicert had just in fact mis-validated a FQDN, I tested with
the reverse DNS address of 192.168.1.1, and it worked and Digicert
issued me a certificate with 1.1.168.192.in-addr.arpa on it.


Is there anything else dev.security.policy needs to do with this? This
seems like a clear case of mis-issuance. It's also not clear if
in-addr.arpa should even be issuable.


I would like to take a moment to thank Ben Cartwright-Cox and igloo5
in pointing out this violation.





Enjoy

Jakob


Re: DarkMatter Concerns

2019-02-25 Thread Jakob Bohm via dev-security-policy

On 25/02/2019 11:42, Scott Rea wrote:

G’day Paul,

I cannot speak for other CAs, I can only surmise what another CA that is as 
risk intolerant as we are might do. For us, we will collision test since there 
is some probability of a collision and the test is the only way to completely 
mitigate that risk.
There is a limitation in our current platform that sets the serialNumber 
bit-size globally, however we expect a future release will allow this to be 
adjusted per CA. Once that is available, we can use any of the good suggestions 
you have made below to adjust all our Public Trust offerings to move to larger 
entropy on serialNumber determination.

However, the following is the wording from Section 7.1 of the latest Baseline 
Requirements:
“Effective September 30, 2016, CAs SHALL generate non-sequential Certificate 
serial numbers greater than zero (0) containing at least 64 bits of output from 
a CSPRNG.”

Unless we are misreading this, it does not say that serialNumbers must have 
64-bit entropy as output from a CSPRNG, which appears to be the point you and 
others are making. If that was the intention, then perhaps the BRs should be 
updated accordingly?

We don’t necessarily love our current situation in respect to entropy in 
serialNumbers, we would love to be able to apply some of the solutions you have 
outlined, and we expect to be able to do that in the future. However we still 
assert that for now, our current implementation of EJBCA is still technically 
compliant with the BRs Section 7.1 as they are written. Once an update for 
migration to larger entropy serialNumbers is available for the platform, we 
will make the adjustment to remove any potential further issues.

Regards,
  



I believe the commonly accepted interpretation of the BR requirement is
to ensure that for each new certificate generated, there are at least
2**64 possible serial numbers and no way for anyone involved to predict
or influence which one will be used.

This rule exists to prevent cryptographic attacks on the algorithms used
for signing the certificates (such as RSA+SHA256), and achieves this
protection because of the location of the serial number in the
certificate structure.

If all the serial numbers are strictly in the range 0x0000000000000001
to 0x7FFFFFFFFFFFFFFF then there is not enough protection of the signing
algorithm against these attacks.
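To illustrate the difference (a sketch using Python's secrets CSPRNG, not EJBCA's actual code): forcing the sign bit of an 8-byte value to zero leaves only 2**63 possible serials, while keeping all 64 bits of CSPRNG output means tolerating a 9-byte positive DER INTEGER encoding when the top bit is set:

```python
# Sketch only -- not EJBCA's actual implementation.  The secrets module
# stands in for the CSPRNG required by BR section 7.1.
import secrets

SIGN_BIT_MASK = 0x7FFFFFFFFFFFFFFF  # clears bit 63

def serial_63_bit() -> int:
    # EJBCA-style default: 8 random bytes with the sign bit forced to 0
    # so the DER INTEGER stays positive -- only 2**63 possible values.
    return int.from_bytes(secrets.token_bytes(8), "big") & SIGN_BIT_MASK

def serial_64_bit() -> int:
    # Keep all 64 bits of CSPRNG output; conceptually encoded as a 9-byte
    # positive DER INTEGER (leading zero byte) when bit 63 is set.
    # Retry on zero, since the BRs require serials greater than zero.
    while True:
        s = int.from_bytes(secrets.token_bytes(8), "big")
        if s > 0:
            return s

print(serial_63_bit() < 2**63)       # always True: the top bit is lost
print(0 < serial_64_bit() < 2**64)   # always True
```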


Enjoy

Jakob


Re: Firefox Revocation Documentation

2019-02-20 Thread Jakob Bohm via dev-security-policy

On 20/02/2019 00:40, Wayne Thayer wrote:

I have replaced some outdated information on Mozilla's wiki about
revocation checking [1] [2] with a new page on the wiki describing how
Firefox currently performs revocation checking on TLS certificates:

https://wiki.mozilla.org/CA/Revocation_Checking_in_Firefox

It also includes a brief description of our plans for CRLite.

Please respond if you have any questions or comments about the information
on this page. I hope it is useful, and I plan to add more details in the
future.

- Wayne

[1] https://wiki.mozilla.org/index.php?title=CA:RevocationPlan&redirect=no
[2]
https://wiki.mozilla.org/index.php?title=CA:ImprovingRevocation&redirect=no



Nice write up.  Some minor issues:

1. Because generating the CRLite data will take some non-zero time,
  the time stamp used by the client to check if a certificate is too
  new for the CRLite data should be based on when the Mozilla server
  requested the underlying CT data rather than when the data was
  passed to the client.

2. While you mention the ability of attackers to omit OCSP stapling
  in spoofed responses, you forget the additional problem that there
  are still server software packages without upstream stapling support.

3. Don't forget Thunderbird (technically no longer a primary Mozilla
  product, but still a major use of Mozilla certificate infrastructure).


Enjoy

Jakob


Re: Blog: Why Does Mozilla Maintain Our Own Root Certificate Store?

2019-02-18 Thread Jakob Bohm via dev-security-policy
On 19/02/2019 04:04, Ryan Sleevi wrote:
> On Mon, Feb 18, 2019 at 4:59 PM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> On 14/02/2019 23:31, Wayne Thayer wrote:
>>> This may be of interest:
>>>
>>>
>> https://blog.mozilla.org/security/2019/02/14/why-does-mozilla-maintain-our-own-root-certificate-store/
>>>
>>
>> Nice write up.
>>
>> Two points only briefly covered:
>>
>> - Because many open source OS distributions use the Mozilla root store,
>>replacing the Mozilla root store by relying on the OS root store would
>>cut off its own feet.
>>
>> - Some participants in the community actively refuse to support use of
>>the Mozilla root store in other open source initiatives.
>>
> 
> It’s not productive or helpful to continue engaging like this.

I was merely suggesting that whatever view Thayer (not you) has on 
these two points was missing from the blog post.  Nothing more, 
nothing less.

The first point is yet another reason why Mozilla provides its own 
root store rather than relying on the OS one (the main theme of 
Thayer's blog post).

The second point is something closely related that has no rationale 
in the FAQ, only a description of the situation.  The blog post 
actually mentions that those downstreams exist, so it would be 
relevant to mention any policy on this.

And I would highly appreciate if you stopped with your incessant 
bullying and trolling (and not just of me), which has been going on 
for years now.



Enjoy

Jakob


Re: Blog: Why Does Mozilla Maintain Our Own Root Certificate Store?

2019-02-18 Thread Jakob Bohm via dev-security-policy

On 14/02/2019 23:31, Wayne Thayer wrote:

This may be of interest:

https://blog.mozilla.org/security/2019/02/14/why-does-mozilla-maintain-our-own-root-certificate-store/



Nice write up.

Two points only briefly covered:

- Because many open source OS distributions use the Mozilla root store,
 replacing the Mozilla root store by relying on the OS root store would
 cut off its own feet.

- Some participants in the community actively refuse to support use of
 the Mozilla root store in other open source initiatives.


Enjoy

Jakob


Re: Certificate issued with OU > 64

2019-02-18 Thread Jakob Bohm via dev-security-policy
On 15/02/2019 19:33, Ryan Sleevi wrote:
> On Fri, Feb 15, 2019 at 12:01 PM Jakob Bohm via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
>> Indeed, the report states that the bug was in the pre-issuance checking
>> software.
>>
> 
> I believe you may have misread the report. I do not have the same
> impression - what is stated is a failure of the control, not the linting.
> 
> 

The report refers to the pre-checking code in question as the "filter".

As the certificate hasn't been signed yet, pre-checking code needs to 
either generate a dummy certificate (signed with a dummy untrusted key) 
as input to a general cert-linter, or check the elements of the intended 
certificate in a disembodied form.

>> One good community lesson from this is that it is prudent to use a
>> different brand and implementation of the software that does post-
>> issuance checking than the software doing pre-issuance checking, as a
>> bug in the latter can be quickly detected by the former due to not
>> having the same set of bugs.
> 
> 
> While I can understand the position here - and robustness is always good -
> I do not believe this is wise or prudent advice, especially from this
> incident. While it is true that post-issuance may catch things pre-issuance
> misses, the value in post-issuance is both as a means of ensuring that
> internal records are consistent (i.e. that the set of pre-issuance
> certificates match what was issued - ensuring no rogue or unauthorized
> certificates) as well as detecting if preissuance was not run.
> 
> There's significant more risk for writing a second implementation "just
> because" - and far greater value in ensuring robustness. Certainly,
> post-issuance is far less important than pre-issuance.
> 

Of course.  I was not suggesting that approach (although it can be a 
high reliability technique).  I was merely suggesting to obtain a 
cert linter from a different vendor than the main CA suite.  And by 
all means run multiple checkers that purport to check the same 
things.

Also note that pre-issuance checking is not the same as checking CT 
pre-certs.  Pre-issuance checks need to happen before signing the 
pre-certs.


Enjoy

Jakob


Re: Certificate issued with OU > 64

2019-02-15 Thread Jakob Bohm via dev-security-policy
erstand the problem in
substantial technical detail, so that we can understand how the fixes will
address, and so that the community at large can be aware of systemic risks
or patterns and ensure that, regardless of what PKI software they use, so
that the ecosystem can itself improve.

Please continue to provide more details regarding this incident




Enjoy

Jakob


Re: P-384 and ecdsa-with-SHA512: is it allowed?

2019-02-11 Thread Jakob Bohm via dev-security-policy
On 10/02/2019 02:55, Corey Bonnell wrote:
> Hello,
> Section 5.1 of the Mozilla Root Store Policy 
> (https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/)
>  specifies the allowed set of key and signature algorithms for roots and 
> certificates that chain to roots in the Mozilla Root Store. Specifically, the 
> following hash algorithms and ECDSA hash/curve pairs are allowed:
> 
> • Digest algorithms: SHA-1 (see below), SHA-256, SHA-384, or SHA-512.
> • P‐256 with SHA-256
> • P‐384 with SHA-384
> 
> Given this, if an End-Entity certificate were signed using a subordinate CA’s 
> P-384 key with ecdsa-with-SHA512 as the signature algorithm (which would be 
> reflected in the End-Entity certificate's signatureAlgorithm field), would 
> this violate Mozilla policy? As I understand it, an ECDSA signing operation 
> with a P-384 key using SHA-512 would be equivalent to using SHA-384 (due to 
> the truncation that occurs), so I am unsure if this would violate the 
> specification above (although the signatureAlgorithm field value would be 
> misleading). I believe the same situation exists if a P-256 key is used for a 
> signing operation with SHA-384.
> 
> Any insight into whether this is allowed or prohibited would be appreciated.
> 


Using the same DSA or ECDSA key with more than one hash algorithm 
violates the cryptographic design of DSA/ECDSA, because those don't 
include a hash identifier into the signature calculation.  It's 
insecure to even accept such signatures, as it would make the 
signature checking code vulnerable to 2nd pre-image attacks on the 
hash algorithm not used by the actual signer to generate 
signatures.  It would also be vulnerable to cross-hash pre-image 
attacks that are otherwise not considered weaknesses in the hash 
algorithms.

Furthermore the FIPS essentially (if not explicitly) require using 
a shortened 384-bit variant of SHA-512 as input to P-384 ECDSA, 
and the only approved such shortened version is, in fact, SHA-384.

Using the same P-384 ECDSA key pair with both SHA-384 and 
SHA-3-384 might be within some readings of the FIPS, but would 
still be vulnerable to the issue above (imagine a pre-image 
weakness being found in either hash algorithm, all signatures 
with such a key would then become suspect).
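The point that SHA-384 is the only approved shortened variant is easy to check concretely: SHA-384 uses different initialization constants than SHA-512, so it is not a plain truncation, and a verifier must therefore know exactly which hash the signer used:

```python
# SHA-384 is SHA-512 with different initialization constants, not a plain
# truncation, so the same message hashes to different 48-byte values.
import hashlib

msg = b"example tbsCertificate bytes"
sha384 = hashlib.sha384(msg).digest()
sha512_truncated = hashlib.sha512(msg).digest()[:48]

print(len(sha384), len(sha512_truncated))  # 48 48
print(sha384 == sha512_truncated)          # False
```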


Enjoy

Jakob


Re: GoDaddy Underscore Revocation Disclosure

2019-02-08 Thread Jakob Bohm via dev-security-policy

On 09/02/2019 01:36, Santhan Raj wrote:

On Friday, February 8, 2019 at 4:09:32 PM UTC-8, Joanna Fox wrote:

I agree on the surface this bug appears to be the same, but the root cause is 
different. The issue for bug 1462844 was a specific status not counting as 
active when it was. To mitigate this issue, we updated the query to include the 
missing status. However, we are in the process of simplifying the data 
structures to simplify these types of queries.

For the underscore certificates, these were non-active, not even considered as 
provisioned since they were not delivered to a customer and not publicly used 
for any encryption. These certificates were effectively abandoned by our system.


Is the term "certificate" accurate in this case? Assuming you embed SCTs within 
the EE cert, what you have is technically a pre-cert that was abandoned (not meant to be 
submitted to CT). Right? I ask because both the cert you linked are pre-certs, and I 
understand signing a pre-cert is intent to issue and is treated the same way, but still 
wanted to clarify.

Or by non-active certificate, are you actually referring to a fully signed EE 
that was just not delivered to the customer?



And in either case, the only reasonable sequences of events within
expectations (and possibly within requirements) would have been:

Good Scenario A:
1. Pre-certificate is logged in CT.
2. Matching certificate is signed by CA key.
3. Signed certificate is logged in CT.
4. At this point the customer _might_ retrieve the certificate from CT
  without knowledge of CA.
5. Thus the certificate is in reality active, despite any records the CA
  may have of not delivering it directly to customer.

Good Scenario B:
1. Pre-certificate is logged in CT.
2. CA decides (for any reason) not to actually sign the actual
  certificate.
3. The serial number listed in the CT pre-certificate is formally
  revoked in CRL and OCSP.  This is done once the decision not to
  sign is final.
4. If possible, label these revocations differently in CRL and OCSP
  responses, to indicate to independent CT auditors that the EE was
  never signed and therefore not logged to CT.

Scenario B should be very rare, as the process from pre-cert logging to
final cert logging should typically be automatic or occur within a
single root key ceremony. Basically, something unusual would have to
happen at the very last moment, such as an incoming report that a
relied-upon external system (Global DNS, Internet routing, reliable
identity database etc.) may have been compromised during the vetting.

In neither case would a CT-logged serial number be left in a
not-active but not-revoked state beyond a relevant procedural
delay.

As pointed out in other recent cases, CA software must allow
revoking a certificate without making it publicly valid first, in
case Scenario B happens.
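The revocation step in Scenario B (step 3) can be sketched with the Python `cryptography` library. The key, issuer name, and serial number below are illustrative stand-ins, and since no CRL reason code means "never issued", cessationOfOperation is used here purely as a placeholder:

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# Illustrative stand-ins for the CA's signing key, issuer name, and
# the serial number that was CT-logged but never signed as an EE cert.
ca_key = ec.generate_private_key(ec.SECP256R1())
issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example CA")])
abandoned_serial = 0x0123456789ABCDEF
now = datetime.datetime(2020, 11, 18)

# Revoke the never-signed serial.  No standard reason code fits
# "never issued" exactly; cessationOfOperation is a placeholder.
entry = (x509.RevokedCertificateBuilder()
         .serial_number(abandoned_serial)
         .revocation_date(now)
         .add_extension(
             x509.CRLReason(x509.ReasonFlags.cessation_of_operation),
             critical=False)
         .build())

crl = (x509.CertificateRevocationListBuilder()
       .issuer_name(issuer)
       .last_update(now)
       .next_update(now + datetime.timedelta(days=7))
       .add_revoked_certificate(entry)
       .sign(private_key=ca_key, algorithm=hashes.SHA256()))
```

The OCSP responder side would similarly return a "revoked" status for the abandoned serial, rather than "good" or "unknown".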



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: AW: AW: Incident Report DFN-PKI: Non-IDNA2003 encoded international domain names

2019-01-25 Thread Jakob Bohm via dev-security-policy

On 25/01/2019 19:23, Buschart, Rufus wrote:

Hello Jakob!


-----Original Message-----
From: dev-security-policy  On Behalf Of Jakob Bohm via dev-security-policy
Sent: Friday, 25 January 2019 18:47

Example, if the subscriber fills out the human readable order form like
this:
www.example.com
chat.example.com
example.com
detteerenprøve.example.com
www.example.net
example.net
*.eXample.com
*.examPle.nEt
192.0.2.3
the best choice is probably CN=*.example.com, which is one of the SANs and is a 
wildcard covering the first SAN (www.example.com).
The BRs do not require a specific choice among the 9 SANs that would go in the 
certificate (all of which must of course be validated).
The user-entered U-label detteerenprøve.example.com must of course be converted 
to the A-label xn--detteerenprve-lnb.example.com
before checking and encoding.


If a CA receives such a list and creates the CSR for the customer (how does the 
CA do this without access to the customer's private key?), they of course have to 
perform an IDNA translation from U-label to A-label. And as we have learned, the 
BRGs (indirectly) enforce the use of IDNA2003. But if the CA receives a filled-in 
CSR, they don't perform (not even indirectly) an IDNA translation and have no 
obligation to check whether the entries are valid IDNA2003 A-labels.



I was not suggesting that the CA creates the CSR.

I was simply suggesting the common practice of the CA using the CSR
mostly as a "proof of possession" of the private key, but not as a
precise specification of the desired certificate contents.

For example, a CSR may (due to old tools or human error) contain
additional subject DN elements (such as eMailAddress).  Of course the
CSR can't be completely different from the requested certificate, as
that would be open to a man-in-the-middle attack where an attacker
submits the victim's CSR with a completely unrelated order.



Enjoy

Jakob


Re: AW: Incident Report DFN-PKI: Non-IDNA2003 encoded international domain names

2019-01-25 Thread Jakob Bohm via dev-security-policy
On 25/01/2019 16:06, Tim Hollebeek wrote:
> 
>> On 2019-01-24 20:19, Tim Hollebeek wrote:
>>> I think the assertion that the commonName has anything to do with what
>>> the user would type and expect to see is unsupported by any of the
>>> relevant standards, and as Rob noted, having it be different from the
>>> SAN strings is not in compliance with the Baseline Requirements.
>>
>> The BR do not say anything about it.
> 
> Rob already quoted it: "If present, this field MUST contain a single IP
> address
> or Fully-Qualified Domain Name that is one of the values contained in the
> Certificate's subjectAltName extension".
> 
> The only reason it's allowed at all is because certain legacy software
> implementations would choke if it were missing.
> 
>>> Requiring translation to a U-label by the CA adds a lot of additional
>>> complexity with no benefit.
>>
>> I have no idea what is so complex about that. When generating the
>> certificate, it's really just calling a function. On the other hand, when
> reading
>> a certificate you have to guess what they did.
> 
> Given that it has no benefit at all, any complexity is too much.  As I
> mentioned
> above, its existence is merely a workaround for a bug in obsolete software.
> 

Not so much a bug as a previous specification.  In other words, 
backwards compatibility with old systems and software predating the full 
migration to SAN-only checking in modern clients.

As with all backwards compatibility, it is about interoperating with 
systems that followed the common standards and practices of their time, as 
they were.

While the Python library mentioned apparently failed to match any valid 
IDNA SAN value to any actual IDNA host name, the much more common case 
is software that will process the A-label as an ASCII string and compare 
it to either the CN or the list of SAN values.

One such example is GNU Wget, which is sometimes used to bootstrap the 
downloading of updated client software (by typing/pasting a known URL).
GNU Wget was notably slow in implementing SAN support, doing only the 
matching of host name to CN.  For some compile options, SAN checking of 
host names was implemented as recently as the year 2014.

For this and other cases this old, the best choice is putting the A-label 
in the CN (if the subscriber's chosen "most important to match" name is a 
DNS name) or putting the IP address in the CN (if the most important name 
is an IP address).

In either case, I suspect (but have no data) that using the lower case 
(ASCII lower case) name form will be the most compatible choice.  If the 
subscriber does not specify which name is most important, choose the 
first DNS/IP name in the subscriber input, unless a later name is a 
wildcard that also matches that first name.  Ideally, the subscriber 
would indicate all desired choices directly in the CSR, but in practice, 
most subscribers will use broken scripts to generate the CSR, not a 
carefully crafted, filled-in template.
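That selection rule can be sketched as follows (assuming the usual one-label wildcard matching; the function names are mine, not any standard API):

```python
def wildcard_covers(wildcard: str, host: str) -> bool:
    """A wildcard like "*.example.com" covers exactly one extra label
    (RFC 6125 style): "www.example.com" yes, "a.b.example.com" no."""
    w = wildcard.lower().split(".")
    h = host.lower().split(".")
    return w[0] == "*" and len(w) == len(h) and w[1:] == h[1:]


def pick_cn(names: list[str]) -> str:
    """Pick a CN candidate from validated SANs: the first DNS/IP name,
    unless a later wildcard name also covers that first name."""
    first = names[0].lower()
    for name in names[1:]:
        if name.startswith("*.") and wildcard_covers(name, first):
            return name.lower()
    return first


order = ["www.example.com", "chat.example.com", "example.com",
         "xn--detteerenprve-lnb.example.com", "www.example.net",
         "example.net", "*.eXample.com", "*.examPle.nEt", "192.0.2.3"]
print(pick_cn(order))  # -> *.example.com
```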

Example, if the subscriber fills out the human readable order form like 
this:
   www.example.com
   chat.example.com
   example.com
   detteerenprøve.example.com
   www.example.net
   example.net
   *.eXample.com
   *.examPle.nEt
   192.0.2.3
the best choice is probably CN=*.example.com, which is one of the SANs 
and is a wildcard covering the first SAN (www.example.com).  The BRs 
do not require a specific choice among the 9 SANs that would go in the 
certificate (all of which must of course be validated).  The user-entered 
U-label detteerenprøve.example.com must of course be converted to the 
A-label xn--detteerenprve-lnb.example.com before checking and encoding.
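That conversion is the IDNA2003 ToASCII operation, which Python happens to implement in its built-in "idna" codec:

```python
def to_a_label(domain: str) -> str:
    """Convert a U-label domain name to its A-label (ACE) form.
    Python's built-in "idna" codec implements IDNA2003, the variant
    the BRs (indirectly) require."""
    return domain.encode("idna").decode("ascii")


print(to_a_label("detteerenprøve.example.com"))
# -> xn--detteerenprve-lnb.example.com
```

Note that purely ASCII labels pass through unchanged, so an already-encoded A-label is not double-encoded.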

This way, out of the correct uses of the certificate, only the 
example.net names and the IP address would not be accepted by older software.

Clients of course usually check only the CN or only the SANs; it's the CA 
that deals with the complexity of trying to please everyone without 
harming anyone.


>> And if it's really to complex, just remove the CN, or is that too complex
> too?
> 
> See above.
> 
>>> What users type and see are issues that are best left to Application
>>> Software Suppliers (browsers).
>>
>> So you're saying all the other software that deals with certificates
> should
>> instead add complexity?
> 
> What they actually do is to ignore this obsolete field, and process the
> subjectAltNames.  There's no additional complexity for them because
> they already are doing the conversion of IDN names.
> 
> -Tim
> 


Enjoy

Jakob


Re: Odp.: Odp.: Odp.: 46 Certificates issued with BR violations (KIR)

2019-01-21 Thread Jakob Bohm via dev-security-policy

On 18/01/2019 19:21, piotr.grabow...@kir.pl wrote:

On Friday, 18 January 2019 at 18:44:23 UTC+1, Jakob Bohm wrote:

On 17/01/2019 21:12, Wayne Thayer wrote:

Hello Piotr,

On Thu, Jan 17, 2019 at 6:23 AM Grabowski Piotr 
wrote:


Hello Wayne,



I am very sorry for the delay. Please find below our answers to Ryan's
questions. Regarding the question of why we didn't report the misissuance
of this 1 certificate as a separate incident: in my opinion it is still the
same incident. I can create a new incident if you want me to. Taking into
account my email and Ryan's response, I assumed it was not required to
create a separate incident for the misissuance of this 1 certificate.


I am not so concerned about creating a new bug and/or a new email thread.

My concern is that you did not report the new misissuance even though you
appear to have known about it. Why was it not reported as part of the
existing incident, and what is being done to ensure that future
misissuances - either in relation to existing or new incidents - are
promptly reported?

So our comments in blue:



I don't think it's reasonable to push the problem to your CA software
vendor.

- We are not pushing the problem to the CA software vendor. I have just
tried to explain how it happened.



If Verizon does not provide this support, what steps will your CA take?

- We are almost sure that Verizon will provide at least policy field
validation for maximum field size, which should be sufficient to eliminate
the last gap in our policy templates, which in turn led us to misissuance
of this certificate. If Verizon does not provide this feature, we will
consider using another UniCERT component - the ARM plug-in - which analyzes
requests, but this means custom development for us. It would be a big
change in many processes, and a challenge to preserve the current security
state as well, so this should be an absolute last resort.


If you know what those steps are, is there reason not to take them now? If
you do not know what those steps are, when will you know?

The main reason why we are not taking these steps now (changing processes
and custom development) is the absolute conviction that the cost and the risk
of implementing them is much, much higher than the risk of waiting for that
feature to be delivered by the vendor.  Just to recall, the only remaining
gap can, in the worst case, give us a specific field in a certificate longer
than the RFC specifies. Of course we are practicing due care and have put in
place as many counter-measures as we can (procedures, labels above the
fields).



Your software is producing invalid and improper certificates. The
responsibility for that ultimately falls on the CA, and understanding what
steps the CA is taking to prevent that is critical. It appears that the
steps, today, rely on human factors. As the past year of incident reports
have shown, relying solely on human factors is not a reasonable practice.

-I agree entirely with you, that's why we will keep exerting pressure on
Verizon to deliver:

o   Policy field size validation – in our opinion it is a simple change
request and should be delivered ASAP.

o   a native x509lint or zlint feature




When can we expect an update from you on Verizon's response to your
request? If I was the customer, I would expect a prompt response from
Verizon.



Additional questions:

- Is this the same CA software that was used on Verizon's own CA or
   SubCA, which had serious problem some time ago?

- Who else (besides KIR) is still using this software to run a public
   CA?


Enjoy

Jakob


Hello Jakob,
Could you give me the link to the incident with Verizon software you're talking 
about?



There were a number of Verizon incident related messages on this
list/newsgroup from November 2016 to January 2017, all in connection
with Digicert taking over and cleaning up the Verizon CAs.

Enjoy

Jakob

