Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-24 Thread Nils Amiet via dev-security-policy
Hello Ryan, 

Thank you for the ongoing dialogue.  We read your latest message and we 
understand your points. We can follow your arguments that the solution we 
proposed would not be viable for the overall ecosystem. 

It was a pleasure to have this discussion with you and we thank you for the 
time you invested reviewing our solution. 

Best regards, 

Nils and the Kudelski Security team
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-18 Thread Jakob Bohm via dev-security-policy

On 2020-11-18 16:36, Ryan Sleevi wrote:

On Wed, Nov 18, 2020 at 8:19 AM Nils Amiet via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


We have carefully read your email, and believe we’ve identified the following
important points:

1. Potential feasibility issue due to lack of path building support

2. Non-robust CRL implementations may lead to security issues

3. Avoiding financial strain is incompatible with user security.



I wasn't trying to suggest that they are fundamentally incompatible (i.e.
user security benefits by imposing financial strain), but rather, the goal
is user security, and avoiding financial strain is a "nice to have", but
not an essential, along that path.



#1 Potential feasibility issue due to lack of path building support

If a relying party does not implement proper path building, the EE certificate
may not be validated up to the root certificate. It is true that the new
solution adds some complexity to the certificate hierarchy. More specifically
it would turn a linear hierarchy into a non-linear one. However, we consider
that complexity as still being manageable, especially since there are usually
few levels in the hierarchy in practice. If a CA hierarchy has utilized the
Authority Information Access extension the chance that any PKI library can
build the path seems to be very high.



I'm not sure which PKI libraries you're using, but as with the presumption
of "robust CRL" implementations, "robust AIA" implementations are
unquestionably the minority. While it's true that Chrome, macOS, and
Windows implement AIA checking, a wide variety of popular browsers
(Firefox), operating systems (Android), and libraries (OpenSSL, cURL,
wolf/yassl, etc etc) do not.

Even if the path could not be built, this would lead to a fail-secure
situation where the EE certificate is considered invalid and it would not
raise a security issue. That would be different if the EE certificate would
erroneously be considered valid.



This perspective too narrowly focuses on the outcome, without considering
the practical deployment expectations. Failing secure is a desirable
property, surely, but there's no question that a desirable property for CAs
is "Does not widely cause outages" and "Does not widely cause our support
costs to grow".



The focus seems to be on limiting the ecosystem risks associated with
non-compliant 3rd-party SubCAs.  Specifically, in the (already important)
situation where SICA was operated by a 3rd party (not the root CA
operator), their proposal would enforce the change (by replacing and
revoking the root-CA-controlled ICA), whereas the original procedure
would rely on auditing a previously unaudited 3rd party, such as a
name-constrained corporate CA.


Consider, for example, that despite RFC 5280 having a "fail closed" approach
for nameConstraints (by MUST'ing the extension be critical), the CA/B Forum
chose to deviate and make it a MAY. The reason for this MAY was that the
lack of support for nameConstraints on legacy macOS (and OpenSSL) versions
meant that if a CA used nameConstraints, it would fail closed on those
systems, and such a failure made it impractical for the CA to deploy. The
choice to make such an extension non-critical was precisely because
"Everyone who implements the extension is secure, and everyone who doesn't
implement the extension works, but is not secure".

It is unfortunate-but-seemingly-necessary to take this mindset into
consideration: the choice between hard-failure for a number of clients (but
guaranteed security) or the choice between less-than-ideal security, but
practical usability, is often made for the latter, due to the socioeconomic
considerations of CAs.

Now, as it relates to the above point, the path building complexity would
inevitably cause a serious spike in support overhead, because it requires
that in order to avoid issues, every server operator would need to take
action to reconfigure their certificate chains to ensure the "correct"
paths were built to support such non-robust clients (for example, of the
OpenSSL forks, only LibreSSL supports path discovery, and only within the
past several weeks). As a practical matter, that is the same "cost" of
replacing a certificate, but with less determinism of behaviour: instead of
being able to test "am I using the wrong cert or the right one", a server
operator would need to consider the entire certification path, and we know
they'll get that wrong.



Their proposal would fall directly into the "am I using the wrong cert
or the right one" category.  EE subscribers would just have to do a
SubCA certificate replacement, just like they would if a CA-internal
cross-certificate expires and is reissued (as happened in 2019 for
GlobalSign's R1-R3 cross-cert, which was entirely workable for us
affected subscribers, though insufficient notices were sent out).


This is part of why I'm dismissive of the solution; not because it isn't
technically workable, but because it's practically non-viable when
considering the overall set of ecosystem concerns.

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-18 Thread Ryan Sleevi via dev-security-policy
On Wed, Nov 18, 2020 at 8:19 AM Nils Amiet via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> We have carefully read your email, and believe we’ve identified the
> following
> important points:
>
> 1. Potential feasibility issue due to lack of path building support
>
> 2. Non-robust CRL implementations may lead to security issues
>
> 3. Avoiding financial strain is incompatible with user security.
>

I wasn't trying to suggest that they are fundamentally incompatible (i.e.
user security benefits by imposing financial strain), but rather, the goal
is user security, and avoiding financial strain is a "nice to have", but
not an essential, along that path.


> #1 Potential feasibility issue due to lack of path building support
>
> If a relying party does not implement proper path building, the EE
> certificate
> may not be validated up to the root certificate. It is true that the new
> solution adds some complexity to the certificate hierarchy. More
> specifically
> it would turn a linear hierarchy into a non-linear one. However, we
> consider
> that complexity as still being manageable, especially since there are
> usually
> few levels in the hierarchy in practice. If a CA hierarchy has utilized the
> Authority Information Access extension the chance that any PKI library can
> build the path seems to be very high.
>

I'm not sure which PKI libraries you're using, but as with the presumption
of "robust CRL" implementations, "robust AIA" implementations are
unquestionably the minority. While it's true that Chrome, macOS, and
Windows implement AIA checking, a wide variety of popular browsers
(Firefox), operating systems (Android), and libraries (OpenSSL, cURL,
wolf/yassl, etc etc) do not.

Even if the path could not be built, this would lead to a fail-secure
> situation
> where the EE certificate is considered invalid and it would not raise a
> security issue. That would be different if the EE certificate would
> erroneously
> be considered valid.
>

This perspective too narrowly focuses on the outcome, without considering
the practical deployment expectations. Failing secure is a desirable
property, surely, but there's no question that a desirable property for CAs
is "Does not widely cause outages" and "Does not widely cause our support
costs to grow".

Consider, for example, that despite RFC 5280 having a "fail closed" approach
for nameConstraints (by MUST'ing the extension be critical), the CA/B Forum
chose to deviate and make it a MAY. The reason for this MAY was that the
lack of support for nameConstraints on legacy macOS (and OpenSSL) versions
meant that if a CA used nameConstraints, it would fail closed on those
systems, and such a failure made it impractical for the CA to deploy. The
choice to make such an extension non-critical was precisely because
"Everyone who implements the extension is secure, and everyone who doesn't
implement the extension works, but is not secure".
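The fail-closed vs works-but-insecure trade-off described above can be sketched as a toy model (a hypothetical simplification: real validators operate on parsed X.509 structures, not name/flag tuples):

```python
# Toy model of RFC 5280 extension processing. Simplified sketch:
# real clients parse DER-encoded X.509 extensions, not Python tuples.

def validate(cert_extensions, supported):
    """Return (accepted, secure) for a cert under simplified rules:
    - an unsupported CRITICAL extension -> reject (fail closed)
    - an unsupported non-critical extension -> accept, but its
      constraint is silently not enforced (works, but not secure)."""
    for name, critical in cert_extensions:
        if name not in supported:
            if critical:
                return (False, True)   # rejected: fails closed
            return (True, False)       # accepted without enforcement
    return (True, True)

# A legacy client that does not implement nameConstraints:
legacy = {"basicConstraints", "keyUsage"}

# RFC 5280 (MUST be critical): the legacy client fails closed.
assert validate([("nameConstraints", True)], legacy) == (False, True)
# CA/B Forum MAY (non-critical): the legacy client accepts, unenforced.
assert validate([("nameConstraints", False)], legacy) == (True, False)
```

This is exactly the quoted trade-off: criticality buys guaranteed security at the cost of hard failure on clients that never implemented the extension.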

It is unfortunate-but-seemingly-necessary to take this mindset into
consideration: the choice between hard-failure for a number of clients (but
guaranteed security) or the choice between less-than-ideal security, but
practical usability, is often made for the latter, due to the socioeconomic
considerations of CAs.

Now, as it relates to the above point, the path building complexity would
inevitably cause a serious spike in support overhead, because it requires
that in order to avoid issues, every server operator would need to take
action to reconfigure their certificate chains to ensure the "correct"
paths were built to support such non-robust clients (for example, of the
OpenSSL forks, only LibreSSL supports path discovery, and only within the
past several weeks). As a practical matter, that is the same "cost" of
replacing a certificate, but with less determinism of behaviour: instead of
being able to test "am I using the wrong cert or the right one", a server
operator would need to consider the entire certification path, and we know
they'll get that wrong.
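The path-verifying vs path-building distinction can be illustrated with a toy issuer graph (a simplified sketch; real implementations chase signatures over DER certificates, not labeled tuples):

```python
# Toy issuer graph: each cert is (issuer, subject, revoked).
# A "path verifier" checks only the single chain the server supplied;
# a "path builder" searches all issuer edges for any valid chain.

CERTS = [
    ("Root", "ICA",  True),    # old ICA, revoked by Root
    ("Root", "ICA2", False),   # replacement ICA2
    ("ICA",  "SICA", True),    # old SICA, revoked
    ("ICA2", "SICA", False),   # SICA2: same subject and key, new issuer
    ("SICA", "EE",   False),
]

def verify_single_chain(chain):
    return all(not revoked for (_, _, revoked) in chain)

def build_path(leaf_subject, issuer="Root", seen=()):
    """Depth-first search for any Root -> ... -> leaf chain with no
    revoked certificate on it; `seen` prevents cycles."""
    for cert in CERTS:
        iss, subj, revoked = cert
        if iss != issuer or revoked or subj in seen:
            continue
        if subj == leaf_subject:
            return [cert]
        rest = build_path(leaf_subject, subj, seen + (subj,))
        if rest is not None:
            return [cert] + rest
    return None

# A server still sending the old chain: a mere verifier rejects it...
old_chain = [CERTS[0], CERTS[2], CERTS[4]]
assert verify_single_chain(old_chain) is False
# ...while a real path builder discovers Root -> ICA2 -> SICA2 -> EE.
path = build_path("EE")
assert [s for (_, s, _) in path] == ["ICA2", "SICA", "EE"]
```

Since most deployed libraries behave like the verifier, the server operator must reconfigure the served intermediates, which is the operational cost described above.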

This is part of why I'm dismissive of the solution; not because it isn't
technically workable, but because it's practically non-viable when
considering the overall set of ecosystem concerns. These sorts of
considerations all factor in to the decision making and recommendation for
potential remediation for CAs: trying to consider all of the interests, but
also all of the limitations.


> #2 Non-robust CRL implementations may lead to security issues
>
> Relying parties using applications that don’t do a proper revocation check
> do
> have a security risk. This security risk is not introduced by our proposal
> but
> inherent to not implementing core functionality of public key
> infrastructures.
> The new solution rewards relying parties that properly follow the
> standards.
> Indeed, a relying party that has a robust CRL (or OCSP check)
> implementation
> already would not have to bear any additional security risk. They also
> would
> benefit from the point 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-18 Thread Nils Amiet via dev-security-policy
> I realize this is almost entirely critical, and I hope it's taken as
> critical of the proposal, not of the investment or interest in this space.

Not a problem for being critical and we don’t take it personally. We
appreciate the discussion, the time you spend and the opportunity to propose
different approaches to solving this issue.

Our goal is not to defend non-compliant or insecure practices. Quite the
opposite: with our research we hope to find a solution that, in certain
configurations, leads to a quicker resolution while rewarding projects that
follow the standards. We don’t want to promote our proposal as better for all
possible configurations, but we believe that in certain situations it can be as
secure as, and more practical than, the original proposal on this mailing list.

We have carefully read your email, and believe we’ve identified the following
important points:

1. Potential feasibility issue due to lack of path building support

2. Non-robust CRL implementations may lead to security issues

3. Avoiding financial strain is incompatible with user security.

If we have missed one or misunderstood you, please let us know. We’d like to
address these points below.



#1 Potential feasibility issue due to lack of path building support

If a relying party does not implement proper path building, the EE certificate
may not be validated up to the root certificate. It is true that the new
solution adds some complexity to the certificate hierarchy. More specifically
it would turn a linear hierarchy into a non-linear one. However, we consider
that complexity as still being manageable, especially since there are usually
few levels in the hierarchy in practice. If a CA hierarchy has utilized the
Authority Information Access extension the chance that any PKI library can
build the path seems to be very high.

Even if the path could not be built, this would lead to a fail-secure situation
where the EE certificate is considered invalid and it would not raise a
security issue. That would be different if the EE certificate would erroneously
be considered valid.

Additionally, only the concerned EE certificates are affected. There is no
impact on all the other actors in the ecosystem.

Since lack of proper path building is neither a security issue nor a
compliance issue, and only the concerned subscribers are affected, it would be
up to each subscriber to decide whether to take the risk that their
certificates are wrongly labeled as invalid by certain relying parties.

In the blog post at medium.com that you mentioned in your last answer, you
analyzed various PKI libraries for their path building capability. If this
community believes it would clarify the situation, we could run tests with some
of those libraries to see how they build the path when our proposal is applied
to a simple 4-level PKI hierarchy.



#2 Non-robust CRL implementations may lead to security issues

Relying parties using applications that don’t do a proper revocation check do
have a security risk. This security risk is not introduced by our proposal but
inherent to not implementing core functionality of public key infrastructures.
The new solution rewards relying parties that properly follow the standards.
Indeed, a relying party that has a robust CRL (or OCSP check) implementation
already would not have to bear any additional security risk. They also would
benefit from the point that our solution can be implemented very quickly and so
they would quickly mitigate the security issue. On the other hand, relying
parties with a non-functional revocation mechanism will have to take corrective
measures. And we can consider it a fair balancing of work and risk when the
workload and risk fall on the actors with poor current implementations.

It can be noted that the new solution gives some responsibility to the relying
parties. Whereas the original solution gives responsibility to the subscribers
who did nothing wrong in the first place. The new solution can be
considered fairer in this regard.
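The asymmetry argued above, where only relying parties that skip revocation checking carry the residual risk, can be sketched in a toy model (hypothetical names, no real PKI semantics):

```python
# Toy comparison of a robust vs non-robust relying party after the
# proposed fix (old ICA and old SICA revoked). Simplified sketch.

REVOKED = {"ICA", "SICA (old serial)"}

def check_chain(chain, checks_revocation):
    """Walk the chain; only clients that check revocation notice it."""
    for cert in chain:
        if checks_revocation and cert in REVOKED:
            return "revoked"
    return "valid"

old_branch = ["ICA", "SICA (old serial)", "EE"]
# A standards-following client: the dangerous branch is dead.
assert check_chain(old_branch, checks_revocation=True) == "revoked"
# A client with no revocation checking still accepts it -- the risk
# is borne only by implementations already skipping core PKI checks.
assert check_chain(old_branch, checks_revocation=False) == "valid"
```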



#3 Avoiding financial strain is incompatible with user security

User security is fundamental for the PKI ecosystem. Our proposal increases the
risk exposure level only for those relying parties with a fundamentally
insecure application. Taking into account economic constraints in real life, we
believe that the increase in security should be balanced in terms of cost and
effort. Such a tradeoff may provide sufficient levels of security while
maintaining affordability. This also leads to a quicker time to implement which
limits the exposure and increases security.



One last thing we wanted to get your feedback on, is whether you agree that the
new solution we proposed ensures that if SICA and ICA are considered revoked,
there is no security risk for SICA2.



Thank you for your time and continued discussion.

Nils and team

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-17 Thread Jakob Bohm via dev-security-policy

On 2020-11-16 23:17, Ryan Sleevi wrote:

On Mon, Nov 16, 2020 at 8:40 AM Nils Amiet via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


Hi Nils,

This is interesting, but unfortunately, doesn’t work. The 4-certificate
hierarchy makes the very basic, but understandable, mistake of assuming

the

Root CA revoking the SICA is sufficient, but this thread already

captures

why it isn’t.

Unfortunately, as this is key to the proposal, it all falls apart from
there, and rather than improving security, leads to a false sense of it.

To

put more explicitly: this is not a workable or secure solution.


Hello Ryan,

We agree that revoking SICA would not be sufficient and we mentioned that
in section 2.3 in the above message.

The new solution described in section 2.3, not only proposes to revoke
SICA (point (iv)) but also to revoke ICA (point (ii)) in the 4-level
hierarchy (RCA -> ICA -> SICA -> EE).

We believe this makes a substantial difference.



Sorry, I should have been clearer that even the assumptions involved in the
four-certificate design were flawed.

While I do want to acknowledge this has some interesting edges, I don't
think it works as a general solution. There are two critical assumptions
here that don't actually hold in practice, and are amply demonstrated as
causing operational issues.

The first assumption here is the existence of robust path building, as
opposed to path validation. In order for the proposed model to work, every
server needs to know about SICA2 and ICA2, so that it can properly inform
the client, so that the client can build a correct path and avoid ICA /
SICA1. Yet, as highlighted by
https://medium.com/@sleevi_/path-building-vs-path-verifying-the-chain-of-pain-9fbab861d7d6
, very few libraries actually implement path building. Assuming they
support revocation, but don't support path building, it's entirely
reasonable for them to build a path between ICA and SICA and treat the
entire chain as revoked. Major operating systems had this issue as recently
as a few years ago, and server-side, it's even more widespread and rampant.
The way to minimize (though not prevent) this issue is to require
servers to reconfigure the intermediates they supply to clients, in
order to try to serve a preferred path, but that has the same operational
impact as revoke-and-replace, under the existing, dominant approach.



Correct, thus most affected EE-certificate holders would still have to
reinstall the certificate configuration for their (unchanged) EE cert,
which is still less work than requesting and getting a new certificate.


The second assumption here is with an assumption on robust CRL support,
which I also hope we can agree is hardly the case, in practice. In order
for the security model to be realized by your proposed 4-tier plan, the
clients MUST know about the revocation event of SICA/ICA, in order to
ensure that SICA cannot spoof messages from ICA. In the absence of this,
the risk is entirely borne by the client. Now, you might think it somehow
reasonable to blame the implementer here, but as the recent outage of Apple
shows, there are ample reasons for being cautious regarding revocation. As
already discussed elsewhere on this thread, although Mozilla was not
directly affected due to a secondary check (prevent CAs from signing OCSP
responses), it does not subject responder certificates to OneCRL checks,
and thus there's still complexity lurking.


Actually, for the current major browsers, the situation is better:

1. Chrome would distribute the revocation of the "ICA" cert in its
  centralized CRL mechanism.

2. Non-Chrome Microsoft browsers would actually check CRLs and/or OCSP
 for the root CA to discover that the ICA cert is revoked.  This would
 be done against the CRL/OCSP servers of the root CA, not those of ICA.

3. Firefox would distribute the revocation of ICA (or any other
  intermediate CA) through its centralized "SubCA revocation" mechanism
  (OneCRL).
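These centrally-distributed mechanisms can be sketched as a pushed blocklist consulted before any network revocation fetch (a toy model; the real mechanisms match on issuer/serial or subject/key hashes, and the entry below is hypothetical):

```python
# Toy model of browser-side centralized intermediate revocation
# (CRLSet / OneCRL style). Entries are pushed to the client out of
# band, so no CRL/OCSP fetch is needed at handshake time.

PUSHED_BLOCKLIST = {("RCA", "ICA serial 01")}   # hypothetical entry

def chain_blocked(chain):
    """chain: list of (issuer, serial). Blocked if any cert matches."""
    return any(entry in PUSHED_BLOCKLIST for entry in chain)

old_chain = [("RCA", "ICA serial 01"), ("ICA", "SICA serial 07")]
new_chain = [("RCA", "ICA2 serial 02"), ("ICA2", "SICA serial 07")]

assert chain_blocked(old_chain) is True    # old branch dies client-side
assert chain_blocked(new_chain) is False   # replacement path unaffected
```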






As it relates to the conclusions, I think the risk surface is quite
misguided, because it overlooks these assumptions with handwaves such as
"standard PKI procedures", when in fact, they don't, and have never, worked
that way in practice. I don't think this is intentional dismissiveness, but
I think it might be an unsupported rosy outlook on how things work. The
paper dismisses the concern that "key destruction ceremonies can't be
guaranteed", but somehow assumes complete and total ubiquity for deployment
of CRLs and path verification; I think that is, at best, misguided.
Although Section 3.4 includes the past remarks about the danger of
solutions that put all the onus on application developers, this effectively
proposes a solution that continues more of the same.

As a practical matter, I think we may disagree on some of the potential
positive outcomes from the path currently adopted by a number of CAs, such
as the separation of certificate hierarchies, a better awareness and
documentation during CA ceremonies, and a plan for agility for all
certificates beneath those hierarchies.

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-16 Thread Ryan Sleevi via dev-security-policy
On Mon, Nov 16, 2020 at 8:40 AM Nils Amiet via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > Hi Nils,
> >
> > This is interesting, but unfortunately, doesn’t work. The 4-certificate
> > hierarchy makes the very basic, but understandable, mistake of assuming
> the
> > Root CA revoking the SICA is sufficient, but this thread already
> captures
> > why it isn’t.
> >
> > Unfortunately, as this is key to the proposal, it all falls apart from
> > there, and rather than improving security, leads to a false sense of it.
> To
> > put more explicitly: this is not a workable or secure solution.
>
> Hello Ryan,
>
> We agree that revoking SICA would not be sufficient and we mentioned that
> in section 2.3 in the above message.
>
> The new solution described in section 2.3, not only proposes to revoke
> SICA (point (iv)) but also to revoke ICA (point (ii)) in the 4-level
> hierarchy (RCA -> ICA -> SICA -> EE).
>
> We believe this makes a substantial difference.
>

Sorry, I should have been clearer that even the assumptions involved in the
four-certificate design were flawed.

While I do want to acknowledge this has some interesting edges, I don't
think it works as a general solution. There are two critical assumptions
here that don't actually hold in practice, and are amply demonstrated as
causing operational issues.

The first assumption here is the existence of robust path building, as
opposed to path validation. In order for the proposed model to work, every
server needs to know about SICA2 and ICA2, so that it can properly inform
the client, so that the client can build a correct path and avoid ICA /
SICA1. Yet, as highlighted by
https://medium.com/@sleevi_/path-building-vs-path-verifying-the-chain-of-pain-9fbab861d7d6
, very few libraries actually implement path building. Assuming they
support revocation, but don't support path building, it's entirely
reasonable for them to build a path between ICA and SICA and treat the
entire chain as revoked. Major operating systems had this issue as recently
as a few years ago, and server-side, it's even more widespread and rampant.
The way to minimize (though not prevent) this issue is to require
servers to reconfigure the intermediates they supply to clients, in
order to try to serve a preferred path, but that has the same operational
impact as revoke-and-replace, under the existing, dominant approach.

The second assumption here is with an assumption on robust CRL support,
which I also hope we can agree is hardly the case, in practice. In order
for the security model to be realized by your proposed 4-tier plan, the
clients MUST know about the revocation event of SICA/ICA, in order to
ensure that SICA cannot spoof messages from ICA. In the absence of this,
the risk is entirely borne by the client. Now, you might think it somehow
reasonable to blame the implementer here, but as the recent outage of Apple
shows, there are ample reasons for being cautious regarding revocation. As
already discussed elsewhere on this thread, although Mozilla was not
directly affected due to a secondary check (prevent CAs from signing OCSP
responses), it does not subject responder certificates to OneCRL checks,
and thus there's still complexity lurking.

As it relates to the conclusions, I think the risk surface is quite
misguided, because it overlooks these assumptions with handwaves such as
"standard PKI procedures", when in fact, they don't, and have never, worked
that way in practice. I don't think this is intentional dismissiveness, but
I think it might be an unsupported rosy outlook on how things work. The
paper dismisses the concern that "key destruction ceremonies can't be
guaranteed", but somehow assumes complete and total ubiquity for deployment
of CRLs and path verification; I think that is, at best, misguided.
Although Section 3.4 includes the past remarks about the danger of
solutions that put all the onus on application developers, this effectively
proposes a solution that continues more of the same.

As a practical matter, I think we may disagree on some of the potential
positive outcomes from the path currently adopted by a number of CAs, such
as the separation of certificate hierarchies, a better awareness and
documentation during CA ceremonies, and a plan for agility for all
certificates beneath those hierarchies. In effect, it proposes an
alternative that specifically seeks to avoid those benefits, but
unfortunately, does so without achieving the same security properties. It's
interesting that "financial strain" would be highlighted as a goal to
avoid, because that framing also seems to argue that "compliance should be
cheap" which... isn't necessarily aligned with users' security needs.

I realize this is almost entirely critical, and I hope it's taken as
critical of the proposal, not of the investment or interest in this space.
I think this sort of analysis is exactly the kind of analysis we can and
should hope that any and all CAs can and s

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-16 Thread Nils Amiet via dev-security-policy
Hello all,

My colleague André and I recently became aware of this problem and we explored
a new solution to it.
Please find our analysis below.

For a formatted version of this message with images inline, please find it
available at: 
https://research.kudelskisecurity.com/2020/11/16/a-solution-to-the-dangerous-delegated-responder-certificate-problem/

===

Issue of the Dangerous Delegated OCSP Responder Certificate

1. Introduction

In this memo, we address the issue of the "Dangerous Delegated OCSP Responder
Certificate" [1]. We propose a new solution, different from the original
solution suggested in the Mozilla Dev Security Policy (mdsp) mailing list.

This proposed solution addresses compliance issues, security risks,
feasibility, time to implement and practicability (e.g. operational aspects) as
well as negative financial consequences. For the feasibility, please consider
the Proof of Concept provided in [2].

This memo is structured as follows. First, section 2 below describes and
compares the two solutions. Then, section 3 analyzes them by providing an
extensive list of potential concerns and discussing each of them in detail.

To illustrate the analysis further, a complete set of diagrams is available at
[3].


2. Description of the initial situation and the two solutions

2.1. Initial situation

Image: 
https://raw.githubusercontent.com/kudelskisecurity/dangerous-ocsp-delegated-responder-cert-poc/master/img/initial_situation_diagram.jpg
Figure 1: Initial situation - 4-level hierarchy

In Figure 1, SICA contains the problematic id-kp-ocspSigning Extended Key Usage
(EKU). The goal is to reach a situation where the EE certificate can be
verified up to the root CA in a chain where this EKU extension is not present
anymore. Indeed, the mere presence of this EKU makes SICA a delegated OCSP
responder on behalf of ICA. If SICA was intended to be an issuing CA and not an
OCSP responder, there is the security risk that SICA signs an OCSP response for
any sibling certificate on behalf of ICA. This is a huge security concern if
siblings are not operated by the same company/entity. This risk can impact all
the Sub-CA companies having certificates under ICA. Hence, it is important that
all the direct child certificates of ICA that have this problem are
neutralized.

In addition to the security risk, there is also a compliance issue that is
introduced because the Baseline Requirements [4] state that OCSP responders
must also have the id-pkix-ocsp-nocheck extension in addition to the EKU.
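The combination of problems just described can be expressed as a minimal lint (the OIDs are the standard ones from RFC 6960; the certificate model itself is a simplified sketch, not a real parser):

```python
# Hypothetical mini-lint for the issue above: a CA certificate carrying
# id-kp-OCSPSigning (EKU) without the id-pkix-ocsp-nocheck extension.

ID_KP_OCSP_SIGNING = "1.3.6.1.5.5.7.3.9"     # RFC 6960 EKU
ID_PKIX_OCSP_NOCHECK = "1.3.6.1.5.5.7.48.1.5"  # RFC 6960 nocheck

def lint(is_ca, eku_oids, extension_oids):
    problems = []
    if ID_KP_OCSP_SIGNING in eku_oids:
        if is_ca:
            problems.append("CA cert is a delegated OCSP responder "
                            "for its issuer (security risk)")
        if ID_PKIX_OCSP_NOCHECK not in extension_oids:
            problems.append("OCSP-signing EKU without "
                            "id-pkix-ocsp-nocheck (BR violation)")
    return problems

# SICA as described in Figure 1: an issuing CA with the EKU.
assert len(lint(True, [ID_KP_OCSP_SIGNING], [])) == 2
# A proper delegated responder: leaf cert with nocheck, no findings.
assert lint(False, [ID_KP_OCSP_SIGNING], [ID_PKIX_OCSP_NOCHECK]) == []
```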

2.2. Original solution

Image: 
https://raw.githubusercontent.com/kudelskisecurity/dangerous-ocsp-delegated-responder-cert-poc/master/img/original_solution_diagram.jpg
Figure 2: Original solution - 4-level hierarchy

The original solution addresses the issue in the following way (see Figure 2):

(i) a new SICA2 certificate is issued. It is signed by ICA and is based on a
new key pair;

(ii) the old SICA certificate is revoked and, very importantly for security
reasons, its associated private key is destroyed during an audited ceremony;

(iii) new EE2 certificates are issued by SICA2.

2.3. New solution

Image: 
https://raw.githubusercontent.com/kudelskisecurity/dangerous-ocsp-delegated-responder-cert-poc/master/img/new_solution_diagram.jpg
Figure 3: New solution - 4-level hierarchy

The new solution addresses the issue as illustrated in Figure 3, which can be
summarized as follows:

(i) RCA issues a new ICA2 certificate, using a new DN and a new key pair;

(ii) RCA revokes the old ICA;

(iii) ICA2 issues a new SICA2 certificate, which reuses the same SICA DN and
key pair but does not contain the OCSP problematic EKU;

(iv) SICA is also revoked.

Since the same DN and key pair are reused for SICA2, the EE certificate is
still valid.

Also, the new SICA2 cannot be involved in any OCSP response, in any context at
all (the problematic EKU has been removed). The old SICA cannot be involved in
any OCSP response either, in any context at all (it does not validate with
regard to the new ICA2, and ICA is revoked). Finally, no third party that
failed to properly renew a problematic SICA certificate can do any harm to any
other company (the old validation branch is no longer available, due to the
revocation of ICA).
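Steps (i)-(iv) can be modeled as a toy state machine over (issuer, subject, key) triples (a sketch under simplified assumptions: revocation is tracked per issuer/subject pair, and signatures are not modeled):

```python
# Toy state model of the four steps above.

certs = {("RCA", "ICA", "k_ica"), ("ICA", "SICA", "k_sica"),
         ("SICA", "EE", "k_ee")}
revoked = set()

certs.add(("RCA", "ICA2", "k_ica2"))   # (i)   issue ICA2, new DN + key
revoked.add(("RCA", "ICA"))            # (ii)  revoke the old ICA
certs.add(("ICA2", "SICA", "k_sica"))  # (iii) SICA2: same DN and key,
                                       #       without the OCSP EKU
revoked.add(("ICA", "SICA"))           # (iv)  revoke the old SICA

def valid_path(leaf, issuer="RCA", seen=()):
    """True if a non-revoked chain exists from `issuer` to `leaf`."""
    for (iss, subj, _) in certs:
        if iss != issuer or (iss, subj) in revoked or subj in seen:
            continue
        if subj == leaf or valid_path(leaf, subj, seen + (subj,)):
            return True
    return False

# EE still validates (RCA -> ICA2 -> SICA2 -> EE) with no change to
# the EE certificate itself...
assert valid_path("EE") is True
# ...while nothing validates under the revoked old ICA branch.
assert valid_path("EE", "ICA") is False
```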

2.4. Differences between the 2 solutions

2.4.1. 3-level vs 4-level hierarchy

The original solution works in both a 3-level hierarchy and 4-level hierarchy.
The new solution works only in a 4-level (or more) hierarchy because that
hierarchy allows for the parent of the affected certificate to be revoked
because it is not a trust anchor. Therefore, no third party can be affected
because nobody trusts that chain anymore. This is impossible to achieve in the
3-level hierarchy because the parent of the affected certificate is a trust
anchor and cannot be revoked.

2.4.2. Risk surface

However, each solution induces a different risk surface.

On the one hand, the new solution involves only one entity

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-16 Thread Nils Amiet via dev-security-policy
> Hi Nils, 
> 
> This is interesting, but unfortunately, doesn’t work. The 4-certificate 
> hierarchy makes the very basic, but understandable, mistake of assuming the 
> Root CA revoking the SICA is sufficient, but this thread already captures 
> why it isn’t. 
> 
> Unfortunately, as this is key to the proposal, it all falls apart from 
> there, and rather than improving security, leads to a false sense of it. To 
> put more explicitly: this is not a workable or secure solution.

Hello Ryan, 

We agree that revoking SICA would not be sufficient and we mentioned that in 
section 2.3 in the above message. 

The new solution described in section 2.3, not only proposes to revoke SICA 
(point (iv)) but also to revoke ICA (point (ii)) in the 4-level hierarchy (RCA 
-> ICA -> SICA -> EE). 

We believe this makes a substantial difference. 

Nils
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-16 Thread Ryan Sleevi via dev-security-policy
Hi Nils,

This is interesting, but unfortunately, doesn’t work. The 4-certificate
hierarchy makes the very basic, but understandable, mistake of assuming the
Root CA revoking the SICA is sufficient, but this thread already captures
why it isn’t.

Unfortunately, as this is key to the proposal, it all falls apart from
there, and rather than improving security, leads to a false sense of it. To
put more explicitly: this is not a workable or secure solution.

On Mon, Nov 16, 2020 at 2:58 AM Nils Amiet via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hello all,
>
> My colleague Andre and I recently became aware of this problem and we
> explored
> a new solution to it.
> Please find our analysis below.
>
> For a formatted version of this message with images inline, please find it
> available at:
> https://research.kudelskisecurity.com/2020/11/16/a-solution-to-the-dangerous-delegated-responder-certificate-problem/
>
> ===
>
> Issue of the Dangerous Delegated OCSP Responder Certificate
>
> 1. Introduction
>
> In this memo, we address the issue of the "Dangerous Delegated OCSP
> Responder
> Certificate" [1]. We propose a new solution, different from the original
> solution suggested in the Mozilla Dev Security Policy (mdsp) mailing list.
>
> This proposed solution addresses compliance issues, security risks,
> feasibility, time to implement and practicability (e.g. operational
> aspects) as
> well as negative financial consequences. For the feasibility, please
> consider
> the Proof of Concept provided in [2].
>
> This memo is structured as follows. First, section 2 below describes and
> compares the two solutions. Then, section 3 analyzes them by providing an
> extensive list of potential concerns and discussing each of them in detail.
>
> To illustrate the analysis further, a complete set of diagrams is
> available at
> [3].
>
>
> 2. Description of the initial situation and the two solutions
>
> 2.1. Initial situation
>
> Image:
> https://raw.githubusercontent.com/kudelskisecurity/dangerous-ocsp-delegated-responder-cert-poc/master/img/initial_situation_diagram.jpg
> Figure 1: Initial situation - 4-level hierarchy
>
> In Figure 1, SICA contains the problematic id-kp-ocspSigning Extended Key
> Usage
> (EKU). The goal is to reach a situation where the EE certificate can be
> verified up to the root CA in a chain where this EKU extension is not
> present
> anymore. Indeed, the mere presence of this EKU makes SICA a delegated OCSP
> responder on behalf of ICA. If SICA was intended to be an issuing CA and
> not an
> OCSP responder, there is the security risk that SICA signs an OCSP
> response for
> any sibling certificate on behalf of ICA. This is a huge security concern
> if
> siblings are not operated by the same company/entity. This risk can impact
> all
> the Sub-CA companies having certificates under ICA. Hence, it is important
> that
> all the direct child certificates of ICA that have this problem are
> neutralized.
>
> In addition to the security risk, there is also a compliance issue that is
> introduced because the Baseline Requirements [4] state that OCSP responders
> must also have the id-pkix-ocsp-nocheck extension in addition to the EKU.
>
> 2.2. Original solution
>
> Image:
> https://raw.githubusercontent.com/kudelskisecurity/dangerous-ocsp-delegated-responder-cert-poc/master/img/original_solution_diagram.jpg
> Figure 2: Original solution - 4-level hierarchy
>
> The original solution addresses the issue in the following way (see Figure
> 2):
>
> (i) a new SICA2 certificate is issued. It is signed by ICA and is based on
> a
> new key pair;
>
> (ii) the old SICA certificate is revoked and, very importantly for security
> reasons, its associated private key is destroyed during an audited
> ceremony;
>
> (iii) new EE2 certificates are issued by SICA2.
>
> 2.3. New solution
>
> Image:
> https://raw.githubusercontent.com/kudelskisecurity/dangerous-ocsp-delegated-responder-cert-poc/master/img/new_solution_diagram.jpg
> Figure 3: New solution - 4-level hierarchy
>
> The new solution addresses the issue as illustrated in Figure 3, which can
> be
> summarized as follows:
>
> (i) RCA issues a new ICA2 certificate, using a new DN and a new key pair;
>
> (ii) RCA revokes the old ICA;
>
> (iii) ICA2 issues a new SICA2 certificate, which reuses the same SICA DN
> and
> key pair but does not contain the OCSP problematic EKU;
>
> (iv) SICA is also revoked.
>
> Since the same DN and key pair are reused for SICA2, the EE certificate is
> still valid.
>
> Also, the new SICA2 cannot be involved in any OCSP response, in any
> context at
> all (removal of the problematic EKU). The old SICA cannot be involved in
> any
> OCSP response either, in any context at all (it does not validate with
> regard
> to the new ICA2 and ICA is revoked). Finally, no third party, which would
> not
> have properly done the job of renewing a problematic SICA
> certificate,
> can do any harm to any other company 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-11-15 Thread Nils Amiet via dev-security-policy
Hello all,

My colleague Andre and I recently became aware of this problem and we explored
a new solution to it.
Please find our analysis below.

For a formatted version of this message with images inline, please find it
available at: 
https://research.kudelskisecurity.com/2020/11/16/a-solution-to-the-dangerous-delegated-responder-certificate-problem/

===

Issue of the Dangerous Delegated OCSP Responder Certificate

1. Introduction

In this memo, we address the issue of the "Dangerous Delegated OCSP Responder
Certificate" [1]. We propose a new solution, different from the original
solution suggested in the Mozilla Dev Security Policy (mdsp) mailing list.

This proposed solution addresses compliance issues, security risks,
feasibility, time to implement and practicability (e.g. operational aspects) as
well as negative financial consequences. For the feasibility, please consider
the Proof of Concept provided in [2].

This memo is structured as follows. First, section 2 below describes and
compares the two solutions. Then, section 3 analyzes them by providing an
extensive list of potential concerns and discussing each of them in detail.

To illustrate the analysis further, a complete set of diagrams is available at
[3].


2. Description of the initial situation and the two solutions

2.1. Initial situation

Image: 
https://raw.githubusercontent.com/kudelskisecurity/dangerous-ocsp-delegated-responder-cert-poc/master/img/initial_situation_diagram.jpg
Figure 1: Initial situation - 4-level hierarchy

In Figure 1, SICA contains the problematic id-kp-ocspSigning Extended Key Usage
(EKU). The goal is to reach a situation where the EE certificate can be
verified up to the root CA in a chain where this EKU extension is not present
anymore. Indeed, the mere presence of this EKU makes SICA a delegated OCSP
responder on behalf of ICA. If SICA was intended to be an issuing CA and not an
OCSP responder, there is the security risk that SICA signs an OCSP response for
any sibling certificate on behalf of ICA. This is a huge security concern if
siblings are not operated by the same company/entity. This risk can impact all
the Sub-CA companies having certificates under ICA. Hence, it is important that
all the direct child certificates of ICA that have this problem are
neutralized.

In addition to the security risk, there is also a compliance issue that is
introduced because the Baseline Requirements [4] state that OCSP responders
must also have the id-pkix-ocsp-nocheck extension in addition to the EKU.
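As an illustration, the problematic combination described above can be expressed as a simple predicate over a certificate's extension OIDs. This is a minimal sketch only: the OID constants come from RFC 6960, but the function name and the set-based inputs are our own simplification, not part of any tooling discussed in this thread.

```python
# Sketch: flag the "dangerous delegated responder" pattern.
# A CA certificate whose EKU list contains id-kp-OCSPSigning but whose
# extensions lack id-pkix-ocsp-nocheck violates BR section 4.9.9.

ID_KP_OCSP_SIGNING = "1.3.6.1.5.5.7.3.9"     # RFC 6960
ID_PKIX_OCSP_NOCHECK = "1.3.6.1.5.5.7.48.1.5"  # RFC 6960

def is_dangerous_delegated_responder(eku_oids, extension_oids):
    """Return True if the cert claims OCSP signing without ocsp-nocheck."""
    return (ID_KP_OCSP_SIGNING in eku_oids
            and ID_PKIX_OCSP_NOCHECK not in extension_oids)

# SICA as described in Figure 1: issuing CA carrying the problematic EKU,
# with ordinary CA extensions (keyUsage, basicConstraints) but no nocheck.
print(is_dangerous_delegated_responder(
    eku_oids={ID_KP_OCSP_SIGNING},
    extension_oids={"2.5.29.15", "2.5.29.19"}))  # → True
```

A real check would of course parse the DER certificate and read the actual extension list; the predicate above only captures the rule being discussed.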

2.2. Original solution

Image: 
https://raw.githubusercontent.com/kudelskisecurity/dangerous-ocsp-delegated-responder-cert-poc/master/img/original_solution_diagram.jpg
Figure 2: Original solution - 4-level hierarchy

The original solution addresses the issue in the following way (see Figure 2):

(i) a new SICA2 certificate is issued. It is signed by ICA and is based on a
new key pair;

(ii) the old SICA certificate is revoked and, very importantly for security
reasons, its associated private key is destroyed during an audited ceremony;

(iii) new EE2 certificates are issued by SICA2.

2.3. New solution

Image: 
https://raw.githubusercontent.com/kudelskisecurity/dangerous-ocsp-delegated-responder-cert-poc/master/img/new_solution_diagram.jpg
Figure 3: New solution - 4-level hierarchy

The new solution addresses the issue as illustrated in Figure 3, which can be
summarized as follows:

(i) RCA issues a new ICA2 certificate, using a new DN and a new key pair;

(ii) RCA revokes the old ICA;

(iii) ICA2 issues a new SICA2 certificate, which reuses the same SICA DN and
key pair but does not contain the OCSP problematic EKU;

(iv) SICA is also revoked.

Since the same DN and key pair are reused for SICA2, the EE certificate is
still valid.

Also, the new SICA2 cannot be involved in any OCSP response, in any context at
all (removal of the problematic EKU). The old SICA cannot be involved in any
OCSP response either, in any context at all (it does not validate with regard
to the new ICA2 and ICA is revoked). Finally, no third party, which would not
have properly done the job of properly renewing a problematic SICA certificate,
can do any harm to any other company (the old validation branch is not
available anymore by the revocation of ICA).
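To see why reusing SICA's DN and key pair keeps the existing EE certificate valid, consider a toy model of path building. This is plain Python, not real X.509 processing, and the function name and dict fields are illustrative assumptions: a verifier matches a child's issuer name against a candidate parent's subject name and checks the signing key, so the unmodified EE chains through SICA2 to RCA.

```python
# Toy model of certificate path building (not real X.509): a parent is a
# match if its subject name equals the child's issuer name, its key is the
# one that signed the child, and it is not revoked.

def chains(cert, pool, anchors):
    """Return True if cert chains to a trust anchor via subject/key matching."""
    if cert["issuer_name"] in anchors:
        return True
    return any(
        p["subject_name"] == cert["issuer_name"]
        and p["key_id"] == cert["signed_by_key"]
        and not p.get("revoked")
        and chains(p, pool, anchors)
        for p in pool
    )

# New hierarchy after the fix: RCA -> ICA2 -> SICA2 (reused DN "CN=SICA",
# reused key "k-sica", no OCSP EKU) -> the ORIGINAL, unmodified EE cert.
ica2  = {"subject_name": "CN=ICA2", "issuer_name": "CN=RCA",
         "key_id": "k-ica2", "signed_by_key": "k-rca"}
sica2 = {"subject_name": "CN=SICA", "issuer_name": "CN=ICA2",
         "key_id": "k-sica", "signed_by_key": "k-ica2"}
ee    = {"subject_name": "CN=EE", "issuer_name": "CN=SICA",
         "key_id": "k-ee", "signed_by_key": "k-sica"}

print(chains(ee, [ica2, sica2], anchors={"CN=RCA"}))  # → True
```

The same model also shows the old branch dying: with only the revoked ICA and the old SICA in the pool, no path to RCA survives.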

2.4. Differences between the 2 solutions

2.4.1. 3-level vs 4-level hierarchy

The original solution works in both a 3-level hierarchy and 4-level hierarchy.
The new solution works only in a 4-level (or more) hierarchy because that
hierarchy allows for the parent of the affected certificate to be revoked
because it is not a trust anchor. Therefore, no third party can be affected
because nobody trusts that chain anymore. This is impossible to achieve in the
3-level hierarchy because the parent of the affected certificate is a trust
anchor and cannot be revoked.

2.4.2. Risk surface

However, each solution induces a different risk surface.

On the one hand, the new solution involves only one entity

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-09-11 Thread Sebastian Nielsen via dev-security-policy


> Look, we've had Root CAs that have actively lied in this Forum, 
> misrepresenting things to the community they later admitted they knew were 
> false, and had previously been an otherwise CA in good standing (or at 
> least, no worse standing than other CAs). A CA is a CA, and the risk is 
> treated the same. 

What the original poster is trying to say is that if a CA is malicious, and 
the CA operates its own sub-CA, the CA itself could use its own OCSP signing 
certificate (the real one) to sign fake unrevocation responses.
Thus, if a CA is bad, it is bad, and nothing can be done about that.

To take the example:
Let's say I am a CA, and I also operate my own sub-CA with a delegated OCSP 
certificate that has the invalid security properties.
This means the sub-CA's certificate can be abused to, for example, unrevoke 
certificates that the CA revoked, or to revoke certificates belonging to the 
CA or another sub-CA.

**BUT**
Note that the CA and the sub-CA are the same entity!!!
This means that even if the issue is fixed, a malicious CA could simply have 
its OCSP server sign fake responses that unrevoke revoked certificates. 
Nothing prevents that, even if the sub-CA's right to sign OCSP responses is 
revoked or disabled.

The ONLY way to actively prevent a malicious CA from unrevoking a certificate 
would be to require some sort of public OCSP ledger, like a blockchain or 
similar, where no changes can be made to anything posted to the OCSP server.

It's like saying: "You have the master key and a user key to room A. You can 
abuse room A, so you shouldn't have the user key to room A."
But wait - I have the master key to room A, and can still abuse the privilege 
even if you take my user key.
Or another example: "I know the root password to system A, and have sudo rights 
to system A. You revoke my sudo rights - but I still have the root password."
So it becomes superfluous to revoke my sudo rights, as I still have the same 
privileges.

So the fact that a CA and a sub-CA are operated by the very same entity, where 
the sub-CA could be abused, is not a security problem, because the same 
security problem exists even if the sub-CA doesn't exist at all.


However, if the sub-CA is a separate entity - like another person - then it 
stands in a position where it gets higher privileges than it should have.
And then it becomes a security risk, because there is an escalation of 
privileges, where the sub-CA gets more privileges than it should have.

The only problem here is if a CA lies about a sub-CA being an entity of the 
CA when it actually isn't -- but that should be visible in the audit, because 
the composition of the sub-CA and the CA must be revealed in the audit, and it 
should be easy to catch if the CA and the sub-CA are different entities.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-16 Thread Oscar Conesa via dev-security-policy

Hi Ryan,

Obviously, this is just my personal opinion on the facts, given in a public 
discussion forum. Like many other participants in this forum, I only 
give my professional point of view as a PKI expert. This does not mean 
that my opinion and my arguments are shared by the company where I work.


I am not the person in charge of officially answering these issues at 
Firmaprofesional (as can be seen on the Bugzilla incident website) and I 
do not have the authorization from my company to do so. I reiterate that 
these are only opinions and arguments made in a personal capacity. I 
apologize if I have involuntarily hinted that this was the official 
position of Firmaprofesional. Maybe I should have used a personal email 
to participate in the forum.


I also wanted to take the opportunity to apologize if I have offended 
you with any of my comments. It was not my intention at all. I believe 
that both Google and Mozilla are doing a great job in defense of PKI 
technology and digital certificates, putting the safety of users before 
the economic interests of CAs. Thanks to this great work, the 
willingness of CAs to fulfill their obligations has improved 
dramatically in recent years. We all remember what the situation was 10 
or 15 years ago, when bad practices and misissued certificates were the 
usual practice without any consequences.


What we have achieved is a great accomplishment for the community, and we 
must defend it. However, with some unilateral decisions, there is a risk 
that this open and objective model of CA oversight will become a 
closed and totally arbitrary process, managed by a few multinational 
companies.


I hope that within 24 hours Firmaprofesional will respond officially to 
the open ticket.


I also hope and trust that in any case, Firmaprofesional will be treated 
fairly and equitably with respect to the rest of the other affected CAs.




On 16/7/20 19:33, Ryan Sleevi wrote:


Hi Oscar,

Unfortunately, there are a number of factual errors here that I think 
greatly call into question the ability of Firmaprofesional to work 
with users and relying parties to understand the risks and to take 
them seriously.


I would greatly appreciate if Firmaprofesional share their official 
response on https://bugzilla.mozilla.org/show_bug.cgi?id=1649943 
within the next 24 hours, so that we can avoid any further delays in 
taking the appropriate steps to ensure users are protected and any 
risks are appropriately mitigated. If this message is meant to be your 
official response, please feel free to paste it there.


Unfortunately, I don't think discussing the point-by-point takedown of 
your confusion here is useful, because I think we've moved beyond 
discussing into the abstract and discussing very specifically about 
the degree to which Firmaprofesional is interested (or not) in 
collaborating to keep users safe.


I think, barring an update within the next 24 hours, it seems 
reasonable to take this post as the final and official response, and 
begin taking steps appropriately to reduce risk.




Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-16 Thread Ryan Sleevi via dev-security-policy
On Thu, Jul 16, 2020 at 12:45 PM Oscar Conesa via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> We should reposition the debate again, because a false reality is being
> assumed.
>
> Some people are assuming that this is the real situation: "A security
> incident has occurred due to the negligence of certain CAs, and this
> security incident has got worse by the CAs' refusal to apply the correct
> measures to resolve the incident, putting the entire community at risk".
>
> But this description of the situation is not true at all, for many
> reasons and on many levels.
>
> By order:
>
> a) There is no security incident. Since no key has been compromised and
> there is no suspicion that a key compromise has occurred.


> b) There has been no negligence, errors, or suspicious or malicious
> behavior by any CA in the issuance of these certificates. All affected
> certificates have been issued following good practices and trying to
> comply with all the applicable security standards and regulations.
>
> c) The only relevant event that occurred recently (July 1) is that a
> person, acting unilaterally and by surprise, reported 14 incidents in
> the Mozilla Root Program about 157 ICA certificates misissued
> (certificates issued with the EKU "OCSPSigning" without the extension
> "OCSP-nocheck"). The wording of the incidents assumes that "his
> interpretation of the standards is the correct one" and that "his
> proposed solution is the only valid one".
>
> d) This is not a new incident since most of the certificates affected
> were issued years ago. The existence of this ambiguity in the standards
> was made public 1 year ago, on September 4, 2019, but the debate was
> closed without conclusion 6 days later
> (
> https://groups.google.com/g/mozilla.dev.security.policy/c/XQd3rNF4yOo/m/bXYjt1mZAwAJ
> ).
>
> e) The reason why these 157 ICA certificates were issued with EKU
> "OCSPSigning" was because at least 14 CAs understood that it was a
> requirement of the standard. They interpreted that they must include
> this EKU to simultaneously implement three requirements: "CA Technical
> Restriction", "OCSP Delegate Responders" and "EKU Chaining".
> Unfortunately, on this point the standard is ambiguous and poorly written.
>
> f) Last year, a precedent was created with the management of the
> "Insufficient Serial Number Entropy" incident. Many CAs were forced to
> do a massive certificate revocation order to comply with the standards
> "literally", even if these standards are poorly written or do not
> reflect the real intention of the editors.
>
> g) None of the 157 ICA certificates affected were generated with the
> intention of being used as "Delegated OCSP Responders". None of the 157
> certificates has been used at any time to sign OCSP responses.
> Additionally, most of them do not include the KU Digital Signature, so
> according to the standard, the OCSP responses that could be generated
> would not be valid.
>
> h) As a consequence of these discussions, a Security Bug has been
> detected that affects multiple implementations of PKI clients (a CVE
> code should be assigned). The security bug occurs when a client PKI
> application accept digitally signed OCSP responses for a certificate
> that does not have the "DigitalSignature" KeyUsage active, as required
> by the standard.
>
> i) There is no procedure in any standard within Mozilla Policy that
> specifies anything about "key destruction" or "reuse of uncompromised
> keys".
>
> j) As long as the CAs maintain sole control of the affected keys, there
> is no security problem. For a real security incident to exist, it should
> happen simultaneously: (i) the keys were compromised, (ii) they were
> used to generate malicious OCSP responses, (iii) some client application
> accepted these OCSP responses as valid and (iv) the revocation process
> of the affected certificate was ineffective because the attacker was
> able to generate fraudulent OCSP responses about its own status. Still,
> in that case there is a quick and easy solution: remove the affected CA
> Root from the list of trusted CAs.
>
> k) The real risk of this situation is assumed by the affected CAs and
> not by the end users, since in the worst case, the ICA key compromise
> would imply the revocation of the CA Root.
>
>

Hi Oscar,

Unfortunately, there are a number of factual errors here that I think greatly
call into question the ability of Firmaprofesional to work with users and
relying parties to understand the risks and to take them seriously.

I would greatly appreciate if Firmaprofesional share their official
response on https://bugzilla.mozilla.org/show_bug.cgi?id=1649943 within the
next 24 hours, so that we can avoid any further delays in taking the
appropriate steps to ensure users are protected and any risks are
appropriately mitigated. If this message is meant to be your official
response, please feel free to paste it there.

Unfortunately, I don't think discussing 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-16 Thread Oscar Conesa via dev-security-policy



We should reposition the debate again, because a false reality is being 
assumed.


Some people are assuming that this is the real situation: "A security 
incident has occurred due to the negligence of certain CAs, and this 
security incident has got worse by the CAs' refusal to apply the correct 
measures to resolve the incident, putting the entire community at risk".


But this description of the situation is not true at all, for many 
reasons and on many levels.


By order:

a) There is no security incident. Since no key has been compromised and 
there is no suspicion that a key compromise has occurred.


b) There has been no negligence, errors, or suspicious or malicious 
behavior by any CA in the issuance of these certificates. All affected 
certificates have been issued following good practices and trying to 
comply with all the applicable security standards and regulations.


c) The only relevant event that occurred recently (July 1) is that a 
person, acting unilaterally and by surprise, reported 14 incidents in 
the Mozilla Root Program about 157 ICA certificates misissued 
(certificates issued with the EKU "OCSPSigning" without the extension 
"OCSP-nocheck"). The wording of the incidents assumes that "his 
interpretation of the standards is the correct one" and that "his 
proposed solution is the only valid one".


d) This is not a new incident since most of the certificates affected 
were issued years ago. The existence of this ambiguity in the standards 
was made public 1 year ago, on September 4, 2019, but the debate was 
closed without conclusion 6 days later 
(https://groups.google.com/g/mozilla.dev.security.policy/c/XQd3rNF4yOo/m/bXYjt1mZAwAJ).


e) The reason why these 157 ICA certificates were issued with EKU 
"OCSPSigning" was because at least 14 CAs understood that it was a 
requirement of the standard. They interpreted that they must include 
this EKU to simultaneously implement three requirements: "CA Technical 
Restriction", "OCSP Delegate Responders" and "EKU Chaining". 
Unfortunately, on this point the standard is ambiguous and poorly written.


f) Last year, a precedent was created with the management of the 
"Insufficient Serial Number Entropy" incident. Many CAs were forced to 
do a massive certificate revocation order to comply with the standards 
"literally", even if these standards are poorly written or do not 
reflect the real intention of the editors.


g) None of the 157 ICA certificates affected were generated with the 
intention of being used as "Delegated OCSP Responders". None of the 157 
certificates has been used at any time to sign OCSP responses. 
Additionally, most of them do not include the KU Digital Signature, so 
according to the standard, the OCSP responses that could be generated 
would not be valid.


h) As a consequence of these discussions, a Security Bug has been 
detected that affects multiple implementations of PKI clients (a CVE 
code should be assigned). The security bug occurs when a client PKI 
application accepts digitally signed OCSP responses for a certificate 
that does not have the "DigitalSignature" KeyUsage active, as required 
by the standard.


i) There is no procedure in any standard within Mozilla Policy that 
specifies anything about "key destruction" or "reuse of uncompromised keys".


j) As long as the CAs maintain sole control of the affected keys, there 
is no security problem. For a real security incident to exist, it should 
happen simultaneously: (i) the keys were compromised, (ii) they were 
used to generate malicious OCSP responses, (iii) some client application 
accepted these OCSP responses as valid and (iv) the revocation process 
of the affected certificate was ineffective because the attacker was 
able to generate fraudulent OCSP responses about its own status. Still, 
in that case there is a quick and easy solution: remove the affected CA 
Root from the list of trusted CAs.


k) The real risk of this situation is assumed by the affected CAs and 
not by the end users, since in the worst case, the ICA key compromise 
would imply the revocation of the CA Root.





Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-15 Thread Filippo Valsorda via dev-security-policy
2020-07-15 12:30 GMT-04:00 Chema López via dev-security-policy:
> On Tuesday, July 14, 2020 at 9:02:01 UTC+2, Filippo Valsorda wrote:
> 
> 
> > This whole argument seems to lose track of the difference between CAs and 
> > RPs. CAs have strict responsibilities to follow all the rules of the 
> > policies they committed to in order to be trusted by RPs. Full stop. There 
> > is no blaming RPs for a CA's failure to follow those rules. RPs, 
> > themselves, only have a responsibility to their users—not to the CAs—and 
> > uphold it as they see fit. 
> > 
> 
> I utterly agree with you on this point, Filippo. Especially when you state 
> that "RPs, themselves, only have a responsibility to their users—not to the 
> CAs—and uphold it as they see fit." If the RP does not perform the checks it 
> should to be sure a specific item is trustworthy, that is up to the RP and 
> its clients. But, again, if the RP does not perform those checks, it is not 
> the CA's fault. Do we agree on that?

What the RPs do has little direct bearing on the CAs' responsibilities, yes. 
(Indirectly, RPs forge the policies that allow them to operate safely.) The CAs 
have to follow the policies regardless of whether the RPs are behaving 
correctly or not, and regardless of whether it impacts the RPs or not. RPs are 
also allowed to rely on any and all parts of the policies, because CAs are 
supposed to follow them all.

It seems we agree on that, but disagree on the interpretation of the rules, so 
I'll skip ahead.

> > RPs trust the CAs to do exactly what they say in the policies, not to do 
> > something that is sort of the same as long as the RPs follow the 
> > specification correctly. That's not the deal. We trust the CAs only because 
> > and as long as they show they can precisely follow those policies.
> 
> And, in general, CAs show that they can precisely follow those policies. 
> 
> For example, let's see the beginning of this thread: 
> - "Section 4.9.9 of the BRs requires that OCSP Delegated Responders MUST 
> include an id-pkix-ocsp-nocheck extension."
> - but BR also state:  Section  7.1.2.2. Subordinate CA Certificate   " e. 
> keyUsage This extension MUST be present and MUST be marked critical. Bit 
> positions for keyCertSign and cRLSign MUST be set. If the Subordinate CA 
> Private Key is used for signing OCSP responses, then the digitalSignature bit 
> MUST be set. ". This section is align with RFC5280 and RFC6960
> 
> So, an ICA or SCA cert. without keyUsage set to digitalSignature is not an 
> OCSP Responder. Full stop. We can agree that this would be kind of a weird 
> certificate, but it is not a valid OCSP responder certificate and RPs 
> shouldn't trust their responses.

I implement and sometimes write RFCs as a day job, and this interpretation of 
"MUST" is new to me.

Let me pull up an RFC I am familiar with, for example RFC 8446. Section 4.1.4. 
Hello Retry Request states:

> The server's extensions MUST contain "supported_versions".

By your interpretation, if a TLS client receives a Hello Retry Request message 
that does not contain a "supported_versions" extension, then it's not an HRR 
message, the client should not consider it an HRR message, and... ignore it? Do 
all of the other rules that apply to HRR messages no longer apply?

This is such a mismatch in the understanding of how policies _work_ between you 
and most (as Ryan pointed out) RPs that I don't think it matters who's right. 
If we are not on the same page on how to fundamentally read the rules, there is 
no way to communicate and trust each other. Needless to say, that's _a problem_, 
and RPs don't have a lot of pleasant ways to fix it.

(Because of course, RPs owe it to their users to only trust CAs they can 
effectively communicate with, so that they can agree on a set of mutually 
understood policies they can trust the CAs to follow, in order to protect the 
users. When there's a mismatch, incidents like this happen.)

We can discuss here what "MUST" means in our personal capacities, but when 
speaking in your professional capacity, my personal advice is to acknowledge 
how RPs interpret the language in the policies and try to demonstrate you can 
understand and abide by that. Anything else is unworkable, I'm afraid.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-15 Thread Ryan Sleevi via dev-security-policy
On Wed, Jul 15, 2020 at 12:30 PM Chema López via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> So, an ICA or SCA cert. without keyUsage set to digitalSignature is not an
> OCSP Responder. Full stop.


False. Full stop.

I mentioned in my reply to Corey, but I think it's disastrous for trust for
a CA to make this response. I realize you qualified this as a personal
capacity, but I want to highlight you're also seeming to make this argument
in a professional capacity, as highlighted by
https://bugzilla.mozilla.org/show_bug.cgi?id=1649943 . My hope is that, in
a professional capacity, you'll respond to that issue as has been requested.

Absent a further update, it may be necessary and appropriate to have a
discussion as to whether continued trust is warranted, because there's a
lack of urgency, awareness, transparency, and responsiveness to this issue.

I appreciate you quoting replies from the thread, but you also seem to have
cherry-picked replies that demonstrate an ignorance or lack of awareness
about the actual PKI ecosystem.

* macOS does not require the digitalSignature bit for validating OCSP
responses
* OpenSSL does not require the digitalSignature bit for validating OCSP
responses
* GnuTLS does not require the digitalSignature bit for validating OCSP
responses
* Mozilla NSS does not require the digitalSignature bit for validating OCSP
responses
* As best I can tell, Microsoft CryptoAPI does not require the
digitalSignature bit for validating OCSP responses (instead relying on
pkix-nocheck)

Mozilla code explicitly stated and referenced the fact that the
digitalSignature bit was not only seen as not necessary, but harmful for
interoperability, due to CAs.

You cannot pretend these are not OCSP responders simply because you haven't
issued OCSP responses (intent). They are, for every purpose, OCSP
responders that are misissued. And you need to treat this with the urgency,
seriousness, gravity, and trustworthiness required.
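To make the point above concrete, here is an illustrative sketch (my own modeling, not code from any of the implementations named; the function name and set-based extension modeling are assumptions) of the permissive check those clients are described as applying: a certificate is treated as a delegated OCSP responder based on its EKU alone, and keyUsage is never consulted.

```python
# Illustrative sketch, not code from any implementation named above: the
# permissive check is modeled as "delegated responder == has the OCSPSigning
# EKU", with the keyUsage argument deliberately ignored.

OCSP_SIGNING_EKU = "1.3.6.1.5.5.7.3.9"   # id-kp-OCSPSigning

def accepts_as_delegated_responder(ekus: set, key_usage: set) -> bool:
    # key_usage is intentionally unused: per the list above, these clients
    # do not require the digitalSignature bit on the responder certificate.
    return OCSP_SIGNING_EKU in ekus

# One of the certificates at issue: a subordinate CA with the OCSPSigning
# EKU but only keyCertSign/cRLSign in keyUsage.
print(accepts_as_delegated_responder(
    {OCSP_SIGNING_EKU, "1.3.6.1.5.5.7.3.1"},   # EKU: OCSPSigning, serverAuth
    {"keyCertSign", "cRLSign"}))               # no digitalSignature -> True anyway
```

Under this model, arguing that the certificate "is not an OCSP responder" because a bit is missing changes nothing about how deployed clients actually behave.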


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-15 Thread Chema López via dev-security-policy
On Tuesday, July 14, 2020 at 9:02:01 UTC+2, Filippo Valsorda wrote:


> This whole argument seems to lose track of the difference between CAs and 
> RPs. CAs have strict responsibilities to follow all the rules of the policies 
> they committed to in order to be trusted by RPs. Full stop. There is no 
> blaming RPs for a CA's failure to follow those rules. RPs, themselves, only 
> have a responsibility to their users—not to the CAs—and uphold it as they see 
> fit. 
> 

I utterly agree with you at this point, Filippo, especially when you state that
"RPs, themselves, only have a responsibility to their users—not to the CAs—and
uphold it as they see fit." If an RP does not perform the checks it should to
determine whether a specific item is trustworthy, that is between the RP and its
clients. But, again, if the RP does not perform those checks, it is not the
CA's fault. Do we agree on that?

Some evidence that at least some RPs have doubts about whether they are doing
things correctly is this comment in Chromium's OCSP code:

// TODO(eroman): Not all properties of the certificate are verified, only the
// signature and EKU. Can full RFC 5280 validation be used, or are there
// compatibility concerns?
 

As a user of a Chromium-based browser, this fact awakens conflicting feelings
in me: on one hand, I appreciate the transparency; on the other, I'm not sure
that when a Chromium-based browser trusts a certificate, the browser is
checking everything it should.

> RPs trust the CAs to do exactly what they say in the policies, not to do 
> something that is sort of the same as long as the RPs follow the 
> specification correctly. That's not the deal. We trust the CAs only because 
> and as long as they show they can precisely follow those policies.

And, in general, CAs show that they can precisely follow those policies. 

For example, let's see the beginning of this thread: 
- "Section 4.9.9 of the BRs requires that OCSP Delegated Responders MUST 
include an id-pkix-ocsp-nocheck extension."
- but BR also state:  Section  7.1.2.2. Subordinate CA Certificate   " e. 
keyUsage This extension MUST be present and MUST be marked critical. Bit 
positions for keyCertSign and cRLSign MUST be set. If the Subordinate CA 
Private Key is used for signing OCSP responses, then the digitalSignature bit 
MUST be set.". This section is aligned with RFC 5280 and RFC 6960.

So, an ICA or SCA cert. without keyUsage set to digitalSignature is not an OCSP 
Responder. Full stop. We can agree that this would be kind of a weird 
certificate, but it is not a valid OCSP responder certificate and RPs shouldn't 
trust its responses.
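The two BR clauses being debated can be read together as constraints on any certificate that carries the OCSPSigning EKU. A minimal sketch of such a check (a hypothetical lint of my own, using set-based modeling of the extensions; not taken from any real linter):

```python
# Hypothetical lint (mine, not from a real linter) combining the two BR
# clauses quoted above, applied to any certificate with the OCSPSigning EKU:
#   BR 4.9.9   - a delegated responder MUST include id-pkix-ocsp-nocheck
#   BR 7.1.2.2 - if the key signs OCSP responses, keyUsage MUST include
#                digitalSignature

OCSP_SIGNING_EKU = "1.3.6.1.5.5.7.3.9"   # id-kp-OCSPSigning

def lint_ocsp_responder_profile(ekus, key_usage, has_ocsp_nocheck):
    problems = []
    if OCSP_SIGNING_EKU in ekus:
        if not has_ocsp_nocheck:
            problems.append("BR 4.9.9: id-pkix-ocsp-nocheck missing")
        if "digitalSignature" not in key_usage:
            problems.append("BR 7.1.2.2: digitalSignature bit not set")
    return problems

# The intermediates under discussion trip both checks:
print(lint_ocsp_responder_profile(
    {OCSP_SIGNING_EKU}, {"keyCertSign", "cRLSign"}, False))
```

The disagreement in this thread is, in effect, whether a certificate that fails both checks is "not a responder" or "a misissued responder".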

It seems it is also clear to more people, according to:
- https://bugzilla.mozilla.org/show_bug.cgi?id=1652581
- 
https://www.mail-archive.com/dev-security-policy@lists.mozilla.org/msg13541.html
- 
https://www.mail-archive.com/dev-security-policy@lists.mozilla.org/msg13599.html

> "No, you see, it's actually your fault" is the least trustworthy reaction I 
> can possibly imagine to being caught not following the policy. 
> 
Not worth commenting on

> As an outsider (because again I speak in my personal capacity, and at most I 
> work on a non-browser RP, Go's crypto/x509) it's puzzling to see the 
> CA/Browser forum regularly lose track of the different roles of the 
> participants.

I would like to make it clear that I speak in my personal capacity too; my
mistake in previous messages was using my corporate email.

Thanks.



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-14 Thread Filippo Valsorda via dev-security-policy
2020-07-13 13:39 GMT-04:00 Chema Lopez via dev-security-policy:
> From my point of view, the arguments at
> https://www.mail-archive.com/dev-security-policy@lists.mozilla.org/msg13642.html
> are
> as incontestable as the ones stated by Corey Bonnell here:
> https://www.mail-archive.com/dev-security-policy@lists.mozilla.org/msg13541.html
> .
> 
> 
> RFC5280 and RFC6960 have to be considered and thus, a certificate without
> KU digitalSignature is not an OCSP Responder. We can not choose what to
> comply with or what is mandatory or if a RFC is mandatory but BR "profiles"
> the RFC. And when I say "we" I mean all the players, especially the ones in
> the CA / Browser forum.
> 
> 
> And yes, relying parties need to check this. For its own benefit, relying
> parties need to understand how a proper OCSP response is made and check it
> properly.
> 
> 
> It is astonishing how what looks like a bad practice of (some) relying
> parties has mutated into a security risk on the CAs' side.

This whole argument seems to lose track of the difference between CAs and RPs. 
CAs have strict responsibilities to follow all the rules of the policies they 
committed to in order to be trusted by RPs. Full stop. There is no blaming RPs 
for a CA's failure to follow those rules. RPs, themselves, only have a 
responsibility to their users—not to the CAs—and uphold it as they see fit.

RPs trust the CAs to do exactly what they say in the policies, not to do 
something that is sort of the same as long as the RPs follow the specification 
correctly. That's not the deal. We trust the CAs only because and as long as 
they show they can precisely follow those policies. "No, you see, it's actually 
your fault" is the least trustworthy reaction I can possibly imagine to being 
caught not following the policy.

As an outsider (because again I speak in my personal capacity, and at most I 
work on a non-browser RP, Go's crypto/x509) it's puzzling to see the CA/Browser 
forum regularly lose track of the different roles of the participants.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-13 Thread Chema Lopez via dev-security-policy
From my point of view, the arguments at
https://www.mail-archive.com/dev-security-policy@lists.mozilla.org/msg13642.html
are
as incontestable as the ones stated by Corey Bonnell here:
https://www.mail-archive.com/dev-security-policy@lists.mozilla.org/msg13541.html
.


RFC5280 and RFC6960 have to be considered and thus, a certificate without
KU digitalSignature is not an OCSP Responder. We can not choose what to
comply with or what is mandatory or if a RFC is mandatory but BR "profiles"
the RFC. And when I say "we" I mean all the players, especially the ones in
the CA / Browser forum.


And yes, relying parties need to check this. For its own benefit, relying
parties need to understand how a proper OCSP response is made and check it
properly.


It is astonishing how what looks like a bad practice of (some) relying
parties has mutated into a security risk on the CAs' side.


It is not only a matter of CAs leading the resolution of an at least
questionable security risk. It is a matter of all working together.


It is no secret that the CA/Browser Forum is not living through its best
moments, in part due to unilateral decisions of (again, some) browsers against
the democratic (in terms of the CA/B Forum bylaws) decision of a ballot.


It is time for CAs and Browsers to collaborate again, instead of the lately
usual pattern of (some) Browsers slapping CAs. For transparency's sake, I think
it would be a nice initiative for Browsers to disclose their practices
regarding the validation of OCSP responses and, working all together, to
improve or even design practices to be followed, although following RFC 5280
and RFC 6960 should be sufficient.


Thanks,

Chema.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-12 Thread Ryan Sleevi via dev-security-policy
On Sun, Jul 12, 2020 at 4:19 PM Oscar Conesa via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> To obtain this confidence, CAs must comply with all the requirements
> that are imposed on them in the form of Policies, Norms, Standards and
> Audits that are decided on an OBJECTIVE basis for all CAs. The
> fulfillment of all these requirements must be NECESSARY, but also
> SUFFICIENT to stay in the Root Program.
>

As Matt Palmer points out, that's not consistent with how any root program
behaves. This is not unique to Mozilla, as you find similar text at
Microsoft and Google, and I'm sure you'd find similar text with Apple were
they to say anything.

Mozilla's process is transparent, in that it seeks to weigh public
information, but it's inherently subjective: this can easily be seen by the
CP/CPS reviews, which are, by necessity, a subjective evaluation of risk
criteria based on documentation. You can see this in the CAs that have been
accepted, and the applications that have been rejected, and the CAs that
have been removed and those that have not been. Relying parties act on the
information available to them, including how well the CA handles and
responds to incidents.

Some CAs may want to assume a leadership role in the sector and
> unilaterally assume more additional strict security controls. That is
> totally legitimate. But it is also legitimate for other CAs to assume a
> secondary role and limit ourselves to complying with all the
> requirements of the Root Program. You cannot remove a CA from a Root
> Program for not meeting fully SUBJECTIVE additional requirements.
>

CAs have been, can be, and will continue to be. I think it should be
precise here: we're talking about an incident response. Were things as
objective as you present, then every CA who has misissued such a
certificate would be at immediate risk of total and complete distrust. We
know that's not a desirable outcome, nor a likely one, so we recognize that
there is, in fact, a shade of gray here for judgement.

That judgement is whether or not the CA is taking the issue seriously, and
acting to assume a leadership role. CAs that fail to do so are CAs that
pose risk, and it may be that the risk they pose is unacceptable. Key
destruction is one way to reassure relying parties that the risk is not
possible. I think a CA that asked for it to be taken on faith,
indefinitely, is a CA that fundamentally misunderstands the purpose and
goals of a root program.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-12 Thread Matt Palmer via dev-security-policy
On Sun, Jul 12, 2020 at 10:13:59PM +0200, Oscar Conesa via dev-security-policy 
wrote:
> Some CAs may want to assume a leadership role in the sector and unilaterally
> assume more additional strict security controls. That is totally legitimate.
> But it is also legitimate for other CAs to assume a secondary role and limit
> ourselves to complying with all the requirements of the Root Program. You
> cannot remove a CA from a Root Program for not meeting fully SUBJECTIVE
> additional requirements.

I fear that your understanding of the Mozilla Root Store Policy is at odds
with the text of that document.

"Mozilla MAY, at its sole discretion, decide to disable (partially or fully)
or remove a certificate at any time and for any reason."

I'd like to highlight the phrase "at its sole discretion", and also "for any
reason".

If the CA Module owner wakes up one day and, having had a dream which causes
them to dislike the month of July, decides that all CAs whose root
certificates have a notBefore in July must be removed, the impacted CAs do
not have any official cause for complaint.  I have no doubt that such an
arbitrary decision would be reversed, and the consequences would not make it
into production, but the decision would not be reversed because it "cannot"
happen, but rather because it is contrary to the interests of Mozilla and
the user community which Mozilla serves.

- Matt



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-12 Thread Oscar Conesa via dev-security-policy

On 12/7/20 2:21, Ryan Sleevi wrote:
I want to be clear here: CAs are not trusted by default. The existence 
of a CA, within a Root Program, is not a blanket admission of trust in 
the CA.


Here we have a deep disagreement: A CA within a Root Program must be 
considered as a trusted CA by default. Mistrust in a CA about its 
ability to operate safely can occur BEFORE being admitted in the Root 
Program or AFTER being removed from the Root Program. Relying parties
trust in the Root Program (this implies that they trust all the CAs that 
are part of the program without exception).


To obtain this confidence, CAs must comply with all the requirements 
that are imposed on them in the form of Policies, Norms, Standards and 
Audits that are decided on an OBJECTIVE basis for all CAs. The 
fulfillment of all these requirements must be NECESSARY, but also 
SUFFICIENT to stay in the Root Program.


Some CAs may want to assume a leadership role in the sector and 
unilaterally assume more additional strict security controls. That is 
totally legitimate. But it is also legitimate for other CAs to assume a 
secondary role and limit ourselves to complying with all the 
requirements of the Root Program. You cannot remove a CA from a Root 
Program for not meeting fully SUBJECTIVE additional requirements.


I want to highlight that both the "destruction of uncompromised keys" 
and "the prohibition to reuse uncompromised keys" are two security 
controls that do not appear in any requirement of the Mozilla Root 
Program, so CAs have no obligation to fulfill them. If someone considers 
these security controls as necessary, they can be requested to be 
included in the next version of the corresponding standard.



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-11 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 11, 2020 at 1:18 PM Oscar Conesa via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> f) For CAs that DO have sole control of the keys: There is no reason to
> doubt the CA's ability to continue to maintain the security of these
> keys, so the CA could reuse the keys by reissuing the certificate with
> the same keys. If there are doubts about the ability of a CA to protect
> its own critical keys, that CA cannot be considered "trusted" in any way.
>

While Filippo has pointed out the logical inconsistency of f and g, I do
want to establish here the problem with f, as written.

For CAs that DO have sole control of the keys: There is no reason to
*trust* the CA's ability to continue to maintain the security of the keys.

I want to be clear here: CAs are not trusted by default. The existence of a
CA, within a Root Program, is not a blanket admission of trust in the CA.
We can see this through elements such as annual audits, and we can also see
this in the fact that, for the most part, CAs have largely not been removed
on the basis of individual incident reports.

CAs seem to assume that they're trusted until they prove otherwise, when
rather, the opposite is true: we constantly view CAs through the lens of
distrusting them, and it is only by the CA's action and evidence that we
hold off on removing trust in them. Do they follow all of the requirements?
Do they disclose sufficient detail in how they operate? Do they maintain
annual audits with independent evaluation? Do they handle incident reports
thoughtfully and thoroughly, or do they dismiss or minimize them?

As it comes to this specific issue: there is zero reason to trust that a
CA's key, intended for issuing intermediates, is sufficiently protected
from being able to issue OCSP responses. As you point out in g), that's not
a thing some CAs have expected to need to do, so why would or should they?
The CA needs to provide sufficient demonstration of evidence that this has
not, can not, and will not happen. And even then, it's merely externalizing
risk: the community has to constantly be evaluating that evidence in
deciding whether to continue. That's why any failure to revoke, or any
revocation by rotating EKUs but without rolling keys, is fundamentally
insufficient.

The question is not "Do these keys need to be destroyed", but rather, "when
do these keys need to be destroyed" - and CAs need to come up with
meaningful plans to get there. I would consider it unacceptable if that
process lasted a year, and highly questionable if it lasted 9 months,
because these all rely on clients, globally, accepting the risk that a
control will fail. If a CA is going beyond the 7 days require by the BRs -
which, to be clear, it would seem the majority are - they absolutely need
to come up with a plan to remove this eventual risk, and detail their logic
for the timeline about when, how, and why they've chosen when they chose.

As I said, there's no reason to trust the CA here: there are plenty of ways
the assumed controls are insufficient. The CA needs to demonstrate why.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-11 Thread Filippo Valsorda via dev-security-policy
2020-07-11 13:17 GMT-04:00 Oscar Conesa via dev-security-policy:
> f) For CAs that DO have sole control of the keys: There is no reason to 
> doubt the CA's ability to continue to maintain the security of these 
> keys, so the CA could reuse the keys by reissuing the certificate with 
> the same keys. If there are doubts about the ability of a CA to protect 
> its own critical keys, that CA cannot be considered "trusted" in any way.

In this section, you argue that we (the relying party ecosystem, I am speaking 
in my personal capacity) should not worry about the existence of unrevokable 
ICAs with long expiration dates, because we can trust CAs to operate them 
safely.

> g) On the other hand, if the affected certificate (with EKU OCSPSigning) 
> does not have the KU Digital Signature, then that certificate cannot 
> generate valid OCSP responses according to the standard. This situation 
> has two consequences: (i) the CA cannot generate OCSP responses by 
> mistake using this certificate, since its own software prevents it, and 
> (ii) in the event that an attacker compromises the keys and uses 
> modified software to generate malicious OCSP responses, it will be also 
> necessary that the client software had a bug that validated these 
> malicious and malformed OCSP responses. In this case, the hypothetical 
> scenarios involving security risks are even more limited.

In this section, you argue that we can't trust CAs to apply the 
id-kp-OCSPSigning EKU correctly and it's then our responsibility to check the 
rest of the profile for consistency.

These two arguments seem at odds to me.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-11 Thread Oscar Conesa via dev-security-policy

As a summary of the situation, we consider that:

a) Affected certificates do not comply with the norm (EKU OCSPSigning 
without OCSP-no-check extension). They are misissued and they must be 
revoked


b) This non-compliance issue has potential security risks in case of key 
compromise and/or malicious use of the keys, as indicated by Ryan Sleevi.


c) No key has been compromised nor has the malicious or incorrect use of 
the key been detected, so at the moment there are no security incidents


d) There are two groups of affected CAs: (i) CAs that maintain sole 
control of the affected keys and (ii) CAs that have delegated the 
control of these keys to other entities.


e) In the case of CAs who DO NOT have sole control of the affected keys: 
in addition to revoking the affected certificates, they should request 
the delegated entities to proceed with the destruction of the keys in a 
safe and audited manner. This does not guarantee 100% that all copies of 
the keys will indeed be destroyed, as audits and procedures have their 
limitations. But it does guarantee that the CA has done everything  in 
their power to avoid the compromise of these keys.


f) For CAs that DO have sole control of the keys: There is no reason to 
doubt the CA's ability to continue to maintain the security of these 
keys, so the CA could reuse the keys by reissuing the certificate with 
the same keys. If there are doubts about the ability of a CA to protect 
its own critical keys, that CA cannot be considered "trusted" in any way.


g) On the other hand, if the affected certificate (with EKU OCSPSigning) 
does not have the KU Digital Signature, then that certificate cannot 
generate valid OCSP responses according to the standard. This situation 
has two consequences: (i) the CA cannot generate OCSP responses by 
mistake using this certificate, since its own software prevents it, and 
(ii) in the event that an attacker compromises the keys and uses 
modified software to generate malicious OCSP responses, it will be also 
necessary that the client software had a bug that validated these 
malicious and malformed OCSP responses. In this case, the hypothetical 
scenarios involving security risks are even more limited.





Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-10 Thread Ryan Sleevi via dev-security-policy
On Fri, Jul 10, 2020 at 12:01 PM ccampetto--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Wouldn't it be enough to check that OCSP responses are signed with a
> certificate which presents the (mandatory, by BR) id-pkix-ocsp-nocheck?
> I've not checked, but I don't think that subordinate CA certificates have
> that extension


You're describing a behaviour change to all clients, in order to work
around the CA not following the profile.

This is a common response to many misissuance events: if the client
software does not enforce that CAs actually do what they say, then it's not
really a rule. Or, alternatively, that the only rules should be what
clients enforce. We see this come up from time to time, e.g. certificate
lifetimes, but this is a way of externalizing the costs/risks onto clients.

None of this changes what clients, in the field, today do. And if the
problem was caused by a CA, isn't it reasonable to expect the problem to be
fixed by the CA?


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-10 Thread Tofu Kobe via dev-security-policy

Mr. zxzxzx9,

The "real" risk, which is illustrated through an adversary, 
vulnerability, impact probability, risk mitigation strategy and the 
residual risk doesn't matter. Hence is not discussed. I've yet to see a 
comprehensive risk assessment on this matter.


The primary reason there is no real discussion is that all the CAs have
chickened out due to the "distrust" flag from Mr. Sleevi. This is supposed to
be a community for free discussion, but he essentially equated arguing with
distrust. "Distrust" is equivalent to a death sentence for a CA. So... can't
really blame 'em for chickening out.


As an individual observing this whole situation, I'm wondering too.
You are not alone.

Best regards,

T.K.


On 7/10/2020 7:35 PM, zxzxzx9--- via dev-security-policy wrote:

On Wednesday, July 8, 2020 at 6:02:56 AM UTC+3, Ryan Sleevi wrote:

The question is simply whether or not user agents will accept the risk of
needing to remove the root suddenly, and with significant (e.g. active)
attack, or whether they would, as I suggest, take steps to remove the root
beforehand, to mitigate the risk. The cost of issuance plus the cost of
revocation are a fixed cost: it's either pay now or pay later. And it seems
like if one needs to contemplate revoking roots, it's better to do it
sooner, than wait for it to be an inconvenient or inopportune time. This is
why I meant earlier, when I said a solution that tries to wait until the
'last possible minute' is just shifting the cost of misissuance onto
RPs/Browsers, by leaving them to clean up the mess. And a CA that tries to
shift costs onto the ecosystem like that seems like it's not a CA that can
be trusted to, well, be trustworthy.


This assumes that the private key of these intermediate CAs will inevitably get 
compromised.

Why such an assumption?

Following the same argument we can assume that the private key of any root CA 
will inevitably get compromised and suggest all CAs to revoke their roots 
already today. Does not seem to make sense.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-10 Thread zxzxzx66669--- via dev-security-policy
On Wednesday, July 8, 2020 at 6:02:56 AM UTC+3, Ryan Sleevi wrote:
> The question is simply whether or not user agents will accept the risk of
> needing to remove the root suddenly, and with significant (e.g. active)
> attack, or whether they would, as I suggest, take steps to remove the root
> beforehand, to mitigate the risk. The cost of issuance plus the cost of
> revocation are a fixed cost: it's either pay now or pay later. And it seems
> like if one needs to contemplate revoking roots, it's better to do it
> sooner, than wait for it to be an inconvenient or inopportune time. This is
> why I meant earlier, when I said a solution that tries to wait until the
> 'last possible minute' is just shifting the cost of misissuance onto
> RPs/Browsers, by leaving them to clean up the mess. And a CA that tries to
> shift costs onto the ecosystem like that seems like it's not a CA that can
> be trusted to, well, be trustworthy.


This assumes that the private key of these intermediate CAs will inevitably get 
compromised.

Why such an assumption?

Following the same argument we can assume that the private key of any root CA 
will inevitably get compromised and suggest all CAs to revoke their roots 
already today. Does not seem to make sense.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-10 Thread ccampetto--- via dev-security-policy
On Wednesday, 8 July 2020 05:02:56 UTC+2, Ryan Sleevi  wrote:
> On Tue, Jul 7, 2020 at 10:36 PM Matt Palmer via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
> > On Mon, Jul 06, 2020 at 10:53:50AM -0700, zxzxzx9--- via
> > dev-security-policy wrote:
> > > Can't the affected CAs decide on their own whether to destroy the
> > > intermediate CA private key now, or in case the affected intermediate CA
> > > private key is later compromised, revoke the root CA instead?
> >
> > No, because there's no reason to believe that a CA would follow through on
> > their decision, and rapid removal of trust anchors (which is what "revoke
> > the root CA" means in practice) has all sorts of unpleasant consequences
> > anyway.
> >
> 
> Er, not quite?
> 
> I mean, yes, removing the root is absolutely the final answer, even if
> waiting until something "demonstrably" bad happens.
> 
> The question is simply whether or not user agents will accept the risk of
> needing to remove the root suddenly, and with significant (e.g. active)
> attack, or whether they would, as I suggest, take steps to remove the root
> beforehand, to mitigate the risk. The cost of issuance plus the cost of
> revocation are a fixed cost: it's either pay now or pay later. And it seems
> like if one needs to contemplate revoking roots, it's better to do it
> sooner, than wait for it to be an inconvenient or inopportune time. This is
> why I meant earlier, when I said a solution that tries to wait until the
> 'last possible minute' is just shifting the cost of misissuance onto
> RPs/Browsers, by leaving them to clean up the mess. And a CA that tries to
> shift costs onto the ecosystem like that seems like it's not a CA that can
> be trusted to, well, be trustworthy.

Wouldn't it be enough to check that OCSP responses are signed with a certificate
which presents the (mandatory, by BR) id-pkix-ocsp-nocheck? I've not checked, 
but I don't think that subordinate CA certificates have that extension
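The check suggested here can be sketched as follows (illustrative Python of my own; the parameter names are assumptions, and, as Ryan's reply makes clear, deploying anything like this would be a behavior change to existing clients, not current practice):

```python
# Sketch of the proposed client-side rule (names are mine, not from any
# real library): accept an OCSP response signature from the issuing CA
# itself, or from a delegated responder only if its certificate carries
# the (BR-mandatory) id-pkix-ocsp-nocheck extension.

def accept_response_signer(signer_is_issuing_ca: bool,
                           has_ocsp_signing_eku: bool,
                           has_ocsp_nocheck: bool) -> bool:
    if signer_is_issuing_ca:
        return True   # the CA signing responses directly needs no EKU
    # Delegated responder: require both the OCSPSigning EKU and, per this
    # proposal, id-pkix-ocsp-nocheck on the responder certificate.
    return has_ocsp_signing_eku and has_ocsp_nocheck

# The misissued intermediates lack id-pkix-ocsp-nocheck, so under this
# rule their signatures on OCSP responses would be rejected:
print(accept_response_signer(False, True, False))   # -> False
```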


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-07 Thread Ryan Sleevi via dev-security-policy
On Tue, Jul 7, 2020 at 10:36 PM Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Mon, Jul 06, 2020 at 10:53:50AM -0700, zxzxzx9--- via
> dev-security-policy wrote:
> > Can't the affected CAs decide on their own whether to destroy the
> > intermediate CA private key now, or in case the affected intermediate CA
> > private key is later compromised, revoke the root CA instead?
>
> No, because there's no reason to believe that a CA would follow through on
> their decision, and rapid removal of trust anchors (which is what "revoke
> the root CA" means in practice) has all sorts of unpleasant consequences
> anyway.
>

Er, not quite?

I mean, yes, removing the root is absolutely the final answer, even if
waiting until something "demonstrably" bad happens.

The question is simply whether or not user agents will accept the risk of
needing to remove the root suddenly, and with significant (e.g. active)
attack, or whether they would, as I suggest, take steps to remove the root
beforehand, to mitigate the risk. The cost of issuance plus the cost of
revocation are a fixed cost: it's either pay now or pay later. And it seems
like if one needs to contemplate revoking roots, it's better to do it
sooner, than wait for it to be an inconvenient or inopportune time. This is
why I meant earlier, when I said a solution that tries to wait until the
'last possible minute' is just shifting the cost of misissuance onto
RPs/Browsers, by leaving them to clean up the mess. And a CA that tries to
shift costs onto the ecosystem like that seems like it's not a CA that can
be trusted to, well, be trustworthy.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-07 Thread Matt Palmer via dev-security-policy
On Mon, Jul 06, 2020 at 10:53:50AM -0700, zxzxzx9--- via 
dev-security-policy wrote:
> Can't the affected CAs decide on their own whether to destroy the
> intermediate CA private key now, or in case the affected intermediate CA
> private key is later compromised, revoke the root CA instead?

No, because there's no reason to believe that a CA would follow through on
their decision, and rapid removal of trust anchors (which is what "revoke
the root CA" means in practice) has all sorts of unpleasant consequences
anyway.

- Matt



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-07 Thread zxzxzx66669--- via dev-security-policy
On Thursday, July 2, 2020 at 12:06:22 AM UTC+3, Ryan Sleevi wrote:
> Unfortunately, revocation of this certificate is simply not enough to
> protect Mozilla TLS users. This is because this Sub-CA COULD provide OCSP
> for itself that would successfully validate, AND provide OCSP for other
> revoked sub-CAs, even if it was revoked.

If I understand correctly, the logic behind the proposal to destroy the 
intermediate CA private key now is to avoid a situation where, if the 
intermediate CA private key is later compromised, the intermediate CA becomes 
non-revocable until it expires.

So the action now is required to mitigate a potential security risk that can 
materialize later.

Can't the affected CAs decide on their own whether to destroy the intermediate 
CA private key now, or in case the affected intermediate CA private key is 
later compromised, revoke the root CA instead?


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-06 Thread Rob Stradling via dev-security-policy

On 06/07/2020 12:47, Rob Stradling via dev-security-policy wrote:

On 06/07/2020 06:11, Dimitris Zacharopoulos via dev-security-policy wrote:


IETF made an attempt to define an extension for EKU constraints
(https://datatracker.ietf.org/doc/draft-housley-spasm-eku-constraints/)
where Rob Stradling made an indirect reference in
https://groups.google.com/d/msg/mozilla.dev.security.policy/f5-URPoNarI/yf2YLpKJAQAJ 



(Rob, please correct me if I'm wrong).

There was a follow-up discussion in IETF that concluded that no one should
deal with this issue
(https://mailarchive.ietf.org/arch/msg/spasm/3zZzKa2lcT3gGJOskVrnODPBgM0/). 


A day later, all attempts died off because no one would actually
implement this(?)
https://mailarchive.ietf.org/arch/msg/spasm/_gJTeUjxc2kmDcRyWPb9slUF47o/.
If this extension was standardized, we would probably not be having this
issue right now. However, this entire topic demonstrates the necessity
to standardize the EKU existence in CA Certificates as constraints for
EKUs of leaf certificates.


Oh, I misread.

Standardizing the use of the existing EKU extension in CA certificates 
as a constraint for permitted EKUs in leaf certificates has been 
proposed at IETF before.  Probably many times before.  However, plenty 
of people take the (correct, IMHO) view that the EKU extension was not 
intended to be (ab)used in this way, and so the chances of getting 
"rough consensus" for a Standards Track RFC to specify this seems rather 
remote.


I suppose it might be worth drafting an Informational RFC that explains 
how the EKU extension is used in practice, what the footguns are and how 
to avoid them, what the security implications are of doing EKU wrong, etc.



If only we could edit RFC2459 so that it (1) defined an "EKU
constraints" extension and (2) said that the EKU extension MUST NOT
appear in CA certificates...

Unfortunately, we're more than 20 years too late to do that.  And whilst
it completely sucks that real-world use of the EKU extension comes with
some nasty footguns, I just don't see how you'd ever persuade the WebPKI
ecosystem to adopt a new "EKU Constraints" extension at this point in
history.

--
Rob Stradling
Senior Research & Development Scientist
Sectigo Limited



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-06 Thread Rob Stradling via dev-security-policy

On 06/07/2020 06:11, Dimitris Zacharopoulos via dev-security-policy wrote:


IETF made an attempt to define an extension for EKU constraints
(https://datatracker.ietf.org/doc/draft-housley-spasm-eku-constraints/)
where Rob Stradling made an indirect reference in
https://groups.google.com/d/msg/mozilla.dev.security.policy/f5-URPoNarI/yf2YLpKJAQAJ 


(Rob, please correct me if I'm wrong).

There was a follow-up discussion in IETF that concluded that no one should
deal with this issue
(https://mailarchive.ietf.org/arch/msg/spasm/3zZzKa2lcT3gGJOskVrnODPBgM0/).
A day later, all attempts died off because no one would actually
implement this(?)
https://mailarchive.ietf.org/arch/msg/spasm/_gJTeUjxc2kmDcRyWPb9slUF47o/.
If this extension was standardized, we would probably not be having this
issue right now. However, this entire topic demonstrates the necessity
to standardize the EKU existence in CA Certificates as constraints for
EKUs of leaf certificates.


If only we could edit RFC2459 so that it (1) defined an "EKU 
constraints" extension and (2) said that the EKU extension MUST NOT 
appear in CA certificates...


Unfortunately, we're more than 20 years too late to do that.  And whilst 
it completely sucks that real-world use of the EKU extension comes with 
some nasty footguns, I just don't see how you'd ever persuade the WebPKI 
ecosystem to adopt a new "EKU Constraints" extension at this point in 
history.


--
Rob Stradling
Senior Research & Development Scientist
Sectigo Limited



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-06 Thread Paul van Brouwershaven via dev-security-policy
Summary of some OCSP client tests:

   - `Root` is self-signed, and does not have any EKUs
   - 'ICA' is signed by 'Root' with the EKU ServerAuth and ClientAuth
   - 'ICA 2' is signed by 'Root' with the EKU ServerAuth, ClientAuth and
   OCSPSigning
   - 'Server certificate' is signed by `ICA` with the EKU ServerAuth and
   ClientAuth
   - Both `ICA 2` and `ICA` have their own delegated OCSP responder
   certificate.
   - `ICA 2` signs an OCSP response for `ICA` and overrules the response
   created by the delegated responder.

certutil (Windows): Recognizes but rejects the revoked response
openssl (Ubuntu & MacOS): Accepts the response
ocspcheck (MacOS): Accepts the response

Output and script located on:
https://gist.github.com/vanbroup/84859cd10479ed95c64abe6fcdbdf83d
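For readers who want to rebuild the 'ICA 2' profile from the summary above, a minimal sketch with OpenSSL (1.1.1+ for `-addext`/`-ext`; self-signed here purely to keep the example short, and all names and file paths are illustrative):

```shell
# Create a self-signed stand-in for 'ICA 2': a CA-style certificate whose EKU
# list includes OCSPSigning alongside ServerAuth and ClientAuth. In the real
# test it is signed by 'Root'; with stock OpenSSL configs `req -x509` already
# marks the result as a CA certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Demo ICA 2" \
  -addext "extendedKeyUsage=serverAuth,clientAuth,OCSPSigning" \
  -keyout ica2.key -out ica2.pem 2>/dev/null

# Show the combination that makes such a certificate usable as a responder
# for anything its parent issued:
openssl x509 -in ica2.pem -noout -ext basicConstraints,extendedKeyUsage
```

Feeding a certificate like this to the OCSP clients listed above is what the gist automates.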

On Mon, 6 Jul 2020 at 12:09, Dimitris Zacharopoulos 
wrote:

> On 6/7/2020 11:39 π.μ., Paul van Brouwershaven via dev-security-policy
> wrote:
> > As follow up to Dimitris comments I tested the scenario where a
> > sibling issuing CA [ICA 2] with the OCSP signing EKU (but without
> > digitalSignature KU) under [ROOT] would sign a revoked OCSP response for
> > [ICA] also under [ROOT]
> > https://gist.github.com/vanbroup/84859cd10479ed95c64abe6fcdbdf83d
> >
> > I was actually surprised to see that certutil fails to validate/decode the
> > OCSP response in this scenario. But this doesn't say it's not a problem as
> > other responders or versions might accept the response.
> >
> > I will try to perform the same test on Mac in a moment.
>
> Thank you very much Paul, this is really helpful.
>
> Dimitris.
>


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-06 Thread Dimitris Zacharopoulos via dev-security-policy
On 6/7/2020 11:39 π.μ., Paul van Brouwershaven via dev-security-policy 
wrote:

As follow up to Dimitris comments I tested the scenario where a
sibling issuing CA [ICA 2] with the OCSP signing EKU (but without
digitalSignature KU) under [ROOT] would sign a revoked OCSP response for
[ICA] also under [ROOT]
https://gist.github.com/vanbroup/84859cd10479ed95c64abe6fcdbdf83d

I was actually surprised to see that certutil fails to validate/decode the
OCSP response in this scenario. But this doesn't say it's not a problem as
other responders or versions might accept the response.

I will try to perform the same test on Mac in a moment.


Thank you very much Paul, this is really helpful.

Dimitris.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-06 Thread Dimitris Zacharopoulos via dev-security-policy

On 6/7/2020 11:03 π.μ., Ryan Sleevi via dev-security-policy wrote:

Yep. You have dismissed it but others may have not. If no other voices are
raised, then your argument prevails:)


I mean, it’s not a popularity contest:)


As others have highlighted already, there are times when people get 
confused by you posting by default in a personal capacity. It is easy to 
confuse readers when using the word "I" in your emails.


Even if you use your "Google Chrome hat" to make a statement, there 
might be a different opinion or interpretation from the Mozilla Module 
owner, whom this Forum is mainly for. There's more agreement than 
disagreement between Mozilla and Google when it comes to policy so I 
hope my statement was not taken the wrong way as an attempt to "push" 
for a disagreement.


I have already asked for the Mozilla CA Certificate Policy owner's 
opinion regarding separate hierarchies for Mozilla Root program in 
https://groups.google.com/d/msg/mozilla.dev.security.policy/EzjIkNGfVEE/jOO2NhKAAwAJ, 
highlighting your already clearly stated opinion on behalf of Google, 
because I am interested to hear their opinion as well. I hope I'm not 
accused of doing something wrong by asking for more "voices", if there 
are any.






Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-06 Thread Paul van Brouwershaven via dev-security-policy
>
> Some tests were performed by Paul van Brouwershaven
> > https://gist.github.com/vanbroup/84859cd10479ed95c64abe6fcdbdf83d.
>
> As mentioned, those tests weren’t correct. I’ve provided sample test cases
> to several other browser vendors, and heard back or demonstrated that
> they’re vulnerable. As are the majority of open-source TLS libraries with
> support for OCSP.


Ryan, you made a statement about a bug in Golang; the test case linked by
Dimitris was about the follow-up tests I did with certutil and
Test-Certificate in powershell.

As follow up to Dimitris comments I tested the scenario where a
sibling issuing CA [ICA 2] with the OCSP signing EKU (but without
digitalSignature KU) under [ROOT] would sign a revoked OCSP response for
[ICA] also under [ROOT]
https://gist.github.com/vanbroup/84859cd10479ed95c64abe6fcdbdf83d

I was actually surprised to see that certutil fails to validate/decode the
OCSP response in this scenario. But this doesn't say it's not a problem as
other responders or versions might accept the response.

I will try to perform the same test on Mac in a moment.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-06 Thread Ryan Sleevi via dev-security-policy
On Mon, Jul 6, 2020 at 3:38 AM Dimitris Zacharopoulos 
wrote:

> On 6/7/2020 9:47 π.μ., Ryan Sleevi wrote:
>
> I can understand wanting to wait to see what others do first, but that’s
> not leadership.
>
>
> This is a security community, and it is expected to see and learn from
> others, which is equally good of proposing new things. I'm not sure what
> you mean by "leadership". Leadership for who?
>

Leadership as a CA affected by this, taking steps to follow through on
their commitments and operate beyond reproach, suspicion, or doubt.

As a CA, the business is built on trust, and that is the most essential
asset. Trust takes years to build and seconds to lose. Incidents, beyond
being an opportunity to share lessons learned and mitigations applied,
provide an opportunity for a CA to earn trust (by taking steps that are
disadvantageous for their short-term interests but which prioritize being
irreproachable) or lose trust (by taking steps that appear to minimize or
dismiss concerns or fail to take appropriate action).

Tim’s remarks on behalf of DigiCert, if followed through on, stand in stark
contrast to remarks by others. And that’s encouraging, in that it seems
that past incidents at DigiCert have given rise to a stronger focus on
security and compliance than may have existed there in the past, and which
there were concerns about with the Symantec PKI acquisition/integration.
Ostensibly, that is an example of leadership: making difficult choices to
prioritize relying parties over subscribers, and to focus on removing
any/all doubt.

You mean when I dismissed this line of argument? :)
>
>
> Yep. You have dismissed it but others may have not. If no other voices are
> raised, then your argument prevails :)
>

I mean, it’s not a popularity contest :)

It’s a question of what information is available to the folks ultimately
deciding things. If there is information being overlooked, if there are
facts worth considering, this is the time to bring it up. Conclusions will
ultimately be decided by those trusting these certificates, but that’s why
it’s important to introduce any new information that may have been
overlooked.

>


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-06 Thread Dimitris Zacharopoulos via dev-security-policy

On 6/7/2020 9:47 π.μ., Ryan Sleevi wrote:
I can understand wanting to wait to see what others do first, but 
that’s not leadership.


This is a security community, and it is expected to see and learn from 
others, which is equally good of proposing new things. I'm not sure what 
you mean by "leadership". Leadership for who?


We 



Who is we here? HARICA? The CA Security Council? The affected CAs in 
private collaboration? It’s unclear which of the discussions taking 
place are being referenced here.


HARICA.


There was also an interesting observation that came up during a
recent
discussion. 



You mean when I dismissed this line of argument? :)


Yep. You have dismissed it but others may have not. If no other voices 
are raised, then your argument prevails :)



Dimitris.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-05 Thread Ryan Sleevi via dev-security-policy
On Mon, Jul 6, 2020 at 1:12 AM Dimitris Zacharopoulos via
dev-security-policy  wrote:

> Ryan's response on
> https://bugzilla.mozilla.org/show_bug.cgi?id=1649939#c8 seems
> unreasonably harsh (too much "bad faith" in affected CAs, even if these
> CA Certificates were operated by the Root Operator).


Then revoke within 7 days, as required. That’s a discussion with WISeKey,
not HARICA, and HARICA needs to have its own incident report and be judged
on it. I can understand wanting to wait to see what others do first, but
that’s not leadership.

The duty is on the CA to demonstrate nothing can go wrong and nothing has
gone wrong. Unlike a certificate the CA “intended” as a responder, there is
zero guarantee about the controls, unless and until the CA establishes the
facts around such controls. The response to Pedro is based on Peter Bowen’s
suggestion that controls are available, and uses those controls.

As an ETSI-audited CA, I can understand why you might balk, because the
same WebTrust controls aren’t available and the same assurance isn’t
possible. The baseline assurance expectation is you will revoke in 7 days.
That’s not unreasonable, that’s the promise you made the community you
would do.

It’s understandable that it turns out to be more difficult than you
thought. You want more time to mitigate, to avoid disruption. Accordingly,
you’re expected to file an incident report on that revocation delay, in
addition to an incident report on the certificate profile issue that was
already filed, that examines why you’re delaying, what you’re doing to
correct that going forward, and your risk analysis. You need to establish
that why and how nothing can go wrong: simply saying “it’s a CA key” or
“trust us” surely can’t be seen as sufficient.

There are auditable
> events that auditors could check and attest to, if needed, for example
> OCSP responder configuration changes or key signing operations, and
> these events are kept/archived according to the Baseline Requirements
> and the CA's CP/CPS. This attestation could be done during a "special
> audit" (as described in the ETSI terminology) and possibly a
> Point-In-Time audit (under WebTrust).


This demonstrates a misunderstanding of the level of assurance these
audits provide. A Point in Time Audit doesn’t establish that nothing has
gone wrong or will go wrong; just at a single instant, the configuration
looks good enough. The very moment the auditors leave the CA can configure
things to go wrong, and that assurance lost. I further believe you’re
confusing this with an Agreed Upon Procedures report.

In any event, the response to WISeKey is acknowledging a path forward
relying on audits. The Relying Party bears all the risk in accepting such
audits. The path you describe above, without any further modification, is
just changing “trust us (to do it right)” to “trust our auditor”, which is
just as risky. I outlined a path to “trust, but verify,” to allow some
objective evaluation. Now it just seems like “we don’t like that either,”
and this just recycled old proposals that are insufficient.

Look, the burden is on the CA to demonstrate how nothing can go wrong or
has gone wrong. This isn’t a one size fits all solution. If you have a
specific proposal from HARICA, filing it, before the revocation deadline,
where you show your work and describe your plan and timeline, is what’s
expected. It’s the same expectation as before this incident and consistent
with Ben’s message. But you have to demonstrate why, given the security
concerns, this is acceptable, and “just trust us” can’t be remotely seen as
reasonable.

We


Who is we here? HARICA? The CA Security Council? The affected CAs in
private collaboration? It’s unclear which of the discussions taking place
are being referenced here.

If this extension was standardized, we would probably not be having this
> issue right now. However, this entire topic demonstrates the necessity
> to standardize the EKU existence in CA Certificates as constraints for
> EKUs of leaf certificates.


This is completely the *wrong* takeaway.

Solving this, at the CABF level via profiles, would clearly resolve this.
If the OCSPSigning EKU was prohibited from appearing with other EKUs, as
proposed, this would have resolved it. There’s no guarantee that a
hypothetical specification would have resolved this, since the
ambiguity/issue is not with respect to the EKU in a CA cert, it’s whether
or not the profile for an OCSP Responder is allowed to assert the CA bit.
This *same* ambiguity also exists for TLS certs, and Mozilla has similar
non-standard behavior here that prevents a CA cert from being a server cert
unless it’s also self-signed.
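The profile rule proposed at the CABF level is also mechanically checkable. A sketch of such a lint (shell with OpenSSL 1.1.1+; the demo certificate and all names are invented for illustration):

```shell
# Make a demo certificate that mixes OCSPSigning with ServerAuth, then apply
# the proposed rule: id-kp-OCSPSigning must not appear with any other EKU.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=EKU Mix Demo" \
  -addext "extendedKeyUsage=serverAuth,OCSPSigning" \
  -keyout demo.key -out demo.pem 2>/dev/null

# Extract the EKU list; a comma next to "OCSP Signing" means it is combined
# with at least one other usage.
ekus=$(openssl x509 -in demo.pem -noout -ext extendedKeyUsage | tail -n +2)
if printf '%s' "$ekus" | grep -q "OCSP Signing" && printf '%s' "$ekus" | grep -q ","; then
  echo "FLAG: OCSPSigning combined with other EKUs"
fi
```

Run against the certificates at issue in this thread, a check like this would have flagged the affected intermediates at issuance time.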


There was also an interesting observation that came up during a recent
> discussion.


You mean when I dismissed this line of argument? :)

As mandated by RFC 5280 (4.2.1.12), EKUs are supposed to be
> normative constraints on *end-entity Certificates*, not CA Certificates.
> Should RFC 6960 need to 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-05 Thread Dimitris Zacharopoulos via dev-security-policy


I'd like to chime-in on this particular topic because I had similar 
thoughs with Pedro and Peter.


I would like to echo Pedro's, Peter's and others' argument that it is 
unreasonable for Relying Parties and Browsers to say "I trust the CA 
(the Root Operator) to do the right thing and manage their Root Keys 
adequately", and not do the same for their _internally operated_ and 
audited Intermediate CA Certificates. The same Operator could do "nasty 
things" with revocation, without needing to go to all the trouble of 
creating -possibly- incompatible OCSP responses (at least for some 
currently known implementations) using a CA Certificate that has the 
id-kp-OCSPSigning EKU. Browsers have never asked for public records on 
"current CA operations", except in very rare cases where the CA was 
accused of "bad behavior". Ryan's response on 
https://bugzilla.mozilla.org/show_bug.cgi?id=1649939#c8 seems 
unreasonably harsh (too much "bad faith" in affected CAs, even if these 
CA Certificates were operated by the Root Operator). There are auditable 
events that auditors could check and attest to, if needed, for example 
OCSP responder configuration changes or key signing operations, and 
these events are kept/archived according to the Baseline Requirements 
and the CA's CP/CPS. This attestation could be done during a "special 
audit" (as described in the ETSI terminology) and possibly a 
Point-In-Time audit (under WebTrust).


We did some research and this "convention", as explained by others, 
started from Microsoft.


In 
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn786428(v=ws.11), 
one can read "if a CA includes EKUs to state allowed certificate usages, 
then its EKUs will be used to restrict usages of certificates issued by 
this CA" in the paragraph titled "Extended Key Usage Constraints".


Mozilla agreed to this convention and added it to Firefox 
https://bugzilla.mozilla.org/show_bug.cgi?id=725351. The rest of the 
information was already covered in this thread (how it also entered into 
the Mozilla Policy).


IETF made an attempt to define an extension for EKU constraints 
(https://datatracker.ietf.org/doc/draft-housley-spasm-eku-constraints/) 
where Rob Stradling made an indirect reference in 
https://groups.google.com/d/msg/mozilla.dev.security.policy/f5-URPoNarI/yf2YLpKJAQAJ 
(Rob, please correct me if I'm wrong).


There was a follow-up discussion in IETF that concluded that no one should 
deal with this issue 
(https://mailarchive.ietf.org/arch/msg/spasm/3zZzKa2lcT3gGJOskVrnODPBgM0/). 
A day later, all attempts died off because no one would actually 
implement this(?) 
https://mailarchive.ietf.org/arch/msg/spasm/_gJTeUjxc2kmDcRyWPb9slUF47o/. 
If this extension was standardized, we would probably not be having this 
issue right now. However, this entire topic demonstrates the necessity 
to standardize the EKU existence in CA Certificates as constraints for 
EKUs of leaf certificates.


We even found a comment referencing the CA/B Forum about whether it has 
accepted that EKUs in CA Certificates are considered constraints 
(https://mailarchive.ietf.org/arch/msg/spasm/Y1V_vbEw91D2Esv_SXxZpo-aQgc/). 
Judging from the result and the discussion of this issue, even today, it 
is unclear how the CA/B Forum (as far as its Certificate Consumers are 
concerned) treats EKUs in CA Certificates.


CAs that enabled the id-kp-OCSPSigning EKU in the Intermediate CA 
Profiles were following the letter of the Baseline Requirements to 
"protect relying parties". According to the BRs 7.1.2.2:


/"Generally Extended Key Usage will only appear within end entity 
certificates (as highlighted in RFC 5280 (4.2.1.12)), however, 
Subordinate CAs MAY include the extension to further *protect 
relying parties* until the use of the extension is consistent 
between Application Software Suppliers whose software is used by a 
substantial portion of Relying Parties worldwide."/


So, on the one hand, a Root Operator was trying to do "the right thing" 
following the agreed standards and go "above and beyond" to "protect" 
relying parties by adding this EKU in the issuing CA Certificate (at a 
minimum it "protected" users using Microsoft that required this "EKU 
Chaining"), and on the other hand it unintentionally tripped into a case 
where a CA Certificate with such an EKU could be used in an OCSP 
responder service to sign status messages for its parent.


There was also an interesting observation that came up during a recent 
discussion. As mandated by RFC 5280 (4.2.1.12), EKUs are supposed to be 
normative constraints on *end-entity Certificates*, not CA Certificates. 
Does RFC 6960 need to be read in conjunction with RFC 5280 rather than on 
its own? Should OCSP clients conforming to Publicly-Trusted 
Certificate (PTC) and BR-compliant solutions implement both? If the 
answer is yes, this means that a "conforming" OCSP client should not 
place trust on the id-kp-OCSPSigning 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-05 Thread Matt Palmer via dev-security-policy
On Mon, Jul 06, 2020 at 03:48:06AM +, Peter Gutmann wrote:
> Matt Palmer via dev-security-policy  
> writes:
> >If you're unhappy with the way which your interests are being represented by
> >your CA, I would encourage you to speak with them.
> 
> It's not the CAs, it's the browsers, and many other types of clients.

How, exactly, is it not the CAs' fault that they claim to represent their
customers in the CA/B Forum, and then fail to do so effectively?

> Ever tried connecting to a local (RFC1918 LAN) IoT device that has a
> self-signed cert?

If we expand "IoT device" to include, say, IPMI web-based management
interfaces, then yes, I do so on an all-too-regular basis.  But mass-market
web browsers are not built specifically for that use-case, so the fact that
they don't do a stellar job is hardly a damning indictment on them.

That IoT/IPMI devices piggyback on mass-market web browsers (and the Web PKI
they use) is, as has been identified previously, an example of externalising
costs, which doesn't always work out as well as the implementers might have
liked.  That it doesn't end well is hardly the fault of the Web PKI, the
BRs, or the browsers.

Your question is roughly equivalent to "ever tried fitting a screw with a
hammer?", or perhaps "ever tried making a request to https://google.com
using telnet and a pen and paper?".  That your arithmetic skills might not
be up to doing a TLS negotiation by hand is not the fault of TLS, it's that
you're using the wrong tool for the job.

- Matt



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-05 Thread Peter Gutmann via dev-security-policy
Matt Palmer via dev-security-policy  
writes:

>If you're unhappy with the way which your interests are being represented by
>your CA, I would encourage you to speak with them.

It's not the CAs, it's the browsers, and many other types of clients.  Every
Internet-enabled (meaning web-enabled) device is treated by browsers as if it
were a public web server, no matter how illogical and nonsensical that
actually is.  You don't have a choice to opt out of the Web PKI because all of
the (mainstream) clients you can use force you into it.  Ever tried connecting
to a local (RFC1918 LAN) IoT device that has a self-signed cert?

It's not really the CAs that are the problem, everything you're likely to use
assumes there's only the Web PKI and nothing else.  When the clients all
enforce use of the Web PKI, there's no way out even if the CAs want to help
you.

Peter.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-05 Thread Ryan Hurst via dev-security-policy
On Saturday, July 4, 2020 at 3:43:22 PM UTC-7, Ryan Sleevi wrote:
> > Thank you for explaining that.  We need to hear the official position from
> > Google.  Ryan Hurst are you out there?

Although Ryan Sleevi has already pointed this out, since I was named 
explicitly, I wanted to respond and re-affirm that I am not responsible for 
Chrome's (or anyone else's) root program. I represent Google Trust Services 
(GTS), a Certificate Authority (CA) that is subject to the same requirements as 
any other WebPKI CA.

While I am watching this issue closely, as I do all WebPKI related incidents, 
since this is not an issue that directly impacts GTS I have chosen to be a 
quiet observer.

With that said, as a long time member of the WebPKI, and in a personal 
capacity, I would say one of the largest challenges in operating a CA is how to 
handle incidents when they occur. In every incident, what I try to keep in mind 
is that a CA's ultimate responsibility is to the users that rely on the 
certificates it issues.

This means that, when balancing the impact of decisions, a CA should give weight to 
protecting those users. This reality unfortunately also means that sometimes it 
is necessary to take actions that may cause pain for the subscribers they 
provide services to.

Wherever possible a CA should minimize pain on the relying party but more often 
than not, the decision to use the WebPKI for these non-browser TLS use cases 
was done to externalize the costs of deploying a dedicated PKI that is fit for 
purpose and as with most trade-offs there may be later consequences to that 
decision.

As for my take on this topic, I think Peter Bowen has done an excellent job 
capturing the issue, its risks, its origins, and the choices available.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-05 Thread Matt Palmer via dev-security-policy
On Sun, Jul 05, 2020 at 09:30:33PM +, Buschart, Rufus via 
dev-security-policy wrote:
> > From: dev-security-policy  
> > On Behalf Of Matt Palmer via dev-security-policy
> > Sent: Sonntag, 5. Juli 2020 06:36
> > 
> > On Sat, Jul 04, 2020 at 07:42:12PM -0700, Peter Bowen wrote:
> > > On Sat, Jul 4, 2020 at 7:12 PM Matt Palmer via dev-security-policy
> > >  wrote:
> > > >
> > > > > On Sat, Jul 04, 2020 at 08:42:03AM -0700, Mark Arnott via 
> > > > > dev-security-policy wrote:
> > > > >
> > > > > In the CIA triad Availability is as important as Confidentiality.
> > > > > Has anyone done a threat model and a serious risk analysis to
> > > > > determine what a reasonable risk mitigation strategy is?
> > > >
> > > > Did you do a threat model and a serious risk analysis before you
> > > > chose to use the WebPKI in your application?
> > >
> > > I think it is important to keep in mind that many of the CA
> > > certificates that were identified are constrained to not issue TLS
> > > certificates.  The certificates they issue are explicitly excluded
> > > from the Mozilla CA program requirements.
> > 
> > Yes, I'm aware of that.
> > 
> > > I don't think it is reasonable to assert that everyone impacted by
> > > this should have been aware of the possibly of revocation
> > 
> > At the limits, I agree with you.  However, to whatever degree that there is 
> > complaining to be done, it should be directed at the CA(s)
> > which sold a product that, it is now clear, was not fit for whatever 
> > purpose it has been put to, and not at Mozilla.
> 
> Let me quote from the NSS website of Mozilla 
> (https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Overview):
> 
>   If you want to add support for SSL, S/MIME, or other Internet security 
> standards to your application, you can use Network Security Services (NSS) to 
> implement
>   all your security features. NSS provides a complete open-source 
> implementation of the crypto libraries used by AOL, Red Hat, Google, and 
> other companies in a
>   variety of products, including the following:

[snip]

Are you using NSS for your S/MIME implementation?  If not, I fail to see how
it is relevant here.

- Matt



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-05 Thread Ryan Sleevi via dev-security-policy
On Sun, Jul 5, 2020 at 5:30 PM Buschart, Rufus via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > From: dev-security-policy 
> On Behalf Of Matt Palmer via dev-security-policy
> > At the limits, I agree with you.  However, to whatever degree that there
> is complaining to be done, it should be directed at the CA(s)
> > which sold a product that, it is now clear, was not fit for whatever
> purpose it has been put to, and not at Mozilla.
>
> Let me quote from the NSS website of Mozilla (
> https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Overview):
>
>   If you want to add support for SSL, S/MIME, or other Internet security
> standards to your application, you can use Network Security Services (NSS)
> to implement
>   all your security features. NSS provides a complete open-source
> implementation of the crypto libraries used by AOL, Red Hat, Google, and
> other companies in a
>   variety of products, including the following:
>   * Mozilla products, including Firefox, Thunderbird, SeaMonkey, and
> Firefox OS.
>   * [and a long list of additional reference applications]
>
> Probably it would be good if someone from Mozilla team steps in here, but
> S/MIME _is_ an advertised use-case for NSS. And the Mozilla website says
> nowhere, that the demands and rules of WebPKI / CA/B-Forum overrule all
> other demands. It is simply not to be expected by a consumer of S/MIME
> certificates that they become invalid within 7 days just because the BRGs
> for TLS certificates are requiring it. This feels close to intrusive
> behavior of the WebPKI community.
>

Mozilla already places requirements on S/MIME revocation:
https://github.com/mozilla/pkipolicy/blob/master/rootstore/policy.md#62-smime
- The only difference is that for Subscriber certificates, a timeline is
not yet attached.

The problem is that this is no different than if you issued a TLS-capable
S/MIME issuing CA, which, as we know from past incidents, many CAs did
exactly that, and had to revoke them due to the lack of appropriate audits.
Your Root is subject to the TLS policies, because that Root was enabled for
TLS, and so everything the Root issues is, to some extent, subject to those
policies.

The solution here for CAs has long been clear: maintaining separate
hierarchies, from the root onward, for separate purposes, if they
absolutely want to avoid any cohabitation of responsibilities. They *can*
continue on the current path, but they have to plan for the most
restrictive policy applying throughout that hierarchy and designing
accordingly. Continuing to support other use cases from a common root -
e.g. TLS client authentication, document signing, timestamping, etc -
unnecessarily introduces additional risk, and for limited concrete benefit,
either for users or for the CA. Having to maintain two separate "root"
certificates in a root store, one for each purpose, is no more complex than
having to maintain a single root trusted for two purposes; and
operationally, for the CA, it is vastly less complex.


RE: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-05 Thread Buschart, Rufus via dev-security-policy


> From: dev-security-policy  On 
> Behalf Of Ryan Sleevi via dev-security-policy
> On Sat, Jul 4, 2020 at 10:42 PM Peter Bowen via dev-security-policy < 
> dev-security-policy@lists.mozilla.org> wrote:
> 
> > As several others have indicated, WebPKI today is effectively a subset
> > of the more generic shared PKI. It is beyond time to fork the WebPKI
> > from the general PKI and strongly consider making WebPKI-only CAs that
> > are subordinate to the broader PKI; these WebPKI-only CAs can be
> > carried by default in public web browsers and operating systems, while
> > the broader general PKI roots can be added locally (using centrally
> > managed policies or local configuration) by those users who what a
> > superset of the WebPKI.
> >
> 
> +1.  This is the only outcome that, long term, balances the tradeoffs
> appropriately.

+1. Maybe a first step would be to write an RFC that explains how technical 
constraining based on EKU (and Certificate Policies) should work through the 
layers of a multi-tier PKI hierarchy. We have seen in this thread that 
different Application Software Suppliers have different ideas, sometimes not 
even consistent within a single application. I would be willing to support 
such an effort.

With best regards,
Rufus Buschart

Siemens AG
Siemens Operations
Information Technology
Value Center Core Services
SOP IT IN COR
Freyeslebenstr. 1
91058 Erlangen, Germany 
Tel.: +49 1522 2894134
mailto:rufus.busch...@siemens.com
www.twitter.com/siemens

www.siemens.com/ingenuityforlife

Siemens Aktiengesellschaft: Chairman of the Supervisory Board: Jim Hagemann 
Snabe; Managing Board: Joe Kaeser, Chairman, President and Chief Executive 
Officer; Roland Busch, Klaus Helmrich, Cedrik Neike, Ralf P. Thomas; Registered 
offices: Berlin and Munich, Germany; Commercial registries: Berlin 
Charlottenburg, HRB 12300, Munich, HRB 6684; WEEE-Reg.-No. DE 23691322


RE: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-05 Thread Buschart, Rufus via dev-security-policy
> From: dev-security-policy  On 
> Behalf Of Matt Palmer via dev-security-policy
> Sent: Sonntag, 5. Juli 2020 06:36
> 
> On Sat, Jul 04, 2020 at 07:42:12PM -0700, Peter Bowen wrote:
> > On Sat, Jul 4, 2020 at 7:12 PM Matt Palmer via dev-security-policy
> >  wrote:
> > >
> > > > On Sat, Jul 04, 2020 at 08:42:03AM -0700, Mark Arnott via 
> > > > dev-security-policy wrote:
> > > >
> > > > In the CIA triad Availability is as important as Confidentiality.
> > > > Has anyone done a threat model and a serious risk analysis to
> > > > determine what a reasonable risk mitigation strategy is?
> > >
> > > Did you do a threat model and a serious risk analysis before you
> > > chose to use the WebPKI in your application?
> >
> > I think it is important to keep in mind that many of the CA
> > certificates that were identified are constrained to not issue TLS
> > certificates.  The certificates they issue are explicitly excluded
> > from the Mozilla CA program requirements.
> 
> Yes, I'm aware of that.
> 
> > I don't think it is reasonable to assert that everyone impacted by
> > this should have been aware of the possibly of revocation
> 
> At the limits, I agree with you.  However, to whatever degree that there is 
> complaining to be done, it should be directed at the CA(s)
> which sold a product that, it is now clear, was not fit for whatever purpose 
> it has been put to, and not at Mozilla.

Let me quote from the NSS website of Mozilla 
(https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Overview):

  If you want to add support for SSL, S/MIME, or other Internet security 
standards to your application, you can use Network Security Services (NSS) to 
implement
  all your security features. NSS provides a complete open-source 
implementation of the crypto libraries used by AOL, Red Hat, Google, and other 
companies in a
  variety of products, including the following:
  * Mozilla products, including Firefox, Thunderbird, SeaMonkey, and Firefox OS.
  * [and a long list of additional reference applications]

It would probably be good if someone from the Mozilla team stepped in here, but 
S/MIME _is_ an advertised use case for NSS. And the Mozilla website says 
nowhere that the demands and rules of the WebPKI / CA/B Forum overrule all other 
demands. A consumer of S/MIME certificates simply does not expect them to 
become invalid within 7 days just because the BRGs for TLS certificates 
require it. This feels close to intrusive behavior by the WebPKI community.

With best regards,
Rufus Buschart



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Matt Palmer via dev-security-policy
On Sat, Jul 04, 2020 at 07:42:12PM -0700, Peter Bowen wrote:
> On Sat, Jul 4, 2020 at 7:12 PM Matt Palmer via dev-security-policy
>  wrote:
> >
> > On Sat, Jul 04, 2020 at 08:42:03AM -0700, Mark Arnott via 
> > dev-security-policy wrote:
> > > I was informed yesterday that I would have to replace just over 300
> > > certificates in 5 days because my CA is required by rules from the CA/B
> > > forum to revoke its subCA certificate.
> >
> > The possibility of such an occurrence should have been made clear in the
> > subscriber agreement with your CA.  If not, I encourage you to have a frank
> > discussion with your CA.
> >
> > > In the CIA triad Availability is as important as Confidentiality.  Has
> > > anyone done a threat model and a serious risk analysis to determine what a
> > > reasonable risk mitigation strategy is?
> >
> > Did you do a threat model and a serious risk analysis before you chose to
> > use the WebPKI in your application?
> 
> I think it is important to keep in mind that many of the CA
> certificates that were identified are constrained to not issue TLS
> certificates.  The certificates they issue are explicitly excluded
> from the Mozilla CA program requirements.

Yes, I'm aware of that.

> I don't think it is reasonable to assert that everyone impacted by
> this should have been aware of the possibly of revocation

At the limits, I agree with you.  However, to whatever degree that there is
complaining to be done, it should be directed at the CA(s) which sold a
product that, it is now clear, was not fit for whatever purpose it has been
put to, and not at Mozilla.

> it is completely permissible under all browser programs to issue
> end-entity certificates with infinite duration that guarantee that they
> will never be revoked, even in the case of full key compromise, as long as
> the certificate does not assert a key purpose in the EKU that is covered
> under the policy.  The odd thing in this case is that the subCA
> certificate itself is the certificate in question.

And a sufficiently[1] thorough threat modelling and risk analysis exercise
would have identified the hazard of a subCA certificate that needed to be
revoked, assessed the probability of that hazard occurring, and either
accepted the risk (and thus have no reasonable cause for complaint now), or
would have controlled the risk until it was acceptable.

That there are people cropping up now demanding that Mozilla do a risk
analysis for them indicates that they themselves didn't do the necessary
risk analysis beforehand, which pegs my irony meter.

I wonder how these Masters of Information Security have "threat modelled"
the possibility that their chosen CA might get unceremoniously removed from
trust stores.  Show us yer risk register!

- Matt

[1] one might also substitute "impossibly" for "sufficiently" here; I've
done enough "risk analysis" to know that trying to enumerate all possible
threats is an absurd notion.  The point I'm trying to get across is
that someone asking Mozilla to do what they can't is not the iron-clad,
be-all-and-end-all argument that some appear to believe it is.



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 10:42 PM Peter Bowen via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> As several others have indicated, WebPKI today is effectively a subset
> of the more generic shared PKI. It is beyond time to fork the WebPKI
> from the general PKI and strongly consider making WebPKI-only CAs that
> are subordinate to the broader PKI; these WebPKI-only CAs can be
> carried by default in public web browsers and operating systems, while
> the broader general PKI roots can be added locally (using centrally
> managed policies or local configuration) by those users who what a
> superset of the WebPKI.
>

+1.  This is the only outcome that, long term, balances the tradeoffs
appropriately.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Peter Bowen via dev-security-policy
On Sat, Jul 4, 2020 at 7:12 PM Matt Palmer via dev-security-policy
 wrote:
>
> On Sat, Jul 04, 2020 at 08:42:03AM -0700, Mark Arnott via dev-security-policy 
> wrote:
> > I was informed yesterday that I would have to replace just over 300
> > certificates in 5 days because my CA is required by rules from the CA/B
> > forum to revoke its subCA certificate.
>
> The possibility of such an occurrence should have been made clear in the
> subscriber agreement with your CA.  If not, I encourage you to have a frank
> discussion with your CA.
>
> > In the CIA triad Availability is as important as Confidentiality.  Has
> > anyone done a threat model and a serious risk analysis to determine what a
> > reasonable risk mitigation strategy is?
>
> Did you do a threat model and a serious risk analysis before you chose to
> use the WebPKI in your application?

I think it is important to keep in mind that many of the CA
certificates that were identified are constrained to not issue TLS
certificates.  The certificates they issue are explicitly excluded
from the Mozilla CA program requirements.

The issue at hand is caused by a lack of standardization of the
meaning of the Extended Key Usage certificate extension when included
in a CA-certificate.  This has resulted in some software developers
taking certain EKUs in CA-certificates to act as a constraint (similar
to Name Constraints), some to take it as the purpose for which the
public key may be used, and some to simultaneously take both
approaches - using the former for id-kp-serverAuth key purpose and the
latter for the id-kp-OCSPSigning key purpose.
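(The divergence described above can be made concrete. Below is a minimal sketch 
using the third-party Python `cryptography` library: one helper surfaces a 
certificate's EKU list, and a second applies the reading at issue in this 
thread, where a CA certificate that asserts id-kp-OCSPSigning is treated as a 
delegated OCSP responder. The check is deliberately simplified - it ignores 
pkix-ocsp-nocheck, audit status, and chain context - and all names are 
illustrative.)

```python
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID


def eku_oids(cert: x509.Certificate):
    """Return the list of EKU OIDs asserted by `cert`, or None if absent."""
    try:
        ext = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage)
    except x509.ExtensionNotFound:
        return None
    return list(ext.value)


def is_delegated_ocsp_responder(cert: x509.Certificate) -> bool:
    """True if a CA certificate also asserts id-kp-OCSPSigning.

    Reading 1 treats the EKU of a CA certificate as a constraint on what
    its subordinates may do; reading 2 treats it as a purpose of the CA's
    own key. Under reading 2, this certificate can sign OCSP responses.
    """
    try:
        bc = cert.extensions.get_extension_for_class(x509.BasicConstraints).value
    except x509.ExtensionNotFound:
        return False
    return bc.ca and ExtendedKeyUsageOID.OCSP_SIGNING in (eku_oids(cert) or [])
```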

I don't think it is reasonable to assert that everyone impacted by
this should have been aware of the possibility of revocation - it is
completely permissible under all browser programs to issue end-entity
certificates with infinite duration that guarantee that they will
never be revoked, even in the case of full key compromise, as long as
the certificate does not assert a key purpose in the EKU that is
covered under the policy.  The odd thing in this case is that the
subCA certificate itself is the certificate in question.

As several others have indicated, WebPKI today is effectively a subset
of the more generic shared PKI. It is beyond time to fork the WebPKI
from the general PKI and strongly consider making WebPKI-only CAs that
are subordinate to the broader PKI; these WebPKI-only CAs can be
carried by default in public web browsers and operating systems, while
the broader general PKI roots can be added locally (using centrally
managed policies or local configuration) by those users who want a
superset of the WebPKI.
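(A minimal sketch of that "added locally" step, using Python's standard `ssl` 
module: the client anchors trust solely at an explicitly configured private 
root rather than the platform's default WebPKI store. The file path and host 
name below are hypothetical.)

```python
import ssl


def make_private_context(cafile: str) -> ssl.SSLContext:
    """Build a TLS client context anchored only at a locally managed root."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # PROTOCOL_TLS_CLIENT enables CERT_REQUIRED and hostname checking by
    # default; load_verify_locations replaces the WebPKI default store with
    # the organization's own root.
    ctx.load_verify_locations(cafile=cafile)
    return ctx


# Hypothetical usage against an internal (non-Web) endpoint:
# import socket
# ctx = make_private_context("/etc/myorg/private-root.pem")
# with socket.create_connection(("scada.internal.example", 8443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="scada.internal.example") as tls:
#         print(tls.version())
```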

Thanks,
Peter


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Matt Palmer via dev-security-policy
On Sat, Jul 04, 2020 at 08:42:03AM -0700, Mark Arnott via dev-security-policy 
wrote:
> I was informed yesterday that I would have to replace just over 300
> certificates in 5 days because my CA is required by rules from the CA/B
> forum to revoke its subCA certificate.

The possibility of such an occurrence should have been made clear in the
subscriber agreement with your CA.  If not, I encourage you to have a frank
discussion with your CA.

> In the CIA triad Availability is as important as Confidentiality.  Has
> anyone done a threat model and a serious risk analysis to determine what a
> reasonable risk mitigation strategy is?

Did you do a threat model and a serious risk analysis before you chose to
use the WebPKI in your application?

- Matt



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Matt Palmer via dev-security-policy
On Sat, Jul 04, 2020 at 12:51:32PM -0700, Mark Arnott via dev-security-policy 
wrote:
> I think that the lack of fairness comes from the fact that the CA/B forum
> only represents the view points of two interests - the CAs and the Browser
> vendors.  Who represents the interests of industries and end users? 
> Nobody.

CAs claim that they represent what I assume you mean by "industries" (that
is, the entities to which WebPKI certificates are issued).  If you're
unhappy with the way which your interests are being represented by your CA,
I would encourage you to speak with them.  Alternately, anyone can become an
"Interested Party" within the CA/B Forum, which a brief perusal of the CA/B
Forum website will make clear.

- Matt



Re: [FORGED] Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 9:41 PM Peter Gutmann 
wrote:

> Ryan Sleevi  writes:
>
> >And they are accomodated - by using something other than the Web PKI.
>
> That's the HTTP/2 "let them eat cake" response again.  For all intents and
> purposes, PKI *is* the Web PKI.  If it wasn't, people wouldn't be worrying
> about having to reissue/replace certificates that will never be used in a
> web
> context because of some Web PKI requirement that doesn't apply to them.
>

Thanks Peter, but I fail to see how you're making your point.

The problem that "PKI *is* the Web PKI" is the problem to be solved. That's
not a desirable outcome, and exactly the kind of thing we'd expect to see
as part of a CA transition.

PKI is a technology, much like HTTP/2 is a protocol. Unlike your example,
of HTTP/2 not being considerate of SCADA devices, PKI is an abstract
technology fully capable of addressing the SCADA needs. The only
distinction is that, by design and rather intentionally, it doesn't mean
that the billions of devices out there, in their default configuration, can
or should expect to talk to SCADA servers. I would hope you recognize why
that's undesirable, just like it would be if your phone were to ship with a
SCADA client. At the end of the day, this is something that should require
a degree of intentionality. Whether it's HL7 or SCADA, these are limited
use cases that aren't part of a generic and interoperable Web experience,
and it's not at all unreasonable to think they may require additional,
explicit configuration to support.


Re: [FORGED] Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Peter Gutmann via dev-security-policy
Ryan Sleevi  writes:

>And they are accomodated - by using something other than the Web PKI.

That's the HTTP/2 "let them eat cake" response again.  For all intents and
purposes, PKI *is* the Web PKI.  If it wasn't, people wouldn't be worrying
about having to reissue/replace certificates that will never be used in a web
context because of some Web PKI requirement that doesn't apply to them.

Peter.





 


Re: [FORGED] Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 9:21 PM Peter Gutmann 
wrote:

> So the problem isn't "everyone should do what the Web PKI wants, no matter
> how
> inappropriate it is in their environment", it's "CAs (and protocol
> designers)
> need to acknowledge that something other than the web exists and
> accommodate
> it".


And they are accommodated - by using something other than the Web PKI.

Your examples of SCADA are apt: there's absolutely no reason to assume a
default phone device, for example, should be able to manage a SCADA device.
Of course we'd laugh at that and say "Oh god, who would do something that
stupid?"

Yet that's what happens when one tries to make a one-size-fits-all PKI.

Of course the PKI technologies accommodate these scenarios: you use locally
trusted anchors, specific to your environment, and hope that the OS vendor
does not remove support for your use case in a subsequent update. Yet it
would be grossly negligent if we allowed SCADA, in your example, to hold
back the evolution of the Web. As you yourself note, it's something other
than the Web. And it can use its own PKI.


Re: [FORGED] Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Peter Gutmann via dev-security-policy
Eric Mill via dev-security-policy  
writes:

>This is a clear, straightforward statement of perhaps the single biggest core
>issue that limits the agility and security of the Web PKI

That's not the biggest issue by a long shot.  The biggest issue is that the
public PKI (meaning public/commercial CAs, not sure what the best collective
noun for that is) assumes that the only possible use for certificates is the
web.  For all intents and purposes, public PKI = Web PKI.  For example for
embedded systems, SCADA devices, anything on an RFC 1918 LAN, and much more,
the only appropriate expiry date for a certificate is never.  However, since
the Web PKI has decided that certificates should constantly expire because
$reasons, everything that isn't the web has to deal with this, or more usually
suffer under it.

The same goes for protocols like HTTP and TLS, the current versions (HTTP/2 /3
and TLS 1.3) are designed for efficient content delivery by large web service
providers above everything else.  When some SCADA folks requested a few minor
changes to the SCADA-hostile HTTP/2 from the WG, not mandatory but just
negotiable options to make it more usable in a SCADA environment, the response
was "let them eat HTTP/1.1".  In other words they'd explicitly forked HTTP,
there was HTTP/2 for the web and HTTP/1.1 for the rest of them.

So the problem isn't "everyone should do what the Web PKI wants, no matter how
inappropriate it is in their environment", it's "CAs (and protocol designers)
need to acknowledge that something other than the web exists and accommodate
it".

Peter.


RE: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Buschart, Rufus via dev-security-policy

From: Eric Mill 
Sent: Sonntag, 5. Juli 2020 00:55
To: Buschart, Rufus (SOP IT IN COR) 
Cc: mozilla-dev-security-policy 
; r...@sleevi.com; 
mark.arno...@gmail.com
Subject: Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous 
Delegated Responder Cert


On Sat, Jul 4, 2020 at 3:15 PM Buschart, Rufus via dev-security-policy 
<dev-security-policy@lists.mozilla.org> wrote:
...especially since many of those millions of certificates are not even TLS 
certificates and their consumers never expected the hard revocation deadlines 
of the BRGs to be of any relevance for them. And therefore they didn't design 
their infrastructure to be able to do an automated mass-certificate exchange.

This is a clear, straightforward statement of perhaps the single biggest core 
issue that limits the agility and security of the Web PKI: certificate 
customers (particularly large enterprises) don't seem to actually expect they 
may have to revoke many certificates on short notice, despite it being 
extremely realistic that they may need to do so. We're many years into the Web 
PKI now, and there have been multiple mass-revocation events along the way. At 
some point, these expectations have to change and result in redesigns that 
match them.

[>] Maybe I wasn’t able to get my message across: those 700k certificates that 
are hurting us most have never been “WebPKI” certificates. They are from 
technically constrained issuing CAs that are limited to S/MIME and client 
authentication. We are just ‘collateral damage’ from a compliance point of view 
(though of course not from a security point of view). In the upcoming BRGs for 
S/MIME I hope that the technical differences between TLS certificates (nearly 
all stored as P12 files on on-line servers) and S/MIME certificates (many of 
them stored off-line on smart cards or other tokens) will also be reflected in 
the revocation requirements. For WebPKI (aka TLS) certificates, we are getting 
better based on the lessons learned from the last mass exchanges.

It's extremely convenient and cost-effective for organizations to rely on the 
WebPKI for non-public-web needs, and given that the WebPKI is still 
(relatively) more agile than a lot of private PKIs, there will likely continue 
to be security advantages for organizations that do so. But if the security and 
agility needs of the WebPKI don't align with an organization's needs, using an 
alternate PKI is a reasonable solution that reduces friction on both sides of 
the equation.

[>] But we are talking in S/MIME also about “public needs”: It’s about the 
interchange of signed and encrypted emails between different entities that 
don’t share a private PKI.

--
Eric Mill
617-314-0966 | konklone.com | @konklone



With best regards,
Rufus Buschart




Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Matthew Hardeman via dev-security-policy
Just chiming in as another subscriber and relying party, with a view to
speaking to the other subscribers on this topic.

To the extent that your use case is not specifically the WebPKI as pertains
to modern browsers, it was clear to me quite several years ago and gets
clearer every day: the WebPKI is not for you, us, or anyone outside that
very particular scope.

Want to pin server cert public keys in an app?  Have a separate TLS
endpoint for that with an industry or org specific private PKI behind it.

Make website endpoints that need to face broad swathes of public users’ web
browsers participate in the WebPKI.  Get client certs and API endpoints out
of it.

That was the takeaway I had quite some years ago and I’ve been saved much
grief for having moved that way.

On Saturday, July 4, 2020, Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Sat, Jul 4, 2020 at 5:32 PM Mark Arnott via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > Why aren't we hearing more from the 14 CAs that this affects?  Correct me
> > if I am wrong, but the CA/B Forum has something like 23 members??  An
> issue
> > that affects 14 CAs indicates a problem with the way the forum
> collaborates
> > (or should I say 'fails to work together')  Maybe this incident should
> have
> > followed a responsible disclosure process and not been fully disclosed
> > right before holidays in several nations.
>
>
> This was something disclosed 6 months ago and 6 years ago. This is not
> something “new”. The disclosure here is precisely because CAs failed, when
> engaged privately, to understand both the compliance failure and the
> security risk.
>
> Unfortunately, debates about “responsible” disclosure have existed for as
> long as computer security has been an area of focus; the term itself was
> introduced as a way of having the language favor the vendor, not the
> reporter. We have a security risk introduced by a compliance failure, which
> has been publicly known for some time, and which some CAs have dismissed as
> not an issue. Transparency is an essential part of bringing attention and
> understanding. This is, in effect, a “20-year day”. It’s not some new
> surprise.
>
> Even if disclosed privately, the CAs would still be under the same 7 day
> timeline. The mere act of disclosure triggers this obligation, whether
> private or public. That’s what the BRs obligate CAs to do.
>
>
> > Thank you for explaining that.  We need to hear the official position
> from
> > Google.  Ryan Hurst are you out there?
>
>
> Just to be clear: Ryan Hurst does not represent Google/Chrome’s decisions
> on certificates. He represents the CA, which is accountable to
> Google/Chrome just as it is to Mozilla/Firefox or Apple/Safari.
>
> In the past, and when speaking on behalf of Google/Chrome, it’s been
> repeatedly emphasized: Google/Chrome does not grant exceptions to the
> Baseline Requirements. In no uncertain terms, Google/Chrome does not give
> CAs blank checks to ignore violations of the Baseline Requirements.
>
> Ben’s message, while seeming somewhat self-contradictory in messaging,
> similarly reflects Mozilla’s long-standing position that it does not grant
> exceptions to the BRs. They treat violations as incidents, as Ben’s message
> emphasized, including the failure to revoke, and as Peter highlighted, both
> Google and Mozilla work through a public post-mortem process that seeks to
> understand the facts and nature of the CA’s violations and how the
> underlying systemic issues are being addressed. If a CA demonstrates poor
> judgement in handling these incidents, they may be distrusted, as some have
> in the past. However, CAs that demonstrate good judgement and demonstrate
> clear plans for improvement are given the opportunity to do so.
> Unfortunately, because some CAs think that the exact same plan should work
> for everyone, it’s necessary to repeatedly make it clear that there are no
> exceptions, and that each situation is case-by-case.
>
> This is not a Google/Mozilla issue either: as Mozilla reminds CAs at
> https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation , delayed
> revocation issues affect everyone, and CAs need to directly communicate
> with all of the root programs that they have made representations to.
> WISeKey knows how to do this, but they also know what the expectation and
> response will be, which is aligned with the above.
>
> Some CAs have had a string of failures around matters like this, and thus
> know that they’re at risk of being seen as CAs that don’t take
> security seriously, which may lead to distrust. Other CAs recognize that
> security, while painful, is also a competitive advantage, and so look to be
> leaders in an industry of followers and do the right thing, especially when
> this leadership can help ensure greater flexibility if/when they do have an
> incident. Other CAs may be in uniquely difficult positions where they

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Eric Mill via dev-security-policy
On Sat, Jul 4, 2020 at 3:15 PM Buschart, Rufus via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> ...especially since many of those millions of certificates are not even
> TLS certificates and their consumers never expected the hard revocation
> deadlines of the BRGs to be of any relevance for them. And therefore they
> didn't design their infrastructure to be able to do an automated
> mass-certificate exchange.
>

This is a clear, straightforward statement of perhaps the single biggest
core issue that limits the agility and security of the Web PKI: certificate
customers (particularly large enterprises) don't seem to actually expect
they may have to revoke many certificates on short notice, despite it being
extremely realistic that they may need to do so. We're many years into the
Web PKI now, and there have been multiple mass-revocation events along the
way. At some point, these expectations have to change and result in
redesigns that match them.

As Ryan [Sleevi] said, neither Mozilla nor Google employ some binary
unthinking process where either all the certs are revoked or all the CAs
who don't do it are instantly cut loose. If a CA makes a decision to not
revoke, citing systemic barriers to meeting the security needs of the
WebPKI that end users rely on, their incident reports are expected to
describe how the CA will work towards systemic solutions to those barriers
- to project a persuasive vision of why these sorts of events will not
result in a painful crucible going forward.

It's extremely convenient and cost-effective for organizations to rely on
the WebPKI for non-public-web needs, and given that the WebPKI is still
(relatively) more agile than a lot of private PKIs, there will likely
continue to be security advantages for organizations that do so. But if the
security and agility needs of the WebPKI don't align with an organization's
needs, using an alternate PKI is a reasonable solution that reduces
friction on both sides of the equation.

-- 
Eric Mill
617-314-0966 | konklone.com | @konklone 


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 5:32 PM Mark Arnott via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Why aren't we hearing more from the 14 CAs that this affects?  Correct me
> if I am wrong, but the CA/B Forum has something like 23 members??  An issue
> that affects 14 CAs indicates a problem with the way the forum collaborates
> (or should I say 'fails to work together')  Maybe this incident should have
> followed a responsible disclosure process and not been fully disclosed
> right before holidays in several nations.


This was something disclosed 6 months ago and 6 years ago. This is not
something “new”. The disclosure here is precisely because CAs failed, when
engaged privately, to understand both the compliance failure and the
security risk.

Unfortunately, debates about “responsible” disclosure have existed for as
long as computer security has been an area of focus; the term itself was
introduced as a way of having the language favor the vendor, not the
reporter. We have a security risk introduced by a compliance failure, which
has been publicly known for some time, and which some CAs have dismissed as
not an issue. Transparency is an essential part of bringing attention and
understanding. This is, in effect, a “20-year day”. It’s not some new
surprise.

Even if disclosed privately, the CAs would still be under the same 7 day
timeline. The mere act of disclosure triggers this obligation, whether
private or public. That’s what the BRs obligate CAs to do.


> Thank you for explaining that.  We need to hear the official position from
> Google.  Ryan Hurst are you out there?


Just to be clear: Ryan Hurst does not represent Google/Chrome’s decisions
on certificates. He represents the CA, which is accountable to
Google/Chrome just as it is to Mozilla/Firefox or Apple/Safari.

In the past, and when speaking on behalf of Google/Chrome, it’s been
repeatedly emphasized: Google/Chrome does not grant exceptions to the
Baseline Requirements. In no uncertain terms, Google/Chrome does not give
CAs blank checks to ignore violations of the Baseline Requirements.

Ben’s message, while seeming somewhat self-contradictory in messaging,
similarly reflects Mozilla’s long-standing position that it does not grant
exceptions to the BRs. They treat violations as incidents, as Ben’s message
emphasized, including the failure to revoke, and as Peter highlighted, both
Google and Mozilla work through a public post-mortem process that seeks to
understand the facts and nature of the CA’s violations and how the
underlying systemic issues are being addressed. If a CA demonstrates poor
judgement in handling these incidents, they may be distrusted, as some have
in the past. However, CAs that demonstrate good judgement and demonstrate
clear plans for improvement are given the opportunity to do so.
Unfortunately, because some CAs think that the exact same plan should work
for everyone, it’s necessary to repeatedly make it clear that there are no
exceptions, and that each situation is case-by-case.

This is not a Google/Mozilla issue either: as Mozilla reminds CAs at
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation , delayed
revocation issues affect everyone, and CAs need to directly communicate
with all of the root programs that they have made representations to.
WISeKey knows how to do this, but they also know what the expectation and
response will be, which is aligned with the above.

Some CAs have had a string of failures around matters like this, and thus
know that they’re at risk of being seen as CAs that don’t take
security seriously, which may lead to distrust. Other CAs recognize that
security, while painful, is also a competitive advantage, and so look to be
leaders in an industry of followers and do the right thing, especially when
this leadership can help ensure greater flexibility if/when they do have an
incident. Other CAs may be in uniquely difficult positions where they see
greater harm resulting, due to specific decisions made in the past that
were not properly thought through: but the burden falls to them to
demonstrate that uniqueness, that burden, and both what steps the CA is
taking to mitigate that risk **and the cost to the ecosystem** and what
steps they’re taking to prevent that going forward. Each CA is different
here, which is why blanket statements aren’t a one-size-fits-all solution.

I’m fully aware there are some CAs who are simply not prepared to rotate
intermediates within a week, despite them promising they were capable of
doing so. Those CAs need to have a plan to establish that capability, they
need to truly make sure this is exceptional and not just a continuance of a
pattern of problematic behavior, and they need to be transparent about all
of this. That’s consistent with all of my messages to date, and consistent
with Ben’s message regarding Mozilla’s expectations. They are different
ways of saying the same thing: you can’t sweep this under the rug, you 

RE: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Buschart, Rufus via dev-security-policy
Dear Mark!

> -----Original Message-----
> From: dev-security-policy  On 
> Behalf Of Ryan Sleevi via dev-security-policy
> Sent: Samstag, 4. Juli 2020 20:06
> 
> On Sat, Jul 4, 2020 at 12:52 PM mark.arnott1--- via dev-security-policy < 
> dev-security-policy@lists.mozilla.org> wrote:
> 
> > This is insane!
> > Those 300 certificates are used to secure healthcare information
> > systems at a time when the global healthcare system is strained by a
> > global pandemic. 

Thank you for bringing in your perspective as a certificate consumer. We at 
Siemens - as a certificate consumer - also have ~700k affected personal 
S/MIME certificates out in the field, all of them stored on smart cards (+ code 
signing and TLS certificates ...). You can imagine, that rekeying them on short 
notice would be a total nightmare.

> To be clear; "the issue" we're talking about is only truly 'solved' by the 
> rotation and key destruction. Anything else, besides that, is just
> a risk calculation, and the CA is responsible for balancing that. Peter's 
> highlighting how the fix for the *compliance* issued doesn't fix
> the *security* issue, as other CAs, like DigiCert, have also noted.

Currently, I'm not convinced, that the underlying security issue (whose 
implication I of course fully understand and don't want to downplay) can only 
be fixed by revoking the issuing CAs and destructing the old keys. Sadly, all 
the brilliant minds on this mailing list are discussing compliance issues and 
the interpretation of RFCs, BRGs and 15-year-old Microsoft announcements, but 
it seems nobody is trying to find (or at least publicly discuss) a solution 
that can solve the security issue, is BRG / RFC compliant and doesn't require 
the replacement of millions of certificates - especially since many of those 
millions of certificates are not even TLS certificates and their consumers 
never expected the hard revocation deadlines of the BRGs to be of any relevance 
for them. And therefore they didn't design their infrastructure to be able to 
do an automated mass-certificate exchange.
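
(For concreteness: the "dangerous delegated responder" profile under discussion 
is an intermediate whose certificate asserts both CA:TRUE and the 
id-kp-OCSPSigning EKU. A minimal detection sketch, assuming the Python 
`cryptography` package -- the helper name is illustrative, not something 
defined in this thread:)

```python
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID, ExtensionOID

def is_dangerous_delegated_responder(cert: x509.Certificate) -> bool:
    """True for a CA certificate that also asserts id-kp-OCSPSigning."""
    try:
        bc = cert.extensions.get_extension_for_oid(
            ExtensionOID.BASIC_CONSTRAINTS).value
        eku = cert.extensions.get_extension_for_oid(
            ExtensionOID.EXTENDED_KEY_USAGE).value
    except x509.ExtensionNotFound:
        # Missing basicConstraints or EKU: not the profile in question.
        return False
    return bc.ca and ExtendedKeyUsageOID.OCSP_SIGNING in eku
```

Running a check like this over a hierarchy's intermediates (loaded with 
x509.load_pem_x509_certificate) is one way a relying party could enumerate 
affected certificates.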

With best regards,
Rufus Buschart

Siemens AG
Siemens Operations
Information Technology
Value Center Core Services
SOP IT IN COR
Freyeslebenstr. 1
91058 Erlangen, Germany 
Tel.: +49 1522 2894134
mailto:rufus.busch...@siemens.com
www.twitter.com/siemens

www.siemens.com/ingenuityforlife

Siemens Aktiengesellschaft: Chairman of the Supervisory Board: Jim Hagemann 
Snabe; Managing Board: Joe Kaeser, Chairman, President and Chief Executive 
Officer; Roland Busch, Klaus Helmrich, Cedrik Neike, Ralf P. Thomas; Registered 
offices: Berlin and Munich, Germany; Commercial registries: Berlin 
Charlottenburg, HRB 12300, Munich, HRB 6684; WEEE-Reg.-No. DE 23691322


Re: Key-destruction audit web-trust vs. ETSI (RE: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert)

2020-07-04 Thread Ryan Sleevi via dev-security-policy
Indeed, you’re welcome to do so, but I also don’t think these are easily
adjusted for or corrected. ETSI ESI is trying to solve a different need and
use case, and its structure and design reflect that. They are trying

And that’s ok! There’s nothing inherently wrong with that. They are trying
to develop a set of standards suitable for their community of users, which
generally are government regulators. As browsers, we have different needs
and expectations, reflecting the different trust frameworks. This is why I
stand by my assertion that it’s almost certainly better to move off ETSI
ESI, having spent a number of years trying, and failing, to highlight the
areas of critical concern and importance.

If the ETSI ESI liaisons have not been communicating the risk, clearly
communicated for several years, that a failure to address these will
ultimately lead to market rejection of using such audits as the basis for
browsers’ trust frameworks, I can only say that highlights an ongoing
systemic failure for said liaisons to inform both communities about
developments. If the answer is “as you know, it takes time, we have many
members” (as the response to these concerns frequently is answered), well,
it’s taking too long and it’s time to move on.

Luckily, audits are something that, like many other compliance or
contracting schemes, don’t inherently conflict. An approach that has a CA
getting a WebTrust audit to satisfy browser needs and, if appropriate, an
ETSI ESI to satisfy others’ needs, doesn’t seem an unreasonable thing. I
can understand why it may not be desirable for a CA, but the goal is to
make sure browsers have the assurance they need.

On Sat, Jul 4, 2020 at 5:29 PM Buschart, Rufus 
wrote:

> Thank you Ryan for spending your 4th of July weekend answering my
> questions! From my purely technical understanding, without knowing too much
> about the history in the discussion between the ETSI community and you nor
> about the “Überbau” of the audit schemes, I would believe that most of the
> points you mentioned could be easily fixed, especially since they don’t
> seem to be unreasonable. Of course, I can’t speak for ETSI but since
> Siemens is a long-standing member of ETSI I’ll forward your email to the
> correct working group and try to make sure that you will receive a
> constructive answer.
>
>
>
> With best regards,
> Rufus Buschart
>
> Siemens AG
> Siemens Operations
> Information Technology
> Value Center Core Services
> SOP IT IN COR
> Freyeslebenstr. 1
> 91058 Erlangen, Germany
> Tel.: +49 1522 2894134
> mailto:rufus.busch...@siemens.com 
> www.twitter.com/siemens
> www.siemens.com/ingenuityforlife <https://siemens.com/ingenuityforlife>
>
>
> Siemens Aktiengesellschaft: Chairman of the Supervisory Board: Jim
> Hagemann Snabe; Managing Board: Joe Kaeser, Chairman, President and Chief
> Executive Officer; Roland Busch, Klaus Helmrich, Cedrik Neike, Ralf P.
> Thomas; Registered offices: Berlin and Munich, Germany; Commercial
> registries: Berlin Charlottenburg, HRB 12300, Munich, HRB 6684;
> WEEE-Reg.-No. DE 23691322
>
> *From:* Ryan Sleevi 
> *Sent:* Samstag, 4. Juli 2020 16:37
> *To:* Buschart, Rufus (SOP IT IN COR) 
> *Cc:* Peter Bowen ;
> mozilla-dev-security-pol...@lists.mozilla.org; r...@sleevi.com
> *Subject:* Re: Key-destruction audit web-trust vs. ETSI (RE: SECURITY
> RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder
> Cert)
>
>
>
>
>
>
>
> On Sat, Jul 4, 2020 at 9:17 AM Buschart, Rufus 
> wrote:
>
> Dear Ryan!
>
> > From: dev-security-policy 
> On Behalf Of Ryan Sleevi via dev-security-policy
> > Sent: Freitag, 3. Juli 2020 23:30
> > To: Peter Bowen 
> > Cc: Ryan Sleevi ; Pedro Fuentes ;
> mozilla-dev-security-pol...@lists.mozilla.org
> > Subject: Re: SECURITY RELEVANT FOR CAs: The curious case of the
> Dangerous Delegated Responder Cert
> >
> > On Fri, Jul 3, 2020 at 4:19 PM Peter Bowen  wrote:
> > > I agree that we cannot make blanket statements that apply to all CAs,
> > > but these are some examples where it seems like there are alternatives
> > > to key destruction.
> > >
> >
> > Right, and I want to acknowledge, there are some potentially viable
> paths specific to WebTrust, for which I have no faith with respect
> > to ETSI precisely because of the nature and design of ETSI audits, that,
> in an ideal world, could provide the assurance desired.
>
> Could you elaborate a little bit further, why you don't have "faith in
> respect to ETSI"? 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Mark Arnott via dev-security-policy
On Saturday, July 4, 2020 at 3:01:34 PM UTC-4, Peter Bowen wrote:
> On Sat, Jul 4, 2020 at 11:06 AM Ryan Sleevi via dev-security-policy
>  wrote:

> One of the challenges is that not everyone in the WebPKI ecosystem has
> aligned around the same view of incidents as learning opportunities.
> This makes it very challenging for CAs to find a path that suits all
> participants and frequently results in hesitancy to use the blameless
> post-mortem version of incidents.
> 
Why aren't we hearing more from the 14 CAs that this affects?  Correct me if I 
am wrong, but the CA/B Forum has something like 23 members??  An issue that 
affects 14 CAs indicates a problem with the way the forum collaborates (or 
should I say 'fails to work together')  Maybe this incident should have 
followed a responsible disclosure process and not been fully disclosed right 
before holidays in several nations.

> To clarify what Ryan is saying here: he is pointing out that he is not
> representing the position of Google or Alphabet, rather he is stating
> he is acting as an independent party.

> As you can see from earlier messages, Mozilla has clearly stated that
> they are NOT requiring revocation in 7 days in this case, as they
> judge the risk from revocation greater than the risks from not
> revoking on that same timeframe. Ben Wilson, who does represent
> Mozilla, stated:

> If Google were to officially state something similar to Mozilla, then
> this thread would likely resolve itself quickly.  Yes, there are other
> trust stores to deal with, but they have historically not engaged in
> this Mozilla forum, so discussion here is not helpful for them.

Thank you for explaining that.  We need to hear the official position from 
Google.  Ryan Hurst are you out there?
 
> For the future, HL7 probably would be well served by working to create
> a separate PKI that meets their needs.  This would enable a different
> risk calculation to be used - one that is specific to the HL7 health
> data interoperability world.  I don't know if you or your organization
> has influence in HL7, but it is something worth pushing if you can.

This has been discussed in the past and abandoned, but this incident will 
probably restart that discussion.



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Mark Arnott via dev-security-policy
On Saturday, July 4, 2020 at 2:06:53 PM UTC-4, Ryan Sleevi wrote:
> On Sat, Jul 4, 2020 at 12:52 PM mark.arnott1--- via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
> 
> As part of this, you should re-evaluate certificate pinning. As one of the
> authors of that specification, and indeed, my co-authors on the
> specification agree, certificate pinning does more harm than good, for
> precisely this reason.
> 
I agree that certificate pinning is a bad practice, but it is not a decision 
that I made or that I can reverse quickly.  It will take time to convince 
several different actors that this needs to change.
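
(For context on why pinning and mass replacement interact so badly: an 
HPKP-style pin is simply the base64-encoded SHA-256 of the certificate's 
DER-encoded SubjectPublicKeyInfo, so any rekey -- which the key destruction 
discussed in this thread requires -- invalidates every deployed pin until 
clients are updated. A sketch, assuming the Python `cryptography` package; 
the helper name is hypothetical:)

```python
import base64
import hashlib

from cryptography import x509
from cryptography.hazmat.primitives.serialization import (
    Encoding, PublicFormat)

def spki_pin(cert: x509.Certificate) -> str:
    """HPKP-style pin: base64(SHA-256(DER SubjectPublicKeyInfo))."""
    spki = cert.public_key().public_bytes(
        Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    return base64.b64encode(hashlib.sha256(spki).digest()).decode("ascii")
```

Because the pin covers the key rather than the certificate, reissuance under 
the same key keeps old pins valid -- but rotation to a fresh key, as a 
revoke-and-destroy remediation demands, breaks every client that pinned the 
old value.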

> I realize you're new here, and so I encourage you to read
> https://wiki.mozilla.org/CA/Policy_Participants for context about the
> nature of participation.

Thank you for helping me understand who the participants in this discussion are 
and what roles they fill.

> I'm very familiar with the implications of applying these rules, both
> personally and professionally. This is why policies such as
> https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation exist.
> This is where such information is shared, gathered, and considered, as
> provided by the CA. It is up to the CA to demonstrate the balance of
> equities, but also to ensure that going forward, they actually adhere to
> the rules they agreed to as a condition of trust. Simply throwing out
> agreements and contractual obligations when it's inconvenient,
> *especially *when
> these were scenarios contemplated when they were written and CAs
> acknowledged they would take steps to ensure they're followed, isn't a
> fair, equitable, or secure system.

I think that the lack of fairness comes from the fact that the CA/B Forum only 
represents the viewpoints of two interests - the CAs and the browser vendors.  
Who represents the interests of industries and end users?  Nobody.




RE: Key-destruction audit web-trust vs. ETSI (RE: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert)

2020-07-04 Thread Buschart, Rufus via dev-security-policy
Thank you Ryan for spending your 4th of July weekend answering my questions! 
From my purely technical understanding, without knowing too much about the 
history in the discussion between the ETSI community and you nor about the 
“Überbau” of the audit schemes, I would believe that most of the points you 
mentioned could be easily fixed, especially since they don’t seem to be 
unreasonable. Of course, I can’t speak for ETSI but since Siemens is a 
long-standing member of ETSI I’ll forward your email to the correct working 
group and try to make sure that you will receive a constructive answer.

With best regards,
Rufus Buschart

Siemens AG
Siemens Operations
Information Technology
Value Center Core Services
SOP IT IN COR
Freyeslebenstr. 1
91058 Erlangen, Germany
Tel.: +49 1522 2894134
mailto:rufus.busch...@siemens.com
www.twitter.com/siemens<http://www.twitter.com/siemens>
www.siemens.com/ingenuityforlife<https://siemens.com/ingenuityforlife>
Siemens Aktiengesellschaft: Chairman of the Supervisory Board: Jim Hagemann 
Snabe; Managing Board: Joe Kaeser, Chairman, President and Chief Executive 
Officer; Roland Busch, Klaus Helmrich, Cedrik Neike, Ralf P. Thomas; Registered 
offices: Berlin and Munich, Germany; Commercial registries: Berlin 
Charlottenburg, HRB 12300, Munich, HRB 6684; WEEE-Reg.-No. DE 23691322

From: Ryan Sleevi 
Sent: Samstag, 4. Juli 2020 16:37
To: Buschart, Rufus (SOP IT IN COR) 
Cc: Peter Bowen ; 
mozilla-dev-security-pol...@lists.mozilla.org; r...@sleevi.com
Subject: Re: Key-destruction audit web-trust vs. ETSI (RE: SECURITY RELEVANT 
FOR CAs: The curious case of the Dangerous Delegated Responder Cert)



On Sat, Jul 4, 2020 at 9:17 AM Buschart, Rufus 
mailto:rufus.busch...@siemens.com>> wrote:
Dear Ryan!

> From: dev-security-policy 
> mailto:dev-security-policy-boun...@lists.mozilla.org>>
>  On Behalf Of Ryan Sleevi via dev-security-policy
> Sent: Freitag, 3. Juli 2020 23:30
> To: Peter Bowen mailto:pzbo...@gmail.com>>
> Cc: Ryan Sleevi mailto:r...@sleevi.com>>; Pedro Fuentes 
> mailto:pfuente...@gmail.com>>; 
> mozilla-dev-security-pol...@lists.mozilla.org<mailto:mozilla-dev-security-pol...@lists.mozilla.org>
> Subject: Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous 
> Delegated Responder Cert
>
> On Fri, Jul 3, 2020 at 4:19 PM Peter Bowen 
> mailto:pzbo...@gmail.com>> wrote:
> > I agree that we cannot make blanket statements that apply to all CAs,
> > but these are some examples where it seems like there are alternatives
> > to key destruction.
> >
>
> Right, and I want to acknowledge, there are some potentially viable paths 
> specific to WebTrust, for which I have no faith with respect
> to ETSI precisely because of the nature and design of ETSI audits, that, in 
> an ideal world, could provide the assurance desired.

Could you elaborate a little bit further, why you don't have "faith in respect 
to ETSI"? I have to admit, I never totally understood your concerns with ETSI 
audits because a simple comparison between WebTrust test requirements and ETSI 
test requirements don't show a lot of differences. If requirements are missing, 
we should discuss them with ETSI representatives to have them included in one 
of the next updates.

ETSI ESI members, especially the vice chairs, often like to make this claim of 
“simple comparison”, but that fails to take into account the holistic picture 
of how the audits are designed and operated, and the goals they aim to achieve.

For example, you will find nothing to the detail of say the AICPA Professional 
Standards (AT-C) to provide insight into the obligations about how the audit is 
performed, methodological requirements such as sampling design, professional 
obligations regarding statements being made which can result in censure or loss 
of professional qualification. You have clear guidelines on reporting and 
expectations which can be directly mapped into the reports produced. You also 
have a clear recognition by WebTrust auditors about the importance of 
transparency. They are not a checklist of things to check, but an entire set of 
“assume the CA is not doing this” objectives. And even if all this fails, the 
WebTrust licensure and review process provides an incredibly valuable check on 
shoddy auditors, because it’s clear they harm the “WebTrust brand”.

ETSI ESI-based audits lack all of that. They are primarily targeted at a 
different entity - the Supervisory Body within a Member State - and ETSI 
auditors fail to recognize that browsers want, and expect, as much detail as 
provided to the SB and more. We see the auditors, and the TC, entirely 
dismissive to the set of concerns regarding the lack of consistency and 
transparency. There is similarly no equivalent set of professional standards 
here: this is nominally handled by 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Peter Bowen via dev-security-policy
On Sat, Jul 4, 2020 at 11:06 AM Ryan Sleevi via dev-security-policy
 wrote:
>
> On Sat, Jul 4, 2020 at 12:52 PM mark.arnott1--- via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > This is insane!
> > Those 300 certificates are used to secure healthcare information systems
> > at a time when the global healthcare system is strained by a global
> > pandemic.  I have to coordinate with more than 30 people to make this
> > happen.  This includes three subsidiaries and three contract partner
> > organizations as well as dozens of managers and systems engineers.  One of
> > my contract partners follows the guidance of an HL7 specification that
> > requires them to do certificate pinning.  When we replace these
> > certificates we must give them 30 days lead time to make the change.
> >
>
> As part of this, you should re-evaluate certificate pinning. As one of the
> authors of that specification, and indeed, my co-authors on the
> specification agree, certificate pinning does more harm than good, for
> precisely this reason.
>
> Ultimately, the CA is responsible for following the rules, as covered in
> https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation . If
> they're not going to revoke, such as for the situation you describe,
> they're required to treat this as an incident and establish a remediation
> plan to ensure it does not happen again. In this case, a remediation plan
> certainly involves no longer certificate pinning (it is not safe to do),
> and also involves implementing controls so that it doesn't require 30
> people, across three subsidiaries, to replace "only" 300 certificates. The
> Baseline Requirements require those certificates to be revoked in as short
> as 24 hours, and so you need to design your systems robustly to meet that.

One of the things that can be very non-obvious to many people is that
"incident" as Ryan describes it is not a binary thing.  When Ryan says
"treat this as an incident" it is not necessarily the same kind of
incident system where there is a goal to have zero incidents forever.
In some environments the culture is that any incident is a career
limiting event or has financial impacts - for example, a factory might
pay out bonuses to employees for every month in which zero incidents
are reported.  This does not align with what Ryan speaks about.
Instead, based on my experience working with Ryan, incidents are the
trigger for blameless postmortems which are used to teach.  Google
documented this in their SRE book
(https://landing.google.com/sre/sre-book/chapters/postmortem-culture/
) and AWS includes this as part of their well-architected framework
(https://wa.aws.amazon.com/wat.concept.coe.en.html ).

One of the challenges is that not everyone in the WebPKI ecosystem has
aligned around the same view of incidents as learning opportunities.
This makes it very challenging for CAs to find a path that suits all
participants and frequently results in hesitancy to use the blameless
post-mortem version of incidents.

> > After wading through this very long chain of messages I see little
> > discussion of the impact this will have on end users.  Ryan Sleevi, in the
> > name of Google, is purporting to speak for the end users, but it is obvious
> > that Ryan does not understand the implication of applying these rules.
> >
>
> I realize you're new here, and so I encourage you to read
> https://wiki.mozilla.org/CA/Policy_Participants for context about the
> nature of participation.

To clarify what Ryan is saying here: he is pointing out that he is not
representing the position of Google or Alphabet, rather he is stating
he is acting as an independent party.

As you can see from earlier messages, Mozilla has clearly stated that
they are NOT requiring revocation in 7 days in this case, as they
judge the risk from revocation greater than the risks from not
revoking on that same timeframe. Ben Wilson, who does represent
Mozilla, stated:

"Mozilla does not need the certificates that incorrectly have the
id-kp-OCSPSigning EKU to be revoked within the next 7 days, as per
section 4.9.1.2 of the BRs. We want to work with CAs to identify a
path forward, which includes determining a reasonable timeline and
approach to replacing the certificates that incorrectly have the
id-kp-OCSPSigning EKU (and performing key destruction for them)."

The reason this discussion is ongoing is that Ryan does work for
Google and it is widely understood that: 1) certificates that are not
trusted by the Google Chrome browser  in its default configuration
(e.g. install on a home version of Windows with no further
configuration) or not trusted on widely used Android devices by
default are not commercially viable as they do not meet the needs of
many organizations and individuals who request certificates and 2)
Ryan appears to be highly influential in Chrome and Android decision
making about what certificates to trust.

If Google were to officially state something similar to Mo

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 12:52 PM mark.arnott1--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> This is insane!
> Those 300 certificates are used to secure healthcare information systems
> at a time when the global healthcare system is strained by a global
> pandemic.  I have to coordinate with more than 30 people to make this
> happen.  This includes three subsidiaries and three contract partner
> organizations as well as dozens of managers and systems engineers.  One of
> my contract partners follows the guidance of an HL7 specification that
> requires them to do certificate pinning.  When we replace these
> certificates we must give them 30 days lead time to make the change.
>

As part of this, you should re-evaluate certificate pinning. As one of the
authors of that specification, and indeed, my co-authors on the
specification agree, certificate pinning does more harm than good, for
precisely this reason.
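As a minimal sketch of why pinning is so brittle (the byte strings below are made-up stand-ins for real DER-encoded SubjectPublicKeyInfo values): HPKP-style pinning commits a client to the SHA-256 hash of one specific public key, so replacing a certificate with a fresh key invalidates every deployed pin.

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """HPKP-style pin: base64 of the SHA-256 hash of the DER-encoded SPKI."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# Made-up stand-ins for the DER SubjectPublicKeyInfo of the old and new keys.
old_pin = spki_pin(b"old-key-spki-der")
new_pin = spki_pin(b"new-key-spki-der")

# A client that pinned old_pin rejects a chain matching only new_pin,
# which is why replacing pinned certificates needs coordinated lead time.
print(old_pin != new_pin)  # True: rotating the key breaks the pin
```

Any key rotation therefore has to ship a backup pin ahead of time or coordinate with every pinning client in advance, which is exactly the 30-day lead-time problem described above.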

Ultimately, the CA is responsible for following the rules, as covered in
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation . If
they're not going to revoke, such as for the situation you describe,
they're required to treat this as an incident and establish a remediation
plan to ensure it does not happen again. In this case, a remediation plan
certainly involves no longer certificate pinning (it is not safe to do),
and also involves implementing controls so that it doesn't require 30
people, across three subsidiaries, to replace "only" 300 certificates. The
Baseline Requirements require those certificates to be revoked in as short
as 24 hours, and so you need to design your systems robustly to meet that.
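A rough sketch of those timelines, assuming a simplified reading of BR 4.9.1.1 (24 hours for the most severe subscriber-certificate problems such as key compromise, 5 days for most others) and BR 4.9.1.2 (7 days for subordinate CA certificates); the authoritative reason-to-deadline mapping lives in the Baseline Requirements themselves:

```python
from datetime import datetime, timedelta

# Simplified deadline classes paraphrasing BR 4.9.1.1 / 4.9.1.2; consult the
# Baseline Requirements for the authoritative list of revocation reasons.
REVOCATION_WINDOWS = {
    "key_compromise": timedelta(hours=24),  # subscriber cert, 24-hour class
    "misissuance": timedelta(days=5),       # subscriber cert, 5-day class
    "subordinate_ca": timedelta(days=7),    # subordinate CA cert (4.9.1.2)
}

def revocation_deadline(discovered_at: datetime, reason: str) -> datetime:
    """Latest moment the CA may revoke once a problem is confirmed."""
    return discovered_at + REVOCATION_WINDOWS[reason]

found = datetime(2020, 7, 1, 9, 0)
print(revocation_deadline(found, "key_compromise"))  # 2020-07-02 09:00:00
print(revocation_deadline(found, "subordinate_ca"))  # 2020-07-08 09:00:00
```

A subscriber system has to be able to replace certificates inside windows like these, which is why manual coordination across dozens of people is the design flaw being pointed out.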

There are proposals to the Baseline Requirements which would ensure this is
part of the legal agreement you make with the CA, to make sure you
understand these risks and expectations. It's already implicitly part of
the agreement you made, and you're expected to understand the legal
agreements you enter into. It's unfortunate that this is the first time
you're hearing about them, because the CA is responsible for making sure
their Subscribers know about this.


> After wading through this very long chain of messages I see little
> discussion of the impact this will have on end users.  Ryan Sleevi, in the
> name of Google, is purporting to speak for the end users, but it is obvious
> that Ryan does not understand the implication of applying these rules.
>

I realize you're new here, and so I encourage you to read
https://wiki.mozilla.org/CA/Policy_Participants for context about the
nature of participation.

I'm very familiar with the implications of applying these rules, both
personally and professionally. This is why policies such as
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation exist.
This is where such information is shared, gathered, and considered, as
provided by the CA. It is up to the CA to demonstrate the balance of
equities, but also to ensure that going forward, they actually adhere to
the rules they agreed to as a condition of trust. Simply throwing out
agreements and contractual obligations when it's inconvenient,
*especially *when
these were scenarios contemplated when they were written and CAs
acknowledged they would take steps to ensure they're followed, isn't a
fair, equitable, or secure system.

This is the unfortunate nature of PKI: as a system, the cost of revocation
is often not properly accounted for when CAs or Subscribers are designing
their systems, and so folks engage in behaviours that increase risk, such
as lacking automation or certificate pinning. For lack of a better analogy,
it's like a contract that was agreed, a service rendered, and then refusing
to pay the invoice because it turns out, it's actually more than you can
pay. We wouldn't accept that within businesses, so why should we accept
here? CAs warrant to the community that they understand the risks and have
designed their systems, as they feel appropriate, to account for that. That
some have failed to do is unfortunate, but that highlights poor design by
the CA, not the one sending the metaphorical invoice for what was agreed to.

Just like with invoices that someone can't pay, sometimes it makes sense to
work on payment plans, collaboratively. But now that means the person who
was expecting the money similarly may be short, and that can quickly
cascade into deep instability, so has to be done with caution. That's what
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation is about

However, if someone is regularly negligent in paying their bills, and have
to continue to work on payment agreements, eventually, you'll stop doing
business with them, because you realize that they are a risk. That's the
same as when we talk about distrust.

Peter Bowen says
> > ... simply revoking doesn't solve the issue; arguably it makes it
> >  worse than doing nothing.
>
> You are absolutely right, Peter.  Doctors will not be able to communicate
> with each ot

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Mark Arnott via dev-security-policy
On Friday, July 3, 2020 at 5:30:47 PM UTC-4, Ryan Sleevi wrote:
> On Fri, Jul 3, 2020 at 4:19 PM Peter Bowen wrote:
> 
I feel compelled to respond here for the first time even though I have never 
participated in a CA/B Forum proceeding and have never read through a single one 
of the 55 BRs that have been published over the last 8 years.
 
I was informed yesterday that I would have to replace just over 300 
certificates in 5 days because my CA is required by rules from the CA/B forum 
to revoke its subCA certificate.
 
This is insane!
Those 300 certificates are used to secure healthcare information systems at a 
time when the global healthcare system is strained by a global pandemic.  I 
have to coordinate with more than 30 people to make this happen.  This includes 
three subsidiaries and three contract partner organizations as well as dozens 
of managers and systems engineers.  One of my contract partners follows the 
guidance of an HL7 specification that requires them to do certificate pinning.  
When we replace these certificates we must give them 30 days lead time to make 
the change.
 
After wading through this very long chain of messages I see little discussion 
of the impact this will have on end users.  Ryan Sleevi, in the name of Google, 
is purporting to speak for the end users, but it is obvious that Ryan does not 
understand the implication of applying these rules.
 
Peter Bowen says
> ... simply revoking doesn't solve the issue; arguably it makes it
>  worse than doing nothing.
 
You are absolutely right, Peter.  Doctors will not be able to communicate with 
each other effectively and people could die if the CA/B forum continues to 
blindly follow its rules without consideration for the greater impact this will 
have on the security of the internet.
 
In the CIA triad Availability is as important as Confidentiality.  Has anyone 
done a threat model and a serious risk analysis to determine what a reasonable 
risk mitigation strategy is?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Pedro Fuentes via dev-security-policy
Ryan,
I'm moving our particular discussions to Bugzilla.

I just want to clarify, again, that I'm not proposing to delay the revocation 
of the offending CA certificate, what I'm proposing is to give more time to the 
key destruction. Our position right now, is that the certificate would be 
revoked in any case during the 7 day period.

Thanks,
Pedro

On Saturday, July 4, 2020, 17:10:51 (UTC+2), Ryan Sleevi wrote:
> Pedro: I said I understood you, and I thought we were discussing in the
> abstract.
> 
> I encourage you to reread this thread to understand why such a response
> varies on a case by case basis. I can understand your *attempt* to balance
> things, but I don’t think it would be at all appropriate to treat your
> email as your incident response.
> 
> You still need to holistically address the concerns I raised. As I
> mentioned in the bug: either this is a safe space to discuss possible
> options, which will vary on a CA-by-CA basis based on a holistic set of
> mitigations, or this was having to repeatedly explain to a CA why they were
> failing to recognize a security issue.
> 
> I want to believe it’s the former, and I would encourage you, that before
> you decide to delay revocation, you think very carefully. Have you met the
> Mozilla policy obligations on a delay to revocation? Perhaps it’s worth
> re-reading those expectations, before you make a decision that will also
> fail to uphold community expectations.
> 
> 
> On Sat, Jul 4, 2020 at 10:22 AM Pedro Fuentes via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
> > Thanks, Ryan.
> > I’m happy we are now in understanding to this respect.
> >
> > Then I’d change the literally ongoing plan. We should have the new CAs
> > hopefully today. Then I would do maybe also today the reissuance of the bad
> > ones and I’ll revoke the offending certificates during the period.
> >
> > Best.
> > ___
> > dev-security-policy mailing list
> > dev-security-policy@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-security-policy
> >



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
Pedro: I said I understood you, and I thought we were discussing in the
abstract.

I encourage you to reread this thread to understand why such a response
varies on a case by case basis. I can understand your *attempt* to balance
things, but I don’t think it would be at all appropriate to treat your
email as your incident response.

You still need to holistically address the concerns I raised. As I
mentioned in the bug: either this is a safe space to discuss possible
options, which will vary on a CA-by-CA basis based on a holistic set of
mitigations, or this was having to repeatedly explain to a CA why they were
failing to recognize a security issue.

I want to believe it’s the former, and I would encourage you, that before
you decide to delay revocation, you think very carefully. Have you met the
Mozilla policy obligations on a delay to revocation? Perhaps it’s worth
re-reading those expectations, before you make a decision that will also
fail to uphold community expectations.


On Sat, Jul 4, 2020 at 10:22 AM Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Thanks, Ryan.
> I’m happy we are now in understanding to this respect.
>
> Then I’d change the literally ongoing plan. We should have the new CAs
> hopefully today. Then I would do maybe also today the reissuance of the bad
> ones and I’ll revoke the offending certificates during the period.
>
> Best.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>


Re: Key-destruction audit web-trust vs. ETSI (RE: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert)

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 9:17 AM Buschart, Rufus 
wrote:

> Dear Ryan!
>
> > From: dev-security-policy 
> On Behalf Of Ryan Sleevi via dev-security-policy
> > Sent: Freitag, 3. Juli 2020 23:30
> > To: Peter Bowen 
> > Cc: Ryan Sleevi ; Pedro Fuentes ;
> mozilla-dev-security-pol...@lists.mozilla.org
> > Subject: Re: SECURITY RELEVANT FOR CAs: The curious case of the
> Dangerous Delegated Responder Cert
> >
> > On Fri, Jul 3, 2020 at 4:19 PM Peter Bowen  wrote:
> > > I agree that we cannot make blanket statements that apply to all CAs,
> > > but these are some examples where it seems like there are alternatives
> > > to key destruction.
> > >
> >
> > Right, and I want to acknowledge, there are some potentially viable
> paths specific to WebTrust, for which I have no faith with respect
> > to ETSI precisely because of the nature and design of ETSI audits, that,
> in an ideal world, could provide the assurance desired.
>
> Could you elaborate a little bit further, why you don't have "faith in
> respect to ETSI"? I have to admit, I never totally understood your concerns
> with ETSI audits because a simple comparison between WebTrust test
> requirements and ETSI test requirements don't show a lot of differences. If
> requirements are missing, we should discuss them with ETSI representatives
> to have them included in one of the next updates.


ETSI ESI members, especially the vice chairs, often like to make this claim
of “simple comparison”, but that fails to take into account the holistic
picture of how the audits are designed, operated, and their goals to
achieve.

For example, you will find nothing to the detail of say the AICPA
Professional Standards (AT-C) to provide insight into the obligations about
how the audit is performed, methodological requirements such as sampling
design, professional obligations regarding statements being made which can
result in censure or loss of professional qualification. You have clear
guidelines on reporting and expectations which can be directly mapped into
the reports produced. You also have a clear recognition by WebTrust
auditors about the importance of transparency. They are not a checklist of
things to check, but an entire set of “assume the CA is not doing this”
objectives. And even if all this fails, the WebTrust licensure and review
process provides an incredibly valuable check on shoddy auditors, because
it’s clear they harm the “WebTrust brand”

ETSI ESI-based audits lack all of that. They are primarily targeted at a
different entity - the Supervisory Body within a Member State - and ETSI
auditors fail to recognize that browsers want, and expect, as much detail
as provided to the SB and more. We see the auditors, and the TC, entirely
dismissive to the set of concerns regarding the lack of consistency and
transparency. There is similarly no equivalent set of professional
standards here: this is nominally handled by the accreditation process for
the CAB by the NAB, except that the generic nature upon which ETSI ESI
audits are designed means there are few normative requirements on auditors,
such as sampling and reporting. Unlike WebTrust, where the report has
professional obligations on the auditor, this simply doesn’t exist with
ETSI: if it isn’t a checklist item on 319 403, then the auditor can say
whatever they want and have zero professional obligations or consequences.
At the end of the day, an ETSI audit, objectively, is just a checklist
review: 403 provides too little assurance as to anything else, and lacks
the substance that holistically makes a WebTrust audit.

It is a comparison of “paint by numbers” to an actual creative work of art,
and saying “I don’t understand, they’re both use paint and both are of a
house”. And while it’s true both involve some degree of creative judgement,
and it’s up to
https://mobile.twitter.com/artdecider to sort that out, one of those
paintings is more suited to the fridge than the mantelpiece.

The inclusion of the ETSI criteria, back in v1.0 of the Mozilla Root Store
Policy in 2005, wasn’t based on a deeply methodical examination of the
whole process. It was “Microsoft uses it, so they probably found it
acceptable”. And its continuance wasn’t based on it meeting needs, so much
as “it’d be nice to have an alternative to WebTrust for folks to use”. But
both of those statements misunderstood the value ETSI ESI audits provide
and the systemic issues, even if they were well-intentioned. The
contemporary discussions, at that time, both of Scott Perry’s review (and
acceptance) as an auditor independent of the WebTrust/ETSI duo and of the
CACert audit, provide ample insight into the expectations and needs.

I don’t dismiss ETSI ESI for what it is trying to do: serve a legal
framework set of objectives (eIDAS, which is itself neutral with respect to
ETSI ESI audit schemes, as we

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Pedro Fuentes via dev-security-policy
Thanks, Ryan. 
I’m happy we are now in understanding to this respect. 

Then I’d change the literally ongoing plan. We should have the new CAs 
hopefully today. Then I would do maybe also today the reissuance of the bad 
ones and I’ll revoke the offending certificates during the period. 

Best.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Ryan Sleevi via dev-security-policy
On Sat, Jul 4, 2020 at 6:22 AM Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Friday, July 3, 2020, 18:18:49 (UTC+2), Ryan Sleevi wrote:
> > Pedro's option is to reissue a certificate for that key, which as you
> point
> > out, keeps the continuity of CA controls associated with that key within
> > the scope of the audit. I believe this is the heart of Pedro's risk
> > analysis justification.
>
> I didn't want to participate here for now and just learn from other's
> opinions, but as my name has been evoked, I'd like to make a clarification.
>
> My proposal was not JUST to reissue the certificate with the same key. My
> proposal was to reissue the certificate with the same key AND a short
> lifetime (3 months) AND do a proper key destruction after that period.
>
> As I said, this:
> - Removes the offending EKU
> - Makes the certificate short-lived, for its consideration as delegated
> responder
> - Ensures that the keys are destroyed for peace of mind of the community
>
> And all that was, of course, pondering the security risk based on the fact
> that the operator of the key is also operating the keys of the Root and is
> also rightfully operating the OCSP services for the Root.
>
> I don't want to start another discussion, but I just feel necessary making
> this clarification, in case my previous message was unclear.


Thanks! I really appreciate you clarifying, as I had actually missed that
you proposed key destruction at the end of this. I agree, this is a
meaningfully different proposal that tries to balance the risks of
compliance while committing to a clear transition date.

>


Key-destruction audit web-trust vs. ETSI (RE: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert)

2020-07-04 Thread Buschart, Rufus via dev-security-policy
Dear Ryan!

> From: dev-security-policy  On 
> Behalf Of Ryan Sleevi via dev-security-policy
> Sent: Freitag, 3. Juli 2020 23:30
> To: Peter Bowen 
> Cc: Ryan Sleevi ; Pedro Fuentes ; 
> mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous 
> Delegated Responder Cert
> 
> On Fri, Jul 3, 2020 at 4:19 PM Peter Bowen  wrote:
> > I agree that we cannot make blanket statements that apply to all CAs,
> > but these are some examples where it seems like there are alternatives
> > to key destruction.
> >
> 
> Right, and I want to acknowledge, there are some potentially viable paths 
> specific to WebTrust, for which I have no faith with respect
> to ETSI precisely because of the nature and design of ETSI audits, that, in 
> an ideal world, could provide the assurance desired.

Could you elaborate a little bit further, why you don't have "faith in respect 
to ETSI"? I have to admit, I never totally understood your concerns with ETSI 
audits because a simple comparison between WebTrust test requirements and ETSI 
test requirements don't show a lot of differences. If requirements are missing, 
we should discuss them with ETSI representatives to have them included in one 
of the next updates.

With best regards,
Rufus Buschart

Siemens AG
Siemens Operations
Information Technology
Value Center Core Services
SOP IT IN COR
Freyeslebenstr. 1
91058 Erlangen, Germany 
Tel.: +49 1522 2894134
mailto:rufus.busch...@siemens.com
www.twitter.com/siemens

www.siemens.com/ingenuityforlife

Siemens Aktiengesellschaft: Chairman of the Supervisory Board: Jim Hagemann 
Snabe; Managing Board: Joe Kaeser, Chairman, President and Chief Executive 
Officer; Roland Busch, Klaus Helmrich, Cedrik Neike, Ralf P. Thomas; Registered 
offices: Berlin and Munich, Germany; Commercial registries: Berlin 
Charlottenburg, HRB 12300, Munich, HRB 6684; WEEE-Reg.-No. DE 23691322



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Pedro Fuentes via dev-security-policy
On Friday, July 3, 2020, 18:18:49 (UTC+2), Ryan Sleevi wrote:
> Pedro's option is to reissue a certificate for that key, which as you point
> out, keeps the continuity of CA controls associated with that key within
> the scope of the audit. I believe this is the heart of Pedro's risk
> analysis justification.

I didn't want to participate here for now and just learn from other's opinions, 
but as my name has been evoked, I'd like to make a clarification.

My proposal was not JUST to reissue the certificate with the same key. My 
proposal was to reissue the certificate with the same key AND a short lifetime 
(3 months) AND do a proper key destruction after that period.

As I said, this:
- Removes the offending EKU
- Makes the certificate short-lived, for its consideration as delegated 
responder
- Ensures that the keys are destroyed for peace of mind of the community

And all that was, of course, pondering the security risk based on the fact that 
the operator of the key is also operating the keys of the Root and is also 
rightfully operating the OCSP services for the Root.

I don't want to start another discussion, but I just feel necessary making this 
clarification, in case my previous message was unclear.
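As an illustrative toy check (not code from this thread): the "offending EKU" is id-kp-OCSPSigning (OID 1.3.6.1.5.5.7.3.9, per RFC 6960), and its presence on a CA certificate is what makes that certificate a delegated OCSP responder for its issuer. Assuming the basicConstraints and EKU fields have already been extracted from a certificate elsewhere, the pattern under discussion reduces to:

```python
# OID for id-kp-OCSPSigning (RFC 6960). Its presence on a CA certificate is
# what made these certificates "dangerous delegated responders".
ID_KP_OCSP_SIGNING = "1.3.6.1.5.5.7.3.9"

def is_dangerous_delegated_responder(is_ca, eku_oids):
    """Toy check over pre-extracted fields: a CA certificate that also
    carries id-kp-OCSPSigning can sign OCSP responses for its issuer."""
    return is_ca and ID_KP_OCSP_SIGNING in eku_oids

# The certificates under discussion: CA:TRUE plus the OCSP-signing EKU
# (1.3.6.1.5.5.7.3.1 is id-kp-serverAuth).
print(is_dangerous_delegated_responder(True, ["1.3.6.1.5.5.7.3.1", ID_KP_OCSP_SIGNING]))  # True
# A plain TLS issuing CA without that EKU is fine in this respect.
print(is_dangerous_delegated_responder(True, ["1.3.6.1.5.5.7.3.1"]))  # False
```

Reissuing with the same key but without that EKU, as proposed above, removes exactly the condition this toy check flags.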

Best.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-03 Thread Ryan Sleevi via dev-security-policy
On Fri, Jul 3, 2020 at 4:19 PM Peter Bowen  wrote:

> >> For the certificates you identified in the beginning of this thread,
> >> we know they have a certain level of key protection - they are all
> >> required to be managed using cryptographic modules that are validated
> >> as meeting overall Level 3 requirements in FIPS 140.  We also know
> >> that there CAs are monitoring these keys as they have an obligation in
> >> BR 6.2.6 to "revoke all certificates that include the Public Key
> >> corresponding to the communicated Private Key" if the "Subordinate
> >> CA’s Private Key has been communicated to an unauthorized person or an
> >> organization not affiliated with the Subordinate CA".
> >
> >
> > Sure, but we know that such revocation is largely symbolic in the
> existence of these certificates for the vast majority of clients, and so
> the security goal cannot be reasonably demonstrated while that Private Key
> still exists.
> >
> > Further, once this action is performed according to 6.2.6, it disappears
> with respect to the obligations under the existing auditing/reporting
> frameworks. This is a known deficiency of the BRs, which you rather
> comprehensively tried to address when representing a CA member  of the
> Forum, in your discussion about object hierarchies and signing actions. A
> CA may have provisioned actions within 6.2.10 of their CPS, but that's not
> a consistent baseline that they can rely on.
> >
> > At odds here is how to square with the CA performing the action, but not
> achieving the result of that action?
>
> As long as the key is a CA key, the obligations stand.
>

Right, but we're in agreement that these obligations don't negate the risk
posed - e.g. of inappropriate signing - right? And those obligations
ostensibly end once the CA has revoked the certificate, since the status
quo today doesn't require key destruction ceremonies for CAs. I think that
latter point is something discussed in the CA/Browser Forum with respect to
"Audit Lifecycle", but even then, that'd be difficult to work through for
CA keys that were compromised.

Put differently: If the private key for such a responder is compromised, we
lose the ability to be assured it hasn't been used for shenanigans. As
such, either we accept the pain now, and get assurance it /can't/ be used
for shenanigans, or we have hope that it /won't/ be used for shenanigans,
but are in an even worse position if it is, because this would necessitate
revoking the root.

It is, at the core, a gamble on the order of "Pay me now vs pay me later".
If we address the issue now, it's painful up-front, but consistent with the
existing requirements and mitigates the potential-energy of a security
mistake. If we punt on the issue, we're simply storing up more potential
energy for a more painful revocation later.

I recognize that there is a spectrum here on options. In an abstract sense,
the calculus of "the key was destroyed a month before it would have been
compromised" is the same as "the key was destroyed a minute before it would
have been compromised" - the risk was dodged. But "the key was destroyed a
second after it was compromised" is doom.


> > Pedro's option is to reissue a certificate for that key, which as you
> point out, keeps the continuity of CA controls associated with that key
> within the scope of the audit. I believe this is the heart of Pedro's risk
> analysis justification.
> >   - However, controls like you describe are not ones that are audited,
> nor consistent between CAs
> >   - They ultimately rely on the CA's judgement, which is precisely the
> thing an incident like this calls into question, and so it's understandable
> not to want to throw "good money after bad"
>
> To be clear, I don't necessarily see this as a bad judgement on the
> CA's part.  Microsoft explicitly documented that _including_ the OCSP
> EKU was REQURIED in the CA certificate if using a delegated OCSP
> responder (see
> https://support.microsoft.com/en-us/help/2962991/you-cannot-enroll-in-an-online-certificate-status-protocol-certificate
> ).
> Using a delegated OCSP responder can be a significant security
> enhancement in some CA designs, such as when the CA key itself is
> stored offline.
>

Oh, I agree on the value of delegated responders, for precisely that
reason. I think the bad judgement is not trying to find an alternative
solution. Some CAs did, and I think that highlights strength. Other CAs, no
doubt, simply said "Sorry, we can't do it" or "You need to run a different
platform"

I'm not trying to suggest bad /intentions/, but I am trying to say that
it's bad /judgement/, no different than the IP address discussions had in
the CA/B Forum or internal server names. The intentions were admirable, the
execution was inappropriate.


> > The question of controls in place for the lifetime of the certificate is
> the "cost spectrum" I specifically talked about in
> https://www.mail-archive.com/dev-security-policy@lists.mozilla.org/msg13530.htm

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-03 Thread Peter Bowen via dev-security-policy
On Fri, Jul 3, 2020 at 9:18 AM Ryan Sleevi  wrote:
>
>
>
> On Fri, Jul 3, 2020 at 10:57 AM Peter Bowen  wrote:
>>
>> While it may be viewed as best practice to have short lived responder
>> certificates, it must not be viewed as a hard requirement for the BRs
>> or for the Mozilla program.  As you have pointed out previously, a
>> browser could make this a requirement, but I am unaware of any
> publicly available requirement to do so.
>
>
> Thanks, and I think you're making a useful clarification here. The 'need' 
> being talked about is the logical consequence of a secure design, and the 
> assumptions that go with it, as well as the role the BRs play in "profiling 
> down" RFC 5280-and-related into sensible subsets appropriate for their use 
> case.
>
> I think we're in agreement here that nocheck is required, and that the 
> consequences of nocheck's presence, namely:
> CAs issuing such a certificate should realize that a compromise of
>  the responder's key is as serious as the compromise of a CA key
>  used to sign CRLs, at least for the validity period of this
>  certificate. CAs may choose to issue this type of certificate with
>  a very short lifetime and renew it frequently.
>
> As BR-consuming clients, the majority of implementations I looked at don't 
> recursively check revocation for the delegated responder. That is, rather 
> than "nocheck" determining client behaviour (vs its absence), "nocheck" is 
> used to reflect what the clients will do regardless. This can be 
> understandably seen as 'non-ideal', but gets back into some of the discussion 
> with Corey regarding profiles and client behaviours.

So we are in agreement that it is a certificate consumer bug if it fails
to check revocation on certificates that do not have nocheck set.
(Yes, I know that is a massive set of negations, sorry.) Good news is
that all the certificates are in the category of "need revocation
checking".
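Peter's point — that nocheck gates whether a client must check revocation of the responder certificate — can be sketched in a few lines. This is a toy model: the OID constant is the real id-pkix-ocsp-nocheck value, but the list-of-OIDs input stands in for a parsed certificate rather than real ASN.1 parsing.

```python
# id-pkix-ocsp-nocheck (RFC 6960). The extension-OID list below is a
# simplified stand-in for a parsed certificate, not real ASN.1 parsing.
OID_PKIX_OCSP_NOCHECK = "1.3.6.1.5.5.7.48.1.5"

def must_check_revocation(extension_oids):
    """RFC 6960 section 4.2.2.2.1: when id-pkix-ocsp-nocheck is present,
    the client trusts the delegated responder certificate for its validity
    period without a revocation check; when absent, the client should
    check. The consumer bug conceded above is skipping the check in both
    cases."""
    return OID_PKIX_OCSP_NOCHECK not in extension_oids

print(must_check_revocation([OID_PKIX_OCSP_NOCHECK]))  # False: nocheck exempts it
print(must_check_revocation([]))                       # True: must be checked
```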


>> > It seems we’re at an impasse for understanding the issue for
> > internally-operated Sub CAs: this breaks all of the auditable controls,
>> > assurance frameworks, and breaks the security goals of a “correctly”
>> > configured delegated responder, as discussed in the security considerations
>> > throughout RFC 6960.
>>
>> This is where I disagree.  Currently CAs can create as many delegated
>> OCSP responder certificates as they like with as many distinct keys as
>> they like.  There are no public requirements from any browser on the
>> protection of OCSP responder keys, as far as I know.  The few
>> requirements on revocation information are to provide accurate
>> information and provide the information within 10 seconds "under
>> normal operating conditions" (no SLO if the CA determines it is not
>> operating under normal conditions).
>
>
> I suspect this is the reopening of the discussion about "the CA organization" 
> or "the CA certificate"; does 6.2.7 apply to all Private Keys that logically 
> make up the CA "the organization"'s services, or is 6.2.7 only applicable to 
> keys with CA:TRUE. Either extreme is an unsatisfying answer: do the TLS keys 
> need to be on FIPS modules? (no, ideally not). Does this only apply to CA 
> keys and not to delegated responders (no, ideally not).

As I read the BRs today, the requirement only applies to CA keys and
not to keys for delegated responders.  However that does not matter in
this case, because all the certificates you identified are for CAs, so
we know their keys are in HSMs.

> Going back to 6960 and the requirement of pkix-nocheck, we know that such a 
> responder certificate is 'as powerful as' the Private Key associated with the 
> CA Certificate for which the responder is a responder for. Does the 
> short-lived validity eliminate the need for protection?
>
> I suspect that where you disagree is with respect to the auditable 
> controls/assurance frameworks, and less with respect to the security goals 
> captured in 6960, since we seemed to agree on those above.
>
>>
>> For the certificates you identified in the beginning of this thread,
>> we know they have a certain level of key protection - they are all
>> required to be managed using cryptographic modules that are validated
>> as meeting overall Level 3 requirements in FIPS 140.  We also know
> that these CAs are monitoring these keys as they have an obligation in
>> BR 6.2.6 to "revoke all certificates that include the Public Key
>> corresponding to the communicated Private Key" if the "Subordinate
>> CA’s Private Key has been communicated to an unauthorized person or an
>> organization not affiliated with the Subordinate CA".
>
>
> Sure, but we know that such revocation is largely symbolic in the existence 
> of these certificates for the vast majority of clients, and so the security 
> goal cannot be reasonably demonstrated while that Private Key still exists.
>
> Further, once this action is performed according to 6.2.6, it disappears with 
> respect to the obligations under the 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-03 Thread Ryan Sleevi via dev-security-policy
On Fri, Jul 3, 2020 at 10:57 AM Peter Bowen  wrote:

> While it may be viewed as best practice to have short lived responder
> certificates, it must not be viewed as a hard requirement for the BRs
> or for the Mozilla program.  As you have pointed out previously, a
> browser could make this a requirement, but I am unaware of any
> publicly available requirement to do so.
>

Thanks, and I think you're making a useful clarification here. The 'need'
being talked about is the logical consequence of a secure design, and the
assumptions that go with it, as well as the role the BRs play in "profiling
down" RFC 5280-and-related into sensible subsets appropriate for their use
case.

I think we're in agreement here that nocheck is required, and that the
consequences of nocheck's presence, namely:
CAs issuing such a certificate should realize that a compromise of
 the responder's key is as serious as the compromise of a CA key
 used to sign CRLs, at least for the validity period of this
 certificate. CAs may choose to issue this type of certificate with
 a very short lifetime and renew it frequently.

As BR-consuming clients, the majority of implementations I looked at don't
recursively check revocation for the delegated responder. That is, rather
than "nocheck" determining client behaviour (vs its absence), "nocheck" is
used to reflect what the clients will do regardless. This can be
understandably seen as 'non-ideal', but gets back into some of the
discussion with Corey regarding profiles and client behaviours.

"need" isn't an explicitly spelled out requirement, I agree, but falls from
the logical consequences of designing such a system and ensuring equivalent
security properties. For example, consider 4.9.10's requirement that OCSP
responses for subordinate CA certificates not have a validity greater than
1 year (!!). We know that's a clear security goal, and so a CA needs to
ensure they can meet that property. If issuing a Delegated Responder, it
logically follows that its validity period should be a year or less,
because if that Delegated Responder is compromised, the objective of 4.9.10
can't be fulfilled. I agree that a CA might argue "well, we published a new
response, and 4.9.10 doesn't say anything about having to result, just
perform the action", but I think we can agree that such an approach, even
if technically precise, calls into question much of their overall
interpretation of the BRs.
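The derivation above — 4.9.10's one-year bound on OCSP responses for subordinate CA certificates implying a delegated responder certificate should not outlive a year — reduces to a simple interval check. A minimal sketch; the 365-day constant is an assumption drawn from that argument, not BR text:

```python
from datetime import datetime, timedelta

# Derived bound (an assumption from the argument above, not BR text): if the
# responder certificate outlives a year, the 4.9.10 objective can no longer
# be guaranteed should the responder key be compromised.
MAX_RESPONDER_LIFETIME = timedelta(days=365)

def responder_lifetime_consistent_with_4_9_10(not_before: datetime,
                                              not_after: datetime) -> bool:
    """True if the delegated responder certificate's validity period stays
    within the derived one-year bound."""
    return (not_after - not_before) <= MAX_RESPONDER_LIFETIME

print(responder_lifetime_consistent_with_4_9_10(
    datetime(2020, 7, 1), datetime(2021, 6, 30)))  # True: under a year
print(responder_lifetime_consistent_with_4_9_10(
    datetime(2020, 7, 1), datetime(2025, 7, 1)))   # False: multi-year responder
```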


> > It seems we’re at an impasse for understanding the issue for
> > internally-operated Sub CAs: this breaks all of the auditable controls
> and
> > assurance frameworks, and breaks the security goals of a “correctly”
> > configured delegated responder, as discussed in the security
> considerations
> > throughout RFC 6960.
>
> This is where I disagree.  Currently CAs can create as many delegated
> OCSP responder certificates as they like with as many distinct keys as
> they like.  There are no public requirements from any browser on the
> protection of OCSP responder keys, as far as I know.  The few
> requirements on revocation information are to provide accurate
> information and provide the information within 10 seconds "under
> normal operating conditions" (no SLO if the CA determines it is not
> operating under normal conditions).
>

I suspect this is the reopening of the discussion about "the CA
organization" or "the CA certificate"; does 6.2.7 apply to all Private Keys
that logically make up the CA "the organization"'s services, or is 6.2.7
only applicable to keys with CA:TRUE. Either extreme is an unsatisfying
answer: do the TLS keys need to be on FIPS modules? (no, ideally not). Does
this only apply to CA keys and not to delegated responders (no, ideally
not).

Going back to 6960 and the requirement of pkix-nocheck, we know that such a
responder certificate is 'as powerful as' the Private Key associated with
the CA Certificate for which the responder is a responder for. Does the
short-lived validity eliminate the need for protection?

I suspect that where you disagree is with respect to the auditable
controls/assurance frameworks, and less with respect to the security goals
captured in 6960, since we seemed to agree on those above.


> For the certificates you identified in the beginning of this thread,
> we know they have a certain level of key protection - they are all
> required to be managed using cryptographic modules that are validated
> as meeting overall Level 3 requirements in FIPS 140.  We also know
> that these CAs are monitoring these keys as they have an obligation in
> BR 6.2.6 to "revoke all certificates that include the Public Key
> corresponding to the communicated Private Key" if the "Subordinate
> CA’s Private Key has been communicated to an unauthorized person or an
> organization not affiliated with the Subordinate CA".
>

Sure, but we know that such revocation is largely symbolic in the existence
of these certificates for the vast majority of clients, and so the security
goal 

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-03 Thread Peter Bowen via dev-security-policy
Ryan,

I have read through this thread and am also somewhat perplexed.

I want to be clear, I'm posting only for myself, as an individual, not
on behalf of any current or former employers.

On Fri, Jul 3, 2020 at 4:26 AM Ryan Sleevi via dev-security-policy
 wrote:
> On Fri, Jul 3, 2020 at 3:24 AM Pedro Fuentes via dev-security-policy 
>  wrote:
>
> > >
> > > Yes. But that doesn't mean we blindly trust the CA in doing so. And that's
> > > the "security risk".
> >
> > But the point then is that a delegated responder that had the required
> > "noCheck" extension wouldn't be affected by this issue and CAs wouldn't
> > need to react, and therefore the issue to solve is the "mis-issuance"
> > itself due to the lack of the extension, not the fact that the CA
> > certificate could be used to do delegated responses for the same operator
> > of the Root, which is acceptable, as you said.
>
>
> I don’t understand why this is difficult to understand. If you have the
> noCheck extension, then per RFC 6960, you need to make that certificate
> short lived. The BRs require the cert have the extension.

I think this is difficult to understand because you are adding
requirements that don't currently exist.

There is nothing in the BRs or other obligations of the CA to make a
delegated OCSP responder certificate short-lived.  From 6960:

"CAs may choose to issue this type of certificate with a very short
lifetime and renew it frequently."

6960 explicitly says "may" is defined in RFC 2119, which says:

"MAY   This word, or the adjective "OPTIONAL", mean that an item is
truly optional."

Contrast with "should" which is "This word, or the adjective
"RECOMMENDED", mean that there may exist valid reasons in particular
circumstances to ignore a particular item, but the full implications
must be understood and carefully weighed before choosing a different
course."

While it may be viewed as best practice to have short lived responder
certificates, it must not be viewed as a hard requirement for the BRs
or for the Mozilla program.  As you have pointed out previously, a
browser could make this a requirement, but I am unaware of any
publicly available requirement to do so.

> Similarly, if something goes wrong with such a responder, you also have to
> consider revoking the root, because it is as bad as a root key compromise.

I think we can all agree to this point.

> In fact the side effect is that delegated responders operated externally
> > that have the required no check extension don't seem to be affected by the
> > issue and would be deemed acceptable, without requiring further action to
> > CAs, while the evident risk problem is still there.
>
>
> The “nocheck” discussion extension here is to highlight the compliance
> issue.
>
> The underlying issue is a security issue: things capable of providing OCSP
> responses that shouldn’t be.
>
> It seems you understand the security issue when viewing external sub-CAs:
> they can now impact the security of the issuer.
>
> It seems we’re at an impasse for understanding the issue for
> internally-operated Sub CAs: this breaks all of the auditable controls and
> assurance frameworks, and breaks the security goals of a “correctly”
> configured delegated responder, as discussed in the security considerations
> throughout RFC 6960.

This is where I disagree.  Currently CAs can create as many delegated
OCSP responder certificates as they like with as many distinct keys as
they like.  There are no public requirements from any browser on the
protection of OCSP responder keys, as far as I know.  The few
requirements on revocation information are to provide accurate
information and provide the information within 10 seconds "under
normal operating conditions" (no SLO if the CA determines it is not
operating under normal conditions).

For the certificates you identified in the beginning of this thread,
we know they have a certain level of key protection - they are all
required to be managed using cryptographic modules that are validated
as meeting overall Level 3 requirements in FIPS 140.  We also know
that these CAs are monitoring these keys as they have an obligation in
BR 6.2.6 to "revoke all certificates that include the Public Key
corresponding to the communicated Private Key" if the "Subordinate
CA’s Private Key has been communicated to an unauthorized person or an
organization not affiliated with the Subordinate CA".

I agree with Pedro here.  If the CA has control over the keys in the
certificates in question, then I do not see that there is a risk that
is greater than already exists.  The CA can determine that these are
approved OCSP responders and easily assess whether they have controls
in place since the creation of the certificate that provide assurance
that all OCSP responses signed using the key were accurate (if any
such responses exist).  They can also easily validate that they have
controls around these keys to provide assurance that any future OCSP
responses signed using the key will

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-03 Thread Ryan Sleevi via dev-security-policy
On Fri, Jul 3, 2020 at 10:04 AM Arvid Vermote wrote:

> GlobalSign recognizes the reported security issue and associated risk, and
> is working on a plan to remediate the impacted CA hierarchies with first
> priority on terminating those branches that include issuing CA with private
> keys outside of GlobalSign's realm. We will soon share an initial plan on
> our Bugzilla ticket https://bugzilla.mozilla.org/show_bug.cgi?id=1649937.
>
> One question we have for the root store operators specifically is what type
> of assurance they are looking for on the key destruction activities. In the
> past we've both done key destruction ceremonies without and with (e.g. in
> the case of addressing a compliance issue like
> https://bugzilla.mozilla.org/show_bug.cgi?id=1591005) an external auditor
> witnessing the destruction and issuing an independent ISAE3000 witnessing
> report.


Since the goal here is to be able to demonstrate, with some reasonable
assurance, that this key will not come back and haunt the ecosystem, my
intent of the suggestion was yes, an independently witnessed ceremony with
an appropriate report to that fact (e.g. ISAE300)

The reason for this is that so much of our current design around controls
and audits assume that once something is revoked, the key can no longer do
harm and is not interesting if it gets compromised. This threat model
defeats that assumption, because for the lifetime of the responder
certificate(s) associated with that key, it can be misused to revoke
itself, or its siblings, and cause pain anew.

I suspect that the necessity of destruction ceremony is probably influenced
by a variety of factors, such as how long the responder cert is valid for.
This is touched on some in RFC 6960. I don’t know what the “right” answer
is, but my gut is that any responder cert valid for more than a year from
now would benefit from such a report. If it’s less than a year out, and
internally operated, that maybe is reasonable to not require a report? I’m
not sure where that line is, and this is where the CAs can share their
analysis of the facts to better inform and find the “right” balance here.

>


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-03 Thread Ryan Sleevi via dev-security-policy
On Fri, Jul 3, 2020 at 8:06 AM Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Ryan,
> I don’t think I’m failing to see the security problem, but we evidently
> have a different perception of the risk level for the particular case of
> internal delegation.
> Anyway, I will just cease in my intent and act as expected, looking to
> the reaction of other CAs for guidance where possible.


Again, I don’t disagree that it’s reasonable to see a difference between 1P
and 3P if you’re a CA. But I look at the risk of “what attacks are now
enabled if this is unaddressed”?

Put differently, if a CA used a long-lived delegated responder, and I found
out about it, I would absolutely be concerned and expect explanations.
However, if the “worst” thing the delegated responder could do, if
compromised, was “simply” sign OCSP responses, that’s at least better than
the situation we’re in here. Because these delegated responders can, in
many cases, cause new issuance, and because they are actively signing
things other than OCSP responses (e.g. certificates), the threat model
becomes unreasonably complex.

The BRs have clear guidance on who absorbs the risk for a CA’s mistake: the
CA and anyone who has obtained certificates from the CA. Historically,
browsers have rather nobly stepped in and absorbed that risk for the
ecosystem, introducing ever more complex solutions to try to allow site
operators to continue with business as usual with minimal disruption. But
that’s not what the “social contract” of a CA says should happen, and it’s
not what the CP/CPS of the CA says will happen.

It’s true that 4.9.1.2 “only” demands revocation, but any CA not treating
this as recognizing the security impact and the similarity to a key misuse
and/or compromise - even if “hypothetically” - does a disservice to all.

> I would just have a last request for you. I would appreciate it if you
> could express your views on Ben’s message about Mozilla’s position, in
> particular about the 7-day deadline.
> I think it’s of extreme benefit for all if the different browsers are
> aligned.


I participate here largely in an individual capacity, except as specified.
I’ve already shared my risk analysis with Ben on that thread, but I also
think it would be grossly negligent for a CA to try to look for “browser
alignment” as a justification for ignoring the BRs, especially when so many
platforms and libraries are put at risk by the CA’s misissuance. It sounds
like you’re looking for an explicit “exception”, and while Ben’s message
seems like a divergence from past Mozilla positions, I think it at least
maintains consistency that “this is an incident, no exceptions”.

Again, while wanting to ensure a safe space for questions, as a CA, your
responsibility is to understand these issues and act accordingly. I wholly
appreciate wanting to have an open and transparent discussion about the
facts, and I am quite sensitive to the fact that there very well can be
information being overlooked. As I said, and will say again: this is for
your incident response to demonstrate, and as with any CA, and for any
incident, you will be judged on how well and how timely you respond to the
risks. Similarly, if you fail to revoke on time, how comprehensively you
mitigate the lack of timeliness so that you can comprehensively demonstrate
it will never happen again.

Every time we encounter some non-compliance with an intermediate, CAs push
back on their 4.9.1.2 obligations, saying it would be too disruptive. Heck,
we’re still in a place where CAs are arguing even revoking the Subscriber
cert under 4.9.1.1 is too disruptive, despite the CA claiming they would do
so. This **has** to change, and so it needs to be clear that the first
order is to expect a CA to **do what they said they would**, and revoke on
the timetable defined. If there truly is an exceptional situation that
prevents this, and the CA has a second incident for complying with the BRs
for not revoking, then the only way that can or should not result in
distrust of the CA is if their incident report can show that they
understand the issue sufficiently and can commit to never delaying
revocation again by showing comprehensively the step they are taking.

There is little evidence that the majority of CAs are capable of this, but
it’s literally been a Baseline Requirement since the start. For lack of a
better analogy: it’s a borrower who is constantly asking for more and more
credit to keep things going, and so they don’t default. If they default,
you are unlikely to get your money back, but if you continue to loan them
more and more, you’re just increasing your own risk for if/when they do
default. The CA is the borrower, defaulting is being distrusted, browsers
are the lenders, and the credit being extended is how flexible they are
when misissuance events occur. Taking a delay on revocation for this issue
is asking for a *huge* loan. For some CAs, that’s beyond the credit they
h

RE: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-03 Thread Arvid Vermote via dev-security-policy
GlobalSign recognizes the reported security issue and associated risk, and
is working on a plan to remediate the impacted CA hierarchies with first
priority on terminating those branches that include issuing CA with private
keys outside of GlobalSign's realm. We will soon share an initial plan on
our Bugzilla ticket https://bugzilla.mozilla.org/show_bug.cgi?id=1649937.

One question we have for the root store operators specifically is what type
of assurance they are looking for on the key destruction activities. In the
past we've both done key destruction ceremonies without and with (e.g. in
the case of addressing a compliance issue like
https://bugzilla.mozilla.org/show_bug.cgi?id=1591005) an external auditor
witnessing the destruction and issuing an independent ISAE3000 witnessing
report.

> -----Original Message-----
> From: dev-security-policy On Behalf Of Ryan Sleevi via dev-security-policy
> Sent: Wednesday, July 1, 2020 23:06
> To: mozilla-dev-security-policy
> Subject: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous
> Delegated Responder Cert
> 
> I've created a new batch of certificates that violate 4.9.9 of the BRs,
> which was introduced with the first version of the Baseline Requirements
> as a MUST. This is https://misissued.com/batch/138/
> 
> A quick inspection among the affected CAs includes O fields of: QuoVadis,
> GlobalSign, Digicert, HARICA, Certinomis, AS Sertifitseerimiskeskus,
> Actalis, Atos, AC Camerfirma, SECOM, T-Systems, WISeKey, SCEE, and CNNIC.
> 
> Section 4.9.9 of the BRs requires that OCSP Delegated Responders MUST
> include an id-pkix-ocsp-nocheck extension. RFC 6960 defines an OCSP
> Delegated Responder within Section 4.2.2.2 as indicated by the presence
> of the id-kp-OCSPSigning as an EKU.
> 
> These certificates lack the necessary extension, and as such, violate the
> BRs. As the vast majority of these were issued on-or-after 2013-02-01,
> the Effective Date of Mozilla Root CA Policy v2.1, these are misissued.
> You could also consider the effective date as 2013-05-15, described later
> in [1], without changing the results.
> 
> This batch is NOT comprehensive. According to crt.sh, there are
> approximately 293 certificates that meet the criteria of "issued by a
> Mozilla-trusted, TLS-capable CA, with the OCSPSigning EKU, and without
> pkix-nocheck". misissued.com had some issues with parsing some of these
> certificates, due to other non-conformities, so I only included a sample.
> 
> Censys.io is aware of approximately 276 certificates that meet this
> criteria, as you can see at [2]. The differences in perspectives
> underscore the importance of CAs needing to carefully examine the
> certificates they've issued to understand.
> 
> It's important for CAs to understand this is Security Relevant. While
> they should proceed with revoking these CAs within seven (7) days, as
> defined under the Baseline Requirements Section 4.9.1.2, the degree of
> this issue likely also justifies requiring witnessed Key Destruction
> Reports, in order to preserve the integrity of the issuer of these
> certificates (which may include the CA's root).
> 
> The reason for this is simple: In every case I examined, these are
> certificates that appear to nominally be intended as Issuing CAs, not as
> OCSP Responder Certificates. It would appear that many CAs were
> unfamiliar with RFC 6960 when constructing their certificate profiles,
> and similarly ignored discussion of this issue in the past [3], which
> highlighted the security impact of this. I've flagged this as a SECURITY
> matter for CAs to carefully review, because in the cases where a
> third-party, other than the Issuing CA, operates such a certificate, the
> Issuing CA has delegated the ability to mint arbitrary OCSP responses to
> this third-party!
> 
> For example, consider a certificate like https://crt.sh/?id=2657658699 .
> This certificate, from HARICA, meets Mozilla's definition of "Technically
> Constrained" for TLS, in that it lacks the id-kp-serverAuth EKU. However,
> because it includes the OCSP Signing EKU, this certificate can be used to
> sign arbitrary OCSP messages for HARICA's Root!
> 
> This also applies to non-technically-constrained sub-CAs. For example,
> consider this certificate https://crt.sh/?id=21606064 . It was issued by
> DigiCert to Microsoft, granting Microsoft the ability to provide OCSP
> responses for any certificate issued by Digicert's Baltimore CyberTrust
> Root. We know from DigiCert's disclosures that this is independently
> operated by Microsoft.
> 
> Unfortunately, revocation of this certificate is simply not enough to
> protect Mozilla TLS users. This is because this Sub-CA COULD provide OCSP
> for itself that would successfully validate, AND provide OCSP for other
> revoked sub-CAs, even if it was revoked. That is, if this Sub-CA's key
> was maliciously used to sign a GOOD response for itself, it would be
> accepted.
> These security concerns are discussed in Section 4.2.2.2.1 of RFC 6960,
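The crt.sh/Censys criteria described in the message above (OCSPSigning EKU present, pkix-nocheck absent) can be sketched as a filter. A minimal sketch: the OID constants are the real values, but the record format and helper name are hypothetical stand-ins for real certificate parsing.

```python
OID_EKU_OCSP_SIGNING = "1.3.6.1.5.5.7.3.9"      # id-kp-OCSPSigning (RFC 6960)
OID_PKIX_OCSP_NOCHECK = "1.3.6.1.5.5.7.48.1.5"  # id-pkix-ocsp-nocheck

def violates_br_4_9_9(eku_oids, extension_oids):
    """A certificate is an OCSP Delegated Responder if it asserts the
    OCSPSigning EKU (RFC 6960 section 4.2.2.2); BR 4.9.9 then requires the
    id-pkix-ocsp-nocheck extension. EKU present + nocheck absent = misissued."""
    return (OID_EKU_OCSP_SIGNING in eku_oids
            and OID_PKIX_OCSP_NOCHECK not in extension_oids)

certs = [  # toy records: (name, EKU OIDs, other extension OIDs)
    ("issuing-ca", [OID_EKU_OCSP_SIGNING], []),                      # the problem case
    ("responder", [OID_EKU_OCSP_SIGNING], [OID_PKIX_OCSP_NOCHECK]),  # compliant responder
    ("tls-leaf", ["1.3.6.1.5.5.7.3.1"], []),                         # serverAuth only
]
print([name for name, ekus, exts in certs if violates_br_4_9_9(ekus, exts)])
# ['issuing-ca']
```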

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-03 Thread Rob Stradling via dev-security-policy

On 03/07/2020 12:24, Ryan Sleevi via dev-security-policy wrote:


> The key destruction is the only way I can see being able to provide some
> assurance that “things won’t go wrong, because it’s impossible for them to
> go wrong, here’s the proof”


Ryan, distrusting the root(s) would be another way to provide this 
assurance (for up-to-date clients anyway), although I'd be surprised if 
any of the affected CAs would prefer to go that route!


--
Rob Stradling
Senior Research & Development Scientist
Sectigo Limited



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-03 Thread Pedro Fuentes via dev-security-policy
Ryan,
I don’t think I’m failing to see the security problem, but we evidently have 
different perception of the risk level for the particular case of internal 
delegation. 
Anyway I will just cease in my intent and just act as it’s expected, looking as 
guidance to the reaction of other CAs where possible. 

I would just have a last request for you. I would appreciate it if you could
express your views on Ben’s message about Mozilla’s position, in particular
about the 7-day deadline.
I think it’s of extreme benefit for all if the different browsers are aligned. 

Thanks,
Pedro


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-03 Thread Ryan Sleevi via dev-security-policy
Hi Pedro,

I’m not sure how best to proceed here. It seems like we’ve reached a point
where you’re wanting to discuss possible ways to respond to this, as a CA,
and it feels like this should be captured on the bug.

I’m quite worried here, because this reply demonstrates that we’re at a
point where there is still a rather large disconnect, and I’m not sure how
to resolve it. It does not seem that there’s an understanding here of the
security issues, and while I want to help as best I can, I also believe
it’s appropriate that we accurately consider how well a CA understands
security issue as part of considering incident response. I want there to be
a safe space for questions, but I’m also deeply troubled by the confusion,
and so I don’t know how to balance those two goals.

On Fri, Jul 3, 2020 at 3:24 AM Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> >
> > Yes. But that doesn't mean we blindly trust the CA in doing so. And
> that's
> > the "security risk".
>
> But the point then is that a delegated responder that had the required
> "noCheck" extension wouldn't be affected by this issue and CAs wouldn't
> need to react, and therefore the issue to solve is the "mis-issuance"
> itself due to the lack of the extension, not the fact that the CA
> certificate could be used to do delegated responses for the same operator
> of the Root, which is acceptable, as you said.


I don’t understand why this is difficult to understand. If you have the
noCheck extension, then per RFC 6960, you need to make that certificate
short-lived. The BRs require that the cert have the extension.

Similarly, if something goes wrong with such a responder, you also have to
consider revoking the root, because it is as bad as a root key compromise.

In fact the side effect is that delegated responders operated externally
> that have the required no check extension don't seem to be affected by the
> issue and would be deemed acceptable, without requiring further action to
> CAs, while the evident risk problem is still there.


The “nocheck” discussion extension here is to highlight the compliance
issue.

The underlying issue is a security issue: things capable of providing OCSP
responses that shouldn’t be.

It seems you understand the security issue when viewing external sub-CAs:
they can now impact the security of the issuer.

It seems we’re at an impasse for understanding the issue for
internally-operated Sub CAs: this breaks all of the auditable controls and
assurance frameworks, and breaks the security goals of a “correctly”
configured delegated responder, as discussed in the security considerations
throughout RFC 6960.


>
> >
> > I can understand that our views may differ: you may see 3P as "great
> risk"
> > and 1p as "acceptable risk". However, from the view of a browser or a
> > relying party, "1p" and "3p" are the same: they're both CAs. So the risk
> is
> > the same, and the risk is unacceptable for both cases.
>
> But this is not actually the case, because what is now required of CAs is
> to react appropriately to this incident, and you are imposing a single
> approach while the situations are fundamentally different. The implications
> of this issue are not the same for CAs that had 3P delegation (or a mix of
> 1P and 3P) as for the ones, like us, that have no such delegation.


The burden is for your CA to establish that, in the incident response. I’ve
seen nothing from you to reasonably establish that; you just say “but it’s
different”. And that worries me, because it seems you don’t recognize that
all of the controls and tools and expectations we have, both in terms of
audits but also in all of the checks we make (for example, with crt.sh)
*also* lose their credibility for as long as this exists.

Again, I understand and appreciate the view that you seem to be advocating:
“If nothing goes wrong, no one is harmed. If third-parties were involved,
things could go wrong, so we understand that. But we won’t let anything go
wrong ourselves.”

But you seem to be misunderstanding what I’m saying: “If anything goes
wrong, we will not be able to detect it, and all of our assumptions and
safety features will fail. We could try and design new safety features, but
now we’re having to literally pay for your mistake, which never should have
happened in the first place.”

That is a completely unfair and unreasonable thing for WISeKey to ask of
the community: for everyone to change and adapt because WISeKey failed to
follow the expectations.

The key destruction is the only way I can see being able to provide some
assurance that “things won’t go wrong, because it’s impossible for them to
go wrong, here’s the proof”

Anything short of that is asking the community to either accept the
security risk that things can go wrong, or for everyone to go modify their
code, including their tools to do things like check CT, to appropriately
guard against that. Which is completely unreasonable. That’s how
fundamental this

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-03 Thread Paul van Brouwershaven via dev-security-policy
For those who are interested, in contrast to the direct EKU validation with
Test-Certificate, certutil does validate the OCSP signing EKU on the
delegated OCSP signing certificate but doesn't validate the
certificate chain for the OCSP signing EKU.

Full test script and output can be found here:
https://gist.github.com/vanbroup/84859cd10479ed95c64abe6fcdbdf83d

On Thu, 2 Jul 2020 at 20:42, Paul van Brouwershaven <
p...@vanbrouwershaven.com> wrote:

> When validating the EKU using `Test-Certificate` Windows states it's
> invalid, but when using `certutil` it's accepted or not explicitly checked.
> https://gist.github.com/vanbroup/64760f1dba5894aa001b7222847f7eef
>
> When/if I have time I will try to do some further tests with a custom
> setup to see if the EKU is validated at all.
>
> On Thu, 2 Jul 2020 at 19:26, Ryan Sleevi  wrote:
>
>>
>>
>> On Thu, Jul 2, 2020 at 1:15 PM Paul van Brouwershaven <
>> p...@vanbrouwershaven.com> wrote:
>>
>>> That's not correct, and is similar to the mistake I
 originally/previously made, and was thankfully corrected on, which also
 highlighted the security-relevant nature of it. I encourage you to give
 another pass at Robin's excellent write-up, at
 https://groups.google.com/forum/#!msg/mozilla.dev.security.policy/XQd3rNF4yOo/bXYjt1mZAwAJ

>>>
>>> Thanks, it's an interesting thread, but as shown above, Windows does
>>> validate the EKU chain, but doesn't look to validate it for delegated OCSP
>>> signing certificates?
>>>
>>
>> The problem is providing the EKU as you're doing, which forces chain
>> validation of the EKU, as opposed to validating the OCSP response, which
>> does not.
>>
>> A more appropriate test is to install the test root R as a locally
>> trusted CA, issue an intermediate I (without the EKU/only
>> id-kp-serverAuth), issue an OCSP responder O (with the EKU), and issue a
>> leaf cert L. You can then validate the OCSP response from the responder
>> cert (that is, an OCSP response signed by the chain O-I-R) for the
>> certificate L-I-R.
>>
>


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-03 Thread Pedro Fuentes via dev-security-policy
> 
> Yes. But that doesn't mean we blindly trust the CA in doing so. And that's
> the "security risk". 

But the point then is that a delegated responder that had the required 
"noCheck" extension wouldn't be affected by this issue and CAs wouldn't need to 
react, and therefore the issue to solve is the "mis-issuance" itself due to the 
lack of the extension, not the fact that the CA certificate could be used to do 
delegated responses for the same operator of the Root, which is acceptable, as 
you said.

In fact the side effect is that delegated responders operated externally that 
have the required no check extension don't seem to be affected by the issue and 
would be deemed acceptable, without requiring further action to CAs, while the 
evident risk problem is still there.

> 
> I can understand that our views may differ: you may see 3P as "great risk"
> and 1p as "acceptable risk". However, from the view of a browser or a
> relying party, "1p" and "3p" are the same: they're both CAs. So the risk is
> the same, and the risk is unacceptable for both cases.

But this is not actually the case, because what is now required of CAs is to 
react appropriately to this incident, and you are imposing a single approach 
while the situations are fundamentally different. The implications of this 
issue are not the same for CAs that had 3P delegation (or a mix of 1P and 
3P) as for the ones, like us, that have no such delegation.

In our particular case, where we have three affected CAs, owned and operated by 
WISeKey, we are proposing this action plan, for which we request feedback:
1.- Monday, new CAs will be created with new keys, which will be used to 
replace the existing ones
2.- Monday, the existing CAs would be reissued with the same keys, removing the 
OCSP Signing EKU and with A REDUCED VALIDITY OF THREE MONTHS
3.- The existing CAs will be disabled for any new issuance, and will only be 
kept operational for signing CRLs and to handle revocation requests
4.- Within the 7 days period, the previous certificate of the CAs will be 
revoked, updating CCADB and OneCRL
5.- Once the re-issued certificates expire, we will destroy the keys and write 
the appropriate report

In my humble opinion, this plan is:
- Solving the BR compliance issue by revoking the offending certificate within 
the required period
- Further reducing the potential risk of hypothetical misuse of the keys by 
establishing a short lifetime

I hope this plan is acceptable.

Best,
Pedro


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-02 Thread Filippo Valsorda via dev-security-policy
2020-07-02 10:40 GMT-04:00 Ryan Sleevi via dev-security-policy:
> On Thu, Jul 2, 2020 at 10:34 AM Paul van Brouwershaven via
> dev-security-policy  wrote:
> 
> I did do some testing on EKU chaining in Go, but from my understanding this
> > works the same for Microsoft:
> >
> 
> Go has a bug https://twitter.com/FiloSottile/status/1278501854306095104

Yep. In fact, Go simply doesn't have an OCSP verifier. We should fix that! I 
filed an issue: https://golang.org/issues/40017 


The pieces are there (OCSP request serialization and response parsing, 
signature verification, a chain builder) but the logic stringing them together 
is not. That includes building the chain without requesting the EKU up the 
path, and then checking the EKU only on the Responder itself.
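Filippo's point about the chain-building step can be sketched with the standard library alone. The following is an illustrative sketch only (not the eventual Go verifier implementation): it reproduces Ryan's R/I/O setup from earlier in the thread and shows that requiring id-kp-OCSPSigning up the whole chain fails when the intermediate asserts only id-kp-serverAuth, so a delegated-responder check has to build the chain without the EKU constraint and then test the EKU on the Responder certificate itself.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// makeCert signs template with the parent cert/key and parses the result.
func makeCert(template, parent *x509.Certificate, pub *ecdsa.PublicKey, priv *ecdsa.PrivateKey) *x509.Certificate {
	der, err := x509.CreateCertificate(rand.Reader, template, parent, pub, priv)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	return cert
}

// buildDemo returns the error from verifying the responder chain while
// requiring id-kp-OCSPSigning nested up the path, whether the chain builds
// once the EKU constraint is lifted, and whether the responder itself
// asserts id-kp-OCSPSigning.
func buildDemo() (error, bool, bool) {
	rootKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	intKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	respKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	now := time.Now()

	rootTpl := &x509.Certificate{
		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "Root R"},
		NotBefore: now, NotAfter: now.Add(24 * time.Hour),
		IsCA: true, BasicConstraintsValid: true, KeyUsage: x509.KeyUsageCertSign,
	}
	root := makeCert(rootTpl, rootTpl, &rootKey.PublicKey, rootKey)

	// Intermediate I asserts only id-kp-serverAuth, as in the test setup above.
	intTpl := &x509.Certificate{
		SerialNumber: big.NewInt(2), Subject: pkix.Name{CommonName: "Intermediate I"},
		NotBefore: now, NotAfter: now.Add(24 * time.Hour),
		IsCA: true, BasicConstraintsValid: true, KeyUsage: x509.KeyUsageCertSign,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	inter := makeCert(intTpl, root, &intKey.PublicKey, rootKey)

	// Delegated responder O asserts only id-kp-OCSPSigning.
	respTpl := &x509.Certificate{
		SerialNumber: big.NewInt(3), Subject: pkix.Name{CommonName: "Responder O"},
		NotBefore: now, NotAfter: now.Add(24 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageOCSPSigning},
	}
	resp := makeCert(respTpl, inter, &respKey.PublicKey, intKey)

	roots, inters := x509.NewCertPool(), x509.NewCertPool()
	roots.AddCert(root)
	inters.AddCert(inter)

	// Requiring the EKU nested up the chain fails: I doesn't assert OCSPSigning.
	_, nestedErr := resp.Verify(x509.VerifyOptions{
		Roots: roots, Intermediates: inters,
		KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageOCSPSigning},
	})

	// Building the chain without the EKU constraint succeeds; the EKU is
	// then checked on the responder certificate alone.
	chains, anyErr := resp.Verify(x509.VerifyOptions{
		Roots: roots, Intermediates: inters,
		KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageAny},
	})
	hasEKU := false
	for _, u := range resp.ExtKeyUsage {
		if u == x509.ExtKeyUsageOCSPSigning {
			hasEKU = true
		}
	}
	return nestedErr, anyErr == nil && len(chains) > 0, hasEKU
}

func main() {
	nestedErr, anyOK, hasEKU := buildDemo()
	fmt.Println("nested EKU verify failed:", nestedErr != nil)
	fmt.Println("chain builds without EKU constraint:", anyOK)
	fmt.Println("responder asserts id-kp-OCSPSigning:", hasEKU)
}
```

This is exactly the special-casing Filippo describes: the EKU cannot be treated "like any other, nested up the chain" without making every intermediate a Responder itself.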

It's unfortunate that the Mozilla requirement (that the Responder must be an 
EE) is not standard, because that would have allowed the OCSP EKU to work like 
any other, nested up the chain, but that's just not how it works and it's too 
late to change, so it has to be special-cased out of the chain nesting 
requirement, or it wouldn't be possible to mint an Intermediate that can in 
turn mint Responders, without making the Intermediate a Responder itself.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-02 Thread Ryan Sleevi via dev-security-policy
Thanks Tim.

It’s deeply reassuring to see DigiCert tackling this problem responsibly
and head-on.

And thank you for particularly calling attention to the fact that blindly
adding id-pkix-ocsp-nocheck to these ICAs introduces worse security
problems. This is why RFC 6960 warns so specifically on this.

What does a robust design look like?
- Omit the EKU for ICAs. You can work around the ADCS issue using Sectigo’s
guidance.
- For your actual delegated responders, omitting OCSP URLs can help “some”
clients, but not all. A sensible minimum profile is:
  - basicConstraints:CA=FALSE
  - extKeyUsage=id-kp-OCSPSigning (and ONLY that)
  - validity period of 90 days or less (30?)
  - id-pkix-ocsp-nocheck

basicConstraints is to guarantee it works with Firefox. EKU so it’s a
delegated responder (and only that). Short lived because nocheck means it’s
high risk.

Invariably, any profile (e.g. in the CABForum) would also need to ensure
that these keys are protected to the same assurance level as CA keys,
because of the similar function they pose. I had previously proposed both
the lifetime and protection requirements in CABF, but met significant
opposition. This still lives in
https://github.com/sleevi/cabforum-docs/pull/2/files , although several of
these changes have found their way in through other ballots, such as SC31
in the SCWG if the CABF.

On Thu, Jul 2, 2020 at 6:37 PM Tim Hollebeek 
wrote:

> So, from our perspective, the security implications are the most important
> here.
> We understand them, and even in the absence of any compliance obligations
> they
> would constitute an unacceptable risk to trustworthiness of our OCSP
> responses,
> so we have already begun the process of replacing the ICAs we are
> responsible for.
> There are already several key ceremonies scheduled and they will continue
> through
> the holiday weekend.  We're prioritizing the ICAs that are under the
> control of third
> parties and/or outside our primary data centers, as they pose the most
> risk.  We are
> actively working to mitigate internal ICAs as well.  Expect to see
> revocations start
> happening within the next day or two.
>
> I understand the attraction of using a BR compliance issue to attract
> attention to
> this issue, but honestly, that shouldn't be necessary.  The BRs don't
> really adequately
> address the risks of the OCSPSigning EKU, and there's certainly lots of
> room for
> improvement there.  I think, especially in the short term, it is more
> important to
> focus on how to mitigate the security risks and remove the inappropriate
> EKU from
> the affected ICAs.  We can fix the BRs later.
>
> It's also important to note that, much like SHA-1, this issue doesn't
> respect the
> normal assumptions about certificate hierarchies.  Non-TLS ICAs can have a
> significant
> impact on their TLS-enabled siblings.  This means that CA review needs to
> extend
> beyond the certificates that would traditionally be in scope for the BRs.
>
> I would also caution CAs to carefully analyze the implications before
> blindly adding the
> pkix-ocsp-nocheck extension to their ICAs.  That might fix the compliance
> issue,
> but in the grand scheme of things probably makes the problem worse, as ICAs
> have fairly long lifetimes, and doing so effectively makes the inadvertent
> delegated
> responder certificate unrevokable.  So while the compliance problems might
> be
> fixed, it makes resolving the security issues much more challenging.
>
> -Tim
>
> > -Original Message-
> > From: dev-security-policy  >
> > On Behalf Of Ryan Sleevi via dev-security-policy
> > Sent: Thursday, July 2, 2020 12:31 AM
> > To: Peter Gutmann 
> > Cc: r...@sleevi.com; Mozilla  > pol...@lists.mozilla.org>
> > Subject: Re: SECURITY RELEVANT FOR CAs: The curious case of the
> > Dangerous Delegated Responder Cert
> >
> > On Wed, Jul 1, 2020 at 11:48 PM Peter Gutmann
> > 
> > wrote:
> >
> > > Ryan Sleevi via dev-security-policy
> > > 
> > > writes:
> > >
> > > >Section 4.9.9 of the BRs requires that OCSP Delegated Responders MUST
> > > include
> > > >an id-pkix-ocsp-nocheck extension. RFC 6960 defines an OCSP Delegated
> > > >Responder within Section 4.2.2.2 as indicated by the presence of the
> > > id-kp-
> > > >OCSPSigning as an EKU.
> > >
> > > Unless I've misread your message, the problem isn't the presence or
> > > not of a nocheck extension but the invalid presence of an OCSP EKU:
> > >
> > > >I've flagged this as a SECURITY matter [...] the Issuing CA has
> > > >delegated
> > > the
> > &

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-02 Thread Ryan Sleevi via dev-security-policy
On Thu, Jul 2, 2020 at 7:13 PM Ben Wilson  wrote:

> We are concerned that revoking these impacted intermediate certificates
> within 7 days could cause more damage to the ecosystem than is warranted
> for this particular problem. Therefore, Mozilla does not plan to hold CAs
> to the BR requirement to revoke these certificates within 7 days. However,
> an additional Incident Report for delayed revocation will still be
> required, as per our documented process[2].  We want to work with CAs to
> identify a path forward, which includes determining a reasonable timeline
> and approach to replacing the certificates that incorrectly have the
> id-kp-OCSPSigning EKU (and performing key destruction for them).
>

I'm not sure I understand this. The measurement is "damage to the
ecosystem", but the justification is "Firefox is protected, even though
many others are not" (e.g. OpenSSL-derived systems, AFAICT), because
AFAICT, Firefox does a non-standard (but quite reasonable) thing.

I can totally appreciate the answer "The risk to Mozilla is low", but this
response seems... different? It also seems to place CAs that adhere to
4.9.1.2, because they designed their systems robustly, at greater
disadvantage from those that did not, and seems like it only encourages the
problem to get worse over time, not better. Regardless, I do hope that any
delay for revocation is not treated as a "mitigate the EKU incident", but
rather more specifically, "what is your plan to ensure every Sub-CA is able
to be revoked as required by 4.9.1.2", which almost invariably means
automating certificate issuance and regularly rotating intermediates. If we
continue to allow CAs to place Mozilla, or broadly, browsers, as somehow
responsible for the consequences of the CA's design decisions, things will
only get worse.

Setting aside the security risk factors, which understandably for Mozilla
are seen as low, at its core, this is a design issue for any CA that can't
or doesn't meet the obligations they warranted Mozilla, and the broader
community, that they would meet. Getting to a path where this design issue,
this lack of agility, is remediated is essential, not just in the "oh no,
what if the key is compromised" risk, but within the broader "how do we
have an agile ecosystem?" Weak entropy with serial numbers "should" have
been the wake-up call on investing in this.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-02 Thread Ryan Sleevi via dev-security-policy
On Thu, Jul 2, 2020 at 6:42 PM Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Does the operator of a root and its hierarchy have the right to delegate
> OCSP responses to its own responders?
>
> If your answer is “No”, then I don’t have anything else to say, but if
> your answer is “Yes”, then I still have a hard time seeing the
> security risk derived from this issue.
>

Yes. But that doesn't mean we blindly trust the CA in doing so. And that's
the "security risk".

I totally appreciate that your argument is "but we wouldn't misuse the
key". The "risk" that I'm talking about is how can anyone, but the CA, know
that's true? All of the compliance obligations assume certain facts when
the CA is operating a responder. This issue violates those assumptions, and
so it violates the controls, and so we don't have any way to be confident
that the key is not misused.

I think the confusion may be from the overloading of the word "risk". Here,
I'm talking about "the possibility of something bad happening". We don't
have any proof any 3P Sub-CAs have mis-signed OCSP responses: but we seem
to agree that there's risk of that happening. It seems we disagree on
whether there is risk of the CA themselves doing it. I can understand the
view that says "Of course the CA wouldn't", and my response is that the
risk is still the same: there's no way to know, and it's still a
possibility.

I can understand that our views may differ: you may see 3P as "great risk"
and 1p as "acceptable risk". However, from the view of a browser or a
relying party, "1p" and "3p" are the same: they're both CAs. So the risk is
the same, and the risk is unacceptable for both cases.


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-02 Thread Ben Wilson via dev-security-policy
All,


Thank you to Ryan for identifying this problem, and to all of you who are
earnestly investigating what this problem means and the impact to your CA
hierarchies. Mozilla::pkix requires that an OCSP responder certificate be
an end entity certificate, so we believe that Firefox and Thunderbird are
not impacted by this problem. Historically, as per
https://bugzilla.mozilla.org/show_bug.cgi?id=991209#c10, Mozilla has
allowed CA certificates to have the OCSP signing EKU because some CAs
reported that some Microsoft server software required CA certificates to
have the id-kp-OCSPSigning EKU.

The comments in the code[1] say

// When validating anything other than an delegated OCSP signing cert,
// reject any cert that also claims to be an OCSP responder, because such
// a cert does not make sense. For example, if an SSL certificate were to
// assert id-kp-OCSPSigning then it could sign OCSP responses for itself,
// if not for this check.
// That said, we accept CA certificates with id-kp-OCSPSigning because
// some CAs in Mozilla's CA program have issued such intermediate
// certificates, and because some CAs have reported some Microsoft server
// software wrongly requires CA certificates to have id-kp-OCSPSigning.
// Allowing this exception does not cause any security issues because we
// require delegated OCSP response signing certificates to be end-entity
// certificates.

Additionally, as you all know, Firefox uses OneCRL for checking the
revocation status of intermediate certificates, so as long as the revoked
intermediate certificate is in OneCRL, the third-party would not be able to
“unrevoke” their certificate (for Firefox). Therefore, Mozilla does not
need the certificates that incorrectly have the id-kp-OCSPSigning EKU to be
revoked within the next 7 days, as per section 4.9.1.2 of the BRs.

However, as Ryan has pointed out in this thread, others may still have risk
because they may not have a OneCRL equivalent, or they may have certificate
verification implementations that behave differently than mozilla::pkix in
regards to processing OCSP responder certificates. Therefore, it is
important to identify a path forward to resolve the security risk that this
problem causes to the ecosystem.

We are concerned that revoking these impacted intermediate certificates
within 7 days could cause more damage to the ecosystem than is warranted
for this particular problem. Therefore, Mozilla does not plan to hold CAs
to the BR requirement to revoke these certificates within 7 days. However,
an additional Incident Report for delayed revocation will still be
required, as per our documented process[2].  We want to work with CAs to
identify a path forward, which includes determining a reasonable timeline
and approach to replacing the certificates that incorrectly have the
id-kp-OCSPSigning EKU (and performing key destruction for them).

Therefore, we are looking forward to your continued input in this
discussion about the proper response for CAs to take to resolve the
security risks caused by this problem, and ensure that this problem is not
repeated in future certificates.  We also look forward to your suggestions
on how we can improve OCSP responder requirements in Mozilla’s Root Store
Policy, and to your continued involvement in the CA/Browser Forum to
improve the BRs.

Thanks,


Ben

[1]
https://dxr.mozilla.org/mozilla-central/rev/c68fe15a81fc2dc9fc5765f3be2573519c09b6c1/security/nss/lib/mozpkix/lib/pkixcheck.cpp#858-869

[2] https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation


On Wed, Jul 1, 2020 at 3:06 PM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I've created a new batch of certificates that violate 4.9.9 of the BRs,
> which was introduced with the first version of the Baseline Requirements as
> a MUST. This is https://misissued.com/batch/138/
>
> A quick inspection among the affected CAs include O fields of: QuoVadis,
> GlobalSign, Digicert, HARICA, Certinomis, AS Sertifitseeimiskeskus,
> Actalis, Atos, AC Camerfirma, SECOM, T-Systems, WISeKey, SCEE, and CNNIC.
>
> Section 4.9.9 of the BRs requires that OCSP Delegated Responders MUST
> include an id-pkix-ocsp-nocheck extension. RFC 6960 defines an OCSP
> Delegated Responder within Section 4.2.2.2 as indicated by the presence of
> the id-kp-OCSPSigning as an EKU.
>
> These certificates lack the necessary extension, and as such, violate the
> BRs. As the vast majority of these were issued on-or-after 2013-02-01, the
> Effective Date of Mozilla Root CA Policy v2.1, these are misissued. You
> could also consider the effective date as 2013-05-15, described later in
> [1] , without changing the results.
>
> This batch is NOT comprehensive. According to crt.sh, there are
> approximately 293 certificates that meet the criteria of "issued by a
> Mozilla-trusted, TLS-capable CA, with the OCSPSigning EKU, and without
> pkix-nocheck". misissued.com had some issues with parsing some of these
> certifi
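The misissuance condition described above — id-kp-OCSPSigning EKU present, id-pkix-ocsp-nocheck absent — is mechanically checkable on any parsed certificate. A hedged sketch follows (function names are ours, not crt.sh's or misissued.com's actual tooling), including a self-signed test certificate shaped like the problematic ICAs:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/asn1"
	"fmt"
	"math/big"
	"time"
)

var oidOCSPNoCheck = asn1.ObjectIdentifier{1, 3, 6, 1, 5, 5, 7, 48, 1, 5}

// violatesBR499 reports whether a certificate matches the condition in the
// message above: it asserts id-kp-OCSPSigning (making it an OCSP Delegated
// Responder per RFC 6960 section 4.2.2.2) but lacks id-pkix-ocsp-nocheck,
// which BR section 4.9.9 requires such responders to include.
func violatesBR499(c *x509.Certificate) bool {
	isResponder := false
	for _, eku := range c.ExtKeyUsage {
		if eku == x509.ExtKeyUsageOCSPSigning {
			isResponder = true
		}
	}
	if !isResponder {
		return false
	}
	for _, ext := range c.Extensions {
		if ext.Id.Equal(oidOCSPNoCheck) {
			return false
		}
	}
	return true
}

// buildBadICA creates a self-signed cert shaped like the affected ICAs:
// CA:TRUE with the id-kp-OCSPSigning EKU and no nocheck extension.
func buildBadICA() *x509.Certificate {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	now := time.Now()
	tpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "Dangerous Delegated Responder ICA"},
		NotBefore:    now, NotAfter: now.Add(24 * time.Hour),
		IsCA: true, BasicConstraintsValid: true,
		KeyUsage:    x509.KeyUsageCertSign,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageOCSPSigning},
	}
	der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	return cert
}

func main() {
	fmt.Println("violates BR 4.9.9:", violatesBR499(buildBadICA())) // prints: violates BR 4.9.9: true
}
```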

Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-02 Thread Pedro Fuentes via dev-security-policy
Hello Ryan,
I fully understand your line of argument, but I still have a question 
for you:

Does the operator of a root and its hierarchy have the right to delegate OCSP 
responses to its own responders?

If your answer is “No”, then I don’t have anything else to say, but if your 
answer is “Yes”, then I still have a hard time seeing the security risk 
derived from this issue. 

Thanks.


RE: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-02 Thread Tim Hollebeek via dev-security-policy
So, from our perspective, the security implications are the most important here.
We understand them, and even in the absence of any compliance obligations they
would constitute an unacceptable risk to trustworthiness of our OCSP responses,
so we have already begun the process of replacing the ICAs we are responsible 
for.
There are already several key ceremonies scheduled and they will continue 
through
the holiday weekend.  We're prioritizing the ICAs that are under the control of 
third 
parties and/or outside our primary data centers, as they pose the most risk.  
We are
actively working to mitigate internal ICAs as well.  Expect to see revocations 
start
happening within the next day or two.

I understand the attraction of using a BR compliance issue to attract attention 
to
this issue, but honestly, that shouldn't be necessary.  The BRs don't really 
adequately
address the risks of the OCSPSigning EKU, and there's certainly lots of room for
improvement there.  I think, especially in the short term, it is more important 
to
focus on how to mitigate the security risks and remove the inappropriate EKU 
from
the affected ICAs.  We can fix the BRs later.

It's also important to note that, much like SHA-1, this issue doesn't respect 
the 
normal assumptions about certificate hierarchies.  Non-TLS ICAs can have a 
significant
impact on their TLS-enabled siblings.  This means that CA review needs to extend
beyond the certificates that would traditionally be in scope for the BRs.

I would also caution CAs to carefully analyze the implications before blindly 
adding the
pkix-ocsp-nocheck extension to their ICAs.  That might fix the compliance issue,
but in the grand scheme of things probably makes the problem worse, as ICAs
have fairly long lifetimes, and doing so effectively makes the inadvertent 
delegated
responder certificate unrevokable.  So while the compliance problems might be
fixed, it makes resolving the security issues much more challenging.

-Tim

> -Original Message-
> From: dev-security-policy 
> On Behalf Of Ryan Sleevi via dev-security-policy
> Sent: Thursday, July 2, 2020 12:31 AM
> To: Peter Gutmann 
> Cc: r...@sleevi.com; Mozilla  pol...@lists.mozilla.org>
> Subject: Re: SECURITY RELEVANT FOR CAs: The curious case of the
> Dangerous Delegated Responder Cert
> 
> On Wed, Jul 1, 2020 at 11:48 PM Peter Gutmann
> 
> wrote:
> 
> > Ryan Sleevi via dev-security-policy
> > 
> > writes:
> >
> > >Section 4.9.9 of the BRs requires that OCSP Delegated Responders MUST
> > include
> > >an id-pkix-ocsp-nocheck extension. RFC 6960 defines an OCSP Delegated
> > >Responder within Section 4.2.2.2 as indicated by the presence of the
> > id-kp-
> > >OCSPSigning as an EKU.
> >
> > Unless I've misread your message, the problem isn't the presence or
> > not of a nocheck extension but the invalid presence of an OCSP EKU:
> >
> > >I've flagged this as a SECURITY matter [...] the Issuing CA has
> > >delegated
> > the
> > >ability to mint arbitrary OCSP responses to this third-party
> >
> > So the problem would be the presence of the OCSP EKU when it shouldn't
> > be there, not the absence of the nocheck extension.
> 
> 
> Not quite. It’s both.
> 
> The BR violation is caused by the lack of the extension.
> 
> The security issue is caused by the presence of the EKU.
> 
> However, since some CAs only view things through the lens of BR/program
> violations, despite the sizable security risk they pose, the compliance 
> incident
> is what is tracked. The fact that it’s security relevant is provided so that 
> CAs
> understand that revocation is necessary, and that it’s also not sufficient,
> because of how dangerous the issue is.




Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-02 Thread Ryan Sleevi via dev-security-policy
On Thu, Jul 2, 2020 at 6:05 PM Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I understand your rationale, but my point is that this is happening in the
> same infrastructure where the whole PKI is operated, and under the
> responsibility of the same operator of the Root. In my understanding the
> operator of the Root has full rights to do delegated OCSP responses if
> those responses are produced by its own OCSP responders.
>
> I'm failing to see what is the main problem you don't consider solved. As
> per your own dissertations in the related posts, there are two issues:
>
> 1. The certificate contains incorrect extensions, so it's a misissuance.
> This is solved by revoking the certificate, and this is done not only
> internally in the PKI, but also in OneCRL.
>

This solves *nothing* for anyone not using OneCRL. It doesn't meet the
obligations the CA warranted within its CP/CPS. It doesn't meet the BRs. It
simply "shifts" risk onto everyone else in the ecosystem, and that's a
grossly negligent and irresponsible thing to do.

"Revoking the certificate" is the minimum bar, which is already a promise
the CA made, to everyone who decides to trust that CA, that they will do
within 7 days. But it doesn't mitigate the risk.


> 2. The operator of the SubCA could produce improper revocation responses,
> so this is a security risk. This risk is already difficult to find if the
> operator of the subCA is the same operator of the Root... If such an entity
> wants to do wrong, there are far easier ways to do it than overcomplicated
> things like unrevoking its own subCA...
>
> Sorry, but I don't see the likelihood of the risks you evoke... I see the
> potential risk in externally operated CAs, but not here.


The risk is just the same! As a CA, I can understand you would say "Surely,
we would never do anything nefarious", but as a member of the community,
why should we trust what you say? Why would the risk be any different with
externally operated CAs? After all, they're audited to, like roots,
shouldn't the risk be the same? Of course you'd realize that no, they're
not the same, because the CA has no way of truly knowing the sub-CA is
being nefarious. The same is true for the Browser trusting the root: it has
no way of knowing you're not being nefarious.

Look, we've had Root CAs that have actively lied in this Forum,
misrepresenting things to the community they later admitted they knew were
false, and had previously been CAs otherwise in good standing (or at
least, in no worse standing than other CAs). A CA is a CA, and the risk is
treated the same.

The line of argument being pursued here is a bit like saying "If no one
abuses this, what's the harm?" I've already shown how any attempt to
actually verify it's not abused ends up just shifting whatever cost onto
Relying Parties, when it's the CA and the Subscribers that should bear the
cost, because it's the CA that screwed up. I simply can't see how "just
trust us" is better than objective verification, especially when "just
trust us" got us into this mess in the first place. How would you provide
assurances to the community that this won't be abused? And how is the cost
for the community, in risk, better?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-02 Thread Pedro Fuentes via dev-security-policy
I understand your rationale, but my point is that this is happening in the same 
infrastructure where the whole PKI is operated, and under the responsibility of 
the same operator of the Root. In my understanding the operator of the Root has 
full rights to do delegated OCSP responses if those responses are produced by 
its own OCSP responders.

I'm failing to see what main problem you consider unsolved. As per 
your own explanations in the related posts, there are two issues:

1. The certificate contains incorrect extensions, so it's a misissuance. This 
is solved by revoking the certificate, and this is done not only internally in 
the PKI, but also in OneCRL.

2. The operator of the SubCA could produce improper revocation responses, so 
this is a security risk. This risk is already hard to see when the operator 
of the subCA is the same operator of the Root... If such an entity wanted to do 
wrong, there are far easier ways to do it than overcomplicated things 
like unrevoking its own subCA...

Sorry, but I don't see the likelihood of the risks you evoke... I see the 
potential risk in externally operated CAs, but not here.


On Thursday, July 2, 2020, at 23:33:05 (UTC+2), Ryan Sleevi wrote:
> On Thu, Jul 2, 2020 at 5:30 PM Pedro Fuentes via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
> > Hello Ryan,
> > Thanks for your detailed response.
> >
> > Just to be sure that we are in the same page. My question was about
> > reissuing a new CA using the same key pair, but this implies also the
> > revocation of the previous version of the certificate.
> >
> 
> Right, but this doesn't do anything, because the previous key pair can be
> used to sign an OCSP response that unrevokes itself.
> 
> This is the problem and why "key destruction" is the best of the
> alternatives (that I discussed) for ensuring that this doesn't happen,
> because it doesn't shift the cost to other participants.



Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-02 Thread Ryan Sleevi via dev-security-policy
On Thu, Jul 2, 2020 at 5:30 PM Pedro Fuentes via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hello Ryan,
> Thanks for your detailed response.
>
> Just to be sure that we are in the same page. My question was about
> reissuing a new CA using the same key pair, but this implies also the
> revocation of the previous version of the certificate.
>

Right, but this doesn't do anything, because the previous key pair can be
used to sign an OCSP response that unrevokes itself.

This is the problem and why "key destruction" is the best of the
alternatives (that I discussed) for ensuring that this doesn't happen,
because it doesn't shift the cost to other participants.
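To make the "unrevokes itself" point concrete, here is a sketch using the Python `cryptography` package of how a still-existing key, bound to a certificate that asserts the OCSPSigning EKU, can sign a GOOD response covering its own serial number. The hierarchy, names, and helper function are entirely hypothetical; the point is only that nothing in the OCSP signing operation checks whether the responder's own certificate was revoked:

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509 import ocsp
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID


def make_cert(subject, issuer, subject_key, signing_key, ca, eku=None):
    """Build a demo certificate; eku is an optional list of EKU OIDs."""
    now = datetime.datetime(2020, 7, 1)
    b = (x509.CertificateBuilder()
         .subject_name(subject)
         .issuer_name(issuer)
         .public_key(subject_key.public_key())
         .serial_number(x509.random_serial_number())
         .not_valid_before(now)
         .not_valid_after(now + datetime.timedelta(days=365))
         .add_extension(x509.BasicConstraints(ca=ca, path_length=None),
                        critical=True))
    if eku:
        b = b.add_extension(x509.ExtendedKeyUsage(eku), critical=False)
    return b.sign(signing_key, hashes.SHA256())


root_key = ec.generate_private_key(ec.SECP256R1())
sub_key = ec.generate_private_key(ec.SECP256R1())
root_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Demo Root")])
sub_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Demo Sub CA")])

root_cert = make_cert(root_name, root_name, root_key, root_key, ca=True)
# The misissued sub-CA: a CA certificate that is also a delegated responder.
sub_cert = make_cert(sub_name, root_name, sub_key, root_key, ca=True,
                     eku=[ExtendedKeyUsageOID.OCSP_SIGNING])

# Even after the CA revokes sub_cert in its CRL/OCSP, the sub-CA key can
# still sign a response asserting that its own certificate is GOOD.
now = datetime.datetime(2020, 7, 2)
response = (
    ocsp.OCSPResponseBuilder()
    .add_response(cert=sub_cert, issuer=root_cert, algorithm=hashes.SHA1(),
                  cert_status=ocsp.OCSPCertStatus.GOOD,
                  this_update=now,
                  next_update=now + datetime.timedelta(days=7),
                  revocation_time=None, revocation_reason=None)
    .responder_id(ocsp.OCSPResponderEncoding.HASH, sub_cert)
    .sign(sub_key, hashes.SHA256())
)
print(response.certificate_status)
```

A relying party that honors this delegated response has no way to distinguish it from a legitimate one, which is why key destruction, rather than revocation alone, is the mitigation being discussed.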


  1   2   >