Re: AIA CA Issuers URL gives 403 (Microsoft)

2020-05-13 Thread Ryan Sleevi via dev-security-policy
On Wed, May 13, 2020 at 9:00 PM Matt Palmer via dev-security-policy
 wrote:
> On the contrary, unless there's an override of RFC5280 4.2.2.1 in the BRs
> that I can't find, the requirement of universal access does exist.  RFC5280
> 4.2.2.1 says, in relevant part:
>
> "Where the information is available via HTTP or FTP, accessLocation MUST be
> a uniformResourceIdentifier and the URI MUST point to either a single DER
> encoded certificate [or a "certs-only" CMS]"
>
> A CA is permitted to carefully weigh "the full implications" before deciding
> to not abide by the SHOULD in BRs 7.1.2.3(c), but if they choose to put
> caIssuers into AIA, it is an "absolute requirement" that the URI provide a
> DER-encoded certificate (or a "certs-only" CMS).  A URI that returns a 403
> is not a URI that "point[s] to [...] a single DER encoded certificate".
>
> That sounds to me an *awful* lot like a requirement that the URL be
> accessible and return a DER-encoded certificate (and, by construction, "that
> they’re prohibited from including URLs [I] can’t access").

I'm afraid this grossly misrepresents what URLs are, and RFC 5280. A
URL describes a resource at a location, it does not describe the
potential responses for retrieving that URL. RFC 5280 places a
requirement on the content of the resource at the location, but that
does not mean or constrain the possible responses for that resource.
While it's true a 404 indicates that no such resource exists, and thus
/is/ a violation, a 403 indicating that the resource is forbidden from
access is not, in and of itself, a violation, no different than
responding 301 or 302, or 300 indicating multiple resources.

Put differently, the language you're citing is a statement about the
content of the URL when that resource is *successfully* dereferenced,
without an obligation upon that URL to be successfully dereferenceable
by the client.
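The distinction above, between the content at a location and the responses a dereference may produce, can be sketched as a status-code triage. This is a minimal illustration of the argument being made, not of any normative text:

```python
# Sketch: triage of HTTP responses when dereferencing an AIA caIssuers URL,
# following the distinction drawn above (illustrative, not normative).

def classify_aia_response(status: int) -> str:
    """Map an HTTP status to the argument's reading of RFC 5280 4.2.2.1."""
    if status == 404:
        # No such resource exists at the location: the URI cannot be said
        # to point to a DER-encoded certificate.
        return "violation"
    if status in (300, 301, 302, 403):
        # Redirects, multiple choices, or forbidden: the resource exists,
        # but this dereference did not yield it. Not, by itself, a
        # violation under the reading above.
        return "not-a-violation"
    if status == 200:
        # Successful dereference: now the *content* requirement applies
        # (single DER certificate or "certs-only" CMS).
        return "check-content"
    return "unspecified"

for code in (200, 301, 403, 404):
    print(code, classify_aia_response(code))
```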

The canonical example is internally-operated CAs: it's perfectly
fine to put an intranet URL in. In fact, every local CA will do that
(e.g. Active Directory Certificate Services does so by default).
That's the interoperability that RFC 5280 permits, and you can't simply
say those certificates don't conform with the profile.

So no, it's *nothing* like a requirement that the URL be accessible, and
that's why I want to push to see if there's a more compelling
argument here. Right now it just seems to be an emotional response based
on personal opinion, with not much to support that opinion, and that
seems unfortunate.

> > > 2. wget is a legitimate tool for downloading files, thus blocking the
> > > wget user agent is denying legitimate users access to the resource.
> >
> > This seems to be saying that there can be zero negative side-effects,
> > regardless of the abuse. I don’t find this compelling either.
>
> You appear to be trying to extract a general rule from a specific
> statement.

While cute, your own sentence below shows that you were making a
generalized statement:
1. You cannot deny legitimate users access to the resource.
2. Legitimate users use legitimate tools.
3. wget is a legitimate tool that legitimate users use.
Ergo ("thus"):
4. You cannot block wget from accessing the resource.

I'm pointing out the flaw here: the assumption that wget is a
tool only legitimate users use is a statement without evidence (and
"obviously" false). For the above to hold despite this, it must be
saying that "you cannot deny legitimate users access to the resource"
is a higher principle and priority than "wget is a tool sometimes used
by illegitimate users", and that's a problematic statement to make.
Which you double down on, below.

> I stated that the apparently-permanent blocking of a general purpose UA from
> being able to access a caIssuers URL was unacceptable.  I gave several
> reasons why, in my professional experience dealing with Internet abuse,
> blocking a general purpose UA has both noticeable impact on legitimate
> users, and a very limited impact on attackers.

On its face, the argument remains problematic, because 'legitimate'
tools can be, and are, used for problematic purposes, and suggesting that
some tools ("legitimate ones") cannot be blocked (because doing so denies
"legitimate users access to the resource") instantly creates and
incentivizes illegitimate tools to disguise themselves as legitimate
tools.

You haven't really addressed this. Yes, I don't disagree that blocking
by UA has side-effects, but in the above argument, you're again
suggesting that there's some rule about when such side-effects
are acceptable and not acceptable, without actually spelling out what
those rules are. That's why I pushed in my previous mail, and why I'm
still pushing to see if there's something being overlooked. Below, you
set an entirely unreasonable standard, by suggesting that if, taken as
written, at least two users are affected, the CA must show that
everything possible was done to prevent that. I don't find that
compelling, and certainly not compelling enough to suggest 

Re: AIA CA Issuers URL gives 403 (Microsoft)

2020-05-13 Thread Matt Palmer via dev-security-policy
On Wed, May 13, 2020 at 08:28:03AM -0400, Ryan Sleevi wrote:
> On Tue, May 12, 2020 at 11:47 PM Matt Palmer via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> > 1. As Hanno said, it's a public resource, and as such it should, in
> > general, be available to the public.
> 
> This is worded as a statement of fact, but it’s really an opinion, right?

It's a statement of fact, in my opinion.  

> You might think I’m nitpicking, but this is actually extremely relevant
> and meaningful. The requirements in 7.1.2 are only a SHOULD level, and do
> not currently specify access requirements. Your position seems to be that
> they’re better by omitting AIA than including a URL you can’t access, or
> that they’re prohibited from including URLs you can’t access, and neither
> of those requirements actually exist.

On the contrary, unless there's an override of RFC5280 4.2.2.1 in the BRs
that I can't find, the requirement of universal access does exist.  RFC5280
4.2.2.1 says, in relevant part:

"Where the information is available via HTTP or FTP, accessLocation MUST be
a uniformResourceIdentifier and the URI MUST point to either a single DER
encoded certificate [or a "certs-only" CMS]"

A CA is permitted to carefully weigh "the full implications" before deciding
to not abide by the SHOULD in BRs 7.1.2.3(c), but if they choose to put
caIssuers into AIA, it is an "absolute requirement" that the URI provide a
DER-encoded certificate (or a "certs-only" CMS).  A URI that returns a 403
is not a URI that "point[s] to [...] a single DER encoded certificate".

That sounds to me an *awful* lot like a requirement that the URL be
accessible and return a DER-encoded certificate (and, by construction, "that
they’re prohibited from including URLs [I] can’t access").
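When the dereference does succeed, the content requirement quoted above can be checked mechanically. A minimal sketch, using the fact that a DER certificate is an ASN.1 SEQUENCE whose first tag byte is 0x30, while a PEM body starts with base64 armor; this is a heuristic, not a full parser:

```python
# Sketch: does a fetched caIssuers body plausibly satisfy the "single DER
# encoded certificate" wording quoted above? Heuristic only.

def looks_like_der_certificate(body: bytes) -> bool:
    # DER certificates begin with the SEQUENCE tag byte, 0x30.
    return len(body) > 2 and body[0] == 0x30

def looks_like_pem_certificate(body: bytes) -> bool:
    # PEM bodies begin with the base64 armor line instead.
    return body.lstrip().startswith(b"-----BEGIN CERTIFICATE-----")

pem = b"-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
der = bytes([0x30, 0x82, 0x01, 0x0A]) + b"\x00" * 10  # truncated stand-in

print(looks_like_der_certificate(der))   # True
print(looks_like_pem_certificate(pem))   # True
print(looks_like_der_certificate(pem))   # False
```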

Of course, one could argue that blocking attackers violates this.  If you
block an attacker, but nobody else hears, does it make a violation?  I guess
we'll find out when a botnet operator complains to m.d.s.p that they got
blocked.

> > 2. wget is a legitimate tool for downloading files, thus blocking the
> > wget user agent is denying legitimate users access to the resource.
> 
> This seems to be saying that there can be zero negative side-effects,
> regardless of the abuse. I don’t find this compelling either.

You appear to be trying to extract a general rule from a specific
statement.

I stated that the apparently-permanent blocking of a general purpose UA from
being able to access a caIssuers URL was unacceptable.  I gave several
reasons why, in my professional experience dealing with Internet abuse,
blocking a general purpose UA has both noticeable impact on legitimate
users, and a very limited impact on attackers.

If you'd like a more general rule to go by, here's my take: if a CA's method
of blocking abuse negatively impacts legitimate users, the CA has an
obligation to demonstrate that no other method of blocking abuse, one with
less negative impact on legitimate users, is capable of reasonably dealing
with the threat.

I use the present tense in the preceding paragraph, but it's past tense in
this particular case -- as far as I can discern, the UA blocking has been
removed from the URLs listed in the OP.

> If we say it’s ok to not be accessible to any, because it’s not present,
> where’s the harm in not being accessible to some, when it is?

Conversely, if there's no harm in it not being available to some, where's
the harm in it not being available to any?

- Matt

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Mozilla's Expectations for OCSP Incident Reporting

2020-05-13 Thread Ryan Sleevi via dev-security-policy
On Wed, May 13, 2020 at 12:12 AM Peter Gutmann 
wrote:

> Ryan Sleevi  writes:
>
> >>Following up on this, would it be correct to assume that, since no-one
> has
> >>pointed out any impact that this had on anything, that it's more a
> >>certificational issue than anything with real-world consequences?
> >
> >That seems quite a suppositional leap, don't you think?
>
> It's been more than two weeks since the issue was first reported, if
> no-one's
> been able to identify any actual impact in that time - compare this to say
> certificate-induced outages which make the front page of half the tech news
> sites on the planet when they occur - then it seems reasonable to assume
> that
> the impact is minimal if not nonexistent.
>
> In any case I was inviting people to provide information on whether there's
> been any adverse effect in order to try and gauge the magnitude, or lack
> thereof, of this event.


I would hardly say it’s reasonable to conclude whatever you want simply
because no one has personally engaged with you for two days, and worse,
that others support that view. I appreciate it as a rhetorical technique to
try and force a reply, which it obviously did, but only to point out how
deeply flawed the argument is, at least in a venue where replying to you is
optional.

A better approach would be to examine what clients would have been
affected, and how, then how to quantify and measure that, as well as what
the impact of that effect would be. Positing that, because you didn’t see
a news story and didn’t get any replies to your email on a relatively
obscure newsgroup, there was no impact or harm ... isn’t that 


Re: AIA CA Issuers URL gives 403 (Microsoft)

2020-05-13 Thread Ryan Sleevi via dev-security-policy
On Tue, May 12, 2020 at 11:47 PM Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Tue, May 12, 2020 at 11:37:23PM -0400, Ryan Sleevi wrote:
> > On Tue, May 12, 2020 at 10:30 PM Matt Palmer via dev-security-policy
> >  wrote:
> > >
> > > On Tue, May 12, 2020 at 07:35:50AM +0200, Hanno Böck via
> dev-security-policy wrote:
> > > > After communicating with Microsoft it turns out this is due to user
> > > > agent blocking, the URLs can be accessed, but not with a wget user
> > > > agent.
> > > > Microsoft informed me that "the wget agent is explicitly being
> blocked
> > > > as a bot defense measure."
> > > >
> > > > I leave it up to the community to discuss whether this is acceptable.
> > >
> > > I'm firmly on the "nope, unacceptable" side of the fence on this one.
> >
> > Could you share your reasoning?
>
> Sure, plenty of reasons:
>
> 1. As Hanno said, it's a public resource, and as such it should, in
> general, be available to the public.


This is worded as a statement of fact, but it’s really an opinion, right?

You might think I’m nitpicking, but this is actually extremely relevant and
meaningful. The requirements in 7.1.2 are only a SHOULD level, and do not
currently specify access requirements. Your position seems to be that
they’re better by omitting AIA than including a URL you can’t access, or
that they’re prohibited from including URLs you can’t access, and neither
of those requirements actually exist.

> 2. wget is a legitimate tool for downloading files, thus blocking the wget
> user agent is denying legitimate users access to the resource.


This seems to be saying that there can be zero negative side-effects,
regardless of the abuse. I don’t find this compelling either.

Taken to its logical conclusion, any blocking of any DDoS traffic, by IP
or any other property, would be problematic, because the traffic “could”
be legitimate.

There’s understandably a balance, but you’ve seemingly ignored that balance
and put the argument as an extreme with zero interest in finding that
balance. I think that does more harm to your position, and more broadly,
harm to finding that balance.

> 3. For a miscreant, blocking by user agent is barely a speed bump, as
> changing UA to something innocuous / harder to block is de rigueur.


So? That’s largely hypothetical. Does it matter if the miscreants they’re
concerned about don’t? There’s an argument you’re not making and not
articulating, which is that the positive benefits the CA sees from
such blocking are outweighed by the negative side-effects, and that the
balance isn’t being struck. That would be a compelling argument to try and
find what the balance should be. But, as I said, you’re arguing an
intractable extreme instead, and that makes it difficult to agree with you.
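The "barely a speed bump" point can be made concrete: any HTTP client can present an arbitrary User-Agent string. A minimal sketch with the stdlib; the URL and UA string below are placeholders, not recommendations:

```python
# Sketch: user-agent blocking is trivially bypassed, because the client
# chooses the User-Agent header it sends. Placeholder URL and UA string.
import urllib.request

def aia_request(url: str, user_agent: str) -> urllib.request.Request:
    # Build a request carrying an arbitrary, browser-like User-Agent.
    return urllib.request.Request(url, headers={"User-Agent": user_agent})

req = aia_request("http://example.invalid/intermediate.crt",
                  "Mozilla/5.0 (compatible; generic-fetcher)")
# urllib normalizes stored header names to "Capitalized" form.
print(req.get_header("User-agent"))
```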


On principle, I’m uneasy with the UA blocking. However, I also have trouble
arguing it is or should be forbidden, especially given that taken to the
extreme, it would remove important controls for DDoS protection. This is
because the assumptions here seem to rely on a distinction between
“traffic properties you’re allowed to filter on (and drop)” and “traffic
properties you are not allowed to”.
I’m hoping those who feel strongly that this is bad can put a more cogent
argument forward, especially given that this is, at present, only a SHOULD
(ergo, optional). If we say it’s ok to not be accessible to any, because
it’s not present, where’s the harm in not being accessible to some, when it
is?


Re: Request to Include certSIGN Root CA G2 certificate

2020-05-13 Thread Gabriel Petcu via dev-security-policy
On Saturday, May 9, 2020 at 12:56:00 AM UTC+3, Wayne Thayer wrote:
> The ETSI audit attestation statement referenced by Ben [1] lists 6
> non-conformities that were to be corrected within 3 months of the onsite
> audit that occurred on 2020-02-10 until 2020-02-14:
> 
> Findings with regard to ETSI EN 319 401:
> -REQ-7.8-06–Documentation shall be improved
> 
> Findings with regard to ETSI EN 319 411-1:
> -REG-6.3.1-01–Implementation shall be improved
> -GEN-6.5.1-04-Implementation shall be improved
> 
> Findings with regard to ETSI EN 319 411-2:
> -SDP-6.5.1-02 -Implementation shall be improved
> -GEN-6.6.1-05–Documentation shall be improved
> -CSS-6.3.10-13–Documentation shall be improved
> 
> I'm particularly concerned about GEN-6.5.1-04: The CA key pair used for
> signing certificates shall be created under, at least, dual control.
> 
> I'd like to see an explanation of these non-conformities and the
> remediation from certSIGN, and confirmation from LSTI that they have been
> fixed.
> 
> - Wayne
> 
> [1] https://bug1632406.bmoattachments.org/attachment.cgi?id=9142635
> 
> On Wed, May 6, 2020 at 4:59 PM Ben Wilson via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
> 
> > This request is for inclusion of the certSIGN Root CA G2 certificate and to
> > turn on the Websites trust bit and for EV treatment.
> >
> >
> > The request is documented in Bugzilla and in the CCADB as follows:
> >
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1403453
> >
> >
> > https://ccadb-public.secure.force.com/mozilla/PrintViewForCase?CaseNumber=0403
> >
> > (Summary of info gathered and verified, URLs for test websites, etc.)
> >
> >
> >
> > * certSIGN’s BR Self Assessment is here:
> >
> > https://bugzilla.mozilla.org/attachment.cgi?id=9052673
> >
> > The Certsign document repository can be found here:
> >
> > https://www.certsign.ro/en/certsign-documents/policies-procedures
> >
> > * Root Certificate Locations:
> >
> > http://crl.certsign.ro/certsign-rootg2.crt
> >
> > http://registru.certsign.ro/certcrl/certsign-rootg2.crt
> >
> > http://www.certsign.ro/certcrl/certsign-rootg2.crt
> >
> >
> > https://crt.sh/?q=657CFE2FA73FAA38462571F332A2363A46FCE7020951710702CDFBB6EEDA3305
> >
> >
> > https://censys.io/certificates/657cfe2fa73faa38462571f332a2363a46fce7020951710702cdfbb6eeda3305/pem
> >
> >
> > * EV Policy OID:   2.23.140.1.1
> >
> > * CRL URL: http://crl.certsign.ro/certsign-rootg2.crl
> >
> > * OCSP URL: http://ocsp.certsign.ro
> >
> >
> >
> > * Audit: See https://bugzilla.mozilla.org/attachment.cgi?id=9142635 (
> >
> > http://lsti-certification.fr/images/LSTI_Audit_Atttestation_Letter_1612-163_V10_Certsign_S.pdf
> > )
> > which shows that a recent annual audit was performed on the certSIGN Root
> > CA G2 by LSTI Group according to ETSI EN 319 411-2, V2.2.2 (2018-04)”,
> > “ETSI EN 319 411-1, V1.2.2 (2018-04)” and “ETSI EN 319 401, V2.2.1
> > (2018-04)” as well as the CA/Browser Forum’s “EV SSL Certificate
> > Guidelines, version 1.7.1” and “Baseline Requirements, version 1.6.7”
> > considering the requirements of the “ETSI EN 319 403, V2.2.2 (2015-08)” for
> > the Trust Service Provider Conformity Assessment.
> >
> >
> > * CP/CPS Review
> >
> > Ryan Sleevi conducted a preliminary review the PKI Disclosure Statement and
> > CPS - https://bugzilla.mozilla.org/show_bug.cgi?id=1403453#c13
> >
> > I followed up, and now Comment #24 in Bugzilla shows the latest responses
> > from Certsign - https://bugzilla.mozilla.org/show_bug.cgi?id=1403453#c24
> >
> >
> >
> > This begins the 3-week comment period for this request.
> >
> > I will greatly appreciate your thoughtful and constructive feedback on the
> > acceptance of this root into the Mozilla CA program.
> >
> > Thanks,
> > Ben

We, certSIGN, are waiting for the formal document assessing the closure of all 
the minor non-conformities by the LSTI auditor, and we will publish this report 
immediately.
As requested by Wayne, we present more details on the minor non-conformity 
no.1, GEN-6.5.1-04.
In the audit detailed report, the LSTI auditors stated the following on
**GEN-6.5.1-04: The CA key pair used for signing certificates shall be
created under, at least, dual control**:
CA keys were generated by personnel in trusted roles under dual control, and an 
external witness was involved. The CARL was also generated by two persons in 
trusted roles. Some issues could however be noticed during the audit, which led 
to the following deviation: Deviation no 1: A full traceability of the usage of 
the PKI’s secrets is not guaranteed:
- The inventory of the secrets is not complete nor up to date (e.g. credentials 
(PIN codes) or backups of CA private keys are not explicitly identified, some 
secrets were allocated to the wrong persons in the inventory);
- No measures 

Re: AIA CA Issuer field pointing to PEM encoded certs

2020-05-13 Thread Hanno Böck via dev-security-policy
Update:
All 4 CAs have corrected the certs and are now serving DER
encoded intermediates at the URLs.
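The fix amounts to serving DER bytes rather than PEM text at the caIssuers URLs. The conversion can be sketched with the stdlib; the payload below is a stand-in, not a real certificate:

```python
# Sketch: converting a PEM certificate body to the DER bytes that AIA
# caIssuers URLs are expected to serve. Stand-in payload, stdlib only.
import base64

BEGIN = "-----BEGIN CERTIFICATE-----"
END = "-----END CERTIFICATE-----"

def pem_to_der(pem_text: str) -> bytes:
    # Take everything between the armor lines and base64-decode it.
    body = pem_text.split(BEGIN, 1)[1].split(END, 1)[0]
    return base64.b64decode("".join(body.split()))

payload = b"\x30\x03\x02\x01\x01"  # stand-in DER bytes (SEQUENCE tag first)
pem = "\n".join([BEGIN, base64.b64encode(payload).decode(), END, ""])
print(pem_to_der(pem) == payload)  # True
```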

-- 
Hanno Böck
https://hboeck.de/


Re: Mozilla's Expectations for OCSP Incident Reporting

2020-05-13 Thread Hanno Böck via dev-security-policy
On Wed, 13 May 2020 02:29:07 +
Peter Gutmann via dev-security-policy
 wrote:

> Following up on this, would it be correct to assume that, since
> no-one has pointed out any impact that this had on anything, that
> it's more a certificational issue than anything with real-world
> consequences?

I have reported (and noticed) it because it had an impact.

The impact it had was a monitoring system that checked whether the
certificate of a host was okay, using gnutls-cli with ocsp enabled
(which also uncovered a somewhat unexpected inconsistency in how
the gnutls cli tool behaves[1]).

I'm not saying this is a particularly severe impact, but it took me
some time to figure out what was going on there.
It may very well be that others have experienced impact that they were
unable to explain.


[1] https://gitlab.com/gnutls/gnutls/-/issues/981
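A monitoring probe of the kind described above might be wired up roughly as follows. This is a sketch: the gnutls-cli invocation, including the `--ocsp` flag, is an assumption about that tool's interface (check `gnutls-cli --help` on your system), and the host is a placeholder:

```python
# Sketch of a certificate-health probe built around gnutls-cli, as
# described above. The flag names are assumptions about the tool's
# interface; only command construction is shown here, and a real monitor
# would run the command via subprocess and inspect its output/exit code.
import shlex

def gnutls_probe_command(host: str, port: int = 443) -> list:
    # --ocsp (assumed flag name) enables OCSP checking of the peer's
    # certificate; gnutls-cli exits non-zero on verification failure.
    return ["gnutls-cli", "--ocsp", f"--port={port}", host]

cmd = gnutls_probe_command("example.org")
print(shlex.join(cmd))
```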
-- 
Hanno Böck
https://hboeck.de/