On Tue, May 12, 2020 at 11:47 PM Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Tue, May 12, 2020 at 11:37:23PM -0400, Ryan Sleevi wrote:
> > On Tue, May 12, 2020 at 10:30 PM Matt Palmer via dev-security-policy
> > <dev-security-policy@lists.mozilla.org> wrote:
> > >
> > > On Tue, May 12, 2020 at 07:35:50AM +0200, Hanno Böck via dev-security-policy wrote:
> > > > After communicating with Microsoft it turns out this is due to user
> > > > agent blocking, the URLs can be accessed, but not with a wget user
> > > > agent.
> > > > Microsoft informed me that "the wget agent is explicitly being blocked as a bot defense measure."
> > > >
> > > > I leave it up to the community to discuss whether this is acceptable.
> > >
> > > I'm firmly on the "nope, unacceptable" side of the fence on this one.
> >
> > Could you share your reasoning?
>
> Sure, plenty of reasons:
>
> 1. As Hanno said, it's a public resource, and as such it should, in general, be available to the public.


This is worded as a statement of fact, but it’s really an opinion, right?

You might think I’m nitpicking, but this is actually extremely relevant and
meaningful. The requirements in 7.1.2 are only at a SHOULD level, and do not
currently specify access requirements. Your position seems to be either that
certificates are better off omitting AIA entirely than including a URL you
can’t access, or that CAs are prohibited from including URLs you can’t
access; neither of those requirements actually exists.

> 2. wget is a legitimate tool for downloading files, thus blocking the wget
> user agent is denying legitimate users access to the resource.


This seems to be saying that any negative side-effect is unacceptable,
regardless of the abuse being defended against. I don’t find this compelling
either.

Taken to its logical conclusion, any blocking of any DDoS traffic, whether by
IP or by any other characteristic, would be problematic, because the traffic
“could” be legitimate.

There’s understandably a balance to be struck here, but you’ve seemingly
ignored it and framed the argument as an absolute, with zero interest in
finding that balance. I think that does more harm to your position and, more
broadly, to the effort of finding that balance.

> 3. For a miscreant, blocking by user agent is barely a speed bump, as
> changing UA to something innocuous / harder to block is de rigueur.


So? That’s largely hypothetical. Does it matter if the miscreants they’re
concerned about don’t do that? There’s an argument you’re not articulating
here, which is that the positive benefits the CA sees from such blocking are
outweighed by the negative side-effects, and that the balance isn’t being
struck. That would be a compelling argument for trying to find what the
balance should be. But, as I said, you’re arguing an intractable extreme
instead, and that makes it difficult to agree with you.
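
To be clear, I’m not disputing the mechanics: changing a UA is a one-liner
(wget’s own --user-agent / -U flag does it), and checking whether a filter
keys purely on the UA string is just as easy. A rough sketch in Python, with
a placeholder URL standing in for the actual endpoint at issue:

import urllib.error
import urllib.request

# Placeholder only; substitute the AIA URL being tested.
URL = "http://example.invalid/certs/issuer.crt"

# Same request twice: a wget-style UA, then a browser-style UA. If only
# the former is rejected, the block keys on the User-Agent string alone.
for ua in ("Wget/1.20", "Mozilla/5.0 (X11; Linux x86_64)"):
    req = urllib.request.Request(URL, headers={"User-Agent": ua})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(ua, "->", resp.status)
    except urllib.error.HTTPError as e:
        print(ua, "->", e.code)
    except urllib.error.URLError as e:
        print(ua, "->", e.reason)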


On principle, I’m uneasy with the UA blocking. However, I also have trouble
arguing that it is, or should be, forbidden, especially since such a
prohibition, taken to the extreme, would remove important controls for DDoS
protection. The arguments so far seem to assume a clean dividing line between
“traffic properties you’re allowed to filter on (and drop)” and “traffic
properties you are not allowed to”, without saying where that line is. I’m
hoping those who feel strongly that this is bad can put a more cogent
argument forward, especially given that this is, at present, only a SHOULD
(ergo, optional). If we say it’s OK for the URL to be accessible to no one,
because it’s omitted entirely, where’s the harm in it being inaccessible to
some, when it is present?
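
For anyone following along who wants to inspect their own certificates: the
URLs in question are carried in the AIA extension that 7.1.2 describes. A
rough sketch of extracting the caIssuers URL, assuming the pyca/cryptography
package and a local file named cert.pem (both my own choices, nothing
specified in this thread):

from cryptography import x509
from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

# Load an end-entity certificate from disk (PEM-encoded).
with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# The Authority Information Access extension lists, among other things,
# the caIssuers URL, i.e. the URL whose reachability is being debated.
aia = cert.extensions.get_extension_for_oid(
    ExtensionOID.AUTHORITY_INFORMATION_ACCESS
).value
for desc in aia:
    if desc.access_method == AuthorityInformationAccessOID.CA_ISSUERS:
        print("caIssuers:", desc.access_location.value)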
