On Thu, Aug 13, 2020 at 11:48 PM Ronald Crane via dev-security-policy
<dev-security-policy@lists.mozilla.org> wrote:
>
> On 8/13/2020 2:25 PM, Tobias S. Josefowitz via dev-security-policy wrote:
> > Detecting phishing domains by "looking at them as strings" may thus be
> > futile, and "blocking obvious phishing domains" may be a not so
> > entertaining but ultimately pointless game of whack-a-mole for CAs,
> > especially since there is not much to actually suggest that CAs are
> > the best place to whack moles *to prevent users from being phished*
> > **in their web browsers**, which I believe is actually what we are
> > discussing here anyway.
>
> But it could be that examining domains as strings usefully impedes
> (though of course does not eliminate) phishing. Impeding internet
> malefactors is _always_ a game of whack-a-mole. If it becomes harder to
> phish successfully with official-appearing domains, phishers will try
> something else, and the guardians of the internet (such as they are)
> will have to counter that tactic. [1] It is not a question of what's
> "the best place" to counter phishing, but whether it's useful for
> registrars, CAs, and hosts to do some of the work.

So then, assuming we don't know, I don't think it would be appropriate
to just wish for the best and task the CAs with it anyway, with the
option of threatening them with distrust later on if they are just!
not! good! enough! at it for some reason. Even if examining domains as
strings *should* usefully impede phishing, that still leaves the
question of why browsers would have CAs do that for them as opposed
to running the phish-decider themselves.
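
To make concrete what kind of thing a string-based "phish-decider"
would even be, here is a minimal sketch; the brand list, the lookalike
substrings, and the naive registrable-label guess are all invented
placeholders of mine, not anything a browser or CA actually ships:

    # Hypothetical sketch of a string-based "phish-decider".
    # Brand list and lookalike substrings are invented placeholders.
    SUSPICIOUS_BRANDS = ["paypal", "apple", "google"]
    LOOKALIKES = ["paypa1", "g00gle", "app1e"]  # digits posing as letters

    def looks_phishy(domain: str) -> bool:
        domain = domain.lower()
        labels = domain.split(".")
        # Naive registrable-label guess; a real tool would consult
        # the public suffix list.
        registrable = labels[-2] if len(labels) >= 2 else labels[0]
        for brand in SUSPICIOUS_BRANDS:
            # Brand name embedded in a domain that is not the brand's own.
            if brand in domain and registrable != brand:
                return True
        return any(sub in registrable for sub in LOOKALIKES)

    print(looks_phishy("paypal-secure-login.example"))  # True
    print(looks_phishy("paypal.com"))                   # False

The point being: anything of this shape is trivially evaded the moment
attackers know it exists, which is exactly the whack-a-mole dynamic I
mean below.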

When it comes to whack-a-moling on the internet in general, I
disagree. Not with the observation that it is perhaps predominantly
how problems are attacked, but with the notion that playing
whack-a-mole is the best, or even a good enough, approach.

Not to bore anyone with my strange tales, but my go-to example is
graylisting (of emails), i.e. the practice of refusing email delivery
"the first time". As probably everybody here knows anyway, at the
time, spammers rarely used actual mail server software to send their
spam, but simple scripts/programs that did not properly implement the
protocol for delivering email. Rejecting all email with a temporary
error on the first attempt meant spam never reached you, while real
mail from real mail servers was re-sent after a delay of a few minutes
and eventually accepted by your mail server. And for a while, it was
heaven! Spam be gone! That is, until larger players in the mail game
started doing it too, incentivising spammers to use real mail servers
or otherwise requeue and re-send their spam. These days, graylisting
is useless and merely gives us inconvenient delays in email delivery.
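
For anyone who never ran one, the core of a graylisting policy fits in
a few lines; this is a minimal sketch under the assumption that we key
on the (client IP, sender, recipient) triplet, as common
implementations did, with in-memory state where real ones persist it:

    import time

    # Minimal graylisting sketch: temporarily reject the first delivery
    # attempt from an unknown (client IP, sender, recipient) triplet and
    # accept retries after a waiting period. Real implementations
    # persist this state and expire old entries.
    GRAYLIST_DELAY = 300          # seconds a sender must wait
    first_seen = {}               # triplet -> timestamp of first attempt

    def smtp_decision(client_ip: str, mail_from: str, rcpt_to: str) -> str:
        triplet = (client_ip, mail_from, rcpt_to)
        now = time.time()
        first = first_seen.setdefault(triplet, now)
        if now - first < GRAYLIST_DELAY:
            # Fire-and-forget spam scripts never retried; real MTAs do.
            return "451 4.7.1 Graylisted, please try again later"
        return "250 OK"

A script that never retries loses the mail on the 451; a real MTA
queues it and retries, which is both why the trick worked then and why
all we get out of it now is the delay.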

It was obvious that graylisting had this underlying
tragedy-of-the-commons problem, but big players simply used it anyway.
Really, they should not have, for we are now in a worse place because
of it than we would have been without it.

In the same sense, I believe we must seek improvements to internet
security that are fundamental rather than an arms race, since an arms
race never gets you far from where you started anyway, yet consumes
tremendous resources.

> Along those lines, do you know of any research on whether "phishy"
> domains are more effective than non-"phishy" ones?

I do not currently have any publicly available material of the "just
the data/analysis we needed" kind to reference, unfortunately.

Tobi