> On Oct 2, 2019, at 3:18 PM, Ronald Crane via dev-security-policy 
> <dev-security-policy@lists.mozilla.org> wrote:
> 
> 
> On 10/2/2019 2:47 PM, Paul Walsh via dev-security-policy wrote:
>> On Oct 2, 2019, at 1:16 PM, Ronald Crane via dev-security-policy 
>> <dev-security-policy@lists.mozilla.org> wrote:
>>> On 10/1/2019 6:56 PM, Paul Walsh via dev-security-policy wrote:
>>>> New tools such as Modlishka now automate phishing attacks, making it 
>>>> virtually impossible for any browser or security solution to detect -  
>>>> bypassing 2FA. Google has admitted that it’s unable to detect these 
>>>> phishing scams as they use a phishing domain but instead of a fake 
>>>> website, they use the legitimate website to steal credentials, including 
>>>> 2FA. This is why Google banned its users from signing into its own 
>>>> websites via mobile apps with a WebView. If Google can’t prevent these 
>>>> attacks, Mozilla can’t either.
>>> I understand that Modlishka emplaces the phishing site as a MITM. This is 
>>> yet another reason for browser publishers to help train their users to use 
>>> only authentic domain names, and also to up their game on detecting and 
>>> banning phishing domains. I don't think it says much about the value, or 
>>> lack thereof, of EV certs. As has been cited repeatedly in this thread, 
>>> most phishing sites don't even bother to use SSL, indicating that most 
>>> users who can be phished aren't verifying the correct domain.
>> Ronald - it’s virtually impossible for anyone to spot a well-designed 
>> phishing attack. Teaching people to check the URL doesn’t work - I can catch 
>> out 99% of people with a single test, every time.
> 
> "Virtually impossible"? "Anyone"? Really? Those are big claims that need real 
> data. I'm pretty sure I haven't been phished yet.

Yes :)

I have results from 1,845 people so far. I published the test on Twitter and 
inside our Telegram group, and I’ve presented it many times at blockchain 
conferences around the world. Only 4 people got it right - and I also put it in 
front of seasoned security professionals. My point is that some phishing scams 
are virtually impossible for almost everyone to spot. I’ve seen top social 
engineers own up on Twitter to falling for a phishing test at work - and this 
is what they do for a living. Spotting these scams is not a measure of 
experience or expertise. 

Here’s one test https://twitter.com/Paul__Walsh/status/1174359874932621316?s=20

> 
> In any case, have we ever really tried to teach users to use the correct 
> domain? As I noted in a recent response, many site owners do things -- such 
> as using multiple domains for a single entity, using URL-shortening services, 
> using QR codes, etc. -- that habituate users to the idea that there's more 
> than one correct domain, and/or that they can get it from untrustworthy 
> sources. Once they have that idea, phishing is easy.

Unfortunately, this won’t resolve the problem. Even companies that use only a 
few domains are high-profile targets. 

> 
>> It’s the solution if users had a reliable way to check website identity as 
>> I’ve explained....
> And EV certs do this how? Please address https://stripe.ian.sh .

I already addressed this by asking for a single instance of an attacker 
actually using an EV certificate. I provided quite a lot of context around this 
point - noting that just because you can prove something can be done doesn’t 
mean it will be. No security solution on the market is 100% effective. No 
company is hack-proof. Threat actors will only spend the time, energy and money 
if it’s worth it. 

From a security point of view, the bar to obtaining an EV cert is too high for 
it to be a real threat. An attacker has to set up a real company, and the cert 
can only be used once - when it’s revoked, that’s the end of it. That said, the 
process could be improved.

Back to my question, can you provide examples of attacks that used an EV cert?

I’m not here to defend EV certs or CAs. I’m here to ask that you stop and 
rethink your decision to remove the UI for website identity. This isn’t to say 
that we can’t rethink the CA model and technology. From what I can see, browser 
vendors are railroading everyone, including CAs. There’s no collaboration here 
- just a few people who *think* they know what’s best. I see no evidence to 
substantiate any of these decisions. 


>> Perhaps you can comment on my data about users who rely on a new visual 
>> indicator and the success it has seen?
> Please post a link to a paper describing it, including the methodology you 
> used.

I’ve already published the methodology, along with all of the data collected on 
this point, in a thread in this forum. I just haven’t taken the time to turn it 
into a PDF and put it on a website. It will, however, be published on a website 
as a guest post later this week. 


>> Any opinion I’ve read is just that, opinion, with zero data/evidence to 
>> substantiate anything cited. The closest I’ve seen is exceptionally old 
>> research that’s more than 10 years old.
> Um, 
> https://casecurity.org/wp-content/uploads/2017/09/Incidence-of-Phishing-Among-DV-OV-and-EV-Websites-9-13-2017-short-ve....pdf
>  (see table on p.2) is from 2017. That is not "more than 10 years old" nor 
> just "opinion, with zero data/evidence to substantiate anything cited". Let's 
> debate the merits with more light and less heat.

Sorry, I should have been clearer. While the CAs have furnished lots of data, 
people with opposing views have not. I can’t find any research carried out by 
browser vendors, and I’ve asked many times.

>> According to Webroot 93% of all new phishing sites have an SSL certificate. 
>> According to MetaCert it’s more than 96%. This is increasing as Let’s 
>> Encrypt issues more free certs.
> 
> Please link the surveys you cite. In any case, the Lets Encrypt issue *does* 
> appear to be a problem, as you noted elsewhere. Does Google Safe Browsing 
> automatically add these (fake Paypal and similar) domains to its 
> probable-phish list? They should.

Sorry, I’ll link to everything from now on. Regarding Webroot, the figure comes 
from a photo of a slide from their presentation at this year’s RSA Conference. 
No, Google does not automatically add anything. It only adds URLs that are 
reported to it - and then evaluation takes days, by which time almost all the 
damage is done. It’s much the same for other URL-based threat intelligence 
systems. AI isn’t solving this problem.
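For anyone who wants to see what that reactive lookup looks like in practice, 
here’s a minimal sketch of querying Google’s Safe Browsing Lookup API (v4). The 
endpoint and request shape follow Google’s public v4 documentation; the client 
name and API key are placeholders. Note that only URLs already on Google’s list 
come back as matches - which is exactly the limitation I’m describing.

```python
import json
import urllib.request

# Sketch only: the endpoint and payload shape follow the public Safe Browsing
# Lookup API (v4) docs. API_KEY is a placeholder, not a real credential.
API_KEY = "YOUR_API_KEY"  # placeholder - obtained via the Google Cloud Console
ENDPOINT = (
    "https://safebrowsing.googleapis.com/v4/threatMatches:find?key=" + API_KEY
)

def build_request(urls):
    """Build the threatMatches:find request body for a list of URLs."""
    return {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["SOCIAL_ENGINEERING", "MALWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

def check(urls):
    """POST the lookup. An empty response body means no *known* threats."""
    data = json.dumps(build_request(urls)).encode()
    req = urllib.request.Request(
        ENDPOINT, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        raw = resp.read()
    # A brand-new phishing URL typically returns no matches at all.
    return json.loads(raw).get("matches", []) if raw.strip() else []
```

A freshly registered phishing domain will return an empty list here until 
someone reports it and Google evaluates it - days later, per the point above.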

According to Google, the lifetime of a “boutique” phishing attack is 7 minutes; 
a bulk campaign lasts 13 hours. 

https://elie.net/talk/deconstructing-the-phishing-campaigns-that-target-gmail-users/

> 
>> If you want to talk about certificate issuance that’s broken, look at how 
>> Let’s Encrypt has issued more than 14,000 DV certs to domains with PayPal in 
>> them.
> 
> I'm agnostic on the EV UI, but have seen little evidence that it's useful. 
> Maybe your paper will help convince me otherwise.

I agree. It’s not useful, but that’s not because “EV” is broken. It’s due to 
the poor design and implementation of the UI and UX by Mozilla (and every other 
browser vendor).

> 
> -R

_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
