Re: top and other spammy TLDs

2017-02-24 Thread Alex
Hi,

On Fri, Feb 24, 2017 at 7:33 PM, Benny Pedersen  wrote:
> Alex skrev den 2017-02-25 01:18:
>
>> Is there something more that needs to be done than the above?
>
>
> what sa version ?
>
> i know it works with 3.4.1
>
> but have disabled my own rules again

This is a relatively recent svn release, but I've just searched a
3.4.1 tree and there's no occurrence of NEWSPAMMY there either. This
is the config I'm using to start:

# (this is loaded in v320.pre)
# loadplugin Mail::SpamAssassin::Plugin::WLBLEval
# Then add the TLDs to a URI_HOST list:
enlist_uri_host (NEWSPAMMY) top
enlist_uri_host (NEWSPAMMY) date
enlist_uri_host (NEWSPAMMY) faith
enlist_uri_host (NEWSPAMMY) racing
# These can then be used with eval rules:
# To check all URIs:
header   PDS_OTHER_BAD_TLD eval:check_uri_host_listed('NEWSPAMMY')
score    PDS_OTHER_BAD_TLD 0.1
describe PDS_OTHER_BAD_TLD Other untrustworthy TLDs
# If you just want to check the From address:
header   PDS_FROM_OTHER_BAD_TLD eval:check_from_in_list('NEWSPAMMY')

Thanks,
Alex


Re: top and other spammy TLDs

2017-02-24 Thread Benny Pedersen

Alex skrev den 2017-02-25 01:18:


Is there something more that needs to be done than the above?


what sa version ?

i know it works with 3.4.1

but have disabled my own rules again


Re: top and other spammy TLDs

2017-02-24 Thread Alex
Hi,

On Tue, Feb 21, 2017 at 12:57 PM, Paul Stead
 wrote:
> I’ve posted this before; this is how I manage these nasty TLDs:
>
> Make sure WLBLEval is enabled:
>
> loadplugin Mail::SpamAssassin::Plugin::WLBLEval
>
> Then add the TLDs to a URI_HOST list:
>
> enlist_uri_host (NEWSPAMMY) top
> enlist_uri_host (NEWSPAMMY) date
> enlist_uri_host (NEWSPAMMY) faith
> enlist_uri_host (NEWSPAMMY) racing
>
> These can then be used with eval rules:
>
> To check all URIs:
>
> header   PDS_OTHER_BAD_TLD eval:check_uri_host_listed('NEWSPAMMY')
> score    PDS_OTHER_BAD_TLD 0.1
> describe PDS_OTHER_BAD_TLD Other untrustworthy TLDs
>
> if you just want to check From address:
>
> header   PDS_FROM_OTHER_BAD_TLD eval:check_from_in_list('NEWSPAMMY')

I thought I would try to get this going, and despite not fully
understanding the comments you made in the bug report, it doesn't
seem to work:

# spamassassin --lint
Feb 24 19:14:50.396 [14090] warn: eval: could not find list NEWSPAMMY
at /usr/share/perl5/vendor_perl/Mail/SpamAssassin/Plugin/WLBLEval.pm
line 112.

Is there something more that needs to be done than the above?


Re: Google anti-phishing code project

2017-02-24 Thread Alex
Hi,

On Fri, Feb 24, 2017 at 1:24 PM, Dianne Skoll  wrote:
> On Fri, 24 Feb 2017 18:07:50 +
> RW  wrote:
>
>> > OK.  Any FPs, though?  That's the other half of the test.
>
>> No, but it's pretty unlikely there would be.
>
> Actually, it's very likely there will be a lot of FPs, but it's also
> very likely that any given user of the list won't see them.  That's
> because when someone's email address gets compromised and then the
> system administrator clears it up, the only recipients to suffer
> false-positives are those with whom the sender would normally
> correspond.
>
> We have seen a few of these cases happen.

We've actually had false-positives due to how the list is built into
rules. In other words, "i...@ca.com" is still on the list from 2011.
They're also not bounded by default, so noi...@ca.com and
morei...@ca.com would also be caught, for example.
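The unbounded behaviour can be sketched with a small test (Python; the
addresses here are hypothetical stand-ins, since the real list entries
are obfuscated above):

```python
import re

# A list-derived pattern compiled without boundaries, as described
# above: it matches anywhere inside a longer local part.
unbounded = re.compile(r"info@example\.com")
# Adding \b word boundaries bounds the match to the exact address.
bounded = re.compile(r"\binfo@example\.com\b")

candidates = ["info@example.com", "noinfo@example.com", "moreinfo@example.com"]
unbounded_hits = [a for a in candidates if unbounded.search(a)]
bounded_hits = [a for a in candidates if bounded.search(a)]

print(unbounded_hits)  # all three addresses match
print(bounded_hits)    # only the exact address matches
```

Anchoring the generated patterns (or matching against the parsed
address rather than the raw text) would avoid this class of FP.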

>> It seems like a lot of hassle for little benefit.
>
> The APER doesn't catch all that much, nor do the known-phishing URLs catch
> much, but every little bit helps.

How do you build the phishing URLs list into rules similar to how the
addresses2spamassassin.pl does for the phishing emails?
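For what it's worth, one possible shape for such a generator (a
hypothetical sketch in Python, not how any existing script actually
works) would be to emit one uri rule per listed URL:

```python
import re

def urls_to_sa_rules(urls, prefix="PHISH_URL", score=5.0):
    """Turn a list of known-phishing URLs into SpamAssassin 'uri' rules.
    Rule names and scores here are made up; a real generator would
    likely batch many URLs into a single alternation for speed."""
    lines = []
    for i, url in enumerate(urls):
        name = f"{prefix}_{i:04d}"
        # Escape regex metacharacters, then the '/' rule delimiter.
        pattern = re.escape(url).replace("/", r"\/")
        lines.append(f"uri      {name} /{pattern}/")
        lines.append(f"score    {name} {score}")
        lines.append(f"describe {name} Known phishing URL")
    return "\n".join(lines)

print(urls_to_sa_rules(["http://bad.example/login"]))
```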

> As a data point, one of our installations scanned 4 million messages
> yesterday.  Of those, only 262 hit our known-phishing URL list (which
> uses APER and additional sources) and 155 hit APER's known-phishing
> email address list.
>
> But maybe those few hundred were really worth stopping because they
> prevented phishing attacks.  Who knows?

The phishing_emails file builds almost 1100 meta rules. Is there a
point where it's too many and affects processing? I mean, of course
there's a point, but does 1100 plus all others approach that on any
reasonable system?


Re: Google anti-phishing code project

2017-02-24 Thread Dianne Skoll
On Fri, 24 Feb 2017 18:07:50 +
RW  wrote:

> > OK.  Any FPs, though?  That's the other half of the test.

> No, but it's pretty unlikely there would be. 

Actually, it's very likely there will be a lot of FPs, but it's also
very likely that any given user of the list won't see them.  That's
because when someone's email address gets compromised and then the
system administrator clears it up, the only recipients to suffer
false-positives are those with whom the sender would normally
correspond.

We have seen a few of these cases happen.

> It seems like a lot of hassle for little benefit.

The APER doesn't catch all that much, nor do the known-phishing URLs catch
much, but every little bit helps.

As a data point, one of our installations scanned 4 million messages
yesterday.  Of those, only 262 hit our known-phishing URL list (which
uses APER and additional sources) and 155 hit APER's known-phishing
email address list.

But maybe those few hundred were really worth stopping because they
prevented phishing attacks.  Who knows?

Regards,

Dianne.


Re: Google anti-phishing code project

2017-02-24 Thread RW
On Wed, 22 Feb 2017 15:22:17 -0500
Dianne Skoll wrote:

> On Wed, 22 Feb 2017 20:14:33 +
> RW  wrote:
> 
> > FWIW I ran that list against 3k spams received from late 2015
> > onwards. I got 2 hits on 2 separate addresses both timestamped with
> > 2012.  
> 
> OK.  Any FPs, though?  That's the other half of the test.


No, but it's pretty unlikely there would be. 

It seems like a lot of hassle for little benefit.