Re: RBL Spam question

2010-11-05 Thread Henrik K
On Fri, Nov 05, 2010 at 09:11:39AM -0500, Stan Hoeppner wrote:
> Henrik K put forth on 11/5/2010 2:49 AM:
> 
> > Did you happen to notice the absolutely generic expressions in the SA file,
> > unlike your file which mostly lists specific domains?
> 
> The bulk of them are specific to a given ISP.  I saw a half dozen that
> are generic.

And the generic ones probably match the bulk of your rules. Your "1600
rules" comparison didn't make any sense.

Of course you are free to file an SA bug/feature request, or even become a
committer yourself so you can mass-check the rules. But that's beyond this list.



Re: RBL Spam question

2010-11-05 Thread Stan Hoeppner
Henrik K put forth on 11/5/2010 2:49 AM:

> Did you happen to notice the absolutely generic expressions in the SA file,
> unlike your file which mostly lists specific domains?

The bulk of them are specific to a given ISP.  I saw a half dozen that
are generic.

> Not that I don't agree the whole SA file should be revamped, but you are
> again jumping the gun.

I don't see how contacting them and suggesting they might benefit from
additional regexes is premature in any way.  If you think this is
premature, does that mean you believe I should contact them later
instead of sooner?  Should I be waiting for some event to occur that
signals the timing is correct at that point, instead of being premature?

-- 
Stan




Re: RBL Spam question

2010-11-05 Thread Henrik K
On Fri, Nov 05, 2010 at 02:01:19AM -0500, Stan Hoeppner wrote:
> Michael Orlitzky put forth on 11/5/2010 1:39 AM:
> > On 11/05/10 00:11, Stan Hoeppner wrote:
> >> Michael Orlitzky put forth on 11/4/2010 8:06 PM:
> >>> On 11/04/2010 12:39 AM, Stan Hoeppner wrote:
>  Ned Slider put forth on 11/3/2010 6:33 PM:
> 
> > My other thought was to simply comment (or document) ranges known to
> > contain FPs and then the user can make a judgement call whether they
> > want to comment out that particular regex based on their circumstances.
> > Not a very elegant solution.
> 
>  I'm starting to wonder, considering your thoughts on FPs, if this might
>  be better implemented, for OPs concerned with potential FPs, via a
>  policy daemon, or integrated into SA somehow and used for scoring
>  instead of outright blocking.  I don't have the programmatic skill to
>  implement such a thing.
> >>>
> >>>
> >>> http://wiki.apache.org/spamassassin/Rules/RDNS_DYNAMIC
> >>
> >> Any idea where I can get a look at the regexes they use in this rule?
> >>
> > 
> > I think this is the latest:
> > 
> > http://svn.apache.org/repos/asf/spamassassin/rules/branches/3.2/20_dynrdns.cf
> 
> Did you happen to notice the absolutely tiny number of expressions in
> the SA file, as compared to the ~1600 in the file whose use I promote
> here?  Maybe I should get in contact with someone in the project.  If
> only half were deemed usable by them it would be a huge improvement over
> what they have.

Did you happen to notice the absolutely generic expressions in the SA file,
unlike your file which mostly lists specific domains?

Not that I don't agree the whole SA file should be revamped, but you are
again jumping the gun.



Re: RBL Spam question

2010-11-05 Thread Michael Orlitzky
On 11/05/10 03:01, Stan Hoeppner wrote:
>>
>> http://svn.apache.org/repos/asf/spamassassin/rules/branches/3.2/20_dynrdns.cf
> 
> Did you happen to notice the absolutely tiny number of expressions in
> the SA file, as compared to the ~1600 in the file whose use I promote
> here?  Maybe I should get in contact with someone in the project.  If
> only half were deemed usable by them it would be a huge improvement over
> what they have.
> 

Some guy named Stan Hoeppner suggested that the OP might want to use the
list for scoring in SpamAssassin. My point was that if he wants to do
that, he could just add them to the existing 20_dynrdns.cf file =)
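
For anyone following along, the entries in that file use SpamAssassin's standard header-rule syntax against the X-Spam-Relays-External pseudo-header. The pattern below is a made-up illustration in that syntax, not one of the shipped rules:

```
# Illustrative only -- not an actual rule from 20_dynrdns.cf
header   RDNS_DYNAMIC_EXAMPLE   X-Spam-Relays-External =~ /^[^\]]*\brdns=\S*(?:dhcp|dyn|ppp|adsl)[.-]/i
describe RDNS_DYNAMIC_EXAMPLE   Relay's rDNS looks dynamic/generic (example)
score    RDNS_DYNAMIC_EXAMPLE   1.5
```

A mass-check over a ham/spam corpus would be needed before trusting any real score.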


Re: RBL Spam question

2010-11-05 Thread Stan Hoeppner
Michael Orlitzky put forth on 11/5/2010 1:39 AM:
> On 11/05/10 00:11, Stan Hoeppner wrote:
>> Michael Orlitzky put forth on 11/4/2010 8:06 PM:
>>> On 11/04/2010 12:39 AM, Stan Hoeppner wrote:
 Ned Slider put forth on 11/3/2010 6:33 PM:

> My other thought was to simply comment (or document) ranges known to
> contain FPs and then the user can make a judgement call whether they
> want to comment out that particular regex based on their circumstances.
> Not a very elegant solution.

 I'm starting to wonder, considering your thoughts on FPs, if this might
 be better implemented, for OPs concerned with potential FPs, via a
 policy daemon, or integrated into SA somehow and used for scoring
 instead of outright blocking.  I don't have the programmatic skill to
 implement such a thing.
>>>
>>>
>>> http://wiki.apache.org/spamassassin/Rules/RDNS_DYNAMIC
>>
>> Any idea where I can get a look at the regexes they use in this rule?
>>
> 
> I think this is the latest:
> 
> http://svn.apache.org/repos/asf/spamassassin/rules/branches/3.2/20_dynrdns.cf

Did you happen to notice the absolutely tiny number of expressions in
the SA file, as compared to the ~1600 in the file whose use I promote
here?  Maybe I should get in contact with someone in the project.  If
only half were deemed usable by them it would be a huge improvement over
what they have.

-- 
Stan



Re: RBL Spam question

2010-11-04 Thread Michael Orlitzky
On 11/05/10 00:11, Stan Hoeppner wrote:
> Michael Orlitzky put forth on 11/4/2010 8:06 PM:
>> On 11/04/2010 12:39 AM, Stan Hoeppner wrote:
>>> Ned Slider put forth on 11/3/2010 6:33 PM:
>>>
 My other thought was to simply comment (or document) ranges known to
 contain FPs and then the user can make a judgement call whether they
 want to comment out that particular regex based on their circumstances.
 Not a very elegant solution.
>>>
>>> I'm starting to wonder, considering your thoughts on FPs, if this might
>>> be better implemented, for OPs concerned with potential FPs, via a
>>> policy daemon, or integrated into SA somehow and used for scoring
>>> instead of outright blocking.  I don't have the programmatic skill to
>>> implement such a thing.
>>
>>
>> http://wiki.apache.org/spamassassin/Rules/RDNS_DYNAMIC
> 
> Any idea where I can get a look at the regexes they use in this rule?
> 

I think this is the latest:

http://svn.apache.org/repos/asf/spamassassin/rules/branches/3.2/20_dynrdns.cf


Re: RBL Spam question

2010-11-04 Thread Stan Hoeppner
Michael Orlitzky put forth on 11/4/2010 8:06 PM:
> On 11/04/2010 12:39 AM, Stan Hoeppner wrote:
>> Ned Slider put forth on 11/3/2010 6:33 PM:
>>
>>> My other thought was to simply comment (or document) ranges known to
>>> contain FPs and then the user can make a judgement call whether they
>>> want to comment out that particular regex based on their circumstances.
>>> Not a very elegant solution.
>>
>> I'm starting to wonder, considering your thoughts on FPs, if this might
>> be better implemented, for OPs concerned with potential FPs, via a
>> policy daemon, or integrated into SA somehow and used for scoring
>> instead of outright blocking.  I don't have the programmatic skill to
>> implement such a thing.
> 
> 
> http://wiki.apache.org/spamassassin/Rules/RDNS_DYNAMIC

Any idea where I can get a look at the regexes they use in this rule?

-- 
Stan


Re: RBL Spam question

2010-11-04 Thread Michael Orlitzky
On 11/04/2010 12:39 AM, Stan Hoeppner wrote:
> Ned Slider put forth on 11/3/2010 6:33 PM:
> 
>> My other thought was to simply comment (or document) ranges known to
>> contain FPs and then the user can make a judgement call whether they
>> want to comment out that particular regex based on their circumstances.
>> Not a very elegant solution.
> 
> I'm starting to wonder, considering your thoughts on FPs, if this might
> be better implemented, for OPs concerned with potential FPs, via a
> policy daemon, or integrated into SA somehow and used for scoring
> instead of outright blocking.  I don't have the programmatic skill to
> implement such a thing.


http://wiki.apache.org/spamassassin/Rules/RDNS_DYNAMIC


Re: RBL Spam question

2010-11-03 Thread Stan Hoeppner
Ned Slider put forth on 11/3/2010 6:33 PM:

> My other thought was to simply comment (or document) ranges known to
> contain FPs and then the user can make a judgement call whether they
> want to comment out that particular regex based on their circumstances.
> Not a very elegant solution.

I'm starting to wonder, considering your thoughts on FPs, if this might
be better implemented, for OPs concerned with potential FPs, via a
policy daemon, or integrated into SA somehow and used for scoring
instead of outright blocking.  I don't have the programmatic skill to
implement such a thing.
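
For what it's worth, the scoring-instead-of-blocking idea maps fairly naturally onto Postfix's check_policy_service protocol, so the programmatic skill needed is modest. Below is a minimal sketch in Python; the two patterns are invented placeholders, not the actual fqrdns.pcre contents:

```python
#!/usr/bin/env python3
# Sketch of a Postfix policy service: tag dynamic-looking rDNS with a
# header (for downstream scoring in SpamAssassin) instead of rejecting.
import re
import sys

# Invented placeholder patterns -- a real deployment would load the
# community PCRE file rather than hard-code anything here.
DYNAMIC_RDNS = [
    re.compile(r'^\d+-\d+-\d+-\d+\.\S*\.(?:dsl|adsl|cable)\.', re.I),
    re.compile(r'^(?:dhcp|dyn|ppp)[.-]', re.I),
]

def policy_action(attrs):
    """Map one request's attributes to a Postfix policy action."""
    name = attrs.get('reverse_client_name', '')
    if any(p.search(name) for p in DYNAMIC_RDNS):
        # PREPEND tags the message; swap for 'REJECT ...' to block outright.
        return 'PREPEND X-Dynamic-RDNS: yes'
    return 'DUNNO'

def serve(infile=sys.stdin, outfile=sys.stdout):
    """Speak the policy protocol: name=value lines, blank line ends a request."""
    attrs = {}
    for line in infile:
        line = line.rstrip('\n')
        if line:
            key, _, value = line.partition('=')
            attrs[key] = value
        else:
            outfile.write('action=%s\n\n' % policy_action(attrs))
            outfile.flush()
            attrs = {}
```

Hooked up via check_policy_service in smtpd_recipient_restrictions (and a spawn entry in master.cf), this would give the scoring behaviour without touching SpamAssassin internals.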

> Indeed, lets not detract from the fact that these regexes are very
> effective. You implied earlier in your reply that this wasn't a
> "sophisticated" solution but I have to admit I'm surprised just how
> effective they are and just how *few* FPs there are for something not
> sophisticated.

As we've all read, the vast majority of spam sent comes from bots.
These regexes mostly target bots, at least transitively.  So it
shouldn't be terribly surprising that they stop a lot of bot
connections, especially if put ahead of most other restrictions.  If you
stick a big net into a salmon stream during spawn you will catch a ton
of salmon.  ;)

> I'm also mindful that we might be getting off topic for the postfix
> users list?

A little maybe.  We are discussing fighting spam with Postfix smtpd
restrictions and PCRE tables.  And I think the effectiveness of this
table, and our constructive criticism of this type of table, may be
beneficial to others, maybe more for those Googling for "Postfix PCRE
table" in the future than current subscribers.  Who knows, our
discussion may be seen by the right eyes, and someone may run with this
and improve it many times over, in both catch rate and FP reduction.

-- 
Stan




Re: RBL Spam question

2010-11-03 Thread Walter Pinto
I was able to accomplish that as well using fail2ban and some custom
regex rules for it. It can be setup to use iptables or /etc/hosts.deny

http://www.fail2ban.org/
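
For reference, a fail2ban setup along those lines pairs a filter with a jail entry. The failregex here is a hypothetical illustration matching Postfix DNSBL rejects, not the actual custom rules mentioned above, so test it against your own logs first:

```
# /etc/fail2ban/filter.d/postfix-rbl.conf  (hypothetical)
[Definition]
failregex = reject: RCPT from \S+\[<HOST>\]:.*blocked using

# /etc/fail2ban/jail.local excerpt
[postfix-rbl]
enabled  = true
filter   = postfix-rbl
action   = iptables-allports[name=postfix-rbl]
logpath  = /var/log/mail.log
maxretry = 3
bantime  = 86400
```

A hostsdeny action can be substituted for iptables-allports if /etc/hosts.deny is preferred.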


Re: RBL Spam question

2010-11-03 Thread JunkYardMail1
One of my favorite anti spam measures is auto add repeat RBL hits, no PTR 
hits, etc. to system firewall.


Here are a few entire network permanent firewall blocks for example as well.
ARIN--Level3-Sendlabs-DynDNS.org___-CIDR[63.209.253.224/27]
ARIN--Level3-Sendlabs-DynDNS.org___-CIDR[63.211.192.128/26]
APNIC-HINET-NET-Chughaw_Telecom-CIDR[118.160.0.0/13]
APNIC-HINET-NET-Chughaw_Telecom-CIDR[118.168.0.0/14]
APNIC-HINET-NET-Chughaw_Telecom-CIDR[122.120.0.0/13]
APNIC-CHINANET-GD-China_Telecom-CIDR[61.140.0.0/14]
APNIC-CHINANET-GD-China_Telecom-CIDR[61.144.0.0/15]
APNIC-CHINANET-GD-China_Telecom-CIDR[61.146.0.0/16]
ARIN--Liquid_Web,_Inc._-CIDR[69.167.169.128/25]
APNIC-AIMS-MY-DIA-NET-Malaysia_-CIDR[110.74.129.0/24]
ARIN--Managed_Solutions_Group,_Inc.-CIDR[205.209.161.0/24]
APNIC-YAHOO-MAIL-Teipei,_Taiwan-CIDR[203.188.200.0/22]
APNIC-YAHOO-MAIL-Teipei,_Taiwan-CIDR[203.188.192.0/20]
APNIC-YAHOO-MAIL-Teipei,_Taiwan-CIDR[116.214.0.0/20]
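
The "auto add repeat hits to the firewall" step can be scripted. Here is a hedged sketch that scans a Postfix log for DNSBL rejects and prints iptables commands for repeat offenders; the log pattern, threshold, and chain name are all assumptions to adapt, not a canonical recipe:

```python
#!/usr/bin/env python3
# Sketch: find repeat DNSBL offenders in a Postfix log and print
# iptables commands to drop them at the firewall.
import re
from collections import Counter

# Assumed shape of a Postfix DNSBL reject line; adjust for your logs.
REJECT_RE = re.compile(r'reject: RCPT from \S+\[(\d+\.\d+\.\d+\.\d+)\].*blocked using')

def repeat_offenders(log_lines, threshold=3):
    """Return IPs rejected by a DNSBL at least `threshold` times."""
    counts = Counter()
    for line in log_lines:
        m = REJECT_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return sorted(ip for ip, n in counts.items() if n >= threshold)

def iptables_rules(ips, chain='SPAMBLOCK'):
    """Render one DROP rule per offending IP."""
    return ['iptables -A %s -s %s -j DROP' % (chain, ip) for ip in ips]
```

Feeding it a mail log and piping the output through a shell (or a dry-run review first) would automate the per-IP part; the whole-network CIDR blocks above would still be a manual judgement call.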




Re: RBL Spam question

2010-11-03 Thread João Gouveia
Hi Jack,

- "Jack"  wrote:

> Hello All,
> 
>  
> 
> I'm just checking all my spam settings on my postfix servers and I
> wanted to
> know if anyone is using any newer RBL's than below?
> 
> (which have a low false positive rate)

My opinion is of course biased since we run Mailspike IP reputation, but I'd 
suggest that you give it a try.
Most likely it won't make much difference since you already have tons of DNSBLs 
there, but there's no harm trying :-)
I would also be very interested in seeing the results, if that's something that 
you can share.
There are some details on the available zones here: 
http://mailspike.org/anubis/about_data.html and 
http://mailspike.org/anubis/implementation_zones.html
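
In the restriction style quoted below, the Mailspike blocklist zone would slot in alongside the others. The zone name here is taken from my reading of the mailspike.org pages above; verify it against their current implementation notes before deploying:

```
   reject_rbl_client bl.mailspike.net,
```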

>  
> 
>reject_rbl_client zen.spamhaus.org,
> 
>reject_rbl_client bl.spamcop.net,
> 
>reject_rbl_client psbl.surriel.com,
> 
>reject_rbl_client ix.dnsbl.manitu.net,
> 
>reject_rbl_client b.barracudacentral.org,
> 
>  
> 
> Thanks!
> 
> Jack

-- 
João Gouveia
AnubisNetworks - MailSpike


RE: RBL Spam question

2010-11-03 Thread Mark Scholten


> -Original Message-
> From: owner-postfix-us...@postfix.org [mailto:owner-postfix-
> us...@postfix.org] On Behalf Of Stan Hoeppner
> Sent: Wednesday, November 03, 2010 8:05 PM
> To: postfix-users@postfix.org
> Subject: Re: RBL Spam question
> 
> Charles Marcus put forth on 11/3/2010 8:49 AM:
> > On 2010-11-02 10:07 PM, Stan Hoeppner wrote:
> >> Last, but not least important by any means (understatement), you may
> >> wish to try out:
> >> http://www.hardwarefreak.com/fqrdns.pcre
> >>
> >> Implement this as:
> >>
> >> smtpd_recipient_restrictions
> >>permit_mynetworks
> >>permit_sasl_authenticated
> >>reject_unauth_destination
> >>...
> >>check_client_access pcre:/etc/postfix/fqrdns.pcre
> >>...
> >
> > I keep meaning to say/ask - thanks for this - and do you update this
> > frequently/ever? Meaning, would anyone using it want to download new
> > versions periodically (I'm thinking the answer is no, but just want
> to
> > confirm)?
> 
> I've only added a single entry, the very last one, which breaks
> tradition with the rest of the file.  The expression I added targets a
> snowshoe operation, whereas the rest target PBL type hosts.  Other than
> that all I've done is clean up a few errors in the original expressions
> so the file could run as a PCRE instead of a regexp.
> 
> Ned Slider asked the same thing off list recently regarding making this
> a project.  I think it would be a great idea for this to become a
> larger
> community project.  I'm just not in a position to lead it or host it.
> I'd probably try to contribute a little to it here and there though.

With some limitations I would be happy to host it. I am currently thinking
about starting to host another list, and having multiple useful lists would
be nice. The targets/content of each list would be (partly) different. Below
is how I'd do it if I were hosting it; on request, I'm happy to host and
coordinate it.

The limitations:
- It would get multiple output options, including one where you can use it
to use custom actions (like greylisting).
- Downloads could be limited to hourly or daily downloads per IP (to prevent
people using up too many resources).
- People could sign up to a newsletter to get notifications of policy
changes.
- Downloading/using it is free and at the risk of the person/company that
downloads and uses it.
- Contributing to the list could be done by sending an e-mail and/or using
an online form.

Also note:
- Donations/money won't be accepted.
- On some pages regarding this free service I might put ads (the files with
the information won't contain any ads that could cause problems with
postfix).

I also want to publish a list of known mail servers with broken rDNS, to
lower the number of FPs when blocking on non-matching rDNS.

With kind regards,

Mark Scholten



Re: RBL Spam question

2010-11-03 Thread Ned Slider

On 03/11/10 21:54, Stan Hoeppner wrote:

Ned Slider put forth on 11/3/2010 3:11 PM:


Stan, and others who are using this file - have any of you looked at the
overlap with greylisting? I would imagine that the vast majority of
clients with dynamic/generic rDNS would be spambots and as such I would
expect greylisting to block the vast majority anyway, and without the
risk of FPs. IOW what percentage of connection attempts from clients
with dynamic/generic rDNS will retry? Of course the benefits of growing
such a list now would become immediately apparent the day spambots learn
to retry to overcome greylisting.


Hmm.  The CBL still exists.  The PBL and other "dynamic" dnsbls still
exist.  I guess they've not heard of this master of zombie killers
called greylisting. ;)

The performance impact of greylisting is substantially higher than an
access table lookup, yes, even the caching ones such as postgrey.  You
also have the retry interval finger tapping with greylisting, waiting
for order confirmation, list subscription, airline reservation, etc.
Greylisting is simply not an option at some organizations due to
management policy.  Greylisting is not a panacea.


Yes, I take your point, greylisting is not for everyone and can in 
itself cause issues.




If this expression file is to evolve, some of the first additions will
likely be patterns matching snowshoe farms and other spam sources
different from the generic broadband type hosts targeted by the current
expressions.


Personally, I'd be in favour of splitting things like snowshoe into a 
separate file rather than having one file for everything. WRT snowshoe, 
I've had more success listing by IP (cidr table) rather than trying to 
list/match domains but I do block by domain where blocking by IP isn't 
effective -  I guess a case of using the right tool for the job in hand 
(for example, Spamhaus CSS vs DBL)




Regarding the FP issue you raise, I think you're approaching it from the
wrong perspective.  These are regular expressions.  They match rDNS
naming patterns.  We're not talking about something sophisticated like
Spamhaus Zen or the CBL which exclusively use spamtraps and auto expire
listings a short while after the emission stops.  For just about every
/21 consumer/small biz broadband range that any of these patterns may
match, there is likely going to be a ham server or 3 nestled in there,
maybe small biz or a kid with a Linux box and Postfix in the basement.



I understand your view, I think it's just I have a lower tolerance for 
collateral damage if that's an appropriate term. Having looked for 
ranges that are likely to be sending *me* ham, I could probably 
whitelist or comment out a handful of major UK ISPs and I'd be good to 
go. The chances of me receiving ham from a small biz or a kid with a 
Linux box and Postfix in the basement in Russia or China or many Eastern 
European countries is extremely slim.


The main "issue" I have is that I'm largely able to filter spam whilst 
staying within the confines of the RFCs. I'm not aware of an RFC that 
states the rDNS of an smtp server should be of the form mail.example.com 
and not dynamic/generic in nature. Sure, we all understand that's best 
practice and desirable but it's also not always the reality as you 
yourself know only too well.



That's why I say "whitelist where necessary" when promoting use of this
regex set.  I haven't checked the entire list, but I'm pretty sure all
the patterns match mostly residential type ranges.  Some ISPs mix small
biz in with residential, which is stupid, but they do it.  My
residential ISP is an example of such, as we discussed.  With a method
this basic, there's no way around rejecting the ham servers without
whitelisting.  If you start removing patterns due to what you call FPs,
pretty soon there may be few patterns left.  If you start adding
patterns to the top of the file to specifically whitelist those ham
sources, you're now starting to duplicate the DNSWL project, with the
exception that such regex patterns will only be realized retroactively
after an FP.  Such a method of weeding out the ham servers is absolutely
the opposite of scale.  Any ham server within a residential type range
should be listed by its OP at dnswl anyway.  Do you query dnswl Ned?  If
not, I wonder how many of your FPs wouldn't be rejected if you did.



Yes, I have no problem whitelisting - as you say it's knowing who/what 
to whitelist in advance so you don't lose mail. I do check DNSWL (I'm 
listed there myself), but I check and score it in SpamAssassin, not from 
within Postfix. Unfortunately none of the FPs I discovered were listed 
on DNSWL - all but one were hosted on BTOpenWorld.com. My guess is that 
those who know about rDNS PTR records but are unable to change them 
might request a listing on DNSWL, and then there's the rest. 
Unfortunately I suspect the vast majority have no clue what a rDNS PTR 
record is.


My interest is not so much in how I whitelist and sol

Re: RBL Spam question

2010-11-03 Thread Stan Hoeppner
Ned Slider put forth on 11/3/2010 3:11 PM:

> Stan, and others who are using this file - have any of you looked at the
> overlap with greylisting? I would imagine that the vast majority of
> clients with dynamic/generic rDNS would be spambots and as such I would
> expect greylisting to block the vast majority anyway, and without the
> risk of FPs. IOW what percentage of connection attempts from clients
> with dynamic/generic rDNS will retry? Of course the benefits of growing
> such a list now would become immediately apparent the day spambots learn
> to retry to overcome greylisting.

Hmm.  The CBL still exists.  The PBL and other "dynamic" dnsbls still
exist.  I guess they've not heard of this master of zombie killers
called greylisting. ;)

The performance impact of greylisting is substantially higher than an
access table lookup, yes, even the caching ones such as postgrey.  You
also have the retry interval finger tapping with greylisting, waiting
for order confirmation, list subscription, airline reservation, etc.
Greylisting is simply not an option at some organizations due to
management policy.  Greylisting is not a panacea.

If this expression file is to evolve, some of the first additions will
likely be patterns matching snowshoe farms and other spam sources
different from the generic broadband type hosts targeted by the current
expressions.

Regarding the FP issue you raise, I think you're approaching it from the
wrong perspective.  These are regular expressions.  They match rDNS
naming patterns.  We're not talking about something sophisticated like
Spamhaus Zen or the CBL which exclusively use spamtraps and auto expire
listings a short while after the emission stops.  For just about every
/21 consumer/small biz broadband range that any of these patterns may
match, there is likely going to be a ham server or 3 nestled in there,
maybe small biz or a kid with a Linux box and Postfix in the basement.

That's why I say "whitelist where necessary" when promoting use of this
regex set.  I haven't checked the entire list, but I'm pretty sure all
the patterns match mostly residential type ranges.  Some ISPs mix small
biz in with residential, which is stupid, but they do it.  My
residential ISP is an example of such, as we discussed.  With a method
this basic, there's no way around rejecting the ham servers without
whitelisting.  If you start removing patterns due to what you call FPs,
pretty soon there may be few patterns left.  If you start adding
patterns to the top of the file to specifically whitelist those ham
sources, you're now starting to duplicate the DNSWL project, with the
exception that such regex patterns will only be realized retroactively
after an FP.  Such a method of weeding out the ham servers is absolutely
the opposite of scale.  Any ham server within a residential type range
should be listed by its OP at dnswl anyway.  Do you query dnswl Ned?  If
not, I wonder how many of your FPs wouldn't be rejected if you did.

To date, I can't recall a single FP here due to these regexes.  This is
one of the reasons I like it so well and promote it.  As always, YMMV.

-- 
Stan


Re: RBL Spam question

2010-11-03 Thread Ned Slider

On 03/11/10 19:04, Stan Hoeppner wrote:

Charles Marcus put forth on 11/3/2010 8:49 AM:

On 2010-11-02 10:07 PM, Stan Hoeppner wrote:

...
check_client_access pcre:/etc/postfix/fqrdns.pcre
...


I keep meaning to say/ask - thanks for this - and do you update this
frequently/ever? Meaning, would anyone using it want to download new
versions periodically (I'm thinking the answer is no, but just want to
confirm)?


I've only added a single entry, the very last one, which breaks
tradition with the rest of the file.  The expression I added targets a
snowshoe operation, whereas the rest target PBL type hosts.  Other than
that all I've done is clean up a few errors in the original expressions
so the file could run as a PCRE instead of a regexp.

Ned Slider asked the same thing off list recently regarding making this
a project.  I think it would be a great idea for this to become a larger
community project.  I'm just not in a position to lead it or host it.
I'd probably try to contribute a little to it here and there though.



Yes, to me this looked like just the type of thing that might benefit 
from multiple contributors in terms of growing the list. I had a quick 
look through my own logs and made a few additions so there's certainly 
lots of room for growth. I also spent some time searching my inbox(es) 
for false positives (ham from servers who would have been blocked) and 
identified a couple ranges (these will most likely depend on your mail 
flow and the country you're in), and have been giving some thought as to 
how best to handle these. In terms of contributing data, it could be as 
easy as deploying the file before any DNSBLs in postfix restrictions and 
then submitting missed dynamic/generic rDNS strings that subsequently 
get blocked by the DNSBLs (from pflogsumm or similar).
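
The contribution workflow described above, harvesting the rDNS strings of clients that only the DNSBLs caught, can be sketched as a short log scan. The reject-line pattern matched here is an assumption about typical Postfix logging; adjust it for your own format:

```python
#!/usr/bin/env python3
# Sketch: collect the rDNS names of clients that a DNSBL rejected,
# i.e. candidate additions the PCRE file missed.
import re
from collections import Counter

# Assumed shape of a Postfix DNSBL reject line.
RBL_REJECT = re.compile(r'reject: RCPT from ([^\[\s]+)\[[0-9.]+\].*blocked using')

def missed_rdns(log_lines):
    """Count rDNS names (skipping 'unknown') seen in DNSBL rejects."""
    names = Counter()
    for line in log_lines:
        m = RBL_REJECT.search(line)
        if m and m.group(1) != 'unknown':
            names[m.group(1)] += 1
    return names
```

The most frequent names would then be eyeballed for generic dynamic-looking patterns worth generalising into new expressions.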


However, I'm in two minds. For my own personal usage (small home server) 
there's little point atm as I'm easily able to block the vast majority 
of spam using standard postfix restrictions and greylisting - even the 
DNSBLs only see crumbs now I've switched postgrey to return "451", and 
what does make it through wouldn't be caught by this list. I simply 
don't need to deploy this measure and risk the concomitant false 
positives that it might create. OTOH the concept appeals and I can see 
how such a file might be invaluable for smaller organisations who don't 
qualify for free Spamhaus usage and can't afford/don't want to pay for a 
subscription.


Stan, and others who are using this file - have any of you looked at the 
overlap with greylisting? I would imagine that the vast majority of 
clients with dynamic/generic rDNS would be spambots and as such I would 
expect greylisting to block the vast majority anyway, and without the 
risk of FPs. IOW what percentage of connection attempts from clients 
with dynamic/generic rDNS will retry? Of course the benefits of growing 
such a list now would become immediately apparent the day spambots learn 
to retry to overcome greylisting.




Re: RBL Spam question

2010-11-03 Thread Stan Hoeppner
Charles Marcus put forth on 11/3/2010 8:49 AM:
> On 2010-11-02 10:07 PM, Stan Hoeppner wrote:
>> Last, but not least important by any means (understatement), you may
>> wish to try out:
>> http://www.hardwarefreak.com/fqrdns.pcre
>>
>> Implement this as:
>>
>> smtpd_recipient_restrictions
>>  permit_mynetworks
>>  permit_sasl_authenticated
>>  reject_unauth_destination
>>  ...
>>  check_client_access pcre:/etc/postfix/fqrdns.pcre
>>  ...
> 
> I keep meaning to say/ask - thanks for this - and do you update this
> frequently/ever? Meaning, would anyone using it want to download new
> versions periodically (I'm thinking the answer is no, but just want to
> confirm)?

I've only added a single entry, the very last one, which breaks
tradition with the rest of the file.  The expression I added targets a
snowshoe operation, whereas the rest target PBL type hosts.  Other than
that all I've done is clean up a few errors in the original expressions
so the file could run as a PCRE instead of a regexp.

Ned Slider asked the same thing off list recently regarding making this
a project.  I think it would be a great idea for this to become a larger
community project.  I'm just not in a position to lead it or host it.
I'd probably try to contribute a little to it here and there though.

> Thanks again Stan,

You're welcome.  I'm glad it helps others besides me.  Also thank the
anonymous soul who donated it to spam-l so long ago.

-- 
Stan


Re: RBL Spam question

2010-11-03 Thread Charles Marcus
On 2010-11-02 10:07 PM, Stan Hoeppner wrote:
> Last, but not least important by any means (understatement), you may
> wish to try out:
> http://www.hardwarefreak.com/fqrdns.pcre
> 
> Implement this as:
> 
> smtpd_recipient_restrictions
>   permit_mynetworks
>   permit_sasl_authenticated
>   reject_unauth_destination
>   ...
>   check_client_access pcre:/etc/postfix/fqrdns.pcre
>   ...

I keep meaning to say/ask - thanks for this - and do you update this
frequently/ever? Meaning, would anyone using it want to download new
versions periodically (I'm thinking the answer is no, but just want to
confirm)?

Thanks again Stan,

-- 

Best regards,

Charles


Re: RBL Spam question

2010-11-02 Thread Stan Hoeppner
Jack put forth on 11/2/2010 3:56 PM:

> I'm just checking all my spam settings on my postfix servers and I wanted to
> know if anyone is using any newer RBL's than below?
> 
> (which have a low false positive rate)

Low FP noted, FSVO "low FP".

>reject_rbl_client zen.spamhaus.org,
>reject_rbl_client bl.spamcop.net,
>reject_rbl_client psbl.surriel.com,
>reject_rbl_client ix.dnsbl.manitu.net,
>reject_rbl_client b.barracudacentral.org,

This response may be a little more than what you're asking for, but if
you implement all of these suggestions you may be pleasantly
surprised by how little spam you see afterward. :)

What you have there is probably plenty already.  In fact it may be too
many.  I'd suggest analyzing your logs and eliminating any of those that
aren't catching anything regularly.  You've got a lot of overlap there,
each successive query having less and less of a chance to do you any
good, at the cost of 5 dns lookups per inbound ham.  I did such an
analysis over a year ago and cut my reject_rbl_client config from 6 to
only 2 dnsbls.  Currently I use zen and surriel and they don't even
catch much due to the rest of my A/S config, although zen catches more
than surriel.  YMMV and all that.

I suggest you try out the following as well and see what results you
get.  The DBL should catch a bit of pesky snowshoe spam that the dnsbl
combo above does not:

reject_rhsbl_client dbl.spamhaus.org
reject_rhsbl_sender dbl.spamhaus.org
reject_rhsbl_helo dbl.spamhaus.org

You may also consider implementing Sahil Tandon's header check tcp
server daemon, which also catches a fair amount of snowshoe and other
spam.  It checks the from, message-id, and reply-to headers against
Spamhaus DBL, SURBL, and URIBL.  Instructions for using it are comments
in the perl file itself.

http://people.freebsd.org/~sahil/scripts/checkdbl.pl.txt

And, as always, for performance and other reasons, you should be using a
local dns resolver, not your ISP's, for dns queries originating from
your Postfix server.  If you're not already using a local resolver I
recommend you install a caching resolver directly on the Postfix server
and change your /etc/resolv.conf to "nameserver 127.0.0.1".  You may
need to restart Postfix for this change to take effect, especially if
running in a chroot.  I use pdns-recursor on my Postfix host.  Combining
the rhsbl checks to dbl.spamhaus.org and the query to the same dnsbl
made by Sahil's daemon yields _four_ queries to the same destination for
the same information.  Using a local resolver cuts this to a single
remote query and 3 cached queries, decreasing your response times,
increasing Postfix performance, and decreasing the load on the dnsbl
servers and the public infrastructure.  Use something like
pdns-recursor.  It's a win for everyone, esp you.

Last, but not least important by any means (understatement), you may
wish to try out:
http://www.hardwarefreak.com/fqrdns.pcre

Implement this as:

smtpd_recipient_restrictions
permit_mynetworks
permit_sasl_authenticated
reject_unauth_destination
...
check_client_access pcre:/etc/postfix/fqrdns.pcre
...

Put it above all of your other anti-spam checks (not including the
inbuilt Postfix restrictions such as reject_unlisted_recipient) since
local table lookups are infinitely faster than dns lookups, although not
so much so if you have pdns-recursor.  This set of PCREs targets rdns
names of mostly residential broadband IPs in various countries.  Its
target is zombie spam sources.  It catches things Spamhaus' PBL doesn't
and can stop zombie spam before the CBL lists the IP, which is very
nice.  I used to frequently get hit by zombie spam here before it was
listed in CBL.  No more.  This set has pretty much put an end to that.
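
As an illustration of what such a table contains: Postfix PCRE access-table entries map a regex on the client's rDNS name to an action. These example lines are invented stand-ins, not actual fqrdns.pcre contents:

```
# Illustrative entries only -- not from the real file
/^\d+-\d+-\d+-\d+\.dsl\.example-isp\.net$/            REJECT Generic/dynamic rDNS
/^(?:dhcp|dyn|ppp)[.-]\S+\.example-telecom\.example$/ REJECT Generic/dynamic rDNS
```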

It has been noted that these PCREs will infrequently FP on some MTAs
sending from within ISP IP space that contains both residential and
small business customers on an ADSL or cable network.  If such cases
arise, simply whitelist the IP.  For each one of these that may occur,
you're protecting your mailboxes from thousands of zombies within the
same ADSL/cable network.
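
One way to do that whitelisting is a small CIDR table evaluated just ahead of the PCRE file; the file name and address below are placeholders:

```
smtpd_recipient_restrictions
    permit_mynetworks
    permit_sasl_authenticated
    reject_unauth_destination
    ...
    check_client_access cidr:/etc/postfix/rdns_whitelist.cidr
    check_client_access pcre:/etc/postfix/fqrdns.pcre
    ...

# /etc/postfix/rdns_whitelist.cidr
192.0.2.25    OK
```

Because Postfix stops at the first matching restriction, the OK short-circuits the PCRE lookup for the whitelisted sender only.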

-- 
Stan