> 
> James Harper <[email protected]>
> writes:
> 
> > . Use greylisting. I wrote my own here that has some smarts about
> > trusting domains (eg bigpond) once a certain number of senders have
> > been seen. I used to greylist for an hour but only 15 minutes now, and
> > only for email with a spamassassin score above some threshold. The
> > idea being that by waiting a bit the sender may get blacklisted in
> > that time if I am the recipient of a new spam run.
> 
> IIRC we greylist for one second.  The fact that they're retrying *at
> all* shows they're not spammers.  We also have to whitelist bigpond :-/

My solution doesn't require whitelisting bigpond, because it sees enough 'good' 
emails with low spamassassin scores that the domain gets whitelisted directly; 
it sorts itself out within a week or so, probably less. Optus is (was?) the 
same in that they'd retry from different IP addresses.

My reasoning for greylisting for longer is that a new spam run can take a while 
to appear on the blacklists and other checksum validation sites, so delaying 
suspect email helps a bit, although I haven't done any measurement on this in 
years.

> Other things you didn't mention are:
> 
> Laying your MXs out like this stops spammers that don't try >1 MX and
> that try MXs in reverse order.
> 
>     10 null-mx.cyber.com.au.         <--- always closed 25
>     20 mail.cyber.com.au.            <--- one of the middle pair
>     30 exetel.cyber.com.au.          <---   ought to always work
>     40 tarbaby.junkemailfilter.com.  <--- teergrube
> 

I did that in the late '90s, mainly because we were on a crap ISDN connection 
and Telstra (with no spam protection at all) was our secondary MX, so all the 
spam just went there.

My greylist filter communicates between the primary and the secondary too, so 
the databases stay in sync. One addition I've wanted to make for a while, given 
a layout like yours above, is to track which MXs a sender connects to, so if I 
had a setup like yours:

10 then 20 = good (maybe reduce the spam score by a bit)
20 or 30 without trying 10 first = bad (maybe increase the spam score a bit)
40 without 10-30 = bad (maybe add to a blacklist score in the greylist database)

That by itself would be easy enough to implement given that I already 
communicate between them, but it's the exceptions that make it hard:
1. some senders remember that the primary is down and go straight to the 
secondary for a while, until the negative cache entry times out
2. what if 10 is broken and so I don't see that it hit 10 first then 20?
3. what if 10-30 are all unreachable?
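The basic scoring rules above could be sketched like this (a rough illustration with made-up score adjustments; it deliberately ignores the negative-cache and outage exceptions, which are exactly the hard part):

```python
# Rough sketch of scoring a sender by which MX it tried first.
# MX preferences and score deltas are illustrative assumptions.

MX_PRIMARY, MX_SECONDARY, MX_TERTIARY, MX_TEERGRUBE = 10, 20, 30, 40

def mx_order_score(mxs_tried):
    """Return a spam-score adjustment from the sequence of MX
    preferences a sender connected to (lowest-preference-first
    is the RFC-compliant order)."""
    if not mxs_tried:
        return 0.0
    first = mxs_tried[0]
    if first == MX_PRIMARY:
        return -0.5   # tried the primary first: looks legitimate
    if first == MX_TEERGRUBE:
        return +5.0   # straight to the teergrube: blacklist-worthy
    return +1.0       # skipped the primary: a bit suspicious
```

A real implementation would need to suppress the penalty whenever the skipped MXs were actually unreachable at the time, which is the exception case described above.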

Senders that violate the standards are the main frustration I'm seeing. I'd 
love to say "people who violate RFCs get what they deserve", but when the RFC 
violators are big companies like Telstra (although I think they've been pretty 
good lately), your users aren't interested in detailed explanations about 
standards and why sticking to them is a good idea; they just want their email.

> We also use reject_unauth_pipelining to throw away peers if they don't
> wait for the server's response when they should.
> 

Yes, not waiting for a response is a big giveaway that you're talking to a 
spambot!
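For anyone following along, in Postfix that check is a standard restriction (a minimal fragment; where you hook it in depends on your setup):

```
# main.cf: reject clients that send SMTP commands
# before the server has responded
smtpd_data_restrictions = reject_unauth_pipelining
```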

James

_______________________________________________
luv-main mailing list
[email protected]
http://lists.luv.asn.au/listinfo/luv-main
