> > This is much more than rumor.  In addition to regex style filters
> > that look for generic/dynamic looking PTRs, more and more sites
> > are also blocking if the PTR does not match the A.
> 
>   The latter is nothing new -- it's called a "double reverse lookup".
> That's been around since at least the mid-1990s.
[...]
> (Still of questionable effectiveness -- spammers buy domain names,
> too -- but at least it's doing *something*.)

I didn't claim it was new in itself.  What is relatively new is how
widespread it has become to outright block because of it.  Even just
2-3 years ago it was pretty much only the so-called "lunatic fringe" of
spam fighters that would 5xx when the PTR and A didn't match.

Botnet spam is the primary target of this type of filtering because
virtually none of these machines are in IP space where the crooks can
control the PTR (hence the PTR doesn't match the A).  On SPAM-L people
consistently post that this method alone blocks anywhere from 40-80% of
their entire spam load, so I wouldn't say that it's of limited
effectiveness.  This is not hard for me to believe since the majority of
spam still seems to come via botnets.  I don't have hard stats here, but
I can say that all the machines that actually get to our content filters
are [...]  This method, of course, does nothing to stem the tide of spam
relayed from Google, Yahoo, and the other webmail providers (a rapidly
growing category now that CAPTCHAs are being broken and people's accounts
are being phished).
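
For anyone who hasn't actually wired this up, the lookup logic itself is
cheap.  Here's a rough sketch in Python using the stock resolver calls --
my own illustration of the general technique, not code lifted from any
particular MTA:

import socket

def fcrdns_ok(ip):
    # Forward-confirmed reverse DNS: the connecting IP's PTR name must
    # resolve back (via an A lookup) to that same IP.
    try:
        ptr_name, _, _ = socket.gethostbyaddr(ip)             # PTR lookup
    except socket.herror:
        return False                                          # no PTR at all
    try:
        _, _, addresses = socket.gethostbyname_ex(ptr_name)   # A lookup
    except socket.gaierror:
        return False                              # PTR points at nothing
    return ip in addresses

A real MTA would run this at connect time against the client IP and issue
the 5xx right there; the sketch only shows the check itself.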

The main problem seems to be the "false positives" (a term with a variety
of definitions depending on your outlook), such as the case here with the
OP.

>  Indeed, just
> checking for the existence of a PTR record is pretty useless, since
> anyone can put anything they want for IP address space they control.

I agree the PTR existence check is of limited use now that more
generic-style PTR records are in place.  Once upon a time it was more
prevalent for dynamic nodes to have no PTR at all.  AOL is a prime example
of a site which does this "existence-only" checking and rejecting.  At the
time, when Carl Hutzler was at the helm, it was an effective method for
them (this was 3-4 years ago IIRC), and it does have a relatively low
filtering cost and a minimal chance of false positives.  I would be
interested to hear from the current AOL postmaster team on its
effectiveness in current times.
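
The existence-only variant is essentially just the first half of the
lookup sketched above.  Again purely illustrative, not AOL's actual
implementation:

import socket

def has_ptr(ip):
    # Existence-only check: accept as long as *any* PTR record exists,
    # without confirming it resolves back to the connecting IP.
    try:
        socket.gethostbyaddr(ip)    # raises socket.herror if no PTR
        return True
    except socket.herror:
        return False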

>   Pattern matching in an attempt to identify domain names which "look
> funny" is something I haven't encountered myself, which is why I
> qualified it that way.

It is not quite as widespread, for a variety of reasons (regex complexity
being right up there, I'm sure), but here's a page that describes it in
some detail along with some interesting stats:
<http://www.mostlygeek.com/2007/02/09/most-effective-header-filtering-rules/>
People do claim it's fairly effective, though I, for one, am leery to
implement it myself, particularly since our filtering is good enough as it
is.  Plus, regex starts to hit the CPU more...
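
To give a feel for the sort of patterns involved, here's the general shape
of such a rule (made-up patterns to show the idea, not the actual rules
from that page):

import re

# Illustrative patterns for "generic/dynamic-looking" reverse DNS names,
# e.g. dsl-189-234-11-5.example.net or c-24-9-101-77.hsd1.example.com.
# These are assumptions about the general shape, not anyone's production
# rules.
GENERIC_PTR_PATTERNS = [
    re.compile(r'\b\d{1,3}[.-]\d{1,3}[.-]\d{1,3}[.-]\d{1,3}\b'),   # embedded IP
    re.compile(r'\b(dsl|dyn(amic)?|dhcp|ppp|pool|cable)\b', re.I), # access-line words
]

def looks_generic(ptr_name):
    # True if the PTR name matches any "dynamic-looking" pattern.
    return any(p.search(ptr_name) for p in GENERIC_PTR_PATTERNS)

print(looks_generic("dsl-189-234-11-5.example.net"))   # True
print(looks_generic("mail.example.org"))               # False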

> > Fixing this is not as big of a problem as it was a couple years back
> > if you have a business level account.
> 
>   Unfortunately, one still encounters problems when there are multiple
> layers between the person finding the problem and the person who can
> fix it.

I feel your pain.  It still amazes me that the largest organizations
(which typically have the largest budgets for experienced I.T. personnel
and equipment) are often the worst at managing their networks.  Sites like
*******.com (a major international GPS/GIS vendor) are doing C/R in an
implementation with egregious backscatter potential, and another software
vendor had DNS that was completely hosed by a consultant.  With some
patience and handholding, they were able to fix things.


I'm off to TechEd next week and will definitely be saying ehelo to the
Exchange folks. :)  Anyone else going?

~JasonG
