
Kai writes:
> On 3/29/2004 at 1:31 PM, "Justin Mason" <[EMAIL PROTECTED]> wrote:
> 
> > Tony Finch writes:
> >> On Mon, 29 Mar 2004, Jeff Chan wrote:
> >> >
> >> > So a technique to defeat the randomizer's greater count is to look
> >> > at the higher levels of the domain, under which SURBL will always
> >> > count the randomized children of the "bad" parent.  In this case
> >> > the URI diversity created through randomization hurts the spammer
> >> > by increasing the number of unique reports and increasing the
> >> > report count of their parent domain, making them more likely to
> >> > be added to SURBL.  (D'oh, this paragraph is redundant...)
> >> 
> >> Another approach is to blacklist nameservers that host spamvertized
> >> domains. If an email address or a URI uses a domain name whose nameservers
> >> are blacklisted (e.g. the SBL has appropriate listing criteria), or if the
> >> reverse DNS is hosted on blacklisted nameservers, that may be grounds for
> >> increasing the score.
> >> 
> >> I don't know if SA does this check yet.
> 
> > Yep, it does -- that's what the URIBL plugin does currently.
> 
> sorry, I still haven't been able to test the -current version, but it
> was my understanding based on the discussion under [Bug 1375] that URIBL
> checks only check the RECURSIVELY RESOLVED 'A' records of the host part
> of a URI against DNSBLs.
> 
> Justin: could you write a 5-liner saying what the final version of the
> module does, and in what order?

Sure, maybe when I get some free time ;)

In the meantime, the POD docs in Mail::SpamAssassin::Plugin::URIBL
should help.
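
Very roughly, though, the NS-based idea boils down to something like the
hand-wavy Net::DNS sketch below -- this is not the plugin code itself, and
the domain and the 'sbl.spamhaus.org' zone are only examples:

  use Net::DNS;

  # Hand-wavy outline only -- not the actual plugin code.
  my $res    = Net::DNS::Resolver->new(udp_timeout => 2);
  my $domain = 'hfgr33.us';   # registrar-level portion of the spamvertized URI

  # 1. find the domain's nameservers
  my @ns_names;
  my $ns_reply = $res->send($domain, 'NS');
  if ($ns_reply) {
      @ns_names = map  { $_->nsdname }
                  grep { $_->type eq 'NS' } $ns_reply->answer;
  }

  # 2. resolve each nameserver and check its address against a DNSBL
  for my $ns (@ns_names) {
      my $a_reply = $res->send($ns, 'A') or next;
      for my $rr (grep { $_->type eq 'A' } $a_reply->answer) {
          my $rev = join '.', reverse split /\./, $rr->address;
          my $hit = $res->send("$rev.sbl.spamhaus.org", 'A');
          print "$domain: NS $ns (", $rr->address, ") is listed\n"
              if $hit && $hit->answer;
      }
  }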

> I had previously thought this up as a defense against Ralsky's new randomized
> URLs like http://[a-z]{14}.hfgr33.us :
> 
> 1 - do non-recursive resolution of NS records for the second-level domain ONLY
>     (I have a list of third-level delegations for various country TLDs, too,
>     if required, but I have yet to see any spamvertized URLs in 3rd-level
>     delegation domains),
>     to avoid triggering their 'web bug' by way of them logging and tracking
>     DNS lookups.
>     This is a very fast query, always and only going to the nameservers of
>     TLDs, which will (hopefully) never be under the control of spammers
>     playing DoS games.

Interesting.  Can you suggest how to do that with Net::DNS?  I'm not
sure we have control to that degree.
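
Looking at Net::DNS::Resolver, it does at least let you disable recursion
and aim a query at a specific server, so I imagine you mean something like
this untested sketch (the TLD server address here is only a placeholder):

  use Net::DNS;

  # Untested sketch: ask a TLD nameserver directly (address is a placeholder)
  # for the NS delegation of the second-level domain, with recursion disabled.
  my $res = Net::DNS::Resolver->new(
      nameservers => ['192.0.2.1'],   # placeholder for a .us TLD server
      recurse     => 0,               # non-recursive query
      udp_timeout => 2,               # stringent timeout
  );

  my $reply = $res->send('hfgr33.us', 'NS');
  if ($reply) {
      # with recursion off, the delegation usually shows up in the
      # authority section rather than the answer section
      for my $rr ($reply->answer, $reply->authority) {
          print $rr->nsdname, "\n" if $rr->type eq 'NS';
      }
  }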

> 2 - If we don't do this, they can easily tell which recipient MX hosts use
>     this functionality, and single those specific ones out with messages
>     crafted to DoS this scheme in the future. This is especially important
>     as long as these URIBL methods are not widely adopted - and I am not
>     exclusively talking about SA here.

My take is to cut off at the registrar-registered portion, e.g.
"foo.co.uk", "foo.biz", etc., and use stringent timeouts.  The
scanning will always kill any pending lookups 2 seconds after
the normal DNSBL lookups complete.
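
The trimming itself can be table-driven; a minimal sketch, with a
deliberately tiny, purely illustrative list of two-level ccTLD suffixes:

  # Minimal sketch: reduce a hostname to its registrar-registered domain.
  # %two_level is an illustrative stand-in for a real ccTLD second-level list.
  my %two_level = map { $_ => 1 } qw(co.uk org.uk com.au co.nz);

  sub registrar_domain {
      my ($host) = @_;
      my @labels = split /\./, lc $host;
      return $host if @labels <= 2;
      my $suffix = join '.', @labels[-2, -1];
      # keep three labels for "co.uk"-style TLDs, two otherwise
      my $keep = $two_level{$suffix} ? 3 : 2;
      return join '.', @labels[-$keep .. -1];
  }

  # e.g. registrar_domain('kshdgfkqjwhegfx.hfgr33.us') => 'hfgr33.us'
  #      registrar_domain('www.example.co.uk')         => 'example.co.uk'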

> 3 - do DNSBL lookups on all such nameserver IPs - if there are more than,
>     say, 4, pick N random ones from the list, where
>     N = 4 + ((number_of_NS_hosts - 4) / 2)
>     (i.e. only every second one beyond the magic number of 4).

It performs lookups on all of them in parallel.
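
(The parallelism is just Net::DNS's background-query interface -- roughly,
and untested, with the nameserver addresses and the DNSBL zone as
placeholders:)

  use Net::DNS;

  # Rough, untested sketch of parallel DNSBL lookups via bgsend()/bgread().
  # The nameserver IPs and the DNSBL zone are placeholders.
  my $res    = Net::DNS::Resolver->new(udp_timeout => 2);
  my @ns_ips = ('192.0.2.10', '192.0.2.11');

  my %pending;
  for my $ip (@ns_ips) {
      my $rev = join '.', reverse split /\./, $ip;
      $pending{$ip} = $res->bgsend("$rev.sbl.spamhaus.org", 'A');
  }

  # a real implementation would select() on the sockets with a deadline;
  # this just polls each handle once for brevity
  for my $ip (keys %pending) {
      next unless $res->bgisready($pending{$ip});
      my $reply = $res->bgread($pending{$ip});
      print "$ip is listed\n" if $reply && $reply->answer;
  }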

>   - if you get DNSBL hits for 2 or more of the nameservers, abort
>     all further lookups and return a match on the rule.

Well, the timeout provides this as well; if I recall correctly it'll
complete as many as it can, no matter how many other hits there are.

> 4 - when all nameserver IP's are "clear" of DNSBL hits, proceed to query
>     them for A records for the hostname part of the URIs (or the IP numbers
>     in numeric URLs, in which case you never went through steps 1-3), and do
>     the DNSBL queries against them.

Currently it just does NS queries, on the basis that A lookups of the
hostname parts are easy to evade by randomizing the hostname portion
beyond the registrar domain, and that if we did an A lookup on that
hostname, we'd provide a back channel to the spammers for address
confirmation etc.  Hence I've deliberately avoided A lookups.

Can you think of an algorithm that'll do this reliably?  If we can
limit recursive lookups to the roots, it may help, but I'm not sure
whether that's possible.

I have a vague feeling that there is a wide range of evasions for this --
e.g. a spammer could set extremely low TTLs so the roots will time out
non-auth data very quickly; our non-recursive lookups would return no
match, but a recursive lookup from an MUA would cause the root to query
the NS and return a match.

> There is plenty of evidence that large numbers of spamvertized
> domains/websites are using (*) SBL-listed nameservers and networks:
> stop worrying about FPs for innocent domains hosted on these nameservers;
> it will become impossible for providers to run SBL-listed nameservers
> and remain ignorant of it. If they were quick to terminate spam sites
> pointing to their NSes, they'd never have landed in the SBL in the first
> place.
> 
>  (*) listing policies of other DNSBLs may have substantially different results
>      when used as URIBL query targets.

SBL is certainly working very well in these lookups; they're doing a
good job avoiding FPs.

In terms of worrying about FPs -- let mass-check worry about it ;)
There's no URI BL providing dangerous FPs at the moment.

> The only thing I am unsure about in the above is how to communicate
> whether a rule match happened based on NS DNSBL hits, or based on DNSBL
> hits for the A records of the host(s).

Well, given the NS-only nature, that's not a problem right now. ;)

--j.
