----- Original Message ----- 
From: "Chris Santerre" <[EMAIL PROTECTED]>

> >-----Original Message-----
> >From: Bill Landry [mailto:[EMAIL PROTECTED]]
> >Sent: Wednesday, December 08, 2004 11:04 AM
> >To: users@spamassassin.apache.org; [EMAIL PROTECTED]
> >Subject: Re: Feature Request: Whitelist_DNSRBL
> >
> >
> >----- Original Message ----- 
> >From: "Daryl C. W. O'Shea" <[EMAIL PROTECTED]>
> >
> >>  >> Was the whitelist you were referring to really the SURBL
> >>  >> server-side whitelist?
> >>  >
> >>  >
> >>  > Yes! But local SURBL whitelists are needed to reduce
> >>  > traffic and time.
> >>
> >>
> >> I'd much rather see SURBL respond with 127.0.0.0 with a
> >> really large TTL for whitelisted domains.  Any sensible setup
> >> will run a local DNS cache, which will take care of the load
> >> and time issue.
> >
> > I agree, and have suggested a whitelist SURBL several times on
> > the SURBL discussion list, but it has always fallen on deaf
> > ears - nary a response.  It would be nice if someone would at
> > least respond as to why this is not a reasonable suggestion.
>
> Well, we have talked about it and .... didn't come up with a solid answer.
> The idea would cause more lookups and time for those who don't cache DNS.
> We do have a whitelist that our private research tools do poll. The idea
> is that if it isn't in SURBL, then it is white.
>
> This also puts more work on the already overworked contributors. ;)

Actually, I was thinking of the whitelist that Jeff has already compiled at
http://spamcheck.freeapp.net/whitelist-domains.sort (currently over 66,500
whitelisted domains).  If you set a long TTL on the query responses, it
would certainly cut down on follow-up queries for anyone who is running a
caching DNS server.  It would also be a lot less resource-intensive than
trying to run a local whitelist.cf of over 66,500 whitelisted domains.
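
To make the caching argument concrete, here is a toy Python sketch (purely
illustrative - not part of SURBL or SpamAssassin, and the 127.0.0.0 answer,
the one-week TTL, and the function names are just placeholders) showing why
a long TTL means only the first lookup for a given whitelisted domain ever
leaves the local resolver:

#!/usr/bin/env python
# Toy model of a local caching resolver sitting in front of a whitelist
# DNS zone.  Everything here is hypothetical: fake_upstream_lookup() stands
# in for a real query to the list's name servers.

import time

WHITELIST_ANSWER = "127.0.0.0"  # proposed "this domain is whitelisted" reply
WHITELIST_TTL = 7 * 24 * 3600   # a "really large" TTL: one week, in seconds

_cache = {}           # domain -> (answer, expiry timestamp)
upstream_queries = 0  # lookups that actually reach the list's servers

def fake_upstream_lookup(domain):
    """Stand-in for querying the whitelist zone; always answers 'listed'."""
    global upstream_queries
    upstream_queries += 1
    return WHITELIST_ANSWER, WHITELIST_TTL

def cached_lookup(domain):
    """Return the cached answer while its TTL is valid, else ask upstream."""
    now = time.time()
    cached = _cache.get(domain)
    if cached and cached[1] > now:
        return cached[0]  # served locally, no traffic to the list servers
    answer, ttl = fake_upstream_lookup(domain)
    _cache[domain] = (answer, now + ttl)
    return answer

if __name__ == "__main__":
    # 10,000 messages all citing the same whitelisted domain...
    for _ in range(10000):
        cached_lookup("example.com")
    # ...but only one query ever reached the upstream servers.
    print("upstream queries:", upstream_queries)

So the load on the list's servers would grow with the number of distinct
whitelisted domains a site actually sees, not with its mail volume.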

Anyway, just a thought...

Bill
