I'm aware of some of the past discussions about the use of the SORBS RBLs
in SpamAssassin. The scores today are as follows:

score RCVD_IN_SORBS_BLOCK 0 # n=0 n=1 n=2 n=3
score RCVD_IN_SORBS_DUL 0 0.001 0 0.001 # n=0 n=2
score RCVD_IN_SORBS_HTTP 0 2.499 0 0.001 # n=0 n=2
score RCVD_IN_SORBS_MISC 0 # n=0 n=1 n=2 n=3
score RCVD_IN_SORBS_SMTP 0 # n=0 n=1 n=2 n=3
score RCVD_IN_SORBS_SOCKS 0 2.443 0 1.927 # n=0 n=2
score RCVD_IN_SORBS_WEB 0 0.614 0 0.770 # n=0 n=2
score RCVD_IN_SORBS_ZOMBIE 0 # n=0 n=1 n=2 n=3

The zero scores for DUL were set because a lot of people thought that
list produced too many false positives (I don't see it that way, but OK).
Another argument for zero-scoring or not using SORBS at all was that the
RBL contains a lot of old (i.e. no longer current) entries in its spam
section, given the delisting policy. OK.

But today I took a deeper look at the SORBS RBLs and found that
there is a very simple misconfiguration in the SA rules. The RBL
check is done against the big 'dnsbl.sorbs.net' zone:
eval:check_rbl('sorbs', 'dnsbl.sorbs.net.')
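
For context: in the stock ruleset, the individual RCVD_IN_SORBS_*
tests do not each query their own sub-zone; they key off the return
code of that one lookup against the aggregate zone. A rough sketch
from memory (not a verbatim copy of 20_dnsbl_tests.cf; rule names
and return codes may differ slightly in your SA version):

  # single DNS lookup against the aggregate zone
  header __RCVD_IN_SORBS     eval:check_rbl('sorbs', 'dnsbl.sorbs.net.')
  tflags __RCVD_IN_SORBS     net

  # sub-tests only match on the A record returned by that lookup
  header RCVD_IN_SORBS_HTTP  eval:check_rbl_sub('sorbs', '127.0.0.7')
  header RCVD_IN_SORBS_SOCKS eval:check_rbl_sub('sorbs', '127.0.0.3')

So swapping the zone in that one base lookup changes the data source
for all of the sub-tests at once.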

And _that_, in my opinion, is wrong. The RBL lookup should be done
against the zone 'safe.dnsbl.sorbs.net' instead. This zone is an
aggregate of most of the same SORBS partial lists as dnsbl.sorbs.net,
but with one simple difference: unlike dnsbl.sorbs.net, it does not
contain the 'recent.spam' and 'old.spam' partial lists. The only spam
data listed in 'safe.dnsbl.sorbs.net' is from the last 24 hours, so
the arguments against using SORBS specifically because of its spam
delisting policy no longer apply. One could simply change the RBL
lookup to the right zone and then also give the spam hits in that
RBL a (low) score, as sketched below.
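
A minimal local.cf sketch of that change. To be clear about the
assumptions: the base rule name follows the stock 3.x naming, and
RCVD_IN_SORBS_SPAM with the 127.0.0.6 return code is my own
illustrative addition, not a rule that ships with SpamAssassin; the
score is just an example:

  # redefine the base lookup to the aggregate zone that excludes
  # the recent.spam and old.spam partial lists
  header __RCVD_IN_SORBS      eval:check_rbl('sorbs', 'safe.dnsbl.sorbs.net.')

  # hypothetical sub-rule for the remaining (last-24-hours) spam
  # data, given a deliberately low score
  header RCVD_IN_SORBS_SPAM   eval:check_rbl_sub('sorbs', '127.0.0.6')
  describe RCVD_IN_SORBS_SPAM SORBS: sender listed in 24h spam data
  score RCVD_IN_SORBS_SPAM    0.5

Since SpamAssassin takes the last definition of a rule, putting this
in local.cf should be enough to override the zone without touching
the distributed rule files.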

A description of the different SORBS partial zones, as well as the
aggregate zones, can be found here: https://www.sorbs.net/using.shtml
