From: "Matt Kettler" <[EMAIL PROTECTED]>
Sent: Friday, June 03, 2005 9:30 PM


Kevin Sullivan wrote:
On Jun 2, 2005, at 8:27 PM, Matt Kettler wrote:

If one's wrong, they are ALL wrong.

SA's rule scores are evolved based on a real-world test of a
hand-sorted corpus of fresh spam and ham. The whole scoreset is
evolved simultaneously to optimize the placement pattern.

Of course, one thing that can affect accuracy is spam accidentally
misplaced into the ham pile, which can cause some heavy score biasing.
A little of this is unavoidable, since human mistakes happen, but a lot
of it will deflate scores and produce a lot of false negatives (FNs).


The rule scores are optimized for the spam which was sent at the time
that version of SA was released (actually, at the time the rule scoreset
was calculated).  Since then, the static SA rules have become less
useful since spammers now write their messages to avoid them.  The only
rules which spammers cannot easily avoid are the dynamic ones:  bayes
and network checks (RBLs, URIBLs, razor, etc).

On my systems, I raise the scores for the dynamic tests since they are
the only ones which hit a lot of today's spam.
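For anyone wanting to do the same, it usually comes down to a few score
overrides in local.cf. The rule names and values below are only
illustrative (pick the tests and numbers that suit your own mail flow):

  # local.cf -- example only: boost the dynamic (Bayes/network) tests
  score BAYES_99      4.5
  score BAYES_95      3.5
  score RAZOR2_CHECK  2.5
  score URIBL_SBL     3.0
  score RCVD_IN_XBL   3.5

Tune the numbers against your own corpus; overweighting the network
tests can backfire if your DNS resolution is slow or unreliable.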


Very true. Spammers quickly adapt to most of the static tests (i.e. body rule sets like antidrug) soon after an SA release, so those rules lose some effectiveness over time.


Maybe we have to make a separate version of the score-file.
You could then install an official SA 3.0.3 release and download a score-file, say version 3.0.3-date, and once every month there would be another official score-file. Spammers can adjust their spam to pass the "static" tests, but the scores would keep changing, so after each score-file change they would have to start adjusting all over again.

Right now we have to wait for 3.0.4 before there will be any change in the static scores.
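(You can already approximate this by hand today: put the downloaded
scores in their own .cf file under the site config directory and replace
it whenever a new one is published. The file name and values below are
just an example:

  # /etc/mail/spamassassin/updated-scores.cf  (file name is only an example)
  # overrides a couple of the scores shipped with 3.0.3
  score DRUGS_ERECTILE  2.2
  score MIME_HTML_ONLY  0.7

SpamAssassin reads every *.cf file in that directory, so after a restart
of spamd the new scores take effect; a monthly score-file could be
distributed in exactly this form.)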

With kind regards,
Met vriendelijke groet,

Maurice Lucas
TAOS-IT

