On Thu, 25 Jan 2007 10:28:21 -0500, Andy Figueroa
<[EMAIL PROTECTED]> wrote:

>Thanks, Matt.  That sounds like a good suggestion.
>
>Nigel, since you have the emails, if you could capture the debug output 
>in a file and post like you did the messages, perhaps someone wise could 
>evaluate what is going on.
>
>You can capture the debug output by using:
>spamassassin -D -t < message1 2> debug1.txt
>
>Andy Figueroa
>
>Matt Kettler wrote:
>> Andy Figueroa wrote:
>>> Matt (but not just to Matt), I don't understand your reply (though I
>>> am deeply in your debt for the work you do for this community).  The
>>> sample emails that Nigel posted are identical in content, including
>>> obfuscation.  I've noted the same situation.  Yet, the scoring is
>>> really different.  On the low-scoring ones, DCC and RAZOR2 didn't hit,
>>> and the BAYES score is different.  The main differences are the forged
>>> From and To addresses in the headers.  I thought these samples were
>>> worthy of deeper analysis.
>> 
>> Well, there might be other analysis worth making.
>> 
>> However, Nigel asked why the drugs rules weren't matching. I answered
>> that question alone.
>> 
>> Not sure why the change in razor/dcc happened.
>> 
>> BAYES changes are easily explained by the header changes, but a deeper
>> analysis would involve running the messages through spamassassin -D
>> bayes and looking at the exact tokens.
>> 

I'll sit down with a beer later and run the debug on them. In the
meantime, Steve Basford from sanesecurity.com has added them to the
ClamAV add-on I mentioned a while back.

Sanesecurity's main download point is
http://sanesecurity.com/clamav/downloads.htm (in my experience here
it's worked very well indeed). For those of you who are interested
and running multiple servers, contact me off-list for the URL to the
scripts James Rallo mod'd for updating multiple back-end servers (or
you can hunt back through the mail archives for it :-D).
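
If you're only running a single box, a minimal cron sketch along these
lines should do (the signature file name below is a placeholder, use
whatever the download page actually lists, and adjust the ClamAV
database path for your system; clamd will pick up the new file on its
next self-check):

#!/bin/sh
# Sketch only: scam.ndb is a placeholder name, fetch whichever
# signature files the sanesecurity download page lists.
cd /tmp
wget -N http://sanesecurity.com/clamav/scam.ndb
# Make sure the file actually loads before giving it to clamd.
clamscan --quiet -d scam.ndb /dev/null && cp scam.ndb /var/lib/clamav/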

Kind regards

Nigel
