Re: Parsing DCC

2006-04-30 Thread Dan

Nevermind, I found the entry:


use_dcc { 0 | 1 } (default: 1)
Whether to use DCC, if it is available.

dcc_timeout n (default: 10)
How many seconds to wait for dcc to complete before continuing
without the results.


dcc_body_max NUMBER
dcc_fuz1_max NUMBER
dcc_fuz2_max NUMBER
DCC (Distributed Checksum Clearinghouse) is a system similar to  
Razor. This option sets how often a message's body/fuz1/fuz2 checksum  
must have been reported to the DCC server before SpamAssassin will  
consider the DCC check as matched.
As nearly all DCC clients are auto-reporting these checksums you
should set this to a relatively high value, e.g. 999999 (this is
DCC's MANY count).

The default is 999999 for all these options.
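
A minimal local.cf fragment exercising these options might look like the
following (the values shown are just the documented defaults, not tuning
advice):

```
# local.cf -- DCC-related settings (values are the documented defaults)
use_dcc 1
dcc_timeout 10
dcc_body_max 999999
dcc_fuz1_max 999999
dcc_fuz2_max 999999
```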


Re: Parsing DCC

2006-04-30 Thread Dan
All that said, I can't see why you'd want to do anything else with  
DCC.

The FP rate on DCC, even with the defaults of 999999 for fuzz counts,
is significant. In the SA 3.1.0 set3 mass-checks, DCC_CHECK had a S/O
of 0.979, meaning that 2.1% of email matched by it was nonspam.


So more detail is not needed.  Is the level you're describing  
equivalent to "many"?


Dan



Re: Parsing DCC

2006-04-30 Thread Matt Kettler
Dan wrote:
>>> 1) Is capturing header output text the best way to implement DCC in SA?
>>
>> No, using the DCC plugin that already comes with SA is the best way.
>>
>> Edit your v310.pre and load the dcc plugin. SA already has pre-scored
>> and tested rules built in. No further work needed.
>
> Excellent Matt.  Is there a way to process the various DCC outputs
> with this architecture?  Searching the "factory" configuration, this
> entry seems to handle scoring?:
>
> ifplugin Mail::SpamAssassin::Plugin::DCC
> score DCC_CHECK 0 1.37 0 2.17
> endif # Mail::SpamAssassin::Plugin::DCC
>
> This looks a bit inflexible, can the plugin do more than take a single
> DCC score and assign 3 weights to the output? 

No.. at this time the DCC plugin is either hit, or not. You can adjust
the fuzz threshold with the dcc_*_max options. See the plugin docs at:

http://spamassassin.apache.org/full/3.1.x/dist/doc/Mail_SpamAssassin_Plugin_DCC.html
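
For reference, loading the plugin is a one-line change in v310.pre (this is
the stock loadplugin line, normally present but possibly commented out):

```
loadplugin Mail::SpamAssassin::Plugin::DCC
```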


All that said, I can't see why you'd want to do anything else with DCC.
The FP rate on DCC, even with the defaults of 999999 for fuzz counts,
is significant. In the SA 3.1.0 set3 mass-checks, DCC_CHECK had a S/O
of 0.979, meaning that 2.1% of email matched by it was nonspam.

I can't see how any lower fuzz values would be of any use, as they
should, theoretically, have lower S/O's, and would only be worth small
fractions of a point.


Re: Parsing DCC

2006-04-30 Thread Dan
1) Is capturing header output text the best way to implement DCC  
in SA?


No, using the DCC plugin that already comes with SA is the best way.

Edit your v310.pre and load the dcc plugin. SA already has pre-scored
and tested rules built in. No further work needed.


Excellent Matt.  Is there a way to process the various DCC outputs  
with this architecture?  Searching the "factory" configuration, this  
entry seems to handle scoring?:


ifplugin Mail::SpamAssassin::Plugin::DCC
score DCC_CHECK 0 1.37 0 2.17
endif # Mail::SpamAssassin::Plugin::DCC

This looks a bit inflexible, can the plugin do more than take a  
single DCC score and assign 3 weights to the output?


Thanks!
Dan


Re: intercource oriented newsgroups

2006-04-30 Thread jdow

"Review your spam bucket, compadre."
{o.o}
- Original Message - 
From: "Igor Chudov" <[EMAIL PROTECTED]>

To: "Spamassassin Mailing List" 
Sent: Sunday, April 30, 2006 20:21
Subject: intercource oriented newsgroups



A few of my clients are moderated newsgroups that have graphic posts
describing certain sexual perversions. They receive posts via email
and approve/reject them.

Their posts trip spamassassin sometimes, understandably, they talk
about big reproducting o rgans, arouzal, etc. 


So... What can I do, is the only option is to basically turn off SA
for them? Or is there some rule like no_sex_filters = 1?

i


RE: intercource oriented newsgroups

2006-04-30 Thread Dallas L. Engelken
> -Original Message-
> From: Igor Chudov [mailto:[EMAIL PROTECTED] 
> Sent: Sunday, April 30, 2006 22:22
> To: Spamassassin Mailing List
> Subject: intercource oriented newsgroups
> 
> A few of my clients are moderated newsgroups that have 
> graphic posts describing certain sexual perversions. They 
> receive posts via email and approve/reject them.
> 
> Their posts trip spamassassin sometimes, understandably, they 
> talk about big reproducting o rgans, arouzal, etc. 
> 
> So... What can I do, is the only option is to basically turn 
> off SA for them? Or is there some rule like no_sex_filters = 1?
> 

skip SA on newsgroup mail (or whitelist_from_rcvd)...  if the reason for
running newsgroup mail through SA is because your newsgroups get
spammed, then you have a bigger problem to fix first.

d


intercource oriented newsgroups

2006-04-30 Thread Igor Chudov
A few of my clients are moderated newsgroups that have graphic posts
describing certain sexual perversions. They receive posts via email
and approve/reject them.

Their posts trip spamassassin sometimes, understandably, they talk
about big reproducting o rgans, arouzal, etc. 

So... What can I do, is the only option is to basically turn off SA
for them? Or is there some rule like no_sex_filters = 1?

i


Re: Parsing DCC

2006-04-30 Thread Matt Kettler
Matt Kettler wrote:
> 1) Is capturing header output text the best way to implement DCC in SA?
>   
>
> No, using the DCC plugin that already comes with SA is the best way.
>
> Edit your v310.pre and load the dcc plugin. SA already has pre-scored
> and tested rules built in. No further work needed.
>
>   
One more note.. When you load the DCC plugin, SA will actually call DCC
itself, so you can remove whatever is adding those headers.

SA will attempt to find a dccifd socket, and use that if present. If
dccifd is not running, SA will call dccproc.
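
As a rough illustration, that fallback order amounts to something like the
sketch below (the socket path is an assumption; SA's dcc_home and
dcc_dccifd_path settings can point elsewhere):

```python
import os
import stat

# Assumed default dccifd socket path -- an illustration, not SA's actual code.
DCCIFD_SOCKET = "/var/dcc/dccifd"

def pick_dcc_interface(socket_path=DCCIFD_SOCKET):
    """Approximate SA's choice: use dccifd if its socket exists, else dccproc."""
    try:
        mode = os.stat(socket_path).st_mode
    except OSError:
        return "dccproc"
    return "dccifd" if stat.S_ISSOCK(mode) else "dccproc"

print(pick_dcc_interface())
```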



Re: Parsing DCC

2006-04-30 Thread Matt Kettler
Dan wrote:
> This is partly about DCC and partly about regex (yes, I've ordered two
> more regex books).  
>
>
> First, there's the basic all or nothing output:
>
> X-DCC-servers-Metrics: ui1 1049; bulk Body=many Fuz1=many Fuz2=many
> X-DCC-servers-Metrics: ui1 1049; bulk Body=0 Fuz1=0 Fuz2=0
>
> ...that can be captured with basic rules:
>
> header DCCBODY_m ALL =~ /X-DCC-.{1,500}Body=many/i
> header DCCFUZ1_m ALL =~ /X-DCC-.{1,500}Fuz1=many/i
> header DCCFUZ2_m ALL =~ /X-DCC-.{1,500}Fuz2=many/i
>
> 1) Is capturing header output text the best way to implement DCC in SA?

No, using the DCC plugin that already comes with SA is the best way.

Edit your v310.pre and load the dcc plugin. SA already has pre-scored
and tested rules built in. No further work needed.


Re: Those "Re: good obfupills" spams

2006-04-30 Thread jdow

From: "Matt Kettler" <[EMAIL PROTECTED]>


jdow wrote:

And it is scored LESS than BAYES_95 by default. That's a clear signal
that the theory behind the scoring system is a little skewed and needs
some rethinking.


No.. It does not mean there's a problem with the scoring system. It
means you're trying to apply a simple linear model to something which is
inherently not linear, nor simple.  This is a VERY common misconception. 


I have a few more thoughts that are probably more "constructive" than
merely saying that the perceptron model is "obviously" wrong where the
rubber meets the road.

It seems to me that the observed operation of the perceptron is driving
scores towards the minimum amount over 5.0 that can be managed and still
capture most of the spam.

I've been operating here on a slightly different principle, at least
for my own rules. I work to drive scores away from 5.0, in both
directions as needed. If I see a low scoring captured spam being
always scored greater than 8 or 10 I am pleased. When I see items
in the 5 to 10 range I figure out what I can do to drive it to the
correct direction, ham or spam. (Bayes is usually my choice of
action. I usually discover another email that has a mid level Bayes
score rather than an extreme level. And I wish I could codify how I
choose to feed Bayes. I feed it almost on an intuitive level, "This
is Bayes food" or "Bayes already has a lot of this food and is
obviously a little confused for my mail mix." That's hardly a good
"rule" for feeding that I can pass on to people. )

So rather than having perceptron try to push towards a relatively
smooth curve of all scores it should work to push the overall score
profile into what one wag in an SF story called a "brassiere curve",
which is wonderfully descriptive when you think of some of the 50's
and 60's fashions. {^_-} If it can create a viable valley with very
few messages scoring near 5.0 and as wide a variance between the ham
peak and the spam peak it may act better.

THAT said, I note that I use meta rules regularly to generate some
modest negative scores as well as positive scores. This has had some
good side effects on the reliability of scoring here. I've noticed that
a small few of the SARE rules, over time, decayed into being fairly
good indications of ham rather than spam. Since SARE is more "agile"
than the basic SA rule sets it might be good if the SARE people took
this as a tool for lift and separation on the ham and spam
peaks. It might be interesting to notice if the obverse of "in this
BL" is a decent indication of "not spam" and give that a modest bit
of negative score for some cases.

I just pulled RATWR10a_MESSID because it was hitting 13% of ham and
4% of spam, for example. Perhaps I should have given it a very small
negative score instead. I note right now that SPF_PASS seems to hit
50% (!) of ham and only 4% of spam. Perhaps it, too, should have a
slight negative score to help increase the span between the ham peak
and the spam peak.

It does seem clear to me that the objective is not to create minimum
score to mark as spam so much as to create as large a separation between
typical ham and spam scores as possible. The more reliable rules should
have higher negative and positive scores as appropriate.

And of course, the final caveat, is that I am running a two person
install of SpamAssassin with per user rules and scores with two fairly
intelligent (although some people question that about me) people running
their own user rules and Bayes. I also do not use automatic anything.
I cannot get over the idea that automatic whitelist and automatic learning
are not necessarily stable concepts UNTIL you have a very reliable BAYES
setup and set of rules from manual training. I have that and still cannot
convince myself to "fix what isn't broken."

{^_^}   Joanne


Re: Those "Re: good obfupills" spams

2006-04-30 Thread jdow

From: "Matt Kettler" <[EMAIL PROTECTED]>


jdow wrote:

And it is scored LESS than BAYES_95 by default. That's a clear signal
that the theory behind the scoring system is a little skewed and needs
some rethinking.


No.. It does not mean there's a problem with the scoring system. It
means you're trying to apply a simple linear model to something which is
inherently not linear, nor simple.  This is a VERY common misconception. 


Please bear with me for a minute as I explain some things.

This is more-or-less the same misconception as expecting rules with
higher S/O's to always score higher than those with lower S/O's.
Generally this is true, but there's more to consider that can cause the
opposite to be true.

The score of a rule in SA is not a function of the performance of that
one rule, nor should it be. The score of a SA rule is a function of what
combinations of rules it matches in conjunction with. This creates a
"real world fit" of a complex set of rules against real-world behavior.

This complex interaction between rules results in most of the "problems"
people see. People inherently expect simple linearity. However, consider
that SA scoring is a function of a several-hundred-variable equation
attempting to perform an approximation of optimal fit to a sampling of
human behavior. Why, based on that, would you ever expect the scores of
two of those hundreds of variables to be linear as a function of spam
hit rate?

It is perfectly reasonable to assume that most of the mail matching
BAYES_99 also matches a large number of the stock spam rules that SA
comes with. These highly-obvious mails are the model after which most SA
rules are made in the first place. Thus, these mails need less score
boost, as they already have a lot of score from other rules in the ruleset.

However, mails matching BAYES_95 are more likely to be "trickier", and
are likely to match fewer other rules. These messages are more likely to
require an extra boost from BAYES_95's score than those which match
BAYES_99.


Matt, I understand the model. I believe it is the wrong model to apply.
Experience indicates this is very much the case. And I must remind you
that an ounce of actual experience is worth a neutron star worth of
theory. When I raise the score of BAYES_99 and 95 to be monotonically
increasing with 99 at or very near to 5.0 I demonstrably get far fewer
escaped spams at a cost of VERY few (low enough to be unnoticed)
caught hams. When experience disagrees with the model some extra thought
is required with regards to the model.

As far as I can see the perceptron does not handle single factors that
are exceptionally good at catching spam with exceptionally few false
alarms AND is often the ONLY marker for actual spam that is caught. This
latter is very often the case here with regards to BAYES_99. (The logged
hams caught as spam are escaped spams or else cases that are impossible
to catch correctly without complex meta rules, such as LKML or other
technical code, patch, and diff bearing mailing lists that also do not
adequately filter being relayed through. For these lists I have actually
had to artificially rescore all the BAYES scores using meta rules. I am
fine tuning these alterations at the moment. I've had some spams escape.
My OWN number of mismarked hams has become vanishingly small. Loren does
not have these rules yet. If he wants 'em I'll give them to him quickly.)
Note the "goodness" of BAYES_99 here - stats including me and Loren over
80,000 messages total.

  1  BAYES_99  20156   4.88  25.08  91.61   0.07
  1  BAYES_00  46107  15.54  57.36   0.07  78.98

The BAYES_99's *I* have seen on "ham" are running exclusively to spams
that managed to fire a negative scoring rule for mailing lists. LKML and
FreeBSD are the two lists so affected.

Now, in the last two days I have had some ham come in as spam, not due
to BAYES_9x at all. It was a political discussion that happened to
trigger a lot of the mortgage spam rules. "Cain't do much about that!"
(At least not without giving Yahoo Groups an utterly unwarranted negative
score.)

Based on *MY* experience the perceptron performance model was not the
appropriate model to choose.

{^_^}


Re: SQLite

2006-04-30 Thread Jonas Eckerman

Michael Parker wrote:


On a stable system with working backup routines running SQLite with
'PRAGMA SYNCHRONOUS=OFF' for bayes makes a lot of sense.



It has been awhile, but I believe you just need to do this at create
time, so you'd only need a proper .sql file that did it.


I think that might have been true for the pragma "default_synchronous" (or 
something like that) in SQLite 2.*.

In SQLite 3.* there is no persistent setting for this, so the command must be 
given for each "connection" to the database.
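
A quick way to see that the setting is per-connection (sketched here in
Python's sqlite3 rather than Perl DBI, purely for brevity): a fresh
connection to the same database file comes back with the default, not the
value a previous connection set.

```python
import os
import sqlite3
import tempfile

# PRAGMA synchronous: 0 = OFF, 1 = NORMAL, 2 = FULL (the default).
path = os.path.join(tempfile.mkdtemp(), "bayes.db")

conn1 = sqlite3.connect(path)
conn1.execute("PRAGMA synchronous=OFF")
print(conn1.execute("PRAGMA synchronous").fetchone()[0])  # 0 -- OFF on this connection

conn2 = sqlite3.connect(path)  # a second, independent connection
print(conn2.execute("PRAGMA synchronous").fetchone()[0])  # 2 -- back to FULL
```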


That said, that doesn't mean that I wouldn't welcome a contribution from
someone who went off and did the work, so feel free to create the module
and do the testing.


I created a small module for SQLite that simply inherits almost everything 
from Mail::SpamAssassin::SQL. It seems to work and I've done some benchmarks 
with it.

I did notice two things though:

1: In phase 2 and 5 there's an enormous amount of calls to _db_connect.

IIRC "connecting" to a SQLite database can be potentially time consuming, so 
using more persistent database connections *might* give a SQLite bayes-store 
better performance. Actually, a more persistent connection makes sense for 
other SQL modules as well.

I might try the benchmarks again with an override so that the untie_db method 
doesn't really disconnect from the database in the SQLite module.

2: In phase 5 I see a number of "warn: closing dbh with active statement 
handles" in "output.txt". While such a warning *can* indicate a potential 
memory leak, I have no idea whether it is any problem or not in this case.


Submit a bug with the code and results attached and


I'll do some more testing, before doing that. Here's the benchmarks so far:

Total times:
SDBM:  44:07
DB_File:   49:43
SQLite:  2:26:03

Detailed Times:
Phase    SDBM     DB_File  SQLite
1.a       305.95   375.05  1719.04
1.b       234.67   308.42   759.78
2.-       934.95   939.60   923.21
?           1.19     1.18     1.23
3.-        11.41    24.38    28.84
4.a       213.47   235.41  2982.64
4.b       110.09   122.56   737.73
5.a       484.73   578.32  1139.67
5.b       349.21   396.20   470.01
?           1.44     1.58     1.24
Total    2647.13  2982.70  8763.40

Obviously SQLite is nowhere near SDBM or DB_File when just using the standard 
SQL module this way.

Notable though is that SQLite actually performed best in phase 2, and that's 
one of the phases where I saw a big number of calls to _db_connect.


discovered it just was not worth it, you were better off sticking with
Berkeley DB or the MUCH faster SDBM.


I believe you're right.

I don't have any normal SQL server on the machine I'm testing this on. 
Otherwise it would make sense comparing the benchmarks for SQLite with MySQL 
and PostgreSQL.

I was a bit surprised that the difference between SDBM and DB_File wasn't 
bigger though; I had the impression it would be. It is possible that some 
parts of the benchmark aren't working right for me, I guess. Or Berkeley DB 
has become faster than it was. Or it just fits well together with FreeBSD's 
file system.


Improvements to the
benchmark are also more than welcome.


I added helper and tests files for the SQLite module, and will send them later.

Regards
/Jonas

--
Jonas Eckerman, FSDB & Fruktträdet
http://whatever.frukt.org/
http://www.fsdb.org/
http://www.frukt.org/



Parsing DCC

2006-04-30 Thread Dan
This is partly about DCC and partly about regex (yes, I've ordered two
more regex books).

First, there's the basic all or nothing output:

X-DCC-servers-Metrics: ui1 1049; bulk Body=many Fuz1=many Fuz2=many
X-DCC-servers-Metrics: ui1 1049; bulk Body=0 Fuz1=0 Fuz2=0

...that can be captured with basic rules:

header DCCBODY_m ALL =~ /X-DCC-.{1,500}Body=many/i
header DCCFUZ1_m ALL =~ /X-DCC-.{1,500}Fuz1=many/i
header DCCFUZ2_m ALL =~ /X-DCC-.{1,500}Fuz2=many/i

1) Is capturing header output text the best way to implement DCC in SA?

Then there are variations in between 0 and many (these are actual):

X-DCC-servers-Metrics: ui1 1049; bulk Body=0 Fuz1=0 Fuz2=1027
X-DCC-servers-Metrics: ui1 1049; bulk Body=many Fuz1=many Fuz2=230
X-DCC-CTc-dcc2-Metrics: ui1 1031; bulk Body=40 Fuz1=0 Fuz2=0
X-DCC-servers-Metrics: ui1 1049; bulk Body=0 Fuz1=0 Fuz2=2
X-DCC-servers-Metrics: ui1 1049; bulk Body=0 Fuz1=1 Fuz2=1

2) Are DCC scores less than many or 1000's worth valuing, particularly
1's and 2's?

3) If so, is their relevancy (likely ham or likely spam) linear and
segment-able into 1's, 10's, 100's, 1000's, such that this might work?:

header DCCBODY_4 ALL =~ /X-DCC-.{1,500}Body=[0-9]{4}\b/i
header DCCFUZ1_4 ALL =~ /X-DCC-.{1,500}Fuz1=[0-9]{4}\b/i
header DCCFUZ2_4 ALL =~ /X-DCC-.{1,500}Fuz2=[0-9]{4}\b/i
header DCCBODY_3 ALL =~ /X-DCC-.{1,500}Body=[0-9]{3}\b/i
header DCCFUZ1_3 ALL =~ /X-DCC-.{1,500}Fuz1=[0-9]{3}\b/i
header DCCFUZ2_3 ALL =~ /X-DCC-.{1,500}Fuz2=[0-9]{3}\b/i
header DCCBODY_2 ALL =~ /X-DCC-.{1,500}Body=[0-9]{2}\b/i
header DCCFUZ1_2 ALL =~ /X-DCC-.{1,500}Fuz1=[0-9]{2}\b/i
header DCCFUZ2_2 ALL =~ /X-DCC-.{1,500}Fuz2=[0-9]{2}\b/i
header DCCBODY_1 ALL =~ /X-DCC-.{1,500}Body=[1-9]{1}\b/i
header DCCFUZ1_1 ALL =~ /X-DCC-.{1,500}Fuz1=[1-9]{1}\b/i
header DCCFUZ2_1 ALL =~ /X-DCC-.{1,500}Fuz2=[1-9]{1}\b/i

4) If so, is this the way to do it?

5) Are these regex's adequate for what I want and do not want to "see"
and can they be improved?

Thanks!
Dan
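
[Editor's note on question 3: the digit-count bucketing those rules aim for
can be sanity-checked outside SA. A small Python sketch, using one of the
example headers from the message, shows how the counts split into the
1/2/3/4-digit buckets. Note that the SA regexes as written also overlap (a
3-digit count like 230 still ends in two digits, so the {2} rule fires too),
which this sketch ignores.]

```python
import re

# One of the example X-DCC headers from the message above.
header = "X-DCC-servers-Metrics: ui1 1049; bulk Body=many Fuz1=many Fuz2=230"

# Pull out each count; values are either "many" or a decimal number.
counts = dict(re.findall(r"(Body|Fuz1|Fuz2)=(many|\d+)", header))

def bucket(value):
    """Mirror the intent of the DCC*_1..4 rules: bucket by number of digits."""
    if value == "many":
        return "many"
    n = int(value)
    return 0 if n == 0 else len(str(n))

print({name: bucket(v) for name, v in counts.items()})
# {'Body': 'many', 'Fuz1': 'many', 'Fuz2': 3}
```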

Re: Those "Re: good obfupills" spams

2006-04-30 Thread Matt Kettler
jdow wrote:
> And it is scored LESS than BAYES_95 by default. That's a clear signal
> that the theory behind the scoring system is a little skewed and needs
> some rethinking.

No.. It does not mean there's a problem with the scoring system. It
means you're trying to apply a simple linear model to something which is
inherently not linear, nor simple.  This is a VERY common misconception. 

Please bear with me for a minute as I explain some things.

This is more-or-less the same misconception as expecting rules with
higher S/O's to always score higher than those with lower S/O's.
Generally this is true, but there's more to consider that can cause the
opposite to be true.

The score of a rule in SA is not a function of the performance of that
one rule, nor should it be. The score of a SA rule is a function of what
combinations of rules it matches in conjunction with. This creates a
"real world fit" of a complex set of rules against real-world behavior.

This complex interaction between rules results in most of the "problems"
people see. People inherently expect simple linearity. However, consider
that SA scoring is a function of a several-hundred-variable equation
attempting to perform an approximation of optimal fit to a sampling of
human behavior. Why, based on that, would you ever expect the scores of
two of those hundreds of variables to be linear as a function of spam
hit rate?

It is perfectly reasonable to assume that most of the mail matching
BAYES_99 also matches a large number of the stock spam rules that SA
comes with. These highly-obvious mails are the model after which most SA
rules are made in the first place. Thus, these mails need less score
boost, as they already have a lot of score from other rules in the ruleset.

However, mails matching BAYES_95 are more likely to be "trickier", and
are likely to match fewer other rules. These messages are more likely to
require an extra boost from BAYES_95's score than those which match
BAYES_99.










qmail auth mail received as spam

2006-04-30 Thread hamann . w
Here is a piece of mail that gets classified as spam,
although it should not.
I have replaced a few email addresses by fake ones; they don't contribute to 
the problem.

So what happens: a mail is sent via an authenticated session, to a qmail / 
qmail-scanner
setup running at mydomain.de 
Here qmail adds its received header
Received: from p5499d2c7.dip.t-dialin.net (HELO test) ([EMAIL PROTECTED])
identifying that the mail originates from a dynamic ip. Note that out of many 
smtp auth
patches only this one seems to put the ESMTPA keyword in the header.

Next, qmail-scanner adds its own header
Received: from 84.153.210.199 ([EMAIL PROTECTED]) by mail3 (envelope-from 
<[EMAIL PROTECTED]>, uid 0) with qmail-scanner-2.01 
saying essentially the same things again (dynamic ip, known user name) just 
with the dynamic
dns name of the sender replaced by its ip address. 

The mail is sent to its destination, another qmail machine running SA.
Here, SA assigns score to DUL lists, and to a numeric ip in helo (which
was only added by qmail-scanner). The HOST_EQ_D_D_D_D and HOST_EQ_D_D_D_DB
entries also seem to be triggered by the qmail-scanner header.

Wolfgang Hamann

>> X-Spam-Level: 
>> X-Spam-Checker-Version: SpamAssassin 3.1.1 (2006-03-10) on mailserver
>> X-Spam-Flag: YES
>> X-Spam-Status: Yes, score=8.2 required=5.2 tests=DK_SIGNED,HELO_EQ_IP_ADDR,
>>  HOST_EQ_D_D_D_D,HOST_EQ_D_D_D_DB,NO_REAL_NAME,RCVD_IN_NJABL_DUL,
>>  RCVD_IN_SORBS_DUL,RCVD_NUMERIC_HELO autolearn=no version=3.1.1
>> X-Spam-Report: 
>>  *  0.7 HOST_EQ_D_D_D_D HOST_EQ_D_D_D_D
>>  *  1.1 HELO_EQ_IP_ADDR HELO using IP Address (not private)
>>  *  0.6 NO_REAL_NAME From: does not include a real name
>>  *  0.9 HOST_EQ_D_D_D_DB HOST_EQ_D_D_D_DB
>>  *  0.0 DK_SIGNED Domain Keys: message has an unverified signature
>>  *  1.3 RCVD_NUMERIC_HELO Received: contains an IP address used for HELO
>>  *  2.0 RCVD_IN_SORBS_DUL RBL: SORBS: sent directly from dynamic IP 
>> address
>>  *  [84.153.210.199 listed in dnsbl.sorbs.net]
>>  *  1.7 RCVD_IN_NJABL_DUL RBL: NJABL: dialup sender did non-local SMTP
>>  *  [84.153.210.199 listed in combined.njabl.org]
>> Received: (qmail 13293 invoked by uid 0); 30 Apr 2006 10:13:00 -
>> Received: from [EMAIL PROTECTED] by mail1 by uid 81 with 
>> qmail-scanner-1.20rc2 
>>  (clamdscan: 0.88. hbedv: AntiVir / Linux Version 2.1.6-23 spamassassin: 
>> 3.1.1.  Clear:RC:0:. 
>>  Processed in 3.195621 secs); 30 Apr 2006 10:13:00 -
>> Received: from shared3.provider.de (HELO shared3.provider.de) ([EMAIL 
>> PROTECTED])
>>   by mail1.provider.de with AES256-SHA encrypted SMTP; 30 Apr 2006 10:12:56 
>> -
>> Received: (qmail 30465 invoked by uid 0); 30 Apr 2006 10:12:55 -
>> Received: from 84.153.210.199 ([EMAIL PROTECTED]) by shared3 (envelope-from 
>> <[EMAIL PROTECTED]>, uid 0) with qmail-scanner-2.01 
>>  (clamdscan: 0.88.1/1426. hbedv: 6.34.1.27/6.34.1.12. spammassassin: 3.1.1  
>>  Clear:RC:0(84.153.210.199):. 
>>  Processed in 1.306009 secs); 30 Apr 2006 10:12:55 -
>> X-Qmail-Scanner-Mail-From: [EMAIL PROTECTED] via shared3
>> X-Qmail-Scanner: 2.01 (Clear:RC:0(84.153.210.199):. Processed in 1.306009 
>> secs)
>> Comment: DomainKeys? See http://antispam.yahoo.com/domainkeys
>> DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws;
>>   s=default; d=mydomain.de;
>>   
>> b=ur58T/KNSEqQPRhnEoNUvKyvKlEhz9l5nbRkZCcpUuuKn+CDCuuSMRpRRPVeBInvhGF5Z/j8dRxEfZL74d3A/A36I4dxQuqQZHNPJ8aLTzIqQRnv76ynl4CB+zDzo/VGsYiLD3R07lOe+BTwtknoSdTQ3ENbHp37KnDE37mZHXo=
>>   ;
>> Received: from p5499d2c7.dip.t-dialin.net (HELO test) ([EMAIL PROTECTED])
>>   by www.mydomain.de with ESMTPA; 30 Apr 2006 10:11:39 -
>> From: [EMAIL PROTECTED]
>> To: 



Re: Tracking Compound Meta's

2006-04-30 Thread Dan

What about using the SA 'test rule' mechanism?
(IE use "T_testA1" rather than "__testA1").
Effectively, the micro weighting is done automagically and in a  
standardized way.


Nice, micro weighting without the required score lines.  Now I just  
need to ignore or absorb the extra scores.


Dan