Re: dkim-test valid but spamassassin scores DKIM_INVALID

2023-10-25 Thread Niels Kobschätzki
> Matus UHLAR - fantomas wrote on 25.10.2023 16:11 CEST:
> 
>  
> >Matus UHLAR - fantomas wrote on 2023-10-25 09:36:
> >>I have:
> >>50_scores.cf:score DKIM_VALID -0.1
> >>
> >>check whether you really haven't set a score for DKIM_VALID anywhere, since 
> >>SA complains about it being zero.
> >>
> >>I guess this may cause DKIM_INVALID to misfire
> 
> On 25.10.23 13:08, Benny Pedersen wrote:
> >imho no, DKIM_INVALID has a score of 0.1; neither should be changed
> >
> >it's just a result tag, not a policy of any kind
> 
> This looks like OP has changed score of DKIM_VALID to 0:
> 
> > >Oct 25 07:10:54.364 [1687666] info: rules: meta test DKIM_INVALID has 
> > >dependency 'DKIM_VALID' with a zero score
> 
> and since  DKIM_INVALID depends on it:
> 
meta DKIM_INVALID  DKIM_SIGNED && !DKIM_VALID
> 
...it would make sense for DKIM_INVALID to hit whenever DKIM_SIGNED does,
since DKIM_VALID apparently can never fire.
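Putting the pieces quoted in this thread together (the stock -0.1 score from 50_scores.cf and the meta definition), the failure mode can be sketched as a config fragment; exact default values may differ between SpamAssassin versions:

```
# stock, as quoted in this thread:
score DKIM_VALID   -0.1                          # from 50_scores.cf
meta  DKIM_INVALID  DKIM_SIGNED && !DKIM_VALID

# a stray local override like the following zeroes DKIM_VALID; SA disables
# zero-scored rules, so DKIM_INVALID then fires on every DKIM-signed message:
score DKIM_VALID 0
```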

Thanks for your help everybody. After further inspection I found a file that 
must have originated a long time ago; such is the problem with inherited systems.
I had grepped only the files I usually modify (local.cf and some files that 
have a common file-name prefix for custom files) and /var/lib/spamassassin.

After grepping more thoroughly I found the perpetrator.
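For the record, the thorough search can be sketched like this; the directory layout and the name of the stray file are illustrative only (on a real system you would grep the actual rules locations, such as /etc/mail/spamassassin and /var/lib/spamassassin):

```shell
# Recreate the situation with illustrative files, then search recursively.
tmp=$(mktemp -d)
mkdir -p "$tmp/etc" "$tmp/var"
printf 'score DKIM_VALID -0.1\n' > "$tmp/var/50_scores.cf"    # stock value
printf 'score DKIM_VALID 0\n'    > "$tmp/etc/zz_inherited.cf" # the perpetrator

# grep every rules directory, not only the files you usually edit:
grep -rn 'DKIM_VALID' "$tmp"
```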

Thanks a lot again,

Niels


Re: dkim-test valid but spamassassin scores DKIM_INVALID

2023-10-25 Thread Matus UHLAR - fantomas

Matus UHLAR - fantomas wrote on 2023-10-25 09:36:

I have:
50_scores.cf:score DKIM_VALID -0.1

check whether you really haven't set a score for DKIM_VALID anywhere, since 
SA complains about it being zero.


I guess this may cause DKIM_INVALID to misfire


On 25.10.23 13:08, Benny Pedersen wrote:

imho no, DKIM_INVALID has a score of 0.1; neither should be changed

it's just a result tag, not a policy of any kind


This looks like OP has changed score of DKIM_VALID to 0:


>Oct 25 07:10:54.364 [1687666] info: rules: meta test DKIM_INVALID has 
dependency 'DKIM_VALID' with a zero score


and since  DKIM_INVALID depends on it:

meta DKIM_INVALID  DKIM_SIGNED && !DKIM_VALID

...it would make sense for DKIM_INVALID to hit whenever DKIM_SIGNED does,
since DKIM_VALID apparently can never fire.



--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I wonder how much deeper the ocean would be without sponges.


Re: dkim-test valid but spamassassin scores DKIM_INVALID

2023-10-25 Thread Benny Pedersen

Matus UHLAR - fantomas wrote on 2023-10-25 09:36:


I have:
50_scores.cf:score DKIM_VALID -0.1

check whether you really haven't set a score for DKIM_VALID anywhere, since SA 
complains about it being zero.


I guess this may cause DKIM_INVALID to misfire


imho no, DKIM_INVALID has a score of 0.1; neither should be changed

it's just a result tag, not a policy of any kind



Re: dkim-test valid but spamassassin scores DKIM_INVALID

2023-10-25 Thread Benny Pedersen

jdow wrote on 2023-10-25 09:07:


Methinks you have here a very good clue to set a non-zero value,
perhaps (most likely), a modest negative score.


changing that score is a fail on its own

use welcomelist_from_dkim instead
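For reference, the directive Benny mentions takes an author address and an optional signing domain (it is documented in the Mail::SpamAssassin::Plugin::DKIM docs; in releases before 4.0 it is spelled whitelist_from_dkim). The address below is purely hypothetical:

```
# hypothetical example; syntax: welcomelist_from_dkim author [signing-domain]
welcomelist_from_dkim newsletter@example.com example.com
```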



Re: dkim-test valid but spamassassin scores DKIM_INVALID

2023-10-25 Thread Benny Pedersen

Niels Kobschätzki wrote on 2023-10-25 08:46:


did you set the score of DKIM_VALID to 0?


DKIM_VALID is not overridden by any of my local rules, so I would not 
expect that to be the cause. But even if I set, for example,


score DKIM_VALID 0
in local.cf there is no change


rules are loaded in lexical order, so 00_local is read first while 99_local 
is read last; try adding it to a last-loaded file in the same dir as local.cf


if that works, grep for DKIM_VALID in all dirs with spamassassin rules, to 
confirm where the stupid error is :=)


was it DKIM_INVALID ?


Re: dkim-test valid but spamassassin scores DKIM_INVALID

2023-10-25 Thread Matus UHLAR - fantomas

On 25.10.23 07:21, Niels Kobschätzki wrote:
>I'm having here a mail that scores as DKIM_INVALID.  I tried sending the
> same mail to gmail for example and it tells me that DKIM is valid.  Now I
> put it through "spamassassin -D" and I am even more baffled because the
> debug seems to say that DKIM is valid but then scores as INVALID.

>Any idea why this could be?
>
>debug-output from "spamassassin -t -D dkim < message":
>
>Oct 25 07:10:52.341 [1687666] dbg: dkim: VALID DKIM, i=@my.domain.com, 
d=my.domain.com, s=inx, a=rsa-sha256, c=relaxed/relaxed, key_bits=2048, pass, 
matches author domain
>Oct 25 07:10:52.342 [1687666] dbg: dkim: signature verification result: PASS
>Oct 25 07:10:52.342 [1687666] dbg: dkim: adsp not retrieved, author domain 
signature is valid
>Oct 25 07:10:52.342 [1687666] dbg: dkim: adsp result: - (valid a. d. 
signature), author domain 'my.domain.com'
>Oct 25 07:10:52.352 [1687666] dbg: dkim: VALID signature by my.domain.com, 
author m...@my.domain.com, no valid matches
>Oct 25 07:10:52.352 [1687666] dbg: dkim: author m...@my.domain.com, not in any 
dkim whitelist
>Oct 25 07:10:54.125 [1687779] info: util: setuid: ruid=0 euid=0 rgid=0 0 
egid=0 0

>Oct 25 07:10:54.364 [1687666] info: rules: meta test DKIM_INVALID has 
dependency 'DKIM_VALID' with a zero score



Matus UHLAR - fantomas wrote on 25.10.2023 08:16 CEST:
did you set the score of DKIM_VALID to 0?


On 25.10.23 08:46, Niels Kobschätzki wrote:

DKIM_VALID is not overridden by any of my local rules, so I would not expect 
that to be the cause. But even if I set, for example,

score DKIM_VALID 0
in local.cf there is no change


I have:
50_scores.cf:score DKIM_VALID -0.1

check whether you really haven't set a score for DKIM_VALID anywhere, since SA 
complains about it being zero.


I guess this may cause DKIM_INVALID to misfire
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I drive way too fast to worry about cholesterol.


Re: dkim-test valid but spamassassin scores DKIM_INVALID

2023-10-25 Thread jdow


On 20231024 23:46:18, Niels Kobschätzki wrote:

Matus UHLAR - fantomas wrote on 25.10.2023 08:16 CEST:

  
On 25.10.23 07:21, Niels Kobschätzki wrote:

I'm having here a mail that scores as DKIM_INVALID.  I tried sending the
same mail to gmail for example and it tells me that DKIM is valid.  Now I
put it through "spamassassin -D" and I am even more baffled because the
debug seems to say that DKIM is valid but then scores as INVALID.
Any idea why this could be?

debug-output from "spamassassin -t -D dkim < message":

Oct 25 07:10:52.341 [1687666] dbg: dkim: VALID DKIM, i=@my.domain.com, 
d=my.domain.com, s=inx, a=rsa-sha256, c=relaxed/relaxed, key_bits=2048, pass, 
matches author domain
Oct 25 07:10:52.342 [1687666] dbg: dkim: signature verification result: PASS
Oct 25 07:10:52.342 [1687666] dbg: dkim: adsp not retrieved, author domain 
signature is valid
Oct 25 07:10:52.342 [1687666] dbg: dkim: adsp result: - (valid a. d. 
signature), author domain 'my.domain.com'
Oct 25 07:10:52.352 [1687666] dbg: dkim: VALID signature by my.domain.com, 
autho...@my.domain.com, no valid matches
Oct 25 07:10:52.352 [1687666] dbg: dkim: autho...@my.domain.com, not in any 
dkim whitelist
Oct 25 07:10:54.125 [1687779] info: util: setuid: ruid=0 euid=0 rgid=0 0 egid=0 0
Oct 25 07:10:54.364 [1687666] info: rules: meta test DKIM_INVALID has 
dependency 'DKIM_VALID' with a zero score

did you set the score of DKIM_VALID to 0?

DKIM_VALID is not overridden by any of my local rules, so I would not expect 
that to be the cause. But even if I set, for example,

score DKIM_VALID 0
in local.cf there is no change

Best,

Niels


Methinks you have here a very good clue to set a non-zero value, perhaps 
(most likely) a modest negative score.


{o.o}   Diving back into obscurity


Re: dkim-test valid but spamassassin scores DKIM_INVALID

2023-10-25 Thread Niels Kobschätzki
> Matus UHLAR - fantomas wrote on 25.10.2023 08:16 CEST:
> 
>  
> On 25.10.23 07:21, Niels Kobschätzki wrote:
> >I'm having here a mail that scores as DKIM_INVALID.  I tried sending the 
> > same mail to gmail for example and it tells me that DKIM is valid.  Now I 
> > put it through "spamassassin -D" and I am even more baffled because the 
> > debug seems to say that DKIM is valid but then scores as INVALID.
> 
> >Any idea why this could be?
> >
> >debug-output from "spamassassin -t -D dkim < message":
> >
> >Oct 25 07:10:52.341 [1687666] dbg: dkim: VALID DKIM, i=@my.domain.com, 
> >d=my.domain.com, s=inx, a=rsa-sha256, c=relaxed/relaxed, key_bits=2048, 
> >pass, matches author domain
> >Oct 25 07:10:52.342 [1687666] dbg: dkim: signature verification result: PASS
> >Oct 25 07:10:52.342 [1687666] dbg: dkim: adsp not retrieved, author domain 
> >signature is valid
> >Oct 25 07:10:52.342 [1687666] dbg: dkim: adsp result: - (valid a. d. 
> >signature), author domain 'my.domain.com'
> >Oct 25 07:10:52.352 [1687666] dbg: dkim: VALID signature by my.domain.com, 
> >author m...@my.domain.com, no valid matches
> >Oct 25 07:10:52.352 [1687666] dbg: dkim: author m...@my.domain.com, not in 
> >any dkim whitelist
> >Oct 25 07:10:54.125 [1687779] info: util: setuid: ruid=0 euid=0 rgid=0 0 
> >egid=0 0
> 
> >Oct 25 07:10:54.364 [1687666] info: rules: meta test DKIM_INVALID has 
> >dependency 'DKIM_VALID' with a zero score
> 
> did you set the score of DKIM_VALID to 0?

DKIM_VALID is not overridden by any of my local rules, so I would not expect 
that to be the cause. But even if I set, for example,

score DKIM_VALID 0
in local.cf there is no change

Best,

Niels


Re: dkim-test valid but spamassassin scores DKIM_INVALID

2023-10-25 Thread Matus UHLAR - fantomas

On 25.10.23 07:21, Niels Kobschätzki wrote:
I'm having here a mail that scores as DKIM_INVALID.  I tried sending the 
same mail to gmail for example and it tells me that DKIM is valid.  Now I 
put it through "spamassassin -D" and I am even more baffled because the 
debug seems to say that DKIM is valid but then scores as INVALID.



Any idea why this could be?

debug-output from "spamassassin -t -D dkim < message":

Oct 25 07:10:52.341 [1687666] dbg: dkim: VALID DKIM, i=@my.domain.com, 
d=my.domain.com, s=inx, a=rsa-sha256, c=relaxed/relaxed, key_bits=2048, pass, 
matches author domain
Oct 25 07:10:52.342 [1687666] dbg: dkim: signature verification result: PASS
Oct 25 07:10:52.342 [1687666] dbg: dkim: adsp not retrieved, author domain 
signature is valid
Oct 25 07:10:52.342 [1687666] dbg: dkim: adsp result: - (valid a. d. 
signature), author domain 'my.domain.com'
Oct 25 07:10:52.352 [1687666] dbg: dkim: VALID signature by my.domain.com, 
author m...@my.domain.com, no valid matches
Oct 25 07:10:52.352 [1687666] dbg: dkim: author m...@my.domain.com, not in any 
dkim whitelist
Oct 25 07:10:54.125 [1687779] info: util: setuid: ruid=0 euid=0 rgid=0 0 egid=0 0



Oct 25 07:10:54.364 [1687666] info: rules: meta test DKIM_INVALID has 
dependency 'DKIM_VALID' with a zero score


did you set the score of DKIM_VALID to 0?


Return-path: 
X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on one.ofmyhosts.com
X-Spam-Level: *
X-Spam-Status: No, score=1.6 required=5.0 tests=ALL_TRUSTED,DKIM_INVALID,
   DKIM_SIGNED,KAM_DMARC_REJECT,KAM_DMARC_STATUS autolearn=disabled
   version=3.4.6


--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
42.7 percent of all statistics are made up on the spot.


dkim-test valid but spamassassin scores DKIM_INVALID

2023-10-24 Thread Niels Kobschätzki
Hi,

I'm having here a mail that scores as DKIM_INVALID. I tried sending the same 
mail to gmail for example and it tells me that DKIM is valid. Now I put it 
through "spamassassin -D" and I am even more baffled because the debug seems to 
say that DKIM is valid but then scores as INVALID.
Any idea why this could be?

debug-output from "spamassassin -t -D dkim < message":

Oct 25 07:10:52.337 [1687666] dbg: dkim: using Mail::DKIM version 1.20200907
Oct 25 07:10:52.337 [1687666] dbg: dkim: providing our own resolver: 
Mail::SpamAssassin::DnsResolver
Oct 25 07:10:52.339 [1687666] dbg: dkim: performing public key lookup and 
signature verification
Oct 25 07:10:52.341 [1687666] dbg: dkim: VALID DKIM, i=@my.domain.com, 
d=my.domain.com, s=inx, a=rsa-sha256, c=relaxed/relaxed, key_bits=2048, pass, 
matches author domain
Oct 25 07:10:52.342 [1687666] dbg: dkim: signature verification result: PASS
Oct 25 07:10:52.342 [1687666] dbg: dkim: adsp not retrieved, author domain 
signature is valid
Oct 25 07:10:52.342 [1687666] dbg: dkim: adsp result: - (valid a. d. 
signature), author domain 'my.domain.com'
Oct 25 07:10:52.352 [1687666] dbg: dkim: VALID signature by my.domain.com, 
author m...@my.domain.com, no valid matches
Oct 25 07:10:52.352 [1687666] dbg: dkim: author m...@my.domain.com, not in any 
dkim whitelist
Oct 25 07:10:54.125 [1687779] info: util: setuid: ruid=0 euid=0 rgid=0 0 egid=0 0
Oct 25 07:10:54.277 [1687666] info: rules: meta test FROM_GOV_DKIM_AU has 
dependency 'DKIM_VALID_AU' with a zero score
Oct 25 07:10:54.281 [1687666] info: rules: meta test GOOG_REDIR_NORDNS has 
dependency 'RDNS_NONE' with a zero score
Oct 25 07:10:54.284 [1687666] info: rules: meta test KAM_CARD has dependency 
'KAM_RPTR_SUSPECT' with a zero score
Oct 25 07:10:54.286 [1687666] info: rules: meta test __FORM_FRAUD has 
dependency 'EMRCP' with a zero score
Oct 25 07:10:54.286 [1687666] info: rules: meta test __FORM_FRAUD has 
dependency 'T_LOTTO_AGENT_FM' with a zero score
Oct 25 07:10:54.290 [1687666] info: rules: meta test KAM_DMARC_REJECT has 
dependency 'DKIM_VALID_AU' with a zero score
Oct 25 07:10:54.293 [1687666] info: rules: meta test FROM_GOV_REPLYTO_FREEMAIL 
has dependency 'DKIM_VALID_AU' with a zero score
Oct 25 07:10:54.303 [1687666] info: rules: meta test __MONEY_FRAUD_3 has 
dependency 'EMRCP' with a zero score
Oct 25 07:10:54.304 [1687666] info: rules: meta test __MONEY_FRAUD_3 has 
dependency 'T_LOTTO_AGENT_FM' with a zero score
Oct 25 07:10:54.306 [1687666] info: rules: meta test TO_NO_BRKTS_HTML_ONLY has 
dependency 'RDNS_NONE' with a zero score
Oct 25 07:10:54.308 [1687666] info: rules: meta test KAM_UAH_YAHOOGROUP_SENDER 
has dependency 'DKIM_VALID' with a zero score
Oct 25 07:10:54.310 [1687666] info: rules: meta test KAM_BAD_DNSWL has 
dependency 'URIBL_SBL' with a zero score
Oct 25 07:10:54.313 [1687666] info: rules: meta test KAM_SALE has dependency 
'BODY_8BITS' with a zero score
Oct 25 07:10:54.314 [1687666] info: rules: meta test KAM_QUITE_BAD_DNSWL has 
dependency 'URIBL_SBL' with a zero score
Oct 25 07:10:54.316 [1687666] info: rules: meta test __MONEY_FRAUD_5 has 
dependency 'EMRCP' with a zero score
Oct 25 07:10:54.316 [1687666] info: rules: meta test __MONEY_FRAUD_5 has 
dependency 'T_LOTTO_AGENT_FM' with a zero score
Oct 25 07:10:54.320 [1687666] info: rules: meta test PDS_BRAND_SUBJ_NAKED_TO 
has dependency 'MAILING_LIST_MULTI' with a zero score
Oct 25 07:10:54.321 [1687666] info: rules: meta test FROM_BANK_NOAUTH has 
dependency 'DKIM_VALID_AU' with a zero score
Oct 25 07:10:54.322 [1687666] info: rules: meta test XPRIO has dependency 
'DKIM_VALID' with a zero score
Oct 25 07:10:54.322 [1687666] info: rules: meta test XPRIO has dependency 
'DKIM_VALID_AU' with a zero score
Oct 25 07:10:54.329 [1687666] info: rules: meta test __MONEY_FRAUD_8 has 
dependency 'EMRCP' with a zero score
Oct 25 07:10:54.329 [1687666] info: rules: meta test __MONEY_FRAUD_8 has 
dependency 'T_LOTTO_AGENT_FM' with a zero score
Oct 25 07:10:54.332 [1687666] info: rules: meta test KAM_PAYROLL_SCANNER has 
dependency 'KAM_IFRAME' with a zero score
Oct 25 07:10:54.333 [1687666] info: rules: meta test CONTENT_AFTER_HTML_WEAK 
has dependency 'MAILING_LIST_MULTI' with a zero score
Oct 25 07:10:54.335 [1687666] info: rules: meta test FORGED_MUA_EUDORA has 
dependency 'MAILING_LIST_MULTI' with a zero score
Oct 25 07:10:54.337 [1687666] info: rules: meta test OBFU_UNSUB_UL has 
dependency 'MAILING_LIST_MULTI' with a zero score
Oct 25 07:10:54.338 [1687666] info: rules: meta test KAM_BENEFICIARY2 has 
dependency 'GMD_PDF_EMPTY_BODY' with a zero score
Oct 25 07:10:54.338 [1687666] info: rules: meta test HAS_X_OUTGOING_SPAM_STAT 
has dependency 'MAILING_LIST_MULTI' with a zero score
Oct 25 07:10:54.341 [1687666] info: rules: meta test KAM_NOTIFY2 has dependency 
'KAM_IFRAME' with a zero score
Oct 25 07:10:54.342 [1687666] info: rules: meta test KAM_DMARC_STATUS has 
dependency 'DKIM_VALID_AU' with a zero sc

Re: BAYES scores

2023-03-01 Thread Benny Pedersen

joe a wrote on 2023-02-28 17:37:

Curious as to why these scores, apparently "stock", are what they are.
I'd expect BAYES_999 BODY to count more than BAYES_99 BODY.

Noted in a header this morning:

*  3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100%
*  [score: 1.]
*  0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100%
*  [score: 1.]

Was this discussed recently?  I added a local score to mollify my
sense of propriety.


what does it solve for you ?

maybe it could be changed so the scores do not overlap, but what should the 
scores change to ?






Re: BAYES scores

2023-02-28 Thread Loren Wilton

From: "Bill Cole" 

It is my understanding that an automated rescoring job was run quite some 
time ago (before I was on the PMC) to generate the Bayes scores, which 
determined that to be the best supplemental score to give to the greater 
certainty.


I was around in those days. My memory isn't the greatest anymore, but what I 
recall was that they did automatic rescoring, and then manually tweaked a 
few of the values, basically to make them look pretty by rounding off long 
fractions. BAYES_999 may have been scored almost completely manually, I 
can't quite recall.


   Loren



Re: BAYES scores

2023-02-28 Thread Benny Pedersen

joe a wrote on 2023-02-28 17:37:

Curious as to why these scores, apparently "stock", are what they are.
I'd expect BAYES_999 BODY to count more than BAYES_99 BODY.

Noted in a header this morning:

*  3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100%
*  [score: 1.]
*  0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100%
*  [score: 1.]

Was this discussed recently?  I added a local score to mollify my
sense of propriety.


what does it solve for you ?

maybe it could be changed so the scores do not overlap, but what should the 
scores change to ?


the tag can be split so the hits do not overlap, but what should the scores 
then change to ?








Re: BAYES scores

2023-02-28 Thread Bill Cole

On 2023-02-28 at 13:38:35 UTC-0500 (Tue, 28 Feb 2023 13:38:35 -0500)
joe a 
is rumored to have said:


On 2/28/2023 12:05 PM, Jeff Mincy wrote:

  > From: joe a 
  > Date: Tue, 28 Feb 2023 11:37:34 -0500
  >
  > Curious as to why these scores, apparently "stock", are what they 
are.

  > I'd expect BAYES_999 BODY to count more than BAYES_99 BODY.
  >
  > Noted in a header this morning:
  >
  > *  3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100%
  > *  [score: 1.]
  > *  0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100%
  > *  [score: 1.]
  >
  > Was this discussed recently?  I added a local score to mollify my 
sense

  > of propriety.

Those two rules overlap.   A message with bayes >= 99.9% hits both
rules.   BAYES_99 ends at 1.00 not .999.
-jeff



I get that they overlap.  I guess my thinker gets in a knot wondering 
why there is so little weight given to the more certain determination.


It is my understanding that an automated rescoring job was run quite 
some time ago (before I was on the PMC) to generate the Bayes scores, 
which determined that to be the best supplemental score to give to the 
greater certainty. Bayes rules are not rescored routinely in the daily 
rescoring task because those hits are inherently different at every 
site. If you wish to determine the ideal scores for YOUR mix of ham and 
spam, I believe all the tools for doing so are in the SA code tree, but 
they may not be well-documented.


That's likely to not be a satisfying answer, but as a volunteer project 
we have no funding for Customer Satisfaction, so the bare unsatisfying 
truth will have to do.


In my narrow view, anything that is 99.9% certain is probably worth a 
5 on its own, or at least should, when summed with BAYES_99, equal 
5, as that is what the default "SPAM flag" threshold is.


Appears more experienced or thoughtful persons think otherwise.


I don't know that I'd go that far. Rescoring is not done based on simple 
clear reason, but on numbers. I'm not sure whether any currently active 
SA developers are able to explain exactly how the rescoring works.


Yes, it did snow heavily overnight.  Yes, I am looking for excuses not 
to visit that issue.


I vehemently recommend reading all of Justin's scripts and documentation 
(I think it's all in the 'build' sub-directory) and figuring out how to 
rescore based on your own mail. That's MUCH less unpleasant than dealing 
with the snow.



--
Bill Cole
b...@scconsult.com or billc...@apache.org
(AKA @grumpybozo and many *@billmail.scconsult.com addresses)
Not Currently Available For Hire


Re: BAYES scores

2023-02-28 Thread hg user
From my small experience... I score BAYES_999 with 2.00; it was
suggested to me months ago.

But nowadays I'd be more careful and do some more testing: I'd check which
messages have only BAYES_99 and which have BAYES_999. If you are
absolutely certain that BAYES_999 hits are only and definitively spam, go with 2
or more; if you have several false positives, keep the score low.

I learnt the hard way that BAYES depends on the corpus used to grow the
database.

On Tue, Feb 28, 2023 at 7:39 PM joe a  wrote:

> On 2/28/2023 12:05 PM, Jeff Mincy wrote:
> >   > From: joe a 
> >   > Date: Tue, 28 Feb 2023 11:37:34 -0500
> >   >
> >   > Curious as to why these scores, apparently "stock", are what they are.
> >   > I'd expect BAYES_999 BODY to count more than BAYES_99 BODY.
> >   >
> >   > Noted in a header this morning:
> >   >
> >   > *  3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100%
> >   > *  [score: 1.]
> >   > *  0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100%
> >   > *  [score: 1.]
> >   >
> >   > Was this discussed recently?  I added a local score to mollify my
> sense
> >   > of propriety.
> >
> > Those two rules overlap.   A message with bayes >= 99.9% hits both
> > rules.   BAYES_99 ends at 1.00 not .999.
> > -jeff
> >
>
> I get that they overlap.  I guess my thinker gets in a knot wondering
> why there is so little weight given to the more certain determination.
>
> In my narrow view, anything that is 99.9% certain is probably worth a 5
> on its own, or at least should, when summed with BAYES_99, equal 5,
> as that is what the default "SPAM flag" threshold is.
>
> Appears more experienced or thoughtful persons think otherwise.
>
> Yes, it did snow heavily overnight.  Yes, I am looking for excuses not
> to visit that issue.
>


Re: BAYES scores

2023-02-28 Thread joe a

On 2/28/2023 12:05 PM, Jeff Mincy wrote:

  > From: joe a 
  > Date: Tue, 28 Feb 2023 11:37:34 -0500
  >
  > Curious as to why these scores, apparently "stock", are what they are.
  > I'd expect BAYES_999 BODY to count more than BAYES_99 BODY.
  >
  > Noted in a header this morning:
  >
  > *  3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100%
  > *  [score: 1.]
  > *  0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100%
  > *  [score: 1.]
  >
  > Was this discussed recently?  I added a local score to mollify my sense
  > of propriety.

Those two rules overlap.   A message with bayes >= 99.9% hits both
rules.   BAYES_99 ends at 1.00 not .999.
-jeff



I get that they overlap.  I guess my thinker gets in a knot wondering 
why there is so little weight given to the more certain determination.


In my narrow view, anything that is 99.9% certain is probably worth a 5 
on its own, or at least should, when summed with BAYES_99, equal 5, 
as that is what the default "SPAM flag" threshold is.


Appears more experienced or thoughtful persons think otherwise.

Yes, it did snow heavily overnight.  Yes, I am looking for excuses not 
to visit that issue.


Re: BAYES scores

2023-02-28 Thread Jeff Mincy
 > From: joe a 
 > Date: Tue, 28 Feb 2023 11:37:34 -0500
 > 
 > Curious as to why these scores, apparently "stock", are what they are. 
 > I'd expect BAYES_999 BODY to count more than BAYES_99 BODY.
 > 
 > Noted in a header this morning:
 > 
 > *  3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100%
 > *  [score: 1.]
 > *  0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100%
 > *  [score: 1.]
 > 
 > Was this discussed recently?  I added a local score to mollify my sense 
 > of propriety.

Those two rules overlap.   A message with bayes >= 99.9% hits both
rules.   BAYES_99 ends at 1.00 not .999.
-jeff
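Because of that overlap the two scores stack rather than compete; with the stock values quoted earlier in this thread the effective totals work out as:

```
# p = Bayes spam probability
#   0.99 <= p < 0.999 : BAYES_99 only         -> 3.5
#   p >= 0.999        : BAYES_99 + BAYES_999  -> 3.5 + 0.2 = 3.7
score BAYES_99  3.5
score BAYES_999 0.2   # a supplement on top of BAYES_99, not a replacement
```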



BAYES scores

2023-02-28 Thread joe a
Curious as to why these scores, apparently "stock", are what they are. 
I'd expect BAYES_999 BODY to count more than BAYES_99 BODY.


Noted in a header this morning:

*  3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100%
*  [score: 1.]
*  0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100%
*  [score: 1.]

Was this discussed recently?  I added a local score to mollify my sense 
of propriety.





Re: Spam with Pyzor and DCC scores

2022-07-11 Thread Benny Pedersen

On 2022-07-12 00:09, Bert Van de Poel wrote:


We have Bayes running on the main server, but my own local server
doesn't have it, hence why it's missing. I did however take all spam
I received myself in 2022 that wasn't caught and fed it to sa-learn
(for the amavis user), thx for that suggestion. Let's hope it works to
remove this minor inconvenience :)


razor, pyzor and dcc detect whether a mail was sent to more than one 
recipient; they do not detect whether it is spam as such, so the only 
sure thing is that it was mass-mailed


for bayes training it also needs to know ham mails; only spam mails is 
the same as no training :=)


i do not use razor, pyzor, or dcc anymore; the software is more or less 
outdated on gentoo and i do not plan to make updated ebuilds for it, too 
small a goal for me. on the other hand i use fuglu, which is doing well 
with python 3.10 now that gentoo portage defaults to it


Re: Spam with Pyzor and DCC scores

2022-07-11 Thread Bert Van de Poel

On 11/07/2022 15:44, Matus UHLAR - fantomas wrote:

On 11.07.22 12:57, Bert Van de Poel wrote:
A few times a month we have spam messages getting through, often in 
German, that have some spam score but not enough to be 
marked/discarded. These messages are always marked by DCC, since 
they're of course bulk spam, but it's also not uncommon to see Pyzor 
as well. I've been wondering if there are realistic cases where both 
DCC and Pyzor would mark as spam while the message was ham.


this is likely to happen if the message is empty or nearly empty.
some people are stupid and send one or two words, or a short link, in a 
message without a Subject: ...



Oh yeah, that's a case I hadn't thought of, good point!
I feel like when both co-occur it's a pretty solid sign it's spam.  
Therefore, I'm wondering if an upstream amplification (or a local 
one) would make sense.


Some examples (I can also supply full emails, but fear this might 
prevent my message from arriving):

X-Spam-Status: No, score=4.082 tagged_above=- required=5
    tests=[DCC_CHECK=1.1, DIGEST_MULTIPLE=0.001, FSL_BULK_SIG=0.001,
    HEADER_FROM_DIFFERENT_DOMAINS=0.25, HTML_IMAGE_RATIO_08=0.001,
    HTML_MESSAGE=0.001, MIME_HTML_ONLY=0.1, PYZOR_CHECK=1.985,
    SPF_HELO_NONE=0.001, SPF_NEUTRAL=0.652, T_SCC_BODY_TEXT_LINE=-0.01]
X-Spam-Status: No, score=4.816 tagged_above=- required=5
    tests=[DCC_CHECK=1.1, DIGEST_MULTIPLE=0.001, FSL_BULK_SIG=0.001,
    HEADER_FROM_DIFFERENT_DOMAINS=0.248, HTML_IMAGE_ONLY_28=0.726,
    HTML_IMAGE_RATIO_02=0.001, HTML_MESSAGE=0.001, MIME_HTML_ONLY=0.1,
    PYZOR_CHECK=1.985, SPF_HELO_NONE=0.001, SPF_NEUTRAL=0.652,
    T_REMOTE_IMAGE=0.01, T_SCC_BODY_TEXT_LINE=-0.01]
X-Spam-Status: No, score=4.109 tagged_above=- required=5
    tests=[DCC_CHECK=1.1, DIGEST_MULTIPLE=0.001, FSL_BULK_SIG=0.029,
    HEADER_FROM_DIFFERENT_DOMAINS=0.249, HTML_IMAGE_RATIO_04=0.001,
    HTML_MESSAGE=0.001, MIME_HTML_ONLY=0.1, PYZOR_CHECK=1.985,
    SPF_HELO_NONE=0.001, SPF_NEUTRAL=0.652, T_SCC_BODY_TEXT_LINE=-0.01]


looks like you should implement bayes.
since these are generated by amavis, you could train the amavis database.

We have Bayes running on the main server, but my own local server 
doesn't have it, hence why it's missing. I did however take all spam I 
received myself in 2022 that wasn't caught and fed it to sa-learn (for 
the amavis user), thx for that suggestion. Let's hope it works to remove 
this minor inconvenience :)




Re: Spam with Pyzor and DCC scores

2022-07-11 Thread Matus UHLAR - fantomas

On 11.07.22 12:57, Bert Van de Poel wrote:
A few times a month we have spam messages getting through, often in 
German, that have some spam score but not enough to be 
marked/discarded. These messages are always marked by DCC, since 
they're of course bulk spam, but it's also not uncommon to see Pyzor 
as well. I've been wondering if there are realistic cases where both 
DCC and Pyzor would mark as spam while the message was ham.


this is likely to happen if the message is empty or nearly empty.
some people are stupid and send one or two words, or a short link, in a 
message without a Subject: ...


I feel like when both co-occur it's a pretty solid sign it's spam.  
Therefore, I'm wondering if an upstream amplification (or a local one) 
would make sense.


Some examples (I can also supply full emails, but fear this might 
prevent my message from arriving):

X-Spam-Status: No, score=4.082 tagged_above=- required=5
    tests=[DCC_CHECK=1.1, DIGEST_MULTIPLE=0.001, FSL_BULK_SIG=0.001,
    HEADER_FROM_DIFFERENT_DOMAINS=0.25, HTML_IMAGE_RATIO_08=0.001,
    HTML_MESSAGE=0.001, MIME_HTML_ONLY=0.1, PYZOR_CHECK=1.985,
    SPF_HELO_NONE=0.001, SPF_NEUTRAL=0.652, T_SCC_BODY_TEXT_LINE=-0.01]
X-Spam-Status: No, score=4.816 tagged_above=- required=5
    tests=[DCC_CHECK=1.1, DIGEST_MULTIPLE=0.001, FSL_BULK_SIG=0.001,
    HEADER_FROM_DIFFERENT_DOMAINS=0.248, HTML_IMAGE_ONLY_28=0.726,
    HTML_IMAGE_RATIO_02=0.001, HTML_MESSAGE=0.001, MIME_HTML_ONLY=0.1,
    PYZOR_CHECK=1.985, SPF_HELO_NONE=0.001, SPF_NEUTRAL=0.652,
    T_REMOTE_IMAGE=0.01, T_SCC_BODY_TEXT_LINE=-0.01]
X-Spam-Status: No, score=4.109 tagged_above=- required=5
    tests=[DCC_CHECK=1.1, DIGEST_MULTIPLE=0.001, FSL_BULK_SIG=0.029,
    HEADER_FROM_DIFFERENT_DOMAINS=0.249, HTML_IMAGE_RATIO_04=0.001,
    HTML_MESSAGE=0.001, MIME_HTML_ONLY=0.1, PYZOR_CHECK=1.985,
    SPF_HELO_NONE=0.001, SPF_NEUTRAL=0.652, T_SCC_BODY_TEXT_LINE=-0.01]


looks like you should implement bayes.
since these are generated by amavis, you could train the amavis database.

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Atheism is a non-prophet organization.


Spam with Pyzor and DCC scores

2022-07-11 Thread Bert Van de Poel

Hi everyone,

A few times a month we have spam messages getting through, often in 
German, that have some spam score but not enough to be marked/discarded. 
These messages are always marked by DCC, since they're of course bulk 
spam, but it's also not uncommon to see Pyzor as well. I've been 
wondering if there are realistic cases where both DCC and Pyzor would 
mark as spam while the message was ham. I feel like when both co-occur 
it's a pretty solid sign it's spam. Therefore, I'm wondering if an 
upstream amplification (or a local one) would make sense.
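One way to express such a local amplification is a meta rule over the two digest tests; this is only a sketch, and the LOCAL_ name and the extra 1.5 points are illustrative, not a tested recommendation:

```
# hypothetical local rule; DCC_CHECK and PYZOR_CHECK are the stock test names
meta     LOCAL_DCC_AND_PYZOR   (DCC_CHECK && PYZOR_CHECK)
describe LOCAL_DCC_AND_PYZOR   Both DCC and Pyzor flagged the message as bulk
score    LOCAL_DCC_AND_PYZOR   1.5
```

Note that the stock DIGEST_MULTIPLE rule, visible in the headers below at 0.001, already fires on this combination; locally raising its score would be an alternative to a new rule.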


Some examples (I can also supply full emails, but fear this might 
prevent my message from arriving):

X-Spam-Status: No, score=4.082 tagged_above=- required=5
    tests=[DCC_CHECK=1.1, DIGEST_MULTIPLE=0.001, FSL_BULK_SIG=0.001,
    HEADER_FROM_DIFFERENT_DOMAINS=0.25, HTML_IMAGE_RATIO_08=0.001,
    HTML_MESSAGE=0.001, MIME_HTML_ONLY=0.1, PYZOR_CHECK=1.985,
    SPF_HELO_NONE=0.001, SPF_NEUTRAL=0.652, T_SCC_BODY_TEXT_LINE=-0.01]
X-Spam-Status: No, score=4.816 tagged_above=- required=5
    tests=[DCC_CHECK=1.1, DIGEST_MULTIPLE=0.001, FSL_BULK_SIG=0.001,
    HEADER_FROM_DIFFERENT_DOMAINS=0.248, HTML_IMAGE_ONLY_28=0.726,
    HTML_IMAGE_RATIO_02=0.001, HTML_MESSAGE=0.001, MIME_HTML_ONLY=0.1,
    PYZOR_CHECK=1.985, SPF_HELO_NONE=0.001, SPF_NEUTRAL=0.652,
    T_REMOTE_IMAGE=0.01, T_SCC_BODY_TEXT_LINE=-0.01]
X-Spam-Status: No, score=4.109 tagged_above=- required=5
    tests=[DCC_CHECK=1.1, DIGEST_MULTIPLE=0.001, FSL_BULK_SIG=0.029,
    HEADER_FROM_DIFFERENT_DOMAINS=0.249, HTML_IMAGE_RATIO_04=0.001,
    HTML_MESSAGE=0.001, MIME_HTML_ONLY=0.1, PYZOR_CHECK=1.985,
    SPF_HELO_NONE=0.001, SPF_NEUTRAL=0.652, T_SCC_BODY_TEXT_LINE=-0.01]

What's people's opinion here?

Kind regards,
Bert Van de Poel
ULYSSIS


Re: DKIM_* scores

2021-07-27 Thread John Hardin

On Mon, 26 Jul 2021, RW wrote:


On Mon, 26 Jul 2021 18:05:35 +0100
RW wrote:



"&& !DKIM_SIGNED " means the rule can only be true if there's no
signature, so none of the terms with __DKIM_DEPENDABLE, DKIM_VALID,
and DKIM_VALID_AU make any difference.


Actually it's worse than that: __DKIM_DEPENDABLE is always true if there
are no signatures, so !DKIM_SIGNED && !__DKIM_DEPENDABLE is always
false.


Thanks for pointing that out.

Those are "FP exclusions", not part of the base rule logic - generated by 
inspecting the ruleqa results and excluding hits on other rules where the 
combination is hammy and not (or very weakly, like 1%) spammy. The 
interactions of combinations of those exclusions isn't considered.


They also need to be reviewed periodically, which I'm doing now for XPRIO. 
__DKIM_DEPENDABLE is no longer a useful FP exclusion for XPRIO, as it hits 
100% of the spam hits.



--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.org pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  Maxim IX: Never turn your back on an enemy.
---
 8 days until the 286th anniversary of John Peter Zenger's acquittal


Re: DKIM_* scores

2021-07-27 Thread Matus UHLAR - fantomas

On Mon, 26 Jul 2021 08:08:10 -0400 Greg Troxel wrote:

So -0.2 means that there are two dkim signatures, one for each, and
they are both valid.


On 26.07.21 18:05, RW wrote:

It could do, but usually it just means that the sender and author
domains are the same.



> BTW, looking at metas in 72_active.cf:
>
>  meta XPRIO  __XPRIO_MINFP && !DKIM_SIGNED &&
> !__DKIM_DEPENDABLE && !DKIM_VALID && !DKIM_VALID_AU &&
> !RCVD_IN_DNSWL_NONE meta XPRIO  __XPRIO_MINFP &&
> !DKIM_SIGNED && !__DKIM_DEPENDABLE && !DKIM_VALID && !DKIM_VALID_AU
> && !RCVD_IN_DNSWL_NONE && !SPF_PASS
>
> !DKIM_VALID && !DKIM_VALID_AU is redundant and !DKIM_VALID_AU
> should be enough

I don't think so.  These are negated.


if there's no valid signature, there can't be valid author domain
signature.

If there's valid author domain signature, there's surely at least valid
signature.

imho we should compare author domain signature, not any (random) signature.



"&& !DKIM_SIGNED " means the rule can only be true if there's no
signature, so none of the terms with __DKIM_DEPENDABLE, DKIM_VALID, and
DKIM_VALID_AU make any difference.

It's usually not a good idea to use DKIM_SIGNED because it relies on
the plugin, whereas __DKIM_EXISTS and the duplicate rule
__HAS_DKIM_SIGHD don't.


yes, more rules are kinda redundant here

!DKIM_SIGNED && !__DKIM_DEPENDABLE && !DKIM_VALID && !DKIM_VALID_AU

if message is not signed, then signature can't be valid or invalid. If any
of signatures is valid, the message is signed. 


the !DKIM_SIGNED is useless here unless it's a performance optimization.
Is it?

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Silvester Stallone: Father of the RISC concept.


Re: DKIM_* scores

2021-07-27 Thread Matus UHLAR - fantomas

On 26.07.21 08:40, Kevin A. McGrail wrote:

Correct. The fact that there are some scores that add up to approximately
-0.2 is negligible when compared to a standard threshold of 5.0.

Do you have false positives being caused by these emails? Do you have false
negatives? That's more important to look at then just focusing on one set
of rules.


to be more precise, I have a case where these caused mail to be autolearned as
ham, which is even worse than a FN

I tried to filter out other rules that could cause it.

Unfortunately no other rules hit that could avoid training.


Matus UHLAR - fantomas  writes:

> I noticed that pure existence of DKIM signature can push score under zero:
>
> DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1,
>
> ...so the cumulative score is -0.2.
>
> I'm aware that we don't have many rules with negative scores, but
multiple
> scores for single valid DKIM signature should not be redundant.


--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I just got lost in thought. It was unfamiliar territory.


Re: DKIM_* scores

2021-07-26 Thread RW
On Mon, 26 Jul 2021 18:05:35 +0100
RW wrote:


> "&& !DKIM_SIGNED " means the rule can only be true if there's no
> signature, so none of the terms with __DKIM_DEPENDABLE, DKIM_VALID,
> and DKIM_VALID_AU make any difference. 

Actually it's worse than that: __DKIM_DEPENDABLE is always true if there
are no signatures, so !DKIM_SIGNED && !__DKIM_DEPENDABLE is always
false.

The ruleqa shows one hit on XPRIO. 
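RW's observation can be checked mechanically. A hedged sketch that enumerates the possible rule states under the constraints stated in this thread (the encoding of those constraints is my assumption, not SpamAssassin code):

```python
from itertools import product

# Constraints described in this thread (my encoding, not SA internals):
#   - DKIM_VALID_AU implies DKIM_VALID
#   - DKIM_VALID implies DKIM_SIGNED
#   - with no signatures, __DKIM_DEPENDABLE is always true
def consistent(signed, dependable, valid, valid_au):
    if valid_au and not valid:
        return False
    if valid and not signed:
        return False
    if not signed and not dependable:
        return False
    return True

states = [s for s in product([False, True], repeat=4) if consistent(*s)]

# The conjunction !DKIM_SIGNED && !__DKIM_DEPENDABLE && !DKIM_VALID &&
# !DKIM_VALID_AU is unsatisfiable: no reachable state makes it true.
hits = [not signed and not dep and not valid and not au
        for signed, dep, valid, au in states]
assert not any(hits)
print(f"{len(states)} reachable states, the DKIM conjunction hits none")
```

Under these assumptions the XPRIO DKIM terms can never all be false at once, matching the "always false" conclusion above.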





Re: DKIM_* scores

2021-07-26 Thread RW
On Mon, 26 Jul 2021 08:08:10 -0400
Greg Troxel wrote:



> So -0.2 means that there are two dkim signatures, one for each, and
> they are both valid.  

It could do, but usually it just means that the sender and author
domains are the same.


> 
> > BTW, looking at metas in 72_active.cf:
> >
> >  meta XPRIO  __XPRIO_MINFP && !DKIM_SIGNED &&
> > !__DKIM_DEPENDABLE && !DKIM_VALID && !DKIM_VALID_AU &&
> > !RCVD_IN_DNSWL_NONE meta XPRIO  __XPRIO_MINFP &&
> > !DKIM_SIGNED && !__DKIM_DEPENDABLE && !DKIM_VALID && !DKIM_VALID_AU
> > && !RCVD_IN_DNSWL_NONE && !SPF_PASS
> >
> > !DKIM_VALID && !DKIM_VALID_AU is redundant and !DKIM_VALID_AU
> > should be enough  
> 
> I don't think so.  These are negated.


"&& !DKIM_SIGNED " means the rule can only be true if there's no
signature, so none of the terms with __DKIM_DEPENDABLE, DKIM_VALID, and
DKIM_VALID_AU make any difference. 

It's usually not a good idea to use DKIM_SIGNED because it relies on
the plugin, whereas __DKIM_EXISTS and the duplicate rule
__HAS_DKIM_SIGHD don't.

 


Re: DKIM_* scores

2021-07-26 Thread Benny Pedersen

On 2021-07-26 14:40, Kevin A. McGrail wrote:

Correct. The fact that there are some scores that add up to
approximately -0.2 is negligible when compared to a standard threshold
of 5.0.

Do you have false positives being caused by these emails? Do you have
false negatives? That's more important to look at than just focusing
on one set of rules.


i bet when spamassassin 4.0.0 is out there would be more problems :=)

all senders can make dkim pass, all senders can make spf pass, all 
recipients want to solve this, lol


now to the mix: openarc tries to pass the originating dkim/spf pass or fail 
on to forwarded recipients, to be retested at the dmarc stage, but 
opendmarc is not ready for that yet, since only opendmarc trunk supports it 
(AR header parsing)


in spamassassin 4.0.0 it will be dmarc testing, not trust in forged 
headers anyway; will spamassassin evaluate arc chains? I hope it will


if anything should change, it could be the scores: instead of literally 
showing -0.1 for each negative rule, show a 0 score while it is really 
-0.01 in perl; that would make the rules count for less while not 
breaking anything


its still raining..


Re: DKIM_* scores

2021-07-26 Thread Kevin A. McGrail
Correct. The fact that there are some scores that add up to approximately
-0.2 is negligible when compared to a standard threshold of 5.0.

Do you have false positives being caused by these emails? Do you have false
negatives? That's more important to look at than just focusing on one set
of rules.

Regards, KAM

On Mon, Jul 26, 2021, 08:08 Greg Troxel  wrote:

>
> Matus UHLAR - fantomas  writes:
>
> > I noticed that pure existence of DKIM signature can push score under
> zero:
> >
> > DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1,
> >
> > ...so the cumulative score is -0.2.
> >
> > I'm aware that we don't have many rules with negative scores, but
> multiple
> > scores for single valid DKIM signature should not be redundant.
>
> I don't follow the logic in "should not be redundant" especially for
> scores with such low values of -0.1.
>
> You're talking about "below 0", but what matters is "<5", per SA
> doctrine.
>
> As I see it SIGNED and VALID are intended to cancel, causing a signature
> that isn't valid to get a +0.1.  That seems sensible, although given how
> much DKIM is broken by mailing lists that (incorrectly IMHO) modify
> messages, it doesn't seem really useful to make that higher.
>
> And then there's -0.1 for a valid dkim matching From: and another -0.1
> for valid dkim matching the envelope sender, which is often different.
> So -0.2 means that there are two dkim signatures, one for each, and they
> are both valid.  Not a guarantee of ham of course, but -0.2 is a small
> score.
>
> It's a fair question to ask how these shake out with masscheck, but I
> see nothing intrinsically wrong.
>
> > do you people modify scores of these rules?
> > I would turn both off, but  DKIM_VALID is used in some meta rules...
>
> I am someone who tweaks a lot of scores, but basically my tweaking
> reduces scores of +3 or more down a few points because I find they hit
> ham, and scoring up things of 1-2 to higher because they hit my spam and
> I find they don't really hit my ham.  I have never been motivated  to
> adjust these.
>
> For me, the biggest deal with dkim is that I can whitelist_from_dkim for
> senders, and avoid whitelisting forged mail not actually from them.
>
> > BTW, looking at metas in 72_active.cf:
> >
> >  meta XPRIO  __XPRIO_MINFP && !DKIM_SIGNED &&
> !__DKIM_DEPENDABLE && !DKIM_VALID && !DKIM_VALID_AU && !RCVD_IN_DNSWL_NONE
> >  meta XPRIO  __XPRIO_MINFP && !DKIM_SIGNED &&
> !__DKIM_DEPENDABLE && !DKIM_VALID && !DKIM_VALID_AU && !RCVD_IN_DNSWL_NONE
> && !SPF_PASS
> >
> > !DKIM_VALID && !DKIM_VALID_AU is redundant and !DKIM_VALID_AU should be
> enough
>
> I don't think so.  These are negated.  And, a dkim signature from some
> random domain that is not the From: or envelope-from will cause
> DKIM_VALID.  But I do think !DKIM_VALID will imply !DKIM_VALID_AU.
> Still, I'm 50/50 whether I'm right or I'm about to learn something.
> >
> >  meta __HTML_FONT_LOW_CONTRAST_MINFP HTML_FONT_LOW_CONTRAST &&
> > !__HAS_SENDER && !__THREADED && !__HAS_THREAD_INDEX && !ALL_TRUSTED &&
> > !__NOT_SPOOFED && !__HDRS_LCASE_KNOWN && !DKIM_VALID
> >
> >  meta __NOT_SPOOFED  DKIM_VALID || !__LAST_EXTERNAL_RELAY_NO_AUTH ||
> ALL_TRUSTED   # yes DKIM, no SPF
> >  meta __NOT_SPOOFED  SPF_PASS || DKIM_VALID ||
> !__LAST_EXTERNAL_RELAY_NO_AUTH || ALL_TRUSTED   # yes DKIM, yes SPF
> >
> > shouldn't these contain DKIM_VALID_AU instead?
>
> perhaps, but the problem is that there is a lot of mail that is From:
> i...@foobank.com and has envelope-from of
> foobank-sen...@bankserviceprovider.com with a dkim from
> bankserviceprovider.com.  This is bogus; people who deal with
> foobank.com should be able to
>   whitelist_from_dkim *@foobank.com
> and treat everything else claiming to be from foobank as spam/phish.
> But the world isn't like that.
>


Re: DKIM_* scores

2021-07-26 Thread Greg Troxel

Matus UHLAR - fantomas  writes:

> I noticed that pure existence of DKIM signature can push score under zero:
>
> DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1,
>
> ...so the cumulative score is -0.2.
>
> I'm aware that we don't have many rules with negative scores, but multiple
scores for single valid DKIM signature should not be redundant.

I don't follow the logic in "should not be redundant" especially for
scores with such low values of -0.1.

You're talking about "below 0", but what matters is "<5", per SA
doctrine.

As I see it SIGNED and VALID are intended to cancel, causing a signature
that isn't valid to get a +0.1.  That seems sensible, although given how
much DKIM is broken by mailing lists that (incorrectly IMHO) modify
messages, it doesn't seem really useful to make that higher.

And then there's -0.1 for a valid dkim matching From: and another -0.1
for valid dkim matching the envelope sender, which is often different.
So -0.2 means that there are two dkim signatures, one for each, and they
are both valid.  Not a guarantee of ham of course, but -0.2 is a small
score.

It's a fair question to ask how these shake out with masscheck, but I
see nothing intrinsically wrong.

> do you people modify scores of these rules?
> I would turn both off, but  DKIM_VALID is used in some meta rules...

I am someone who tweaks a lot of scores, but basically my tweaking
reduces scores of +3 or more down a few points because I find they hit
ham, and scoring up things of 1-2 to higher because they hit my spam and
I find they don't really hit my ham.  I have never been motivated  to
adjust these.

For me, the biggest deal with dkim is that I can whitelist_from_dkim for
senders, and avoid whitelisting forged mail not actually from them.

> BTW, looking at metas in 72_active.cf:
>
>  meta XPRIO  __XPRIO_MINFP && !DKIM_SIGNED && !__DKIM_DEPENDABLE 
> && !DKIM_VALID && !DKIM_VALID_AU && !RCVD_IN_DNSWL_NONE
>  meta XPRIO  __XPRIO_MINFP && !DKIM_SIGNED && !__DKIM_DEPENDABLE 
> && !DKIM_VALID && !DKIM_VALID_AU && !RCVD_IN_DNSWL_NONE && !SPF_PASS
>
> !DKIM_VALID && !DKIM_VALID_AU is redundant and !DKIM_VALID_AU should be enough

I don't think so.  These are negated.  And, a dkim signature from some
random domain that is not the From: or envelope-from will cause
DKIM_VALID.  But I do think !DKIM_VALID will imply !DKIM_VALID_AU.
Still, I'm 50/50 whether I'm right or I'm about to learn something.
>
>  meta __HTML_FONT_LOW_CONTRAST_MINFP HTML_FONT_LOW_CONTRAST &&
> !__HAS_SENDER && !__THREADED && !__HAS_THREAD_INDEX && !ALL_TRUSTED &&
> !__NOT_SPOOFED && !__HDRS_LCASE_KNOWN && !DKIM_VALID
>
>  meta __NOT_SPOOFED  DKIM_VALID || !__LAST_EXTERNAL_RELAY_NO_AUTH || 
> ALL_TRUSTED   # yes DKIM, no SPF
>  meta __NOT_SPOOFED  SPF_PASS || DKIM_VALID || !__LAST_EXTERNAL_RELAY_NO_AUTH 
> || ALL_TRUSTED   # yes DKIM, yes SPF
>
> shouldn't these contain DKIM_VALID_AU instead?

perhaps, but the problem is that there is a lot of mail that is From:
i...@foobank.com and has envelope-from of
foobank-sen...@bankserviceprovider.com with a dkim from
bankserviceprovider.com.  This is bogus; people who deal with
foobank.com should be able to
  whitelist_from_dkim *@foobank.com
and treat everything else claiming to be from foobank as spam/phish.
But the world isn't like that.
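The welcomelisting Greg describes is a one-liner; a sketch using the thread's hypothetical foobank.com example (the optional second argument pinning the signing domain is per the DKIM plugin docs, to the best of my knowledge):

```
# Hypothetical local.cf entry: mail with From: *@foobank.com is only
# welcomed when it carries a valid matching DKIM signature.
whitelist_from_dkim  *@foobank.com
# or, pinning the signing domain explicitly:
#whitelist_from_dkim  *@foobank.com  foobank.com
```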


signature.asc
Description: PGP signature


DKIM_* scores

2021-07-26 Thread Matus UHLAR - fantomas

Hello,

I noticed that pure existence of DKIM signature can push score under zero:

DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1,

...so the cumulative score is -0.2.

I'm aware that we don't have many rules with negative scores, but multiple
scores for single valid DKIM sinature should not be redundant.



do you people modify scores of these rules?
I would turn both off, but  DKIM_VALID is used in some meta rules...

score   DKIM_VALID  -0.001
score   DKIM_VALID_EF   -0.001

I have also tuned tflags, for sure:

tflags  DKIM_VALID  noautolearn net nice
tflags  DKIM_VALID_EF   noautolearn net nice

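When hunting for where a score actually comes from, grepping every config tree (not just local.cf) is the reliable check; the paths below are typical locations and may differ per distro:

```
grep -rn "score[[:space:]]\+DKIM_VALID" \
    /etc/spamassassin /etc/mail/spamassassin /var/lib/spamassassin
```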

BTW, looking at metas in 72_active.cf:

 meta XPRIO  __XPRIO_MINFP && !DKIM_SIGNED && !__DKIM_DEPENDABLE && !DKIM_VALID 
&& !DKIM_VALID_AU && !RCVD_IN_DNSWL_NONE
 meta XPRIO  __XPRIO_MINFP && !DKIM_SIGNED && !__DKIM_DEPENDABLE && !DKIM_VALID && 
!DKIM_VALID_AU && !RCVD_IN_DNSWL_NONE && !SPF_PASS

!DKIM_VALID && !DKIM_VALID_AU is redundant and !DKIM_VALID_AU should be enough


 meta __HTML_FONT_LOW_CONTRAST_MINFPHTML_FONT_LOW_CONTRAST && !__HAS_SENDER && !__THREADED && 
!__HAS_THREAD_INDEX && !ALL_TRUSTED && !__NOT_SPOOFED && !__HDRS_LCASE_KNOWN && !DKIM_VALID

 meta __NOT_SPOOFED  DKIM_VALID || !__LAST_EXTERNAL_RELAY_NO_AUTH || 
ALL_TRUSTED   # yes DKIM, no SPF
 meta __NOT_SPOOFED  SPF_PASS || DKIM_VALID || !__LAST_EXTERNAL_RELAY_NO_AUTH 
|| ALL_TRUSTED   # yes DKIM, yes SPF

shouldn't these contain DKIM_VALID_AU instead?

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
A day without sunshine is like, night.


Re: Two different machines running same version of SA giving different scores for scores that are commented out

2021-04-25 Thread John Hardin

On Sun, 25 Apr 2021, John Hardin wrote:


On Sun, 25 Apr 2021, Steve Dondley wrote:


On 2021-04-25 01:00 AM, John Hardin wrote:

On Sun, 25 Apr 2021, Steve Dondley wrote:


That rule has this line in the 72_active.cf file:


Look in 72_scores.cf and compare the modification dates on that file.


The date is Jan 30, 2020. I'm running SA 3.4.4 (the version supplied by 
backports on my debian machine).


Then sa-update is not running. Those scores are more than a year old. Fix 
that first.


...which you did. Ah, the hazards of answering as you read...

The installs might be giving different scores for the same rule due to 
configuration differences - for example, one might have Bayes enabled and the 
other doesn't, or one might have network checks enabled and the other does 
not.


It sounds like this isn't the case as your scores are now the same.

--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.org pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  ...to announce there must be no criticism of the President or to
  stand by the President right or wrong is not only unpatriotic and
  servile, but is morally treasonous to the American public.
  -- Theodore Roosevelt, 1918
---
 330 days since the first private commercial manned orbital mission (SpaceX)


Re: Two different machines running same version of SA giving different scores for scores that are commented out

2021-04-25 Thread John Hardin

On Sun, 25 Apr 2021, Steve Dondley wrote:


On 2021-04-25 01:00 AM, John Hardin wrote:

On Sun, 25 Apr 2021, Steve Dondley wrote:


That rule has this line in the 72_active.cf file:


Look in 72_scores.cf and compare the modification dates on that file.


The date is Jan 30, 2020. I'm running SA 3.4.4 (the version supplied by 
backports on my debian machine).


Then sa-update is not running. Those scores are more than a year old. Fix 
that first.


The installs might be giving different scores for the same rule due to 
configuration differences - for example, one might have Bayes enabled and 
the other doesn't, or one might have network checks enabled and the other 
does not.


--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.org pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  ...to announce there must be no criticism of the President or to
  stand by the President right or wrong is not only unpatriotic and
  servile, but is morally treasonous to the American public.
  -- Theodore Roosevelt, 1918
---
 330 days since the first private commercial manned orbital mission (SpaceX)


Re: Two different machines running same version of SA giving different scores for scores that are commented out

2021-04-25 Thread Steve Dondley

On 2021-04-25 10:19 AM, RW wrote:

On Sun, 25 Apr 2021 00:40:59 -0400
Steve Dondley wrote:




On both machines, /usr/share/spamassassin/72_active.cf has this rule
which is commented out:



This is the legacy rule directory from  before sa-update existed.

Have you not got another directory populated by sa-update?


Yeah, I got it working after Rendi gave me a clue. Thanks.


Re: Two different machines running same version of SA giving different scores for scores that are commented out

2021-04-25 Thread Steve Dondley

On 2021-04-25 05:57 AM, Reindl Harald wrote:

Am 25.04.21 um 07:09 schrieb Steve Dondley:

That rule has this line in the 72_active.cf file:


Look in 72_scores.cf and compare the modification dates on that file.

Their scores as of today (saturday):

72_scores.cf:score FSL_BULK_SIG  0.001 0.001 
0.001 0.001
72_scores.cf:score PP_MIME_FAKE_ASCII_TEXT   0.999 0.837 
0.999 0.837


The date is Jan 30, 2020. I'm running SA 3.4.4 (the version supplied 
by backports on my debian machine).
it's time to learn about basics like sa-update and where the stuff is 
located


OK, heh. I had totally forgotten about SA updates and what they do. 
After figuring out sa-update and getting it working properly on both 
machines, the scores are the same now. Thanks.
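For anyone landing here later, the fix boils down to the following (the service name is distro-specific; it may be spamd or amavis instead):

```
sa-update                        # fetch current rules into /var/lib/spamassassin
spamassassin --lint              # sanity-check the merged configuration
systemctl restart spamassassin   # reload so the new scores take effect
```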





Re: Two different machines running same version of SA giving different scores for scores that are commented out

2021-04-25 Thread RW
On Sun, 25 Apr 2021 00:40:59 -0400
Steve Dondley wrote:


> 
> On both machines, /usr/share/spamassassin/72_active.cf has this rule 
> which is commented out:
> 

This is the legacy rule directory from  before sa-update existed.

Have you not got another directory populated by sa-update?


Re: Two different machines running same version of SA giving different scores for scores that are commented out

2021-04-24 Thread Steve Dondley

On 2021-04-25 01:00 AM, John Hardin wrote:

On Sun, 25 Apr 2021, Steve Dondley wrote:

I'm running the same version of SA on the same email on two different 
machines and getting different scores in for some rules in the report:


Machine A gives: 0.0 FSL_BULK_SIG   Bulk signature with no 
Unsubscribe
Machine B gives: 1.0 FSL_BULK_SIG   Bulk signature with no 
Unsubscribe


On both machines, /usr/share/spamassassin/72_active.cf has this rule 
which is commented out:


...

Machine A: 0.3 PP_MIME_FAKE_ASCII_TEXT BODY: MIME text/plain claims to 
be ASCII
Machine B: 1.0 PP_MIME_FAKE_ASCII_TEXT BODY: MIME text/plain claims to 
be ASCII


That rule has this line in the 72_active.cf file:


Look in 72_scores.cf and compare the modification dates on that file.

Their scores as of today (saturday):

72_scores.cf:score FSL_BULK_SIG  0.001 0.001 
0.001 0.001
72_scores.cf:score PP_MIME_FAKE_ASCII_TEXT   0.999 0.837 
0.999 0.837


The date is Jan 30, 2020. I'm running SA 3.4.4 (the version supplied by 
backports on my debian machine).


Re: Two different machines running same version of SA giving different scores for scores that are commented out

2021-04-24 Thread John Hardin

On Sun, 25 Apr 2021, Steve Dondley wrote:

I'm running the same version of SA on the same email on two different 
machines and getting different scores in for some rules in the report:


Machine A gives: 0.0 FSL_BULK_SIG   Bulk signature with no Unsubscribe
Machine B gives: 1.0 FSL_BULK_SIG   Bulk signature with no Unsubscribe

On both machines, /usr/share/spamassassin/72_active.cf has this rule which 
is commented out:


...


Machine A: 0.3 PP_MIME_FAKE_ASCII_TEXT BODY: MIME text/plain claims to be ASCII
Machine B: 1.0 PP_MIME_FAKE_ASCII_TEXT BODY: MIME text/plain claims to be ASCII

That rule has this line in the 72_active.cf file:


Look in 72_scores.cf and compare the modification dates on that file.

Their scores as of today (saturday):

72_scores.cf:score FSL_BULK_SIG  0.001 0.001 0.001 0.001
72_scores.cf:score PP_MIME_FAKE_ASCII_TEXT   0.999 0.837 0.999 0.837


--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.org pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
 329 days since the first private commercial manned orbital mission (SpaceX)


Two different machines running same version of SA giving different scores for scores that are commented out

2021-04-24 Thread Steve Dondley
I'm running the same version of SA on the same email on two different 
machines and getting different scores in for some rules in the report:


Machine A gives: 0.0 FSL_BULK_SIG   Bulk signature with no 
Unsubscribe
Machine B gives: 1.0 FSL_BULK_SIG   Bulk signature with no 
Unsubscribe


On both machines, /usr/share/spamassassin/72_active.cf has this rule 
which is commented out:


#scoreFSL_BULK_SIG  3.000   # limit

Similarly, for another rule that's commented out, I'm getting:

Machine A: 0.3 PP_MIME_FAKE_ASCII_TEXT BODY: MIME text/plain claims to 
be ASCII
Machine B: 1.0 PP_MIME_FAKE_ASCII_TEXT BODY: MIME text/plain claims to 
be ASCII


That rule has this line in the 72_active.cf file:

#scorePP_MIME_FAKE_ASCII_TEXT  1.0


It appears Machine A is somehow caching the old scores for rules that 
have been commented out. Restarting spamassassin daemon doesn't help. 
The command I'm running to generate the report is:


spamc -R < 
/spam/Maildir/.Spam/cur/1619286920.M132164P23787.email.dondley.com\,S\=5093\,W\=5214\:2\,S


Re: Getting different SA scores when using -R argument with spamc

2021-04-06 Thread Steve Dondley

It can only do so if report_safe is set to 0. With non-zero
report_safe settings, the original mail is encapsulated as an
attachment inside a wrapper message also including the report. That
wrapper message containing the SA report is "safe" because it is fully
local, the text/plain part won't look like spam to any spam filter,
and the original, encapsulated as a message/rfc822 attachment, should
be skipped by any filter. If you want to test the *original* message,
you have to extract the message/rfc822 part into its own file and test
that.


OK, did some more googling on this. Let me spell this out to help clear 
things up for those who may be as confused as I was:


1) sa-learn *will* "unwrap" the original encapsulated spam emails when 
they are encapsulated by SA: 
https://cwiki.apache.org/confluence/display/SPAMASSASSIN/LearningMarkedUpMessages
2) However, the spamassassin command (or spamc/spamd) does not do this 
for you. You must use the -d option to remove any spam markup.


What this means is if that report_safe is set to "1"  (the default) in 
your SA config file, you must pull the original spam email out with the 
-d option if you wish to run it through spamassassin/spamc again. You do 
*not* have to worry about doing this with the sa-learn command.
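A sketch of the round trip when report_safe is 1 (file names are placeholders):

```
# Recover the pristine message from the SA report wrapper, then re-score it.
spamassassin -d < wrapped-spam.eml > original.eml
spamc -R < original.eml
```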


If I got this wrong, let me know. Thanks.



Re: Getting different SA scores when using -R argument with spamc

2021-04-06 Thread Bill Cole

On 6 Apr 2021, at 16:19, Steve Dondley wrote:
[...]

It can only do so if report_safe is set to 0. With non-zero
report_safe settings, the original mail is encapsulated as an
attachment inside a wrapper message also including the report. That
wrapper message containing the SA report is "safe" because it is 
fully

local, the text/plain part won't look like spam to any spam filter,
and the original, encapsulated as a message/rfc822 attachment, should
be skipped by any filter. If you want to test the *original* message,
you have to extract the message/rfc822 part into its own file and 
test

that.


OK, so that's the problem, I guess. That config option is commented 
out in my local.cf file:


# report_safe 1


That is to document the fact that it is not explicitly set but that it 
defaults to 1.



So what do you recommend setting this to '1'?


It's 1 now, by default. I use '0' because I overtly reject mail that SA 
scores over my threshold, while stashing a pristine copy in a 3-day 
message dumpster. The best choice depends on how you handle messages 
that SA scores as spam after that determination, and who your users are. 
The default is good because it raises the difficulty for users to 
accidentally treat spam as ham after delivery, if they are the sort to 
not notice things like subject tagging or the fact that a message is in 
a folder named "Spam." I think that '2' is misguided in principle, 
because it leaves the original message open to re-filtering, is likely 
to cause Bayes poisoning if you autolearn, and opens accidental access 
to a broader range of users.



Any downsides to that? I'm just a little leery of changing a default 
setting. But I'll do whatever the pros suggest.


Leaving it at the default setting of 1 leaves you where you are. The 
main downside to that in my opinion is that the wrapper is a nuisance if 
you want to work with original spam messages. Once you understand how to 
handle that, it's a minor problem to work around.


It says a value of '2' sets it "use text/plain instead" but I don't 
know what that is referring to.


The attached original message uses a MIME file type of 'message/rfc822' 
when report_safe is 1. That is the standard MIME file type for Internet 
email messages embedded in other messages. When report_safe is 2, it 
uses the type 'text/plain' which makes the original message more widely 
accessible to MUAs and when extracted to an independent text file. In 
practice, the only difference is whether the extracted file has a '.eml' 
or '.txt' extension.
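Summarizing the three settings as local.cf lines:

```
report_safe 0   # tag/report in headers only; body left untouched
report_safe 1   # default: wrap original as a message/rfc822 attachment
report_safe 2   # wrap original as a text/plain attachment instead
```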


--
Bill Cole
b...@scconsult.com or billc...@apache.org
(AKA @grumpybozo and many *@billmail.scconsult.com addresses)
Not Currently Available For Hire


Re: Getting different SA scores when using -R argument with spamc

2021-04-06 Thread Steve Dondley

On 2021-04-06 04:19 PM, Steve Dondley wrote:

It seems to have done so. Thank you.

Some MUAs have a "Reply to List" function that uses the List-Post
header (and sometimes heuristics when that header is missing) to send
replies only to a list itself.


I've recently switched to Roundcube from gmail. I didn't see that
option but I think I've figured out I just need to hit "reply". Thanks
for pointing out you were getting dupes.



It can only do so if report_safe is set to 0. With non-zero
report_safe settings, the original mail is encapsulated as an
attachment inside a wrapper message also including the report. That
wrapper message containing the SA report is "safe" because it is fully
local, the text/plain part won't look like spam to any spam filter,
and the original, encapsulated as a message/rfc822 attachment, should
be skipped by any filter. If you want to test the *original* message,
you have to extract the message/rfc822 part into its own file and test
that.


OK, so that's the problem, I guess. That config option is commented
out in my local.cf file:

# report_safe 1


I should read the documentation before asking questions. So '1' is the 
default which encapsulates the original spam as an attachment.


Re: Getting different SA scores when using -R argument with spamc

2021-04-06 Thread Steve Dondley




Some MUAs have a "Reply to List" function that uses the List-Post
header (and sometimes heuristics when that header is missing) to send
replies only to a list itself.


Ah! I see that option now under the little down arrow next to "Reply 
all". My day is made. Thanks!


Re: Getting different SA scores when using -R argument with spamc

2021-04-06 Thread Steve Dondley




It seems to have done so. Thank you.

Some MUAs have a "Reply to List" function that uses the List-Post
header (and sometimes heuristics when that header is missing) to send
replies only to a list itself.


I've recently switched to Roundcube from gmail. I didn't see that option 
but I think I've figured out I just need to hit "reply". Thanks for 
pointing out you were getting dupes.




It can only do so if report_safe is set to 0. With non-zero
report_safe settings, the original mail is encapsulated as an
attachment inside a wrapper message also including the report. That
wrapper message containing the SA report is "safe" because it is fully
local, the text/plain part won't look like spam to any spam filter,
and the original, encapsulated as a message/rfc822 attachment, should
be skipped by any filter. If you want to test the *original* message,
you have to extract the message/rfc822 part into its own file and test
that.


OK, so that's the problem, I guess. That config option is commented out 
in my local.cf file:


# report_safe 1

So what do you recommend, setting this to '1'? Any downsides to that? I'm 
just a little leery of changing a default setting. But I'll do whatever 
the pros suggest.


It says a value of '2' makes it "use text/plain instead", but I don't know 
what that refers to.
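For reference, the behaviour being discussed is controlled by a single local.cf line. This is a config sketch; the value meanings below are from the Mail::SpamAssassin::Conf documentation:

```
# local.cf
# report_safe 0 -> add X-Spam-* headers only; the body is left untouched,
#                  so the message can be re-scanned meaningfully later
# report_safe 1 -> wrap the original as a message/rfc822 attachment
#                  inside a report message (the default)
# report_safe 2 -> wrap the original as a text/plain attachment instead
report_safe 0
```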





Re: Getting different SA scores when using -R argument with spamc

2021-04-06 Thread Bill Cole

On 6 Apr 2021, at 14:55, Steve Dondley wrote:


On 2021-04-06 02:32 PM, Bill Cole wrote:

PLEASE NOTE:

I read the mailing list obsessively and DO NOT NEED (or want) the
extra copies sent when you send both to me and to the list.


Sorry, I still haven't figured out how to properly respond. When I hit 
"reply all" it cc's the list and sends to you. When I hit just "reply" 
it only sends to you. I've manually deleted you from the "To" box and 
am sending it directly to the list here. Hopefully that fixes things up.


It seems to have done so. Thank you.

Some MUAs have a "Reply to List" function that uses the List-Post header 
(and sometimes heuristics when that header is missing) to send replies 
only to a list itself.





Since the scores being added during delivery are much richer,
detecting enough info to do SPF and DKIM analysis, I am 99.9% certain
that the format of 'some_email' is mangled, probably missing critical
headers or using CR linebreaks instead of proper LFs.


Hmm, this is on a linux box, so I'm not sure how it could be screwing 
up the line breaks. Is it possible that when spamd injects the scores 
before the body of the email, it is screwing things up?


Here is the email as it sits in my inbox now, which is after it gets 
processed by spamd. I was under the impression that an email that had 
already been processed by SA could be processed again and it would 
ignore any modifications made by earlier passes through SA.


It can only do so if report_safe is set to 0. With non-zero report_safe 
settings, the original mail is encapsulated as an attachment inside a 
wrapper message also including the report. That wrapper message 
containing the SA report is "safe" because it is fully local, the 
text/plain part won't look like spam to any spam filter, and the 
original, encapsulated as a message/rfc822 attachment, should be skipped 
by any filter. If you want to test the *original* message, you have to 
extract the message/rfc822 part into its own file and test that.
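The extraction step Bill describes can be sketched with Python's stdlib email module (a minimal sketch; the function name and file paths are placeholders, not anything from the thread):

```python
#!/usr/bin/env python3
"""Pull the original message out of a report_safe wrapper."""
import email
import email.policy


def extract_original(wrapper_path, out_path):
    """Write the first message/rfc822 attachment to out_path.

    Returns True if an attachment was found, False otherwise.
    """
    with open(wrapper_path, "rb") as f:
        msg = email.message_from_binary_file(f, policy=email.policy.default)
    for part in msg.walk():
        if part.get_content_type() == "message/rfc822":
            # The payload of a message/rfc822 part is a one-element
            # list holding the attached message object.
            original = part.get_payload(0)
            with open(out_path, "wb") as out:
                out.write(original.as_bytes())
            return True
    return False
```

The extracted file can then be fed to `spamc` or `spamassassin -t` to score the original message rather than the locally generated wrapper.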


So these are the headers you were checking post-delivery:

Return-Path: 


Delivered-To: s...@exmaple.com
Received: from email.exmaple.com
by email.exmaple.com with LMTP
id kAhSKc1dY2BCKgAAB604Gw
(envelope-from 
)

for ; Tue, 30 Mar 2021 13:20:13 -0400
Received: by email.exmaple.com (Postfix, from userid 115)
id A64BE200C8; Tue, 30 Mar 2021 13:20:13 -0400 (EDT)
Received: from localhost by email.exmaple.com
with SpamAssassin (version 3.4.2);
Tue, 30 Mar 2021 13:20:13 -0400
From: "Home Warranty - AHS" 
To: 
Subject: *SPAM* It's getting warmer, are you covered?
Date: Tue, 30 Mar 2021 05:18:34 -0700
Message-Id: 
X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on 
email.exmaple.com

X-Spam-Flag: YES
X-Spam-Level: *
X-Spam-Status: Yes, score=5.2 required=5.0 tests=BAYES_99,BAYES_999,
DATE_IN_PAST_03_06,DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,
HTML_IMAGE_RATIO_02,HTML_MESSAGE,RCVD_IN_DNSWL_LOW,RCVD_IN_MSPIKE_H2,
SPF_HELO_NONE,SPF_SOFTFAIL shortcircuit=no autolearn=no
autolearn_force=no version=3.4.2
MIME-Version: 1.0
Content-Type: multipart/mixed; 
boundary="--=_60635DCD.A0F5D194"


[...]

but this is the original header block buried in the attachment:

Received-SPF: Softfail (mailfrom) identity=mailfrom; 
client-ip=69.252.207.38; helo=resqmta-ch2-06v.sys.comcast.net; 
envelope-from=bounce-use=m=44682734836=echo4=6df0a8c162cdc2810dc8b4fe0a119...@returnpath.bluehornet.com; 
receiver=

Authentication-Results: email.exmaple.com;
dkim=pass (2048-bit key; secure) 
header.d=comcastmailservice.net header.i=@comcastmailservice.net 
header.b="YTHf56Fx";
dkim=pass (1024-bit key; unprotected) 
header.d=forgetmassives.com header.i=@forgetmassives.com 
header.b="Cc3SOvHE";

dkim-atps=neutral
Received: from resqmta-ch2-06v.sys.comcast.net 
(resqmta-ch2-06v.sys.comcast.net [69.252.207.38])

by email.exmaple.com (Postfix) with ESMTPS id F0A9D200C8
for ; Tue, 30 Mar 2021 13:20:12 -0400 (EDT)
Received: from resomta-ch2-06v.sys.comcast.net ([69.252.207.102])
by resqmta-ch2-06v.sys.comcast.net with ESMTP
id RCA7l3lgvsjoSRI2ElIKl6; Tue, 30 Mar 2021 17:20:10 +
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=comcastmailservice.net; s=20180828_2048; t=1617124810;
bh=EzUwkxtc+07gV+1cIeMVwIqhGkZuGI/a4ukUrCjG7nM=;
h=Received:Received:Received:Received:Received:Received:Received:
 Message-ID:Date:From:Reply-To:To:Subject:Mime-Version:
 Content-Type;
b=YTHf56FxVyphxJLrqEnfZKfP5M62QfSc0ICCe5ZS/2UXQUsumO0ltgCO6ZjDRxrso
 Up8oEgr4gqv8kNMAtJEM532f15eLObwwty+P0OAS8HncjfsiHJspdnk3Eg0aC4A57k
 5w8gnpRbQoa/KaAn0bejQNcCdr+KArf6VwKO+q5/HY9UQxa2RxIWUsoxIMmyZX0WpF
 upTL1nKnd+zaRENmudAllcfxCLMUpnc9oK/Ea//4bcT/51ofrewbe/J0ZhaAUfJu5O

Re: Getting different SA scores when using -R argument with spamc

2021-04-06 Thread Steve Dondley

On 2021-04-06 02:55 PM, Steve Dondley wrote:

On 2021-04-06 02:32 PM, Bill Cole wrote:

PLEASE NOTE:

I read the mailing list obsessively and DO NOT NEED (or want) the
extra copies sent when you send both to me and to the list.


Sorry, I still haven't figured out how to properly respond. When I hit
"reply all" it cc's the list and sends to you. When I hit just "reply"
it only sends to you. I've manually deleted you from the "To" box and
am sending it directly to the list here. Hopefully that fixes things up.


Since the scores being added during delivery are much richer,
detecting enough info to do SPF and DKIM analysis, I am 99.9% certain
that the format of 'some_email' is mangled, probably missing critical
headers or using CR linebreaks instead of proper LFs.




I just noticed the date in the email header was from about a week ago.


Re: Getting different SA scores when using -R argument with spamc

2021-04-06 Thread Steve Dondley

On 2021-04-06 02:32 PM, Bill Cole wrote:

PLEASE NOTE:

I read the mailing list obsessively and DO NOT NEED (or want) the
extra copies sent when you send both to me and to the list.


Sorry, I still haven't figured out how to properly respond. When I hit 
"reply all" it cc's the list and sends to you. When I hit just "reply" 
it only sends to you. I've manually deleted you from the "To" box and 
am sending it directly to the list here. Hopefully that fixes things up.



Since the scores being added during delivery are much richer,
detecting enough info to do SPF and DKIM analysis, I am 99.9% certain
that the format of 'some_email' is mangled, probably missing critical
headers or using CR linebreaks instead of proper LFs.


Hmm, this is on a linux box, so I'm not sure how it could be screwing up 
the line breaks. Is it possible that when spamd injects the scores 
before the body of the email, it is screwing things up?


Here is the email as it sits in my inbox now, which is after it gets 
processed by spamd. I was under the impression that an email that had 
already been processed by SA could be processed again and it would 
ignore any modifications made by earlier passes through SA.


Return-Path: 


Delivered-To: s...@exmaple.com
Received: from email.exmaple.com
by email.exmaple.com with LMTP
id kAhSKc1dY2BCKgAAB604Gw
(envelope-from 
)

for ; Tue, 30 Mar 2021 13:20:13 -0400
Received: by email.exmaple.com (Postfix, from userid 115)
id A64BE200C8; Tue, 30 Mar 2021 13:20:13 -0400 (EDT)
Received: from localhost by email.exmaple.com
with SpamAssassin (version 3.4.2);
Tue, 30 Mar 2021 13:20:13 -0400
From: "Home Warranty - AHS" 
To: 
Subject: *SPAM* It's getting warmer, are you covered?
Date: Tue, 30 Mar 2021 05:18:34 -0700
Message-Id: 
X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on 
email.exmaple.com

X-Spam-Flag: YES
X-Spam-Level: *
X-Spam-Status: Yes, score=5.2 required=5.0 tests=BAYES_99,BAYES_999,
DATE_IN_PAST_03_06,DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,

HTML_IMAGE_RATIO_02,HTML_MESSAGE,RCVD_IN_DNSWL_LOW,RCVD_IN_MSPIKE_H2,

SPF_HELO_NONE,SPF_SOFTFAIL shortcircuit=no autolearn=no
autolearn_force=no version=3.4.2
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="--=_60635DCD.A0F5D194"

This is a multi-part message in MIME format.

----=_60635DCD.A0F5D194
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

Spam detection software, running on the system "email.exmaple.com",
has identified this incoming email as possible spam.  The original
message has been attached to this so you can view it or label
similar future email.  If you have any questions, see
the administrator of that system for details.

Content preview:  Your AHS Home Warranty covers the repair or 
replacement of
   many system and appliance breakdowns, but not necessarily the entire 
system
   or appliance. Please refer to your contract for details. American 
Home Shield
   150 Peabody Pl., Memphis, TN 38103. Unsubscribe | Privacy Policy © 
2021

  American Home Shield Corporation. All rights reserved.

Content analysis details:   (5.2 points, 5.0 required)

 pts rule name  description
 -- 
--

 0.2 BAYES_999  BODY: Bayes spam probability is 99.9 to 100%
[score: 1.]
 3.5 BAYES_99   BODY: Bayes spam probability is 99 to 100%
[score: 1.]
 0.7 SPF_SOFTFAIL   SPF: sender does not match SPF record 
(softfail)
-0.7 RCVD_IN_DNSWL_LOW  RBL: Sender listed at 
https://www.dnswl.org/,

low trust
[69.252.207.38 listed in list.dnswl.org]
-0.0 RCVD_IN_MSPIKE_H2  RBL: Average reputation (+2)
[69.252.207.38 listed in wl.mailspike.net]
 1.6 DATE_IN_PAST_03_06 Date: is 3 to 6 hours before Received: date
 0.0 SPF_HELO_NONE  SPF: HELO does not publish an SPF Record
 0.0 HTML_IMAGE_RATIO_02BODY: HTML has a low ratio of text to image
area
 0.0 HTML_MESSAGE   BODY: HTML included in message
-0.1 DKIM_VALID_AU  Message has a valid DKIM or DK signature 
from

author's domain
-0.1 DKIM_VALID Message has at least one valid DKIM or DK 
signature
 0.1 DKIM_SIGNEDMessage has a DKIM or DK signature, not 
necessarily

valid

The original message was not completely plain text, and may be unsafe to
open with some email clients; in particular, it may contain a virus,
or confirm that your address can receive spam.  If you wish to view
it, it may be safer to save it to a file and open it with an editor.


=_60635DCD.A0F5D19

Re: Getting different SA scores when using -R argument with spamc

2021-04-06 Thread Bill Cole

PLEASE NOTE:

I read the mailing list obsessively and DO NOT NEED (or want) the extra 
copies sent when you send both to me and to the list.



On 6 Apr 2021, at 14:17, Steve Dondley wrote:

Can you provide a working example message AND the operative user 
prefs?


OK, I was being very stupid. It finally dawned on me that the SA 
scores that appeared above the message body and below the headers when 
spamc was run without the -R option were SA scores embedded in the 
message by the postfix software and were not getting generated by 
spamc.


But that doesn't change the fact that the spamassassin score that is 
generated by the postfix command is different than what I'm getting 
directly on the command line. Here's what is in my postfix 
master.cf file:


spamassassin unix - n   n   -   -   pipe
 user=debian-spamd argv=/usr/bin/spamc -u ${user} -e 
/usr/sbin/sendmail -oi -f ${sender} ${recipient}


Nitpick: Postfix is not adding the score report in the header, spamd is. 
That line hands off the message to spamc, which sends it to spamd and 
gets back a scored copy, which it then re-injects via sendmail (which is 
actually part of Postfix...)



spamassassin --prefs-file user_prefs_file -D all < some_email

Do the score and hits match one of your spamc tests?


No. The headers have a different score and the tests are different. 
It's scored only as 2.6 with BAYES_50, while what was embedded in the 
email by postfix had BAYES_99 and BAYES_999 and scored 5.2. The postfix 
score also shows RCVD_IN_DNSWL_LOW, while running from the command line 
does not show any such test hit.


And I cannot reproduce the SA scores embedded in the email by postfix 
even if I log in as user "s" and run this command:


spamassassin --prefs-file=/home/s/.spamassassin/user_prefs  -t < 
some_email


So I'm not sure what's going on.


Since the scores being added during delivery are much richer, detecting 
enough info to do SPF and DKIM analysis, I am 99.9% certain that the 
format of 'some_email' is mangled, probably missing critical headers or 
using CR linebreaks instead of proper LFs.
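Bill's CR-vs-LF suspicion is quick to verify on the saved file (a stdlib-only sketch; `some_email` in the usage line is the thread's placeholder name). A clean Unix mailbox copy should show only bare LFs:

```python
#!/usr/bin/env python3
"""Classify line endings in a saved message (diagnostic sketch)."""


def classify_line_endings(data: bytes) -> dict:
    """Count CRLF pairs, bare CRs and bare LFs in raw bytes."""
    crlf = data.count(b"\r\n")
    bare_cr = data.count(b"\r") - crlf   # CRs not followed by LF
    bare_lf = data.count(b"\n") - crlf   # LFs not preceded by CR
    return {"crlf": crlf, "bare_cr": bare_cr, "bare_lf": bare_lf}
```

Usage: `classify_line_endings(open("some_email", "rb").read())`. Non-zero `crlf` or `bare_cr` counts would explain why rules depending on header parsing behave differently on the command line.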



--
Bill Cole
b...@scconsult.com or billc...@apache.org
(AKA @grumpybozo and many *@billmail.scconsult.com addresses)
Not Currently Available For Hire


Re: Getting different SA scores when using -R argument with spamc

2021-04-06 Thread Steve Dondley




Can you provide a working example message AND the operative user prefs?


OK, I was being very stupid. It finally dawned on me that the SA scores 
that appeared above the message body and below the headers when spamc 
was run without the -R option were SA scores embedded in the message by 
the postfix software and were not getting generated by spamc.


But that doesn't change the fact that the spamassassin score that is 
generated by the postfix command is different than what I'm getting 
directly on the command line. Here's what is in my postfix master.cf 
file:


spamassassin unix - n   n   -   -   pipe
 user=debian-spamd argv=/usr/bin/spamc -u ${user} -e 
/usr/sbin/sendmail -oi -f ${sender} ${recipient}





spamassassin --prefs-file user_prefs_file -D all < some_email

Do the score and hits match one of your spamc tests?


No. The headers have a different score and the tests are different. It's 
scored only as 2.6 with BAYES_50, while what was embedded in the email by 
postfix had BAYES_99 and BAYES_999 and scored 5.2. The postfix score also 
shows RCVD_IN_DNSWL_LOW, while running from the command line does not 
show any such test hit.


And I cannot reproduce the SA scores embedded in the email by postfix 
even if I log in as user "s" and run this command:


spamassassin --prefs-file=/home/s/.spamassassin/user_prefs  -t < 
some_email


So I'm not sure what's going on.


Re: Getting different SA scores when using -R argument with spamc

2021-04-06 Thread Bill Cole

On 6 Apr 2021, at 12:54, Steve Dondley wrote:


When I run spamc without -R option like this:

spamc -u some_user  < some_email

I get the following output:


[...]


However, when I run this command on the same email with the -R command 
to get the SA scores only like this:


spamc -R -u some_user  < some_email


I get this output:

[...]


Notice the scores are totally different.


Also, rules related to parsing the headers are wildly different. That 
shouldn't happen with the same input. I suspect a subtle difference 
between your inputs.



According to man page, -R says:

Just output the SpamAssassin report text to stdout, for all messages.  
See -r for details of the output format used.


So why are the scores different with and without the -R option?


Dunno. I cannot reproduce it despite trying with the last 50 messages to 
be seen by my mail server. I'm trying another 125 also but I don't 
really expect to see differences.


Can you provide a working example message AND the operative user prefs?

Run this:

spamassassin --prefs-file user_prefs_file -D all < some_email

Do the score and hits match one of your spamc tests?

--
Bill Cole
b...@scconsult.com or billc...@apache.org
(AKA @grumpybozo and many *@billmail.scconsult.com addresses)
Not Currently Available For Hire


Getting different SA scores when using -R argument with spamc

2021-04-06 Thread Steve Dondley

When I run spamc without -R option like this:

spamc -u some_user  < some_email

I get the following output:





This is a multi-part message in MIME format.




Content analysis details:   (5.2 points, 5.0 required)

 pts rule name  description
 -- 
--

 0.2 BAYES_999  BODY: Bayes spam probability is 99.9 to 100%
[score: 1.]
 3.5 BAYES_99   BODY: Bayes spam probability is 99 to 100%
[score: 1.]
 0.7 SPF_SOFTFAIL   SPF: sender does not match SPF record 
(softfail)
-0.7 RCVD_IN_DNSWL_LOW  RBL: Sender listed at 
https://www.dnswl.org/,

low trust
[69.252.207.38 listed in list.dnswl.org]
-0.0 RCVD_IN_MSPIKE_H2  RBL: Average reputation (+2)
[69.252.207.38 listed in wl.mailspike.net]
 1.6 DATE_IN_PAST_03_06 Date: is 3 to 6 hours before Received: date
 0.0 SPF_HELO_NONE  SPF: HELO does not publish an SPF Record
 0.0 HTML_IMAGE_RATIO_02BODY: HTML has a low ratio of text to image
area
 0.0 HTML_MESSAGE   BODY: HTML included in message
-0.1 DKIM_VALID_AU  Message has a valid DKIM or DK signature 
from

author's domain
-0.1 DKIM_VALID Message has at least one valid DKIM or DK 
signature
 0.1 DKIM_SIGNEDMessage has a DKIM or DK signature, not 
necessarily




===



However, when I run this command on the same email with the -R command 
to get the SA scores only like this:


spamc -R -u some_user  < some_email


I get this output:


===

2.6/5.0
Spam detection software, running on the system "email.dondley.com",
has NOT identified this incoming email as spam.  The original
message has been attached to this so you can view it or label
similar future email.  If you have any questions, see
the administrator of that system for details.

Content preview:  Spam detection software, running on the system 
"email.dondley.com",
   has identified this incoming email as possible spam. The original 
message

   has been attached to this so you can view it or label simi [...]

Content analysis details:   (2.6 points, 5.0 required)

 pts rule name  description
 -- 
--

 0.8 BAYES_50   BODY: Bayes spam probability is 40 to 60%
[score: 0.5000]
-0.0 NO_RELAYS  Informational: message was not relayed via 
SMTP

 0.2 HEADER_FROM_DIFFERENT_DOMAINS From and EnvelopeFrom 2nd level
mail domains are different
 1.6 DATE_IN_PAST_03_06 Date: is 3 to 6 hours before Received: date
 0.0 HTML_MESSAGE   BODY: HTML included in message
 0.0 HTML_IMAGE_RATIO_02BODY: HTML has a low ratio of text to image
area


====


Notice the scores are totally different. According to man page, -R says:

Just output the SpamAssassin report text to stdout, for all messages.  
See -r for details of the output format used.


So why are the scores different with and without the -R option?


Re: sa 3.4.4 'spamassassin' scores test message using local.cf; 'spamd' finds/reads local.cg, but 'spamc' of test msg fails to hit/score?

2020-06-10 Thread RW
On Tue, 9 Jun 2020 19:55:24 -0700
PGNet Dev wrote:

> sorry, that's unclear
> 
> spamc --help | egrep "config|socket|fallback|size|username|log-to"
>   -U, --socket path   Connect to spamd via UNIX domain sockets.
>   -F, --config path   Use this configuration file.
>   Try connecting to spamd tcp socket this many
> times -s, --max-size size Specify maximum message size, in bytes.
>   -u, --username username
>   -x, --no-safe-fallback
>   Don't fallback safely.
>   -l, --log-to-stderr Log errors and warnings to stderr.
> 
> 
> just to be clear, which of those options is not a spamc option?

Looking at it again there is only one other beside -u. 

  -F, --config path 

is for spamc configuration not the spamassassin/spamd configuration.
It's for use on the command line. 


Re: sa 3.4.4 'spamassassin' scores test message using local.cf; 'spamd' finds/reads local.cg, but 'spamc' of test msg fails to hit/score?

2020-06-09 Thread PGNet Dev
On 6/9/20 7:45 PM, PGNet Dev wrote:
> RW Tue, 09 Jun 2020 17:15:49 -0700
> If you need this line you are doing something strange.

always happy to simplify.

rm'ing

--configpath=/usr/local/etc/spamassassin \

from spamd launch, I still see

...
Jun 09 19:44:41 dev.loc spamd[57610]: config: read file 
/usr/local/etc/spamassassin/local.cf
Jun 09 19:46:31 dev.loc spamd[57731]: config: using 
"/usr/local/etc/spamassassin" for site rules pre files
Jun 09 19:46:31 dev.loc spamd[57731]: config: read file 
/usr/local/etc/spamassassin/init.pre
Jun 09 19:46:31 dev.loc spamd[57731]: config: read file 
/usr/local/etc/spamassassin/sh.pre
Jun 09 19:46:31 dev.loc spamd[57731]: config: read file 
/usr/local/etc/spamassassin/v310.pre
Jun 09 19:46:31 dev.loc spamd[57731]: config: read file 
/usr/local/etc/spamassassin/v312.pre
...

so that's good.


> Also, the last time I checked, you still need to pass '-u  spamd' even
> if you start spamd as that user.

it's already 'in there'

--username=spamd \

--groupname=spamd \

where,

spamd --help | egrep "\-u|\-g"
 -u username, --username=username  Run as username
 -g groupname, --groupname=groupname  Run as groupname

>> but, with this 'spamc' config,
>>
>>cat /usr/local/etc/spamassassin/spamc.conf
>>--config=/usr/local/etc/spamassassin/local.cf
>>--socket=/run/spamd/spamd.sock
>>--no-safe-fallback
>>--max-size=100
>>--username spamd
>>--log-to-stderr
>
> You are passing a mixture of spamd and spamc arguments to spamc.

sorry, that's unclear

spamc --help | egrep "config|socket|fallback|size|username|log-to"
  -U, --socket path   Connect to spamd via UNIX domain sockets.
  -F, --config path   Use this configuration file.
  Try connecting to spamd tcp socket this many times
  -s, --max-size size Specify maximum message size, in bytes.
  -u, --username username
  -x, --no-safe-fallback
  Don't fallback safely.
  -l, --log-to-stderr Log errors and warnings to stderr.


just to be clear, which of those options is not a spamc option?

> You are also treating --username as if it were an argument to spamd
> where it's the unprivileged user. In spamc it's the user for 'per user'
> features - you shouldn't need it.

ok, removing it.

-   --username spamd


and, now,

spamc < sample-spam.txt

X-Spam-Status: Yes, score=1003.0 required=5.0 
tests=DCC_CHECK,FSL_BULK_SIG,
GTUBE,NO_RECEIVED,NO_RELAYS autolearn=disabled version=3.4.4


::facepalm:: !

ta!! o/


Re: sa 3.4.4 'spamassassin' scores test message using local.cf; 'spamd' finds/reads local.cg, but 'spamc' of test msg fails to hit/score?

2020-06-09 Thread RW
On Tue, 9 Jun 2020 16:27:01 -0700
PGNet Dev wrote:


> next, launching 'spamd', 
> 

>--configpath=/usr/local/etc/spamassassin \

If you need this line you are doing something strange.

You are overriding the default config location with the default site
config location. There's not much there anymore, but the config
location used to be where the rules went when they were updated by
package rather than sa-update - it's for installed files.

Also, the last time I checked, you still need to pass '-u  spamd' even
if you start spamd as that user.

> but, with this 'spamc' config,
> 
>   cat /usr/local/etc/spamassassin/spamc.conf
>   --config=/usr/local/etc/spamassassin/local.cf
>   --socket=/run/spamd/spamd.sock
>   --no-safe-fallback
>   --max-size=100
>   --username spamd
>   --log-to-stderr

You are passing a mixture of spamd and spamc arguments to spamc. 

You are also treating --username as if it were an argument to spamd
where it's the unprivileged user. In spamc it's the user for 'per user'
features - you shouldn't need it.
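Putting RW's two corrections together, a cleaned-up spamc.conf would keep only options that spamc itself understands. This is a sketch reusing the socket path from the thread; verify each option against `spamc --help` on your build:

```
# /usr/local/etc/spamassassin/spamc.conf
# spamc options only: --config pointed at local.cf (a spamassassin/spamd
# file) and --username (not needed here) are dropped.
--socket=/run/spamd/spamd.sock
--no-safe-fallback
--max-size=100
--log-to-stderr
```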




sa 3.4.4 'spamassassin' scores test message using local.cf; 'spamd' finds/reads local.cg, but 'spamc' of test msg fails to hit/score?

2020-06-09 Thread PGNet Dev


I'm setting up a local/standalone instance spamd on linux

lsb_release -rd
Description:openSUSE Leap 15.1
Release:15.1

uname -rm
5.7.1-25.gc4df4ce-default x86_64

perl -v
This is perl 5, version 26, subversion 1 (v5.26.1) built for 
x86_64-linux-thread-multi

I've built/installed

ls -al `which spamassassin` `which spamd` `which spamc`
-r-xr-xr-x 1 root root  30K Jun  9 10:05 /usr/bin/spamassassin*
-r-xr-xr-x 1 root root  60K Jun  9 10:05 /usr/bin/spamc*
-r-xr-xr-x 1 root root 128K Jun  9 10:05 /usr/bin/spamd*

spamassassin -V
SpamAssassin version 3.4.4
  running on Perl version 5.26.1
spamd -V
SpamAssassin Server version 3.4.4
  running on Perl 5.26.1
  with SSL support (IO::Socket::SSL 2.067)
  with zlib support (Compress::Zlib 2.093)
spamc -V
SpamAssassin Client version 3.4.4
  compiled with SSL support (OpenSSL 1.1.1g  21 Apr 2020)

cleaning old data

cd /var/lib/spamassassin
rm -f updates_spamassassin_org.cf
rm -rf updates_spamassassin_org/*
rm -rf 3.00*
rm -rf compiled*

updating/compiling,

/usr/bin/sudo -u spamd \
/usr/bin/sa-update -D \
--channel updates.spamassassin.org \
--allowplugins \
--reallyallowplugins \
--refreshmirrors

/usr/bin/sudo -u spamd \
 /usr/bin/sa-compile -D \
 --siteconfigpath=/usr/local/etc/spamassassin

populates

tree 3.004004/ compiled/
3.004004/
├── updates_spamassassin_org
│   ├── 10_default_prefs.cf
│   ├── 10_hasbase.cf
... 

│   ├── 60_whitelist_subject.cf
│   ├── 72_active.cf
│   ├── 72_scores.cf
│   ├── 73_sandbox_manual_scores.cf
│   ├── languages
│   ├── local.cf
│   ├── MIRRORED.BY
│   ├── regression_tests.cf
│   ├── sa-update-pubkey.txt
│   ├── STATISTICS-set0-72_scores.cf.txt
│   ├── STATISTICS-set1-72_scores.cf.txt
│   ├── STATISTICS-set2-72_scores.cf.txt
│   ├── STATISTICS-set3-72_scores.cf.txt
│   └── user_prefs.template
└── updates_spamassassin_org.cf
compiled/
└── 5.026
└── 3.004004
├── auto
│   └── Mail
│   └── SpamAssassin
│   └── CompiledRegexps
│   ├── body_0
│   │   └── body_0.so
│   ├── body_neg1000
│   │   └── body_neg1000.so
│   └── body_neg300
│   └── body_neg300.so
├── bases_body_0.pl
├── bases_body_neg1000.pl
├── bases_body_neg300.pl
└── Mail
└── SpamAssassin
└── CompiledRegexps
├── body_0.pm
├── body_neg1000.pm
└── body_neg300.pm

13 directories, 77 files


using the "sample-spam" GTUBE file,

wget http://svn.apache.org/repos/asf/spamassassin/trunk/sample-spam.txt

testing 'spamassassin', returns as expected

spamassassin -D -t < sample-spam.txt
...
 pts rule name  description
 -- 
--
1000 GTUBE  BODY: Generic Test for Unsolicited 
Bulk Email
-0.0 NO_RELAYS  Informational: message was not 
relayed via SMTP
-1.9 BAYES_00   BODY: Bayes spam probability is 0 
to 1%
[score: 0.]
 3.0 DCC_CHECK  Detected as bulk mail by DCC 
(dcc-servers.net)
-0.0 NO_RECEIVEDInformational: message has no 
Received headers
 0.0 FSL_BULK_SIG   Bulk signature with no Unsubscribe

Jun  9 15:22:44.510 [26682] dbg: check: tagrun - tag 
SENDERDOMAIN is still blocking action 2
Jun  9 15:22:44.510 [26682] dbg: check: tagrun - tag DKIMDOMAIN 
is still blocking action 0
Jun  9 15:22:44.511 [26682] dbg: plugin: 
Mail::SpamAssassin::Plugin::MIMEHeader=HASH(0x55ec83b68860) implements 

Rules without scores

2020-03-06 Thread RW


There are some rules (listed below) that have no explicit scores and
fall back on the default 1 point.

ADMAIL
ADVANCE_FEE_3_NEW_FORM
ADVANCE_FEE_4_NEW
ADVANCE_FEE_4_NEW_FRM_MNY
CN_B2B_SPAMMER
CTYPE_NULL
DOS_DEREK_AUG08
DX_TEXT_02
EXCUSE_24
FORGED_GMAIL_RCVD
FORGED_SPF_HELO
FROM_IN_TO_AND_SUBJ
FROM_MISSP_TO_UNDISC
FROM_OFFERS
FROM_WSP_TRAIL
FUZZY_ANDROID
FUZZY_BROWSER
FUZZY_BTC_WALLET
FUZZY_CLICK_HERE
FUZZY_DR_OZ
FUZZY_IMPORTANT
FUZZY_MONERO
FUZZY_PRIVACY
FUZZY_PROMOTION
FUZZY_SAVINGS
FUZZY_SECURITY
FUZZY_UNSUBSCRIBE
FUZZY_WALLET
GOOG_REDIR_SHORT
HK_LOTTO
MAILING_LIST_MULTI
MANY_SPAN_IN_TEXT
MISSING_FROM
MIXED_ES
MSGID_MULTIPLE_AT
PUMPDUMP_TIP
RCVD_DBL_DQ
SUBJ_BRKN_WORDNUMS
SUBJ_UNNEEDED_HTML
URI_DQ_UNSUB
URI_TRY_USME
URI_WPADMIN
XM_PHPMAILER_FORGED
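A list like this can be regenerated by diffing rule definitions against score lines. This is a rough sketch; it ignores `describe`/`tflags` lines and assumes you concatenate all active .cf files into one string first:

```python
#!/usr/bin/env python3
"""Find rules that fall back on the default 1.0 score."""
import re

RULE_RE = re.compile(r"^(?:header|body|rawbody|full|uri|meta)\s+(\S+)", re.M)
SCORE_RE = re.compile(r"^score\s+(\S+)", re.M)


def unscored_rules(cf_text: str) -> set:
    defined = set(RULE_RE.findall(cf_text))
    scored = set(SCORE_RE.findall(cf_text))
    # __ sub-rules never score on their own, so skip them.
    return {r for r in defined if not r.startswith("__") and r not in scored}
```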


Re: Extreme scores from FRNAME rules.

2018-10-25 Thread Reio Remma

On 25/10/2018 14:06, Matus UHLAR - fantomas wrote:

On 25/10/2018 11:43, Matus UHLAR - fantomas wrote:

On 25/10/2018 10:33, Matus UHLAR - fantomas wrote:

bug number would help more...


On 25.10.18 10:58, Reio Remma wrote:
The bug contains no additional info. :) I was simply asked to post 
to the list.


and this is exactly why it would be better to post the link to the 
bug, or

at least the bug number, instead of just link to the attachment...


On 25.10.18 11:46, Reio Remma wrote:

No worries. Here it is:

https://bz.apache.org/SpamAssassin/show_bug.cgi?id=7644


Good. I don't see FRNAME_IN_MSG_NO_SUBJ in rules now (apparently due to
John Hardin's change), but according to the original description, they 
seem to match:

*  2.5 FRNAME_IN_MSG_XPRIO From name in message + X-Priority

A+B = 2.5

*  2.5 XPRIO_SHORT_SUBJ Has X-Priority header + short subject

B+C = 2.5

*  2.5 FRNAME_IN_MSG_NO_SUBJ From name in message + short or no subject

A+C = 2.5

so, in fact, none of them fully overlaps, but all three can match at the 
same time, each on a different pair of conditions, which is why the final 
score was 3*2.5



currently we have FRNAME_IN_MSG_XPRIO_NO_SUB which matches

A+B+C

but does not match short subject now.

This could fix your problem, can you rescan the mail?


current scores:

score FRNAME_IN_MSG_NO_SUBJ 0.001 2.499 0.001 2.499
score FRNAME_IN_MSG_XPRIO   0.001 2.499 0.001 2.499
score FRNAME_IN_MSG_XPRIO_NO_SUB    2.499 0.001 2.499 0.001
score XPRIO_SHORT_SUBJ  2.499 2.131 2.499 2.131

note that FRNAME_IN_MSG_NO_SUBJ and FRNAME_IN_MSG_XPRIO are not defined.


Tested from command line and it only matched this now:

2.5 XPRIO_SHORT_SUBJ   Has X-Priority header + short subject

That's much better. Thanks!

Reio


Re: Extreme scores from FRNAME rules.

2018-10-25 Thread Matus UHLAR - fantomas

On 25/10/2018 11:43, Matus UHLAR - fantomas wrote:

On 25/10/2018 10:33, Matus UHLAR - fantomas wrote:

bug number would help more...


On 25.10.18 10:58, Reio Remma wrote:
The bug contains no additional info. :) I was simply asked to post 
to the list.


and this is exactly why it would be better to post the link to the 
bug, or

at least the bug number, instead of just link to the attachment...


On 25.10.18 11:46, Reio Remma wrote:

No worries. Here it is:

https://bz.apache.org/SpamAssassin/show_bug.cgi?id=7644


Good. I don't see FRNAME_IN_MSG_NO_SUBJ in rules now (apparently due to
John Hardin's change), but according to the original description, they seem to
match:

*  2.5 FRNAME_IN_MSG_XPRIO From name in message + X-Priority

A+B = 2.5

*  2.5 XPRIO_SHORT_SUBJ Has X-Priority header + short subject

B+C = 2.5

*  2.5 FRNAME_IN_MSG_NO_SUBJ From name in message + short or no subject

A+C = 2.5

so, in fact, none of them fully overlaps, but all three can match at the same 
time, each on a different pair of conditions, which is why the final score was 3*2.5
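The arithmetic is just three independent pairwise meta rules firing at once (a toy model of the A/B/C conditions above; the 2.5-point scores are taken from the report in the thread):

```python
#!/usr/bin/env python3
"""Three pairwise meta rules over conditions A, B, C stack to 7.5."""


def total_score(a: bool, b: bool, c: bool) -> float:
    """a: from-name in body, b: X-Priority header, c: short subject."""
    rules = {
        "FRNAME_IN_MSG_XPRIO": (a and b, 2.5),
        "XPRIO_SHORT_SUBJ": (b and c, 2.5),
        "FRNAME_IN_MSG_NO_SUBJ": (a and c, 2.5),
    }
    return sum(score for hit, score in rules.values() if hit)
```

With all three conditions true, every pair is true, so the message collects all three scores; knocking out any single condition drops two of the three rules at once.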



currently we have FRNAME_IN_MSG_XPRIO_NO_SUB which matches

A+B+C

but does not match short subject now.

This could fix your problem, can you rescan the mail?


current scores:

score FRNAME_IN_MSG_NO_SUBJ 0.001 2.499 0.001 2.499
score FRNAME_IN_MSG_XPRIO   0.001 2.499 0.001 2.499
score FRNAME_IN_MSG_XPRIO_NO_SUB2.499 0.001 2.499 0.001
score XPRIO_SHORT_SUBJ  2.499 2.131 2.499 2.131

note that FRNAME_IN_MSG_NO_SUBJ and FRNAME_IN_MSG_XPRIO are not defined.


My first thought was that FRNAME_IN_MSG_XPRIO_NO_SUB could balance those three
rules - it could score negatively, so that when a mail matched all three
meta-rules, the final score wouldn't be triple their individual scores.

however, I understand that such thing is too much for manual testing.

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
We are but packets in the Internet of life (userfriendly.org)


Re: Extreme scores from FRNAME rules.

2018-10-25 Thread Reio Remma

On 25/10/2018 11:43, Matus UHLAR - fantomas wrote:

On 25/10/2018 10:33, Matus UHLAR - fantomas wrote:

bug number would help more...


On 25.10.18 10:58, Reio Remma wrote:
The bug contains no additional info. :) I was simply asked to post to 
the list.


and this is exactly why it would be better to post the link to the 
bug, or
at least the bug number, instead of just link to the attachment... 


No worries. Here it is:

https://bz.apache.org/SpamAssassin/show_bug.cgi?id=7644



Re: Extreme scores from FRNAME rules.

2018-10-25 Thread Matus UHLAR - fantomas

On 22.10.18 21:34, Reio Remma wrote:
I have this perfectly legit mail that has a +7.5 score from these 
three rules.


*  2.5 FRNAME_IN_MSG_XPRIO From name in message + X-Priority
*  2.5 XPRIO_SHORT_SUBJ Has X-Priority header + short subject
*  2.5 FRNAME_IN_MSG_NO_SUBJ From name in message + short or no subject

If it wasn't for the -1.9 from Bayes and -2.6 from TxRep, it would 
have been thrown away.


Should these XPRIO/FRNAME rules stack like this?

The e-mail in question is available here:

https://bz.apache.org/SpamAssassin/attachment.cgi?id=5607



On 25/10/2018 10:33, Matus UHLAR - fantomas wrote:

bug number would help more...


On 25.10.18 10:58, Reio Remma wrote:
The bug contains no additional info. :) I was simply asked to post to 
the list.


and this is exactly why it would be better to post the link to the bug, or
at least the bug number, instead of just link to the attachment...

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Linux - It's now safe to turn on your computer.
Linux - Teraz mozete pocitac bez obav zapnut.


Re: Extreme scores from FRNAME rules.

2018-10-25 Thread Reio Remma

On 25/10/2018 10:33, Matus UHLAR - fantomas wrote:

On 22.10.18 21:34, Reio Remma wrote:
I have this perfectly legit mail that has a +7.5 score from these 
three rules.


*  2.5 FRNAME_IN_MSG_XPRIO From name in message + X-Priority
*  2.5 XPRIO_SHORT_SUBJ Has X-Priority header + short subject
*  2.5 FRNAME_IN_MSG_NO_SUBJ From name in message + short or no subject

If it wasn't for the -1.9 from Bayes and -2.6 from TxRep, it would 
have been thrown away.


Should these XPRIO/FRNAME rules stack like this?

The e-mail in question is available here:

https://bz.apache.org/SpamAssassin/attachment.cgi?id=5607


bug number would help more... 


The bug contains no additional info. :) I was simply asked to post to 
the list.


Reio


Re: Extreme scores from FRNAME rules.

2018-10-25 Thread Matus UHLAR - fantomas

On 22.10.18 21:34, Reio Remma wrote:

I have this perfectly legit mail that has a +7.5 score from these three rules.

*  2.5 FRNAME_IN_MSG_XPRIO From name in message + X-Priority
*  2.5 XPRIO_SHORT_SUBJ Has X-Priority header + short subject
*  2.5 FRNAME_IN_MSG_NO_SUBJ From name in message + short or no subject

If it wasn't for the -1.9 from Bayes and -2.6 from TxRep, it would have been 
thrown away.

Should these XPRIO/FRNAME rules stack like this?

The e-mail in question is available here:

https://bz.apache.org/SpamAssassin/attachment.cgi?id=5607


bug number would help more...

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Posli tento mail 100 svojim znamim - nech vidia aky si idiot
Send this email to 100 your friends - let them see what an idiot you are


Re: Extreme scores from FRNAME rules.

2018-10-22 Thread John Hardin

On Mon, 22 Oct 2018, Reio Remma wrote:


Hello!

I have this perfectly legit mail that has a +7.5 score from these three 
rules.


*  2.5 FRNAME_IN_MSG_XPRIO From name in message + X-Priority
*  2.5 XPRIO_SHORT_SUBJ Has X-Priority header + short subject
*  2.5 FRNAME_IN_MSG_NO_SUBJ From name in message + short or no subject

If it wasn't for the -1.9 from Bayes and -2.6 from TxRep, it would have been 
thrown away.


Should these XPRIO/FRNAME rules stack like this?

The e-mail in question is available here:

https://bz.apache.org/SpamAssassin/attachment.cgi?id=5607


I checked in some changes to reduce the overlap in the FRNAME rules. The 
reason they are scoring that high even with overlap is that those are strong 
spam signs in the masscheck corpus.


And: Bayes and TxRep did exactly what they are supposed to do here.


--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  Justice is justice, whereas "social justice" is code for one set
  of rules for the rich, another for the poor; one set for whites,
  another set for minorities; one set for straight men, another for
  women and gays. In short, it's the opposite of actual justice.
-- Burt Prelutsky
---
 571 days since the first commercial re-flight of an orbital booster (SpaceX)


Extreme scores from FRNAME rules.

2018-10-22 Thread Reio Remma

Hello!

I have this perfectly legit mail that has a +7.5 score from these three rules.

*  2.5 FRNAME_IN_MSG_XPRIO From name in message + X-Priority
*  2.5 XPRIO_SHORT_SUBJ Has X-Priority header + short subject
*  2.5 FRNAME_IN_MSG_NO_SUBJ From name in message + short or no subject

If it wasn't for the -1.9 from Bayes and -2.6 from TxRep, it would have been 
thrown away.

Should these XPRIO/FRNAME rules stack like this?

The e-mail in question is available here:

https://bz.apache.org/SpamAssassin/attachment.cgi?id=5607

Thanks!
Reio



Re: Increase scores based on lewd body text

2018-05-03 Thread RW
On Thu, 3 May 2018 10:38:14 -0300
Steve Mallett wrote:

> Didn't cc users@
> 
> How do I add a non sa-compile ruleset to spamassassin? The googles
> are not helping.
> 

If  you mean non sa-update, you put them in the directory that
contains the global configuration file local.cf. In Linux this is
usually /etc/mail/spamassassin.

More generally  you can find it like this:


$ grep  LOCAL_RULES_DIR   `which spamassassin`
my $LOCAL_RULES_DIR = '/usr/local/etc/mail/spamassassin';#



Re: Fwd: Increase scores based on lewd body text

2018-05-03 Thread Benny Pedersen

Steve Mallett skrev den 2018-05-03 15:38:

Didn't cc users@

How do I add a non sa-compile ruleset to spamassassin? The googles are
not helping.


non sa-compile ?

showing what you have tried would help us to help you more

all rules must support sa-compile

else spamassassin --lint will fail

i don't know where local.cf is in ubuntu, but your edits must go in this 
file


Fwd: Increase scores based on lewd body text

2018-05-03 Thread Steve Mallett
Didn't cc users@

How do I add a non sa-compile ruleset to spamassassin? The googles are not
helping.

on Ubuntu16

Steve

On Tue, May 1, 2018 at 7:52 PM, Kevin A. McGrail <kmcgr...@apache.org>
wrote:

> I have several rules for sexually explicit content in KAM.cf.  See
> https://www.pccc.com/downloads/SpamAssassin/contrib/KAM.cf
>
> --
> Kevin A. McGrail
> Asst. Treasurer & VP Fundraising, Apache Software Foundation
> Chair Emeritus Apache SpamAssassin Project
> https://www.linkedin.com/in/kmcgrail - 703.798.0171
>
> On Tue, May 1, 2018 at 6:42 PM, Steve Mallett <s...@iioo.co> wrote:
>
>>
>> Hi,
>>
>> I have mboxs I'm running spamassassin against & many emails with very
>> lewd body text have the same scores as other emails without.
>>
>> I'm invoking via: formail -s procmail ~/procmail.rc < mbox
>>
>> SA V: 3.4.1
>>
>> Running on Ubuntu 16.04LTS
>>
How can I increase the scores on bad words in body text and/or is there a
recipe specifically for that type of thing?
>>
>>
>> Steve
>>
>
>


Re: Increase scores based on lewd body text

2018-05-01 Thread Benny Pedersen

Steve Mallett skrev den 2018-05-02 00:42:


How can I increase the scores on bad words in body text and/or is there
a recipe specifically for that type of thing?


# add to local.cf

body FOO /foo/i
describe FOO foo found in body
score FOO 0.01 0.01 0.01 0.01

additionally you can add

tflags FOO learn autolearn_force

that will make bayes learning based on foo found in body

for more help, post a sample to pastebin.com and give links here


Re: Increase scores based on lewd body text

2018-05-01 Thread Kevin A. McGrail
I have several rules for sexually explicit content in KAM.cf.  See
https://www.pccc.com/downloads/SpamAssassin/contrib/KAM.cf

--
Kevin A. McGrail
Asst. Treasurer & VP Fundraising, Apache Software Foundation
Chair Emeritus Apache SpamAssassin Project
https://www.linkedin.com/in/kmcgrail - 703.798.0171

On Tue, May 1, 2018 at 6:42 PM, Steve Mallett <s...@iioo.co> wrote:

>
> Hi,
>
> I have mboxs I'm running spamassassin against & many emails with very lewd
> body text have the same scores as other emails without.
>
> I'm invoking via: formail -s procmail ~/procmail.rc < mbox
>
> SA V: 3.4.1
>
> Running on Ubuntu 16.04LTS
>
> How can I increase the scores on bad words in body text and/or is there a
> recipe specifically for that type of thing?
>
>
> Steve
>


Increase scores based on lewd body text

2018-05-01 Thread Steve Mallett
Hi,

I have mboxes I'm running spamassassin against, and many emails with very lewd
body text have the same scores as other emails without.

I'm invoking via: formail -s procmail ~/procmail.rc < mbox

SA V: 3.4.1

Running on Ubuntu 16.04LTS

How can I increase the scores on bad words in body text and/or is there a
recipe specifically for that type of thing?


Steve


Re: Differing scores on spamassassin checks

2018-04-17 Thread John Hardin

On Tue, 17 Apr 2018, John Hardin wrote:


On Tue, 17 Apr 2018, Computer Bob wrote:

In this way, any user can move a mail to their .SpamLearn folder and it 
will get learned.


It is a very bad idea to do that without review unless you *strongly* trust 
the judgement and responsibility of your users.


Exception: per-user Bayes. If you're doing that you can let them suffer 
the wages of their own folly without negatively impacting other users.


--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  Our government should bear in mind the fact that the American
  Revolution was touched off by the then-current government
  attempting to confiscate firearms from the people.
---
 2 days until the 243rd anniversary of The Shot Heard 'Round The World


Re: Differing scores on spamassassin checks

2018-04-17 Thread RW
On Tue, 17 Apr 2018 11:19:57 -0500
Computer Bob wrote:


> The problem I immediately see is that I get one big bayes of everyone 
> and a 'one for all, all for one' bayes config.
> I would like to configure SA to be able to deal with the virtual
> users individually somehow but don't know if it can (and requires
> source analysis).

There are two ways of doing this, one is to setup virtual home
directories by adding the following to the spamd options


-x  -c --virtual-config-dir=

How to set the pattern is described in the spamd documentation.

The other way is to store everything in an SQL database.
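As an illustration only, a spamd invocation for per-user virtual homes might look like the line below; the path layout is hypothetical, and %d/%l are the domain/local-part macros described in the spamd documentation:

```
spamd -x -q -u spamd --virtual-config-dir=/var/vmail/%d/%l/spamassassin
```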


Re: Differing scores on spamassassin checks

2018-04-17 Thread John Hardin

On Tue, 17 Apr 2018, Computer Bob wrote:

In this way, any user can move a mail to their .SpamLearn folder and it 
will get learned.


It is a very bad idea to do that without review unless you *strongly* 
trust the judgement and responsibility of your users.


Allowing training without review may be suitable for a small subset of 
trusted users, but in general, users will classify as spam "anything I 
don't want" even if it's something they *did* subscribe to from a vendor 
they *do* have a business relationship with.


The "learn as spam" folder will be treated as an easier alternative to 
hitting the "unsubscribe" link in emails, in part because we've been 
training users to *not* click on unsubscribe links in emails from 
businesses they don't have any legitimate interaction with, and all they 
hear is the "don't click on unsubscribe links" part - the other part 
requires actual *judgement*.


Also: for performance reasons you really should relocate those messages 
once they've been learned, but do keep those messages permanently as your 
Bayes training corpus, so that (1) you can review the users' 
classifications and correct any mistraining, and (2) you can easily 
rebuild Bayes from scratch if it goes off the rails.



--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  Our government should bear in mind the fact that the American
  Revolution was touched off by the then-current government
  attempting to confiscate firearms from the people.
---
 2 days until the 243rd anniversary of The Shot Heard 'Round The World


Re: Differing scores on spamassassin checks

2018-04-17 Thread Computer Bob

I would like to thank everyone for your responses, they have been great.
This mailing list has not failed to help me improve things every time I use it.

So this particular server has virtual domains and virtual users in a 
folder hierarchy there under all owned by 'vmail' user.

I have done the following:
1)  Installed a SiteWideBayesSetup config _without_ the 0777 set, which 
seems to work for all virtual users regardless of their virtual domain.
2)  Config'd mail folders to be created in the mail folder hierarchy 
under each user, called .SpamLearn with a subfolder of .Learned.
3)  Set up a cron to run periodically under user 'vmail', perusing all 
.SpamLearn folders, running sa-learn as the 'vmail' user on the messages 
found, and subsequently moving them to the corresponding .Learned folders.


In this way, any user can move a mail to their .SpamLearn folder and it 
will get learned.
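A minimal sketch of such a cron script, with hypothetical paths (the demo creates one fake virtual user under /tmp so the loop has something to process, and it only calls sa-learn if that binary is actually installed):

```shell
#!/bin/sh
# Sketch of the learn-then-move cron job; all paths are hypothetical.
# Demo setup: one virtual user with one queued message.
MAILROOT="${MAILROOT:-/tmp/vmail-demo}"
mkdir -p "$MAILROOT/example.com/user1/.SpamLearn/cur" \
         "$MAILROOT/example.com/user1/.SpamLearn/.Learned/cur"
echo "demo" > "$MAILROOT/example.com/user1/.SpamLearn/cur/msg1"

# Walk every .SpamLearn folder, learn each message, then move it aside.
find "$MAILROOT" -type d -name .SpamLearn | while read -r dir; do
    for msg in "$dir"/cur/*; do
        [ -f "$msg" ] || continue
        # learn as spam only if sa-learn is actually installed
        if command -v sa-learn >/dev/null 2>&1; then
            sa-learn --spam "$msg" >/dev/null
        fi
        mv "$msg" "$dir/.Learned/cur/"    # keep as training corpus
    done
done
ls "$MAILROOT/example.com/user1/.SpamLearn/.Learned/cur"
```

In the real setup the script would run from vmail's crontab over the actual mail store instead of the demo tree.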

Have I had too many beers ? or not enough ?
The problem I immediately see is that I get one big bayes of everyone 
and a 'one for all, all for one' bayes config.
I would like to configure SA to be able to deal with the virtual users 
individually somehow but don't know if it can (and requires source 
analysis).


In any event, it seems to be working pretty well and most all of the 
spam is apparently getting caught.

And no 'root' involvement...
Thanks to all respondents.


Re: Differing scores on spamassassin checks

2018-04-17 Thread RW
On Tue, 17 Apr 2018 15:44:25 +0200
Matus UHLAR - fantomas wrote:

> >> On 15.04.18 20:04, RW wrote:  
> >> >All setting bayes_path buys you here is the ability to run
> >> >sa-learn and spamassassin as root, something you should *never*
> >> >do anyway.  
> 
> >On Tue, 17 Apr 2018 13:55:13 +0200
> >Matus UHLAR - fantomas wrote:  
> >> it's the only way to use per-user settings and bayes DB on system
> >> with unix users.  
> 
> On 17.04.18 13:43, RW wrote:
> >spamd does, but not sa-learn or spamassassin.  
> 
> sa-learn and spamassassin use current user. this way people can use
> either, but spamd is most effective

I'm not sure what you're trying to say here. My original point was that
sa-learn and spamassassin shouldn't be run as root (to access global
databases owned by the unprivileged user running spamd). 

spamd is different because it never processes the contents of an email
as root.


Re: Differing scores on spamassassin checks

2018-04-17 Thread Matus UHLAR - fantomas

On 15.04.18 20:04, RW wrote:
>All setting bayes_path buys you here is the ability to run sa-learn
>and spamassassin as root, something you should *never* do anyway.



On Tue, 17 Apr 2018 13:55:13 +0200
Matus UHLAR - fantomas wrote:

it's the only way to use per-user settings and bayes DB on system
with unix users.


On 17.04.18 13:43, RW wrote:

spamd does, but not sa-learn or spamassassin.


sa-learn and spamassassin use the current user. this way people can use either,
but spamd is most effective
(and great combined with sa-milter)
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I intend to live forever - so far so good. 


Re: Differing scores on spamassassin checks

2018-04-17 Thread RW
On Tue, 17 Apr 2018 13:55:13 +0200
Matus UHLAR - fantomas wrote:


> On 15.04.18 20:04, RW wrote:

> >All setting bayes_path buys you here is the ability to run sa-learn
> >and spamassassin as root, something you should *never* do anyway.  
> 
> it's the only way to use per-user settings and bayes DB on system
> with unix users.

spamd does, but not sa-learn or spamassassin.


Re: Differing scores on spamassassin checks

2018-04-17 Thread Matus UHLAR - fantomas

On Sun, 15 Apr 2018 13:39:31 -0500
Computer Bob wrote:


Update:
For this location, it is ok to have a central bayes database, so I
turned off AWL, adjusted local.cf to contain:
bayes_path /Central_Path/bayes_db/bayes
bayes_file_mode 0777


On 15.04.18 20:04, RW wrote:

Don't set 0777. If that's still in the wiki someone with access should
remove it.

All setting bayes_path buys you here is the ability to run sa-learn and
spamassassin as root, something you should *never* do anyway.


it's the only way to use per-user settings and bayes DB on system with unix
users.


If you run spamd as the unix user spamd, with "-u spamd", then spamd
looks for files in ~spamd, which is where it was finding them when you
(correctly) ran spamassassin as spamd.


It's quite possibly what happens just now, which is why the spamd user's bayes DB gets used.

in such case bayes_path is not needed.

just the spamassassin and sa-learn should be done under spamd user.
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I drive way too fast to worry about cholesterol. 


Re: Differing scores on spamassassin checks

2018-04-16 Thread Bill Cole

On 16 Apr 2018, at 19:01 (-0400), John Hardin wrote:


On Mon, 16 Apr 2018, Computer Bob wrote:


Why should sa-learn not be run as root ?


That's a general safe practice. Do as little as root as you possibly 
can. Why risk a root crack from an unknown bug in sa-learn that 
somebody has discovered and figured out how to exploit via email?


Right: don't let malicious strangers talk to root, even via email.

ALSO: sa-learn itself won't stop you from running it as root. Without a 
global bayes_path, it will learn into ~root/.spamassassin/bayes_* files 
which no other user can access and spamd can't even TRY to use because 
it refuses to run as root and drops to 'nobody' if run by root. With a 
global bayes_path, the bayes_* files will become owned by root and 
everything else trying to use them (i.e. everything) will fail.


--
Bill Cole
b...@scconsult.com or billc...@apache.org
(AKA @grumpybozo and many *@billmail.scconsult.com addresses)
Currently Seeking Steady Work: https://linkedin.com/in/billcole


Re: Differing scores on spamassassin checks

2018-04-16 Thread John Hardin

On Mon, 16 Apr 2018, Computer Bob wrote:


Why should sa-learn not be run as root ?


That's a general safe practice. Do as little as root as you possibly can. 
Why risk a root crack from an unknown bug in sa-learn that somebody has 
discovered and figured out how to exploit via email?


--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  Ten-millimeter explosive-tip caseless, standard light armor
  piercing rounds. Why?
---
 3 days until the 243rd anniversary of The Shot Heard 'Round The World


Re: Differing scores on spamassassin checks

2018-04-16 Thread Computer Bob

Well, now I am more thoroughly confused than usual. #:)

On 4/15/18 2:04 PM, RW wrote:

On Sun, 15 Apr 2018 13:39:31 -0500
Computer Bob wrote:

Update:
For this location, it is ok to have a central bayes database, so I
turned off AWL, adjusted local.cf to contain:
bayes_path /Central_Path/bayes_db/bayes
bayes_file_mode 0777

Don't set 0777. If that's still in the wiki someone with access should
remove it.

So is the SiteWideBayesSetup ok to run without the 0777 ?

All setting bayes_path buys you here is the ability to run sa-learn and
spamassassin as root, something you should *never* do anyway.

This seems contrary to 
https://wiki.apache.org/spamassassin/SiteWideBayesSetup, does it not?

Why should sa-learn not be run as root ?


If you run spamd as the unix user spamd, with "-u spamd", then spamd
looks for files in ~spamd, which is where it was finding them when you
(correctly) ran spamassassin as spamd.

The /etc/init.d/spamassassin init script is not starting spamd with -u, 
only -D, but mail processing in the logs clearly shows:
Apr 16 17:31:13 M1-2 spamd[3926]: spamd: connection from localhost 
[127.0.0.1]:49938 to port 783, fd 5
Apr 16 17:31:13 M1-2 spamd[3926]: spamd: setuid to spamd succeeded 
<---changing here***
Apr 16 17:31:13 M1-2 spamd[3926]: spamd: processing message 
 for 
spamd:1001
Apr 16 17:31:13 M1-2 postfix/smtpd[4248]: disconnect from 
mail.microcenter.com[66.194.187.30] ehlo=2 starttls=1 mail=1 rcpt=1 
data=1 quit=1 commands=7
Apr 16 17:31:19 M1-2 spamd[3926]: spamd: clean message (1.7/4.0) for 
spamd:1001 in 6.0 seconds, 30321 bytes.


This setup is running all virtual users and virtual domains via mysql 
and the logs show mail traversing the spamd daemon.
The spamd daemon is running as user spamd and does seem to be using the 
SiteWide files specified.




Re: Differing scores on spamassassin checks

2018-04-16 Thread Amir Caspi
On Apr 16, 2018, at 11:15 AM, RW  wrote:
> 
> You seem to be confusing unix and virtual users.

Sorry, I was confusing "virtual hosting" with "virtual users."  Oops.

Ignore me!

--- Amir



Re: Differing scores on spamassassin checks

2018-04-16 Thread RW
On Mon, 16 Apr 2018 10:34:41 -0600
Amir Caspi wrote:

> > On Apr 15, 2018, at 12:39 PM, Computer Bob 
> > wrote:
> > 
> > I still am a bit puzzled how bayes db gets handled when using
> > virtual users and domains. I see no trace of bayes or .spamassassin
> > files in any of the virtual locations or in the sql databases.  
> 
> If you want Bayes to run per-user with virtual hosts then you need to
> use some sort of glue for each user to invoke spamd as their own
> user.  This is typically done by running spamd as root (without the
> -u flag) and enabling per-user settings (-cH) and then using global
> (or per-user) procmail line to invoke spamc with the -u flag.  But
> that's not the default behavior for SA, unless it was packaged that
> way by your virtual hosting software (e.g., Parallels Pro née Ensim
> did it that way).

You seem to be confusing unix and virtual users.


Re: Differing scores on spamassassin checks

2018-04-16 Thread Amir Caspi
> On Apr 15, 2018, at 12:39 PM, Computer Bob  wrote:
> 
> I still am a bit puzzled how bayes db gets handled when using virtual users 
> and domains. I see no trace of bayes or .spamassassin files in any of the 
> virtual locations or in the sql databases.

If you want Bayes to run per-user with virtual hosts then you need to use some 
sort of glue for each user to invoke spamd as their own user.  This is 
typically done by running spamd as root (without the -u flag) and enabling 
per-user settings (-cH) and then using global (or per-user) procmail line to 
invoke spamc with the -u flag.  But that's not the default behavior for SA, 
unless it was packaged that way by your virtual hosting software (e.g., 
Parallels Pro née Ensim did it that way).

But if you're trying to use Bayes with mySQL or Redis, that can't be done 
per-user AFAIK.
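For reference, a site-wide (shared) SQL Bayes setup in local.cf looks roughly like this; the DSN, credentials and the override user name are placeholders, and the schema details are in the sql/README.bayes file shipped with SpamAssassin:

```
bayes_store_module          Mail::SpamAssassin::BayesStore::MySQL
bayes_sql_dsn               DBI:mysql:bayes:localhost
bayes_sql_username          bayes
bayes_sql_password          secret
# force every scan into one shared Bayes user (site-wide database)
bayes_sql_override_username shared
```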

Cheers.

--- Amir



Re: Differing scores on spamassassin checks

2018-04-15 Thread RW
On Sun, 15 Apr 2018 13:39:31 -0500
Computer Bob wrote:

> Update:
> For this location, it is ok to have a central bayes database, so I 
> turned off AWL, adjusted local.cf to contain:
> bayes_path /Central_Path/bayes_db/bayes
> bayes_file_mode 0777

Don't set 0777. If that's still in the wiki someone with access should
remove it.

All setting bayes_path buys you here is the ability to run sa-learn and
spamassassin as root, something you should *never* do anyway. 

If you run spamd as the unix user spamd, with "-u spamd", then spamd
looks for files in ~spamd, which is where it was finding them when you
(correctly) ran spamassassin as spamd.
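By way of comparison, a shared-database setup that avoids world-writable files could look like this (path and mode are illustrative; the directory must be owned by the unprivileged user spamd runs as):

```
bayes_path      /var/lib/spamassassin/bayes/bayes
bayes_file_mode 0660
```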



On Sun, 15 Apr 2018 13:39:46 -0500
Computer Bob wrote:

> I still am a bit puzzled how bayes db gets handled when using virtual 
> users and domains. I see no trace of bayes or .spamassassin files in
> any of the virtual locations or in the sql databases.

It doesn't do that by default.


Re: Differing scores on spamassassin checks

2018-04-15 Thread RW
On Sun, 15 Apr 2018 11:08:35 -0700 (PDT)
John Hardin wrote:

> On Sun, 15 Apr 2018, Matus UHLAR - fantomas wrote:
> 
> > On 15.04.18 11:55, Computer Bob wrote:  
> >> Here is a root scan:  https://pastebin.com/qdXMRzKb  
> >
> > X-Spam-Status: Yes, score=10.2 required=4.0 tests=HTML_MESSAGE,
> >RAZOR2_CF_RANGE_51_100,RAZOR2_CHECK,RCVD_IN_SBL_CSS,SPF_HELO_PASS,
> >URIBL_DBL_SPAM autolearn=no autolearn_force=no version=3.4.1
> >  
> >> Here is the same run under spamd: https://pastebin.com/SvvYptYv  
> >
> > X-Spam-Status: No, score=2.5 required=4.0
> > tests=AWL,BAYES_00,HTML_MESSAGE,
> > RAZOR2_CF_RANGE_51_100,RAZOR2_CHECK,RCVD_IN_SBL_CSS,SPF_HELO_PASS
> > autolearn=no autolearn_force=no version=3.4.1
> >
> > the main two differences are AWL and BAYES_00 which means
> >
> > 1. your spamd's bayes database is mistrained
> > 2. you apparently should disable AWL at least until you train bayes
> > properly.  
> 
> Actually, it's using user-specific (vs. global) bayes databases, and 
> apparently only root's database is being trained.

No that's not correct. The version run as spamd is using files under
~spamd and has BAYES_00, the version run as root is using files under
~root and hasn't been trained.



Re: Differing scores on spamassassin checks

2018-04-15 Thread Matus UHLAR - fantomas

On 15.04.18 11:55, Computer Bob wrote:

Here is a root scan:  https://pastebin.com/qdXMRzKb



On Sun, 15 Apr 2018, Matus UHLAR - fantomas wrote:

X-Spam-Status: Yes, score=10.2 required=4.0 tests=HTML_MESSAGE,
  RAZOR2_CF_RANGE_51_100,RAZOR2_CHECK,RCVD_IN_SBL_CSS,SPF_HELO_PASS,
  URIBL_DBL_SPAM autolearn=no autolearn_force=no version=3.4.1



Here is the same run under spamd: https://pastebin.com/SvvYptYv



X-Spam-Status: No, score=2.5 required=4.0 tests=AWL,BAYES_00,HTML_MESSAGE,
  RAZOR2_CF_RANGE_51_100,RAZOR2_CHECK,RCVD_IN_SBL_CSS,SPF_HELO_PASS
  autolearn=no autolearn_force=no version=3.4.1



the main two differences are AWL and BAYES_00 which means

1. your spamd's bayes database is mistrained
2. you apparently should disable AWL at least until you train bayes
properly.



On Sun, 15 Apr 2018, John Hardin wrote:
Actually, it's using user-specific (vs. global) bayes databases, 
and apparently only root's database is being trained.


Define a shared Bayes database that all users can read, and use that.


On 15.04.18 11:13, John Hardin wrote:

...or train as spamd rather than as root...


the root's BAYES DB seems untrained.
the spamd's is trained, but badly (re-training should help there).

the question is:

how is spamassassin used? running spamd? does spamd run with "-u" option?

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Depression is merely anger without enthusiasm. 


Re: Differing scores on spamassassin checks

2018-04-15 Thread John Hardin

On Sun, 15 Apr 2018, John Hardin wrote:


On Sun, 15 Apr 2018, Matus UHLAR - fantomas wrote:


On 15.04.18 11:55, Computer Bob wrote:

Here is a root scan:  https://pastebin.com/qdXMRzKb


X-Spam-Status: Yes, score=10.2 required=4.0 tests=HTML_MESSAGE,
   RAZOR2_CF_RANGE_51_100,RAZOR2_CHECK,RCVD_IN_SBL_CSS,SPF_HELO_PASS,
   URIBL_DBL_SPAM autolearn=no autolearn_force=no version=3.4.1


Here is the same run under spamd: https://pastebin.com/SvvYptYv


X-Spam-Status: No, score=2.5 required=4.0 tests=AWL,BAYES_00,HTML_MESSAGE,
   RAZOR2_CF_RANGE_51_100,RAZOR2_CHECK,RCVD_IN_SBL_CSS,SPF_HELO_PASS
   autolearn=no autolearn_force=no version=3.4.1

the main two differences are AWL and BAYES_00 which means

1. your spamd's bayes database is mistrained
2. you apparently should disable AWL at least until you train bayes
properly.


Actually, it's using user-specific (vs. global) bayes databases, and 
apparently only root's database is being trained.


Define a shared Bayes database that all users can read, and use that.


...or train as spamd rather than as root...

--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  Our government should bear in mind the fact that the American
  Revolution was touched off by the then-current government
  attempting to confiscate firearms from the people.
---
 4 days until the 243rd anniversary of The Shot Heard 'Round The World

Re: Differing scores on spamassassin checks

2018-04-15 Thread John Hardin

On Sun, 15 Apr 2018, Matus UHLAR - fantomas wrote:


On 15.04.18 11:55, Computer Bob wrote:

Here is a root scan:  https://pastebin.com/qdXMRzKb


X-Spam-Status: Yes, score=10.2 required=4.0 tests=HTML_MESSAGE,
   RAZOR2_CF_RANGE_51_100,RAZOR2_CHECK,RCVD_IN_SBL_CSS,SPF_HELO_PASS,
   URIBL_DBL_SPAM autolearn=no autolearn_force=no version=3.4.1


Here is the same run under spamd: https://pastebin.com/SvvYptYv


X-Spam-Status: No, score=2.5 required=4.0 tests=AWL,BAYES_00,HTML_MESSAGE,
   RAZOR2_CF_RANGE_51_100,RAZOR2_CHECK,RCVD_IN_SBL_CSS,SPF_HELO_PASS
   autolearn=no autolearn_force=no version=3.4.1

the main two differences are AWL and BAYES_00 which means

1. your spamd's bayes database is mistrained
2. you apparently should disable AWL at least until you train bayes
properly.


Actually, it's using user-specific (vs. global) bayes databases, and 
apparently only root's database is being trained.


Define a shared Bayes database that all users can read, and use that.

--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  Our government should bear in mind the fact that the American
  Revolution was touched off by the then-current government
  attempting to confiscate firearms from the people.
---
 4 days until the 243rd anniversary of The Shot Heard 'Round The World

Re: Differing scores on spamassassin checks

2018-04-15 Thread Matus UHLAR - fantomas

On 15.04.18 11:55, Computer Bob wrote:

Here is a root scan:  https://pastebin.com/qdXMRzKb


X-Spam-Status: Yes, score=10.2 required=4.0 tests=HTML_MESSAGE,
RAZOR2_CF_RANGE_51_100,RAZOR2_CHECK,RCVD_IN_SBL_CSS,SPF_HELO_PASS,
URIBL_DBL_SPAM autolearn=no autolearn_force=no version=3.4.1


Here is the same run under spamd: https://pastebin.com/SvvYptYv


X-Spam-Status: No, score=2.5 required=4.0 tests=AWL,BAYES_00,HTML_MESSAGE,
RAZOR2_CF_RANGE_51_100,RAZOR2_CHECK,RCVD_IN_SBL_CSS,SPF_HELO_PASS
autolearn=no autolearn_force=no version=3.4.1

the main two differences are AWL and BAYES_00 which means

1. your spamd's bayes database is mistrained
2. you apparently should disable AWL at least until you train bayes
properly.



On 4/15/18 11:34 AM, Computer Bob wrote:

Greetings all,
I have had some issues with spam getting low scores, and in 
troubleshooting I have found that if I run a command line check 
with "spamassassin -D -x < test" on a mail in question, I get a 
very high score when run under user root. When run under user spamd 
it gets a low passing score. This is on obvious spam mail. Any 
advice on how to determine what the difference is?




--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Atheism is a non-prophet organization. 


Re: Differing scores on spamassassin checks

2018-04-15 Thread Computer Bob

Here is a root scan:  https://pastebin.com/qdXMRzKb
Here is the same run under spamd: https://pastebin.com/SvvYptYv



On 4/15/18 11:34 AM, Computer Bob wrote:

Greetings all,
I have had some issues with spam getting low scores, and in
troubleshooting I have found that if I run a command-line check
with "spamassassin -D -x < test" on a mail in question, I get a
very high score when run under user root. When run under user spamd
it gets a low passing score. This is on obvious spam mail. Any
advice on how to determine what the difference is?




Differing scores on spamassassin checks

2018-04-15 Thread Computer Bob

Greetings all,
I have had some issues with spam getting low scores, and in
troubleshooting I have found that if I run a command-line check
with "spamassassin -D -x < test" on a mail in question, I get a
very high score when run under user root. When run under user spamd
it gets a low passing score. This is on obvious spam mail. Any
advice on how to determine what the difference is?


Re: spamasssassin vs mimedefang scores

2018-02-22 Thread Bill Cole

On 22 Feb 2018, at 4:15, saqariden wrote:


Hello guys,

I'm using mimedefang with SpamAssassin; when I test an email with the
command "spamassassin -t file.eml", I get results like this:

Message analysis details:   (-5.8 points, 3.0 required)
-5.0 RCVD_IN_DNSWL_HI       RBL: Sender listed at http://www.dnswl.org/,
                            high trust
                            [70.38.112.54 listed in list.dnswl.org]
-1.9 BAYES_00               BODY: The Bayesian algorithm rated the spam
                            probability at between 0 and 1%
                            [score: 0.]
 0.8 RDNS_NONE              Delivered to internal network by a host
                            with no rDNS
 0.3 TO_EQ_FM_DOM_SPF_FAIL  To domain == From domain and external SPF
                            failed

However, the SA check done through mimedefang seems to give different
scores. How can I test an email to get those scores and see the
difference?


Typically mimedefang runs as its own special user (e.g. 'defang') which 
may be configured to block normal interactive use or even simple 'su' 
use by root. This means that if you run 'spamassassin -t' in an 
interactive shell, you use the user_prefs, AWL/TxRep and BayesDB for the 
user running that shell, not the special user. This is particularly 
problematic for 'learning' ham and spam for the BayesDB, because it is 
easy to end up either training into a DB that is entirely separate from 
the system-wide one used by mimedefang OR working with the system-wide 
DBs in ways that change ownership of them so that mimedefang can't use 
them.


My solution for this is to use sudo and these shell aliases:

alias satest='sudo -H -u defang spamassassin -t'
alias lham='sudo -H -u defang sa-learn --ham --progress'
alias lspam='sudo -H -u defang sa-learn --spam --progress'
alias blspam='sudo -H -u defang spamassassin --add-to-blacklist'
alias reportspam='sudo -H -u defang spamassassin -r -t'
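The -H flag in those aliases is the key detail: spamassassin and sa-learn locate ~/.spamassassin via $HOME, so without -H sudo may keep the caller's HOME and silently use the wrong per-user state. A tiny illustration (the /home/defang path is hypothetical):

```shell
# The per-user state path follows $HOME, not the invoking user:
HOME=/home/defang sh -c 'echo "state dir: $HOME/.spamassassin"'
# -> state dir: /home/defang/.spamassassin

# What sudo without -H can do: the caller's HOME leaks through.
HOME=/root sh -c 'echo "state dir: $HOME/.spamassassin"'
# -> state dir: /root/.spamassassin
```

To confirm which database each identity actually uses, running `sa-learn --dump magic` under each user shows that user's token and message counts.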



Re: spamasssassin vs mimedefang scores

2018-02-22 Thread Kevin A. McGrail

On 2/22/2018 4:15 AM, saqariden wrote:
I'm using mimedefang with SpamAssassin; when I test an email with the
command "spamassassin -t file.eml", I get results like this:

Message analysis details:   (-5.8 points, 3.0 required)
-5.0 RCVD_IN_DNSWL_HI       RBL: Sender listed at http://www.dnswl.org/,
                            high trust
                            [70.38.112.54 listed in list.dnswl.org]
-1.9 BAYES_00               BODY: The Bayesian algorithm rated the spam
                            probability at between 0 and 1%
                            [score: 0.]
 0.8 RDNS_NONE              Delivered to internal network by a host
                            with no rDNS
 0.3 TO_EQ_FM_DOM_SPF_FAIL  To domain == From domain and external SPF
                            failed

However, the SA check done through mimedefang seems to give different
scores. How can I test an email to get those scores and see the
difference?


Network tests and Bayesian tests could change in between runs.

Unless you ran the tests almost concurrently, this could be 
normal/expected behavior.


I love MD, but I don't run SpamAssassin in its space. I use a system
call to spamc and interpret the results. That way I'm always using the
same configuration for SpamAssassin, and I can point it at other
servers easily.


Regards,
KAM


spamasssassin vs mimedefang scores

2018-02-22 Thread saqariden

Hello guys,

I'm using mimedefang with SpamAssassin; when I test an email with the
command "spamassassin -t file.eml", I get results like this:

Message analysis details:   (-5.8 points, 3.0 required)
-5.0 RCVD_IN_DNSWL_HI       RBL: Sender listed at http://www.dnswl.org/,
                            high trust
                            [70.38.112.54 listed in list.dnswl.org]
-1.9 BAYES_00               BODY: The Bayesian algorithm rated the spam
                            probability at between 0 and 1%
                            [score: 0.]
 0.8 RDNS_NONE              Delivered to internal network by a host
                            with no rDNS
 0.3 TO_EQ_FM_DOM_SPF_FAIL  To domain == From domain and external SPF
                            failed

However, the SA check done through mimedefang seems to give different
scores. How can I test an email to get those scores and see the
difference?







Re: Mailspike scores

2017-05-02 Thread John Hardin

On Tue, 2 May 2017, RW wrote:


On Tue, 2 May 2017 09:20:49 -0700 (PDT)
John Hardin wrote:


On Tue, 2 May 2017, Bowie Bailey wrote:


On 5/2/2017 11:53 AM, John Hardin wrote:

 On Tue, 2 May 2017, Bowie Bailey wrote:


 I was checking to see what the scores for mailspike were on my
server and I noticed that there are two sets of scores.

 Is this expected?


 50_scores is handcoded default scores, 72_scores is generated from
 masscheck


That's what I thought, but what's the point of having handcoded
scores that are overridden by the generated scores?


Reasonable default values are always a good idea.


It looks like 72_scores only holds scores for rules from 72_active.cf
plus scores for MSPIKE rules and SURBL_BLOCKED (what's that doing
there?).

If 72_scores holds generated scores, why are so many of them 1.000?


Limits?


You say that 50_scores is "handcoded default scores", but at the top
of the file there's a big block of scores that are claimed to have been
set by perceptron, which I understand fell out of use a long time ago.
They certainly look autogenerated.


Check the revision history in SVN.

   http://svn.apache.org/viewvc/spamassassin/trunk/rules/50_scores.cf


On the face of it, it looks like a lot of the rules I assumed were
autoscored are using scores generated years ago.


Potentially. I have not done a detailed review.


--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  Maxim IV: Close air support covereth a multitude of sins.
---
 6 days until the 72nd anniversary of VE day


Re: Mailspike scores

2017-05-02 Thread RW
On Tue, 2 May 2017 09:20:49 -0700 (PDT)
John Hardin wrote:

> On Tue, 2 May 2017, Bowie Bailey wrote:
> 
> > On 5/2/2017 11:53 AM, John Hardin wrote:  
> >>  On Tue, 2 May 2017, Bowie Bailey wrote:
> >>   
> >> >  I was checking to see what the scores for mailspike were on my
> >> > server and I noticed that there are two sets of scores.
> >> > 
> >> >  Is this expected?  
> >>
> >>  50_scores is handcoded default scores, 72_scores is generated from
> >>  masscheck  
> >
> > That's what I thought, but what's the point of having handcoded
> > scores that are overridden by the generated scores?  
> 
> Reasonable default values are always a good idea.


It looks like 72_scores only holds scores for rules from 72_active.cf
plus scores for MSPIKE rules and SURBL_BLOCKED (what's that doing
there?).

If 72_scores holds generated scores, why are so many of them 1.000?

You say that 50_scores is "handcoded default scores", but at the top
of the file there's a big block of scores that are claimed to have been
set by perceptron, which I understand fell out of use a long time ago.
They certainly look autogenerated.

On the face of it, it looks like a lot of the rules I assumed were
autoscored are using scores generated years ago.


Re: Mailspike scores

2017-05-02 Thread John Hardin

On Tue, 2 May 2017, Bowie Bailey wrote:


On 5/2/2017 11:53 AM, John Hardin wrote:

 On Tue, 2 May 2017, Bowie Bailey wrote:

>  I was checking to see what the scores for mailspike were on my server 
>  and I noticed that there are two sets of scores.
> 
>  Is this expected?


 50_scores is handcoded default scores, 72_scores is generated from
 masscheck


That's what I thought, but what's the point of having handcoded scores that 
are overridden by the generated scores?


Reasonable default values are always a good idea.

--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  USMC Rules of Gunfighting #4: If your shooting stance is good,
  you're probably not moving fast enough nor using cover correctly.
---
 6 days until the 72nd anniversary of VE day


Re: Mailspike scores

2017-05-02 Thread Bowie Bailey

On 5/2/2017 11:53 AM, John Hardin wrote:

On Tue, 2 May 2017, Bowie Bailey wrote:

I was checking to see what the scores for mailspike were on my server 
and I noticed that there are two sets of scores.


50_scores.cf:  score RCVD_IN_MSPIKE_ZBI 2.7
50_scores.cf:  score RCVD_IN_MSPIKE_L5  2.5
50_scores.cf:  score RCVD_IN_MSPIKE_L4  1.7
50_scores.cf:  score RCVD_IN_MSPIKE_L3  0.9
50_scores.cf:  score RCVD_IN_MSPIKE_H3  -0.01
50_scores.cf:  score RCVD_IN_MSPIKE_H4  -0.01
50_scores.cf:  score RCVD_IN_MSPIKE_H5  -1.0
50_scores.cf:  score RCVD_IN_MSPIKE_BL  0.01
50_scores.cf:  score RCVD_IN_MSPIKE_WL  -0.01
72_scores.cf:  score RCVD_IN_MSPIKE_BL   0.001  0.010  0.001  0.010
72_scores.cf:  score RCVD_IN_MSPIKE_H2   0.001 -2.800  0.001 -2.800
72_scores.cf:  score RCVD_IN_MSPIKE_H3   0.001 -0.010  0.001 -0.010
72_scores.cf:  score RCVD_IN_MSPIKE_H4   0.001 -0.010  0.001 -0.010
72_scores.cf:  score RCVD_IN_MSPIKE_H5   0.001 -1.000  0.001 -1.000
72_scores.cf:  score RCVD_IN_MSPIKE_L2   0.001  0.001  0.001  0.001
72_scores.cf:  score RCVD_IN_MSPIKE_L3   0.001  0.001  0.001  0.001
72_scores.cf:  score RCVD_IN_MSPIKE_L4   0.001  0.001  0.001  0.001
72_scores.cf:  score RCVD_IN_MSPIKE_L5   0.001  0.001  0.001  0.001
72_scores.cf:  score RCVD_IN_MSPIKE_WL   0.001 -0.010  0.001 -0.010
72_scores.cf:  score RCVD_IN_MSPIKE_ZBI  0.001  0.001  0.001 -1.001


Is this expected?


50_scores is handcoded default scores, 72_scores is generated from 
masscheck


That's what I thought, but what's the point of having handcoded scores 
that are overridden by the generated scores?


--
Bowie


Re: Mailspike scores

2017-05-02 Thread John Hardin

On Tue, 2 May 2017, Bowie Bailey wrote:

I was checking to see what the scores for mailspike were on my server and I 
noticed that there are two sets of scores.


50_scores.cf:  score RCVD_IN_MSPIKE_ZBI 2.7
50_scores.cf:  score RCVD_IN_MSPIKE_L5  2.5
50_scores.cf:  score RCVD_IN_MSPIKE_L4  1.7
50_scores.cf:  score RCVD_IN_MSPIKE_L3  0.9
50_scores.cf:  score RCVD_IN_MSPIKE_H3  -0.01
50_scores.cf:  score RCVD_IN_MSPIKE_H4  -0.01
50_scores.cf:  score RCVD_IN_MSPIKE_H5  -1.0
50_scores.cf:  score RCVD_IN_MSPIKE_BL  0.01
50_scores.cf:  score RCVD_IN_MSPIKE_WL  -0.01
72_scores.cf:  score RCVD_IN_MSPIKE_BL   0.001  0.010  0.001  0.010
72_scores.cf:  score RCVD_IN_MSPIKE_H2   0.001 -2.800  0.001 -2.800
72_scores.cf:  score RCVD_IN_MSPIKE_H3   0.001 -0.010  0.001 -0.010
72_scores.cf:  score RCVD_IN_MSPIKE_H4   0.001 -0.010  0.001 -0.010
72_scores.cf:  score RCVD_IN_MSPIKE_H5   0.001 -1.000  0.001 -1.000
72_scores.cf:  score RCVD_IN_MSPIKE_L2   0.001  0.001  0.001  0.001
72_scores.cf:  score RCVD_IN_MSPIKE_L3   0.001  0.001  0.001  0.001
72_scores.cf:  score RCVD_IN_MSPIKE_L4   0.001  0.001  0.001  0.001
72_scores.cf:  score RCVD_IN_MSPIKE_L5   0.001  0.001  0.001  0.001
72_scores.cf:  score RCVD_IN_MSPIKE_WL   0.001 -0.010  0.001 -0.010
72_scores.cf:  score RCVD_IN_MSPIKE_ZBI  0.001  0.001  0.001 -1.001

Is this expected?


50_scores is handcoded default scores, 72_scores is generated from 
masscheck.
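The mechanics behind the override, as a sketch rather than SpamAssassin's actual parser: rule files are read in lexical order, and the last `score` line seen for a given rule wins, which is why 72_scores.cf beats 50_scores.cf:

```shell
# Emulate config load order: 50_scores.cf is read first, 72_scores.cf
# second; the last 'score' line seen for the rule is the effective one.
tmp=$(mktemp -d)
echo 'score RCVD_IN_MSPIKE_H5 -1.0'                      > "$tmp/50_scores.cf"
echo 'score RCVD_IN_MSPIKE_H5 0.001 -1.000 0.001 -1.000' > "$tmp/72_scores.cf"
cat "$tmp"/*.cf | awk '$1=="score" && $2=="RCVD_IN_MSPIKE_H5" {line=$0} END {print line}'
# -> score RCVD_IN_MSPIKE_H5 0.001 -1.000 0.001 -1.000
rm -rf "$tmp"
```

The same grep-then-keep-last idea is a quick way to check which definition of any rule's score is actually in effect across an installed rules tree.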


--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  How do you argue with people to whom math is an opinion? -- Unknown
---
 6 days until the 72nd anniversary of VE day

