Re: SOLVED Re: Load Average Problems

2004-11-01 Thread jdow
From: John Fleming [EMAIL PROTECTED]
   What am I missing?  - John
 
  OK, from the spamd --help output:
   -m num, --max-children num Allow maximum num children
 
  So that option is positively a spamd thing. So how does one get that
  option into spamd? On the Mandrake test machine I have the init script
  in /etc/init.d as spamassassin. It includes these lines:
  

 Ahhh, THAT's what's missing from my understanding!  (plus a lot of other
 stuff!)

 I reviewed my spamassassin init.d script and saw the options in there.  A
 comment line in there directed me to /etc/default/spamassassin (specific
 to Debian).  GUESS WHAT I FOUND IN THERE?

 OPTIONS=-c -m 10 -a -H

 GOOD GRIEF!  -m 10 and me with 512 RAM!

 Hope my load average will go down soon!!  Thanks especially to Jason and
 jdow and **WAIT** - I see I just got a msg from Duncan that directed me
 specifically to /etc/default/spamassassin!!!

 So I think the -m option could be added in the init script OR the
 /etc/default/spamassassin file, but the init script is probably
 overwritten during updates, so better to use the default file.

 I know Fedora Core didn't use to have the /etc/default/spamassassin file,
 so that is specific to Debian.
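For reference, a lowered-children variant of that Debian options file might look like this (a sketch only; -m 3 is a hypothetical value for a 512 MB box, and the other flags are whatever your distribution already sets):

=
# /etc/default/spamassassin -- hypothetical example
# -m 3 caps spamd at 3 child processes, a safer ceiling on low-memory hosts
OPTIONS="-c -m 3 -a -H"
=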

Um, I just look at the actual script file for the distro in use and see
where *IT* expects the options file. It works better that way.

(I know I overkilled the explanation. I hoped it would build understanding
better than a rote solution would.)

{^_-}




Re: Load Average Problems

2004-11-01 Thread jdow
From: Duncan Findlay [EMAIL PROTECTED]
 Sorry, Mandrake not Debian. Anyways, change options in
 /etc/sysconfig/spamassassin, I think.


Yeah, in theory that's the best way. But do copy the SPAMDOPTIONS
line and then place IT into the /etc/sysconfig/spamassassin. I get
naughty and cheat on that issue by changing the line in the script.
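On Red Hat/Mandrake-style systems the equivalent sketch, assuming the stock flags from the init script (the -m 5 value is illustrative, not a recommendation), would be:

=
# /etc/sysconfig/spamassassin -- hypothetical example
# -d daemonize, -c create user prefs, -m 5 limit spamd to 5 children
SPAMDOPTIONS="-d -c -m 5"
=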

{^_-}



Nice URIDNSBL functionality

2004-11-01 Thread Bill Landry
Folks, with some of the nice functionality that the SA devs built into the
URIDNSBL plug-in (see
http://spamassassin.apache.org/full/3.0.x/dist/doc/Mail_SpamAssassin_Plugin_URIDNSBL.html),
you can do cool things like:

=
# URIDNSBL (queries URIs against standard DNSBLs)

uridnsbl  URIBL_AH_DNSBL dnsbl.ahbl.org.   TXT
body  URIBL_AH_DNSBL eval:check_uridnsbl('URIBL_AH_DNSBL')
describe  URIBL_AH_DNSBL Contains a URL listed in the AH DNSBL blocklist
tflagsURIBL_AH_DNSBL net
score URIBL_AH_DNSBL 0.5

uridnsbl  URIBL_NJA_DNSBL combined.njabl.org.   TXT
body  URIBL_NJA_DNSBL eval:check_uridnsbl('URIBL_NJA_DNSBL')
describe  URIBL_NJA_DNSBL Contains a URL listed in the NJA DNSBL blocklist
tflagsURIBL_NJA_DNSBL net
score URIBL_NJA_DNSBL 0.5

uridnsbl  URIBL_SBL_XBL sbl-xbl.spamhaus.org.   TXT
body  URIBL_SBL_XBL eval:check_uridnsbl('URIBL_SBL_XBL')
describe  URIBL_SBL_XBL Contains a URL listed in the SBL-XBL DNSBL blocklist
tflagsURIBL_SBL_XBL net
score URIBL_SBL_XBL 0.5

uridnsbl  URIBL_SORBS_DNSBL dnsbl.sorbs.net.   TXT
body  URIBL_SORBS_DNSBL eval:check_uridnsbl('URIBL_SORBS_DNSBL')
describe  URIBL_SORBS_DNSBL Contains a URL listed in the SORBS DNSBL
blocklist
tflagsURIBL_SORBS_DNSBL net
score URIBL_SORBS_DNSBL 0.5

# URIRHSBL (queries URIs against standard RHSBLs)

urirhsbl  URIBL_AH_RHSBL rhsbl.ahbl.org.   A
body  URIBL_AH_RHSBL eval:check_uridnsbl('URIBL_AH_RHSBL')
describe  URIBL_AH_RHSBL Contains a URL listed in the AH RHSBL blocklist
tflagsURIBL_AH_RHSBL net
score URIBL_AH_RHSBL 0.5

urirhsbl  URIBL_MP_RHSBL block.rhs.mailpolice.com.   A
body  URIBL_MP_RHSBL eval:check_uridnsbl('URIBL_MP_RHSBL')
describe  URIBL_MP_RHSBL Contains a URL listed in the MP RHSBL blocklist
tflagsURIBL_MP_RHSBL net
score URIBL_MP_RHSBL 0.5

urirhsbl  URIBL_SS_RHSBL blackhole.securitysage.com.   A
body  URIBL_SS_RHSBL eval:check_uridnsbl('URIBL_SS_RHSBL')
describe  URIBL_SS_RHSBL Contains a URL listed in the SS RHSBL blocklist
tflagsURIBL_SS_RHSBL net
score URIBL_SS_RHSBL 0.5
=

I have been running these additional URI tests for about two weeks and have
gotten very good results.  If you decide to try out these tests, you may
want to run them with minimal scores until you see how they are going to
perform for you in your particular environment.

Bill



Lint fails on latest bogus-virus-warnings.cf

2004-11-01 Thread Mike Zanker
From RulesDuJour last night:
Lint output: Relative score without previous setting in SpamAssassin
configuration, skipping: score VIRUS_WARNING412  Unhelpful 'virus warning' (412)

Thanks,
Mike.


Re: [SURBL-Discuss] Nice URIDNSBL functionality

2004-11-01 Thread Bill Landry
- Original Message - 
From: Alex Broens [EMAIL PROTECTED]

  Folks, with some of the nice functionality that the SA devs built into
the
  URIDNSBL plug-in (see
 
http://spamassassin.apache.org/full/3.0.x/dist/doc/Mail_SpamAssassin_Plugin_URIDNSBL.html),
  you can do cool things like:
 
  =
  # URIDNSBL (queries URIs against standard DNSBLs)
 
  uridnsbl  URIBL_AH_DNSBL dnsbl.ahbl.org.   TXT
  body  URIBL_AH_DNSBL eval:check_uridnsbl('URIBL_AH_DNSBL')
  describe  URIBL_AH_DNSBL Contains a URL listed in the AH DNSBL blocklist
  tflagsURIBL_AH_DNSBL net
  score URIBL_AH_DNSBL 0.5
 
  I have been running these additional URI tests for about two weeks and
have
  gotten very good results.  If you decide to try out these tests, you may
  want to run them with minimal scores until you see how they are going to
  perform for you in your particular environment.

 Bill or anybody,

 Will these lookups also work with SA 2.6x/SpamcopURI ?

Alex, I'm sure it would work fine for the RHSBLs (in fact, I was using
SpamcopURI against the MailPolice RHSBL before upgrading to SA 3.0.x), but
probably will not for the DNSBLs, since I don't think it supports the
functionality of doing a DNS lookup on the URI and then querying the DNSBL
with the IP address (instead of the domain) like the URIDNSBL plug-in does.

Bill



Re: [SURBL-Discuss] Nice URIDNSBL functionality

2004-11-01 Thread Jeff Chan
On Sunday, October 31, 2004, 11:24:33 PM, Bill Landry wrote:
 - Original Message - 
 From: Alex Broens [EMAIL PROTECTED]

  Folks, with some of the nice functionality that the SA devs built into
 the
  URIDNSBL plug-in (see
 
 http://spamassassin.apache.org/full/3.0.x/dist/doc/Mail_SpamAssassin_Plugin_URIDNSBL.html),
  you can do cool things like:
 
  =
  # URIDNSBL (queries URIs against standard DNSBLs)
 
  uridnsbl  URIBL_AH_DNSBL dnsbl.ahbl.org.   TXT
  body  URIBL_AH_DNSBL eval:check_uridnsbl('URIBL_AH_DNSBL')
  describe  URIBL_AH_DNSBL Contains a URL listed in the AH DNSBL blocklist
  tflagsURIBL_AH_DNSBL net
  score URIBL_AH_DNSBL 0.5
 
  I have been running these additional URI tests for about two weeks and
 have
  gotten very good results.  If you decide to try out these tests, you may
  want to run them with minimal scores until you see how they are going to
  perform for you in your particular environment.

 Will these lookups also work with SA 2.6x/SpamcopURI ?

 Alex, I'm sure it would work fine for the RHSBLs (in fact, I was using
 SpamcopURI against the MailPolice RHSBL before upgrading to SA 3.0.x), but
 probably will not for the DNSBLs, since I don't think it supports the
 functionality of doing a DNS lookup on the URI and then querying the DNSBL
 with the IP address (instead of the domain) like the URIDNSBL plug-in does.

It may be worth pointing out that uridnsbl does not look up the
IP address of the URI against RBLs, but the IP address of the
URI domain's *name server*.  It's not the same thing as checking
the web server against an RBL, but looking up name servers is
quite effective if the RBL contains some addresses of spammer
name servers, as sbl.spamhaus.org definitely does.

Jeff C.
--
If it appears in hams, then don't list it.



less header information

2004-11-01 Thread Roel Bindels
Dear Listers,

I just want the X-Spam-Flag header to be present but not the report.
What am I doing wrong?
See my local.cf below

greetings Roel Bindels

rewrite_subject 0
report_safe 0
report_header  0
use_terse_report
required_hits   5.0
use_bayes   1
auto_learn  1
skip_rbl_checks 0
use_razor2  1
use_dcc 1
use_pyzor   1
ok_languagesall
ok_locales  all



SB_NEW_BULK & SB_NSP_VOLUME_SPIKE

2004-11-01 Thread Bill Landry
I noticed that the devs included the above experimental SenderBase tests
with SA 3.0.x, so I enabled them a few weeks ago and have found them to work
quite nicely.  The SB_NEW_BULK test has a much higher hit ratio, and
provides more accurate results than the SB_NSP_VOLUME_SPIKE test does, but
both have proved to be nice additional spam tests.

Set a low score for them in your local.cf and see how they work for you.

You can find the following info about these tests in your SA rules directory
in the 20_dnsbl_tests.cf file:
=
# SenderBase information http://www.senderbase.org/dnsresponses.html
# these are experimental example rules

# sa.senderbase.org for SpamAssassin queries
# query.senderbase.org for other queries
header __SENDERBASE eval:check_rbl_txt('sb', 'sa.senderbase.org.')
tflags __SENDERBASE net

# S23 = domain daily magnitude, S25 = date of first message from this domain
header SB_NEW_BULK  eval:check_rbl_sub('sb', 'sb:S23 > 6.2 && (time - S25 < 120*86400)')
describe SB_NEW_BULKSender domain is new and very high volume
tflags SB_NEW_BULK  net

# S5 = category, S40 = IP daily magnitude, S41 = IP monthly magnitude
# note: accounting for rounding, > 0.3 means at least a 59% volume spike
header SB_NSP_VOLUME_SPIKE  eval:check_rbl_sub('sb', 'sb:S5 =~ /NSP/ && S41 > 3.8 && S40 - S41 > 0.3')
describe SB_NSP_VOLUME_SPIKESender IP hosted at NSP has a volume spike
tflags SB_NSP_VOLUME_SPIKE  net
=
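Following the advice above, enabling these experimental tests conservatively in local.cf could look like the sketch below (the 0.5 scores are placeholders to observe behavior, not recommendations):

=
# local.cf -- start the experimental SenderBase tests at low scores
score SB_NEW_BULK 0.5
score SB_NSP_VOLUME_SPIKE 0.5
=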

Bill



Re: [SURBL-Discuss] Nice URIDNSBL functionality

2004-11-01 Thread Bill Landry
- Original Message - 
From: Jeff Chan [EMAIL PROTECTED]

   Folks, with some of the nice functionality that the SA devs built
into
  the
   URIDNSBL plug-in (see
  
 
http://spamassassin.apache.org/full/3.0.x/dist/doc/Mail_SpamAssassin_Plugin_URIDNSBL.html),
   you can do cool things like:
  
   =
   # URIDNSBL (queries URIs against standard DNSBLs)
  
   uridnsbl  URIBL_AH_DNSBL dnsbl.ahbl.org.   TXT
   body  URIBL_AH_DNSBL eval:check_uridnsbl('URIBL_AH_DNSBL')
   describe  URIBL_AH_DNSBL Contains a URL listed in the AH DNSBL
blocklist
   tflagsURIBL_AH_DNSBL net
   score URIBL_AH_DNSBL 0.5
  
   I have been running these additional URI tests for about two weeks
and
  have
   gotten very good results.  If you decide to try out these tests, you
may
   want to run them with minimal scores until you see how they are going
to
   perform for you in your particular environment.

  Will these lookups also work with SA 2.6x/SpamcopURI ?

  Alex, I'm sure it would work fine for the RHSBLs (in fact, I was using
  SpamcopURI against the MailPolice RHSBL before upgrading to SA 3.0.x),
but
  probably will not for the DNSBLs, since I don't think it supports the
  functionality of doing a DNS lookup on the URI and then querying the
DNSBL
  with the IP address (instead of the domain) like the URIDNSBL plug-in
does.

 It may be worth pointing out that uridnsbl does not look up the
 IP address of the URI against RBLs, but the IP address of the
 URI domain's *name server*.  It's not the same thing as checking
 the web server against an RBL, but looking up name servers is
 quite effective if the RBL contains some addresses of spammer
 name servers, as sbl.spamhaus.org definitely does.

Yes, thanks for clarifying!

Bill



Re: less header information

2004-11-01 Thread Bill Landry
- Original Message - 
From: Roel Bindels [EMAIL PROTECTED]

 I just want the X-Spam-Flag header to be present but not the report.
 What am I doing wrong?
 See my local.cf below

 greetings Roel Bindels

 rewrite_subject 0
 report_safe 0
 report_header  0
 use_terse_report
 required_hits   5.0
 use_bayes   1
 auto_learn  1
 skip_rbl_checks 0
 use_razor2  1
 use_dcc 1
 use_pyzor   1
 ok_languagesall
 ok_locales  all

Try adding remove_header after the report_safe 0 entry.  See
(http://spamassassin.apache.org/full/3.0.x/dist/doc/Mail_SpamAssassin_Conf.html#basic_message_tagging_options):
=
report_safe { 0 | 1 | 2 } (default: 1)
[...]
If this option is set to 0, incoming spam is only modified by adding some
X-Spam- headers and no changes will be made to the body. In addition, a
header named X-Spam-Report will be added to spam. You can use the
remove_header option to remove that header after setting report_safe to 0
=
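Putting that together, a local.cf sketch that keeps the X-Spam-* tagging headers while dropping the report header would be:

=
# local.cf -- tag-only mode, no X-Spam-Report header
report_safe 0
remove_header all Report
=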

Bill



Re: FW: Lint fails on latest bogus-virus-warnings.cf

2004-11-01 Thread Raymond Dijkxhoorn
Hi!
 Lint output: Relative score without previous setting in SpamAssassin
 configuration, skipping: score VIRUS_WARNING412  Unhelpful 'virus warning' (412)

 Just for clarification, after this update:
 Lint fails on 3.0.1 here.
 Lint does not fail for 2.6.3 here.

Noticed the same here.
Bye,
Raymond.


sa-learn after autolearn=no

2004-11-01 Thread Dan Barker
Group, please comment on or correct these two statements.

1) When I get a spam with autolearn=no in the headers, that means it's
already been learned.

2) There is no need to sa-learn --spam that message, it's already learned
but simply didn't meet the threshold.

Dan



trusted_networks and ALL_TRUSTED

2004-11-01 Thread Sean Doherty

Hi,

I'm looking for some clarification on trusted_networks, the 
ALL_TRUSTED rule, and in particular how trusted_networks are 
inferred if not specified in local.cf.

Since upgrading to 3.0.1 I have seen an increase in false
negatives, which would have otherwise been caught if not for
the ALL_TRUSTED rule firing.

I don't have trusted_networks set in local.cf, so SpamAssassin
will use the inference algorithm as specified in the docs:

- if the 'from' IP address is on the same /16 network as the top
  Received line's 'by' host, it's trusted 
- if the address of the 'from' host is in a reserved network range, 
  then it's trusted 
- if any addresses of the 'by' host is in a reserved network range, 
  then it's trusted

My Postfix mail server, which runs SpamAssassin, is in a reserved
network range (10.0.0.53) and processes only incoming mail. The
following msg snippet (Received headers) results in the ALL_TRUSTED 
rule firing:

Received: from 206.81.84.119 (unknown [206.81.84.119]) by
marvin.copperfasten.com (Postfix) with SMTP id 127ACEBC7F for
[EMAIL PROTECTED]; Mon,  1 Nov 2004 11:09:24 + (GMT)
Received: from 206.81.84.119 by mail003.datapropo.com; Mon, 01 Nov 2004
16:02:51 +0500

With trusted_networks unset I get the following when I debug
the msg with SpamAssassin:

debug: looking up PTR record for '206.81.84.119'
debug: PTR for '206.81.84.119': '206-81-84-119.info-goals.com'
debug: received-header: parsed as [ ip=206.81.84.119
rdns=206-81-84-119.info-goals.com helo=206.81.84.119
by=marvin.copperfasten.com ident= envfrom= intl=0 id=127ACEBC7F ]
debug: looking up A records for 'marvin.copperfasten.com'
debug: A records for 'marvin.copperfasten.com': 10.0.0.53
debug: looking up A records for 'marvin.copperfasten.com'
debug: A records for 'marvin.copperfasten.com': 10.0.0.53
debug: received-header: 'by' marvin.copperfasten.com has reserved IP
10.0.0.53
debug: received-header: 'by' marvin.copperfasten.com has no public IPs
debug: received-header: relay 206.81.84.119 trusted? yes internal? no

I'm assuming that 206.81.84.119 is trusted since the following
condition of the inference algorithm fires:

- if any addresses of the 'by' host is in a reserved network range, 
  then it's trusted

However, I would have thought that this would imply that the 10.0.0.53
host is trusted and not any servers connecting to it. 

Can someone please clarify this for me? Also should I be specifying
10.0.0.53 in trusted_networks in local.cf?

Regards,
- Sean




SPF

2004-11-01 Thread Sauer, Peter
hi

Has anybody got a working config for SpamAssassin with SPF? I have SPF
active (plugin loaded in init.pre) but I haven't seen any mails that
SpamAssassin marks with SPF rules. I am using SpamAssassin through
amavisd-new...
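For reference, enabling SPF in SA 3.0.x takes a loadplugin line in init.pre (SPF hits also require the Mail::SPF::Query Perl module to be installed); the SPF rules themselves ship in the stock ruleset. A minimal sketch, with illustrative scores rather than recommendations:

=
# /etc/mail/spamassassin/init.pre
loadplugin Mail::SpamAssassin::Plugin::SPF

# local.cf -- give the stock SPF rules nonzero scores if they are zeroed
score SPF_FAIL 1.0
score SPF_SOFTFAIL 0.5
=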




Re: sa-learn after autolearn=no

2004-11-01 Thread Matt Kettler
At 07:36 AM 11/1/2004 -0500, Dan Barker wrote:
1) When I get a spam with autolearn=no in the headers, that means it's
already been learned.
Not entirely true... It could mean it was already learned, but autolearn=no 
could also mean the score wasn't high enough.
In 2.6x it could also mean the bayes lock was busy, but in 3.0 
you'll get "failed" here.

Bear in mind, and bear it well, that the score bayes autolearn uses as 
a threshold is NOT the final message score. Autolearn uses the score the 
message would get if bayes were disabled (including the scoreset shift); the 
AWL is not included, nor are manual white/blacklists. This score can be 
DRAMATICALLY different from the final message score.

SA will also refuse to autolearn a message as spam unless the header and 
body rules each contribute at least 3.0 points, regardless of the total 
pre-bayes score.

Autolearning is meant to be an "only if you're absolutely positive it's 
spam" mechanism. SA relies on you to manually train to fill in the grey areas.


2) There is no need to sa-learn --spam that message, it's already learned
but simply didn't meet the threshold.
Not true. Again, the message could be learned, but more likely it's not 
already learned, and did not meet the learning threshold.




sql config problem with position-dependent config params

2004-11-01 Thread hamann . w
Hi,

I am trying to use sql config, but it seems quite troublesome to receive
sql data in the correct order.

Assuming the default config is
- just tag headers but do not modify message (report_safe 0)
- no test report in the header (remove_header Report)
and a user wants to turn the reports on, the user config should say
add_header all Report _REPORT_
and be read after the defaults.

If, however, default policy leaves the report header and the user wants to 
disable, the user
config should be
remove_header all Report

The suggested database layout does not ensure that the remove_header is 
actually following
the report_safe part

Wolfgang Hamann



Re: Bayes sometimes not used

2004-11-01 Thread Matt Kettler
At 06:38 PM 10/31/2004, Juliano Simões wrote:
See below sample outputs from subsequent executions of
/usr/bin/spamassassin -tLD < spam_msg_file:

Since you're using -D for debug output, is there anything in the debug that 
might give some clues?

Do both use the same score set?
Any complaints about lock failures?
Did either trigger autolearning?
Did either trigger a bayes sync (can cause a dramatic change in the bayes 
DB as the journal is integrated)?



99_sare_fraud and SA 3.0.x

2004-11-01 Thread Christopher X. Candreva

The announcement for SA 3 mentioned that anti-fraud rules from Matt Yackley 
had been added, so when I upgraded I removed the 99_sare_fraud ruleset.  
However, I've noticed that some lotto scams were getting through.

Just testing with one that didn't trigger any standard fraud rules, it did 
trigger SARE_FRAUD_X3, so I've put 99_sare_fraud back in.

If some of its rules are now standard in 3.x, perhaps an update is in 
order, if nothing else to remove duplicates?


==
Chris Candreva  -- [EMAIL PROTECTED] -- (914) 967-7816
WestNet Internet Services of Westchester
http://www.westnet.com/


Re: sa-learn after autolearn=no

2004-11-01 Thread Theo Van Dinter
On Mon, Nov 01, 2004 at 07:36:35AM -0500, Dan Barker wrote:
 1) When I get a spam with autolearn=no in the headers, that means it's
 already been learned.

"no" means autolearning didn't occur; there is no way to know why it didn't
do so unless you check the debug output.

 2) There is no need to sa-learn --spam that message, it's already learned
 but simply didn't meet the threshold.

If it was already learned, there's no point, but see #1. :)

-- 
Randomly Generated Tagline:
That thing [the space shuttle] has the glide slope of a brick.
 - Joe Ruga at LISA '99




Re: CFLAGS

2004-11-01 Thread David Brodbeck
On Wed, 22 Sep 2004 09:15:48 -0700, Justin Mason wrote
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 David Brodbeck writes:
  Is there a way to get the SpamAssassin build process to use -O instead of 
  -O2
  while building spamc?  I run FreeBSD on a DEC Alpha, and -O2 triggers
  optimizer bugs in gcc on that architecture.  I've just been editing the
  configure script before building, but it'd be nice if there was an easier 
  way.
 
 yep:
 
 perl Makefile.PL CCFLAGS=-O

Hmm...just tried that with SpamAssassin 3.0.1 and it *still* built spamc with
-O2.  I had to edit spamc/configure again to force it to use -O.

Should I file this as a bug?



Re: trusted_networks and ALL_TRUSTED

2004-11-01 Thread Justin Mason
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


Sean Doherty writes:
 I'm looking for some clarification on trusted_networks, the 
 ALL_TRUSTED rule, and in particular how trusted_networks are 
 inferred if not specified in local.cf.
 
 Since upgrading to 3.0.1 I have seen an increase in false
 negatives, which would have otherwise been caught if not for
 the ALL_TRUSTED rule firing.
 
 I don't have trusted_networks set in local.cf, so SpamAssassin
 will use the inference algorithm as specified in the docs:
 
 - if the 'from' IP address is on the same /16 network as the top
   Received line's 'by' host, it's trusted 
 - if the address of the 'from' host is in a reserved network range, 
   then it's trusted 
 - if any addresses of the 'by' host is in a reserved network range, 
   then it's trusted
 
 My Postfix mail server, which runs SpamAssassin, is in a reserved
 network range (10.0.0.53) and processes only incoming mail. The
 following msg snippet (Received headers) results in the ALL_TRUSTED 
 rule firing:
 
 Received: from 206.81.84.119 (unknown [206.81.84.119]) by
 marvin.copperfasten.com (Postfix) with SMTP id 127ACEBC7F for
 [EMAIL PROTECTED]; Mon,  1 Nov 2004 11:09:24 + (GMT)
 Received: from 206.81.84.119 by mail003.datapropo.com; Mon, 01 Nov 2004
 16:02:51 +0500
 
 With trusted_networks unset I get the following when I debug
 the msg with SpamAssassin:
 
 debug: looking up PTR record for '206.81.84.119'
 debug: PTR for '206.81.84.119': '206-81-84-119.info-goals.com'
 debug: received-header: parsed as [ ip=206.81.84.119
 rdns=206-81-84-119.info-goals.com helo=206.81.84.119
 by=marvin.copperfasten.com ident= envfrom= intl=0 id=127ACEBC7F ]
 debug: looking up A records for 'marvin.copperfasten.com'
 debug: A records for 'marvin.copperfasten.com': 10.0.0.53
 debug: looking up A records for 'marvin.copperfasten.com'
 debug: A records for 'marvin.copperfasten.com': 10.0.0.53
 debug: received-header: 'by' marvin.copperfasten.com has reserved IP
 10.0.0.53
 debug: received-header: 'by' marvin.copperfasten.com has no public IPs
 debug: received-header: relay 206.81.84.119 trusted? yes internal? no
 
 I'm assuming that 206.81.84.119 is trusted since the following
 condition of the inference algorithm fires:
 
 - if any addresses of the 'by' host is in a reserved network range, 
   then it's trusted
 
 However, I would have thought that this would imply that the 10.0.0.53
 host is trusted and not any servers connecting to it. 

The problem is that 10.x is a private net, therefore SpamAssassin infers
it cannot possibly be the external MX sitting out there on the internet.
(for a host to be sitting on the public internet accepting SMTP
connections, it'd obviously need a public IP addr.)

so the *next* step must be the external MX.

 Can someone please clarify this for me? Also should I be specifying
 10.0.0.53 in trusted_networks in local.cf?

Yep, that's right -- and trusted_networks will fix it.

- --j.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.4 (GNU/Linux)
Comment: Exmh CVS

iD8DBQFBhnfiMJF5cimLx9ARAtXlAJ9oN9SVWC4dC8FE2dKP/IEIORdDUgCeJ/GY
DjAorX+fCBwLoq0HMcgYr4g=
=WyEy
-END PGP SIGNATURE-
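A local.cf sketch of the fix Justin describes, using the 10.0.0.53 address from Sean's debug output:

=
# local.cf -- trust only the NATed inbound relay itself
trusted_networks 10.0.0.53
=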



Re: trusted_networks and ALL_TRUSTED

2004-11-01 Thread Justin Mason
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


Sean Doherty writes:
 Justin,
 
   - if any addresses of the 'by' host is in a reserved network range, 
 then it's trusted
   
   However, I would have thought that this would imply that the 10.0.0.53
   host is trusted and not any servers connecting to it. 
  
  The problem is that 10.x is a private net, therefore SpamAssassin infers
  it cannot possibly be the external MX sitting out there on the internet.
  (for a host to be sitting on the public internet accepting SMTP
  connections, it'd obviously need a public IP addr.)
  
  so the *next* step must be the external MX.
 
 My 10.x server is inside a firewall which NATs port 25, so this
 conclusion is not correct. I imagine that my setup isn't all 
 that different from a lot of other people's. 
 
   Can someone please clarify this for me? Also should I be specifying
   10.0.0.53 in trusted_networks in local.cf?
  
  Yep, that's right -- and trusted_networks will fix it.
 
 Yes trusted_networks does indeed fix the issue, but I'm still
 not so sure that the algorithm to deduce trusted_networks is
 correct (if not specified). 

it's correct *except* in this kind of situation, where there's NAT and/or
private IP ranges involved.  we should document that more clearly, maybe.

 For an inbound only relay is it correct to say that trusted_networks
 should only contain the IP address of the relay itself?

yep.

if you have a virus-scanning gateway or firewall beyond *that*,
though, you should trust that too.

 For an inbound/outbound relay it should contain the local 
 network/mask or eg downstream Exchange server + relay host?

not sure what you mean by 'downstream Exchange server' here...
you can trust all the hosts you consider trustworthy; it'll skip
looking them up in DNSBLs etc.   You can even trust e.g. YahooGroups'
outbound MTAs if you like ;)

- --j.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.4 (GNU/Linux)
Comment: Exmh CVS

iD8DBQFBhn8aMJF5cimLx9ARAryLAJ9KziKBTJI9lqpvL2YaaD0Za5zE8ACfcBdM
q3iahboiTWIbxxT1NxhgzjE=
=Om5B
-END PGP SIGNATURE-



Re: trusted_networks and ALL_TRUSTED

2004-11-01 Thread Matt Kettler
At 01:07 PM 11/1/2004, Sean Doherty wrote:
 The problem is that 10.x is a private net, therefore SpamAssassin infers
 it cannot possibly be the external MX sitting out there on the internet.
 (for a host to be sitting on the public internet accepting SMTP
 connections, it'd obviously need a public IP addr.)

 so the *next* step must be the external MX.
My 10.x server is inside a firewall which NATs port 25, so this
conclusion is not correct. I imagine that my setup isn't all
that different from a lot of other people's.
Yes, it is incorrect, but SA can't know that. Thus, SA assumes, 
incorrectly, that any 10.x host must not be externally addressable. It's 
not a very good assumption in modern networks, but there's not much else 
one can do.

SA's trust path code has pretty much always been incompatible with NAT'ed 
mailservers. And it's hard for SA to autodetect such things from mail headers.

 Yep, that's right -- and trusted_networks will fix it.
Yes trusted_networks does indeed fix the issue, but I'm still
not so sure that the algorithm to deduce trusted_networks is
correct (if not specified).
Do you know of an algorithm that will allow SA to deduce the difference 
between a NATed mailserver, and an internal relay server which feeds a 
local non NATed mailserver?

ie: consider these two setups
PC 10.1.1.1
internal groupware server 10.2.2.2
local internet mail gateway 208.39.141.94
--- end of local network ---
outside mailserver 1.1.1.1
outside PC 10.0.1.1
vs
PC 10.1.1.1
NATed mail server 10.2.2.2
--- end of local network ---
outside list server  208.39.141.94
outside mailserver 1.1.1.1
outside PC 10.0.1.1
It's very hard from Received: headers alone to know the difference.. You 
can't make assumptions based on the domain names being the same, because if 
208.39.141.94 is an untrusted server, such things can easily be forged.




Re: Bayes sometimes not used

2004-11-01 Thread Juliano Simões
- Original Message - 
From: Theo Van Dinter [EMAIL PROTECTED]
To: users@spamassassin.apache.org
Sent: Sunday, October 31, 2004 9:42 PM
Subject: Re: Bayes sometimes not used

 On Sun, Oct 31, 2004 at 09:38:09PM -0200, Juliano Simões wrote:
  I have noticed a strange behavior of SA, after upgrading from
  version 2.64 to 3.0.1. Sometimes, the same message is scored
  with bayes, sometimes not.

 Yes, Bayes can't always give an answer.  You can run with -D and see
what's
 going on.

  See below sample outputs from subsequent executions of
  /usr/bin/spamassassin -tLD < spam_msg_file:

 Bayes probably learned enough tokens from the first to score on the
second.

  Version 2.64 was very consistent when it comes to using bayes.
  Any clues on what may be causing this problem on 3.0.1?

 Well, 2.[56]x always gave a BAYES_* hit, even if it wasn't usable (aka
 BAYES_50).  3.0 only gives you a result when there's a result.

Theo, thanks for clarifying. What still troubles me is how SA can
show a bayes hit for a given message once and then skip bayes completely
for the same message a few seconds later.

I will go over the debug records to try to figure it out.

Regards,

Juliano Simões
Gerente de Tecnologia
Central Server
http://www.centralserver.com.br
[EMAIL PROTECTED]
+55 41 324-1993





Re: trusted_networks and ALL_TRUSTED

2004-11-01 Thread Jim Maul
Sean Doherty wrote:
Justin,

- if any addresses of the 'by' host is in a reserved network range, 
 then it's trusted

However, I would have thought that this would imply that the 10.0.0.53
host is trusted and not any servers connecting to it. 
The problem is that 10.x is a private net, therefore SpamAssassin infers
it cannot possibly be the external MX sitting out there on the internet.
(for a host to be sitting on the public internet accepting SMTP
connections, it'd obviously need a public IP addr.)
so the *next* step must be the external MX.

My 10.x server is inside a firewall which NATs port 25, so this
conclusion is not correct. I imagine that my setup isn't all 
that different from a lot of other people's. 


This is exactly how I have my system set up.  I have a 192.168 IP 
assigned to my server.  It has no public IP assigned to it.  However, I 
have a router/firewall in front of it which has a public IP assigned to 
its WAN interface, which then does NAT/port forwarding to my qmail 
server.  It works extremely well for our purposes.  It sounds like if 
I upgraded to 3.0 (still running 2.64) I would then have the same 
issue with the trusted networks.  It doesn't really sound correct.  Just 
because my machine doesn't have a public IP does NOT mean that mail 
passes through a trusted source first... unless you are calling my little 
SMC Barricade a trusted source.

-Jim


Re: trusted_networks and ALL_TRUSTED

2004-11-01 Thread George Georgalis

 Yep, that's right -- and trusted_networks will fix it.

Yes trusted_networks does indeed fix the issue, but I'm still
not so sure that the algorithm to deduce trusted_networks is
correct (if not specified).

In any event, how is it disabled? I'm getting false negatives...

-2.8 ALL_TRUSTED  Did not pass through any untrusted hosts

In my setup SA doesn't get _any_ trusted network connections, those
connections are routed beforehand, so my quick fix is to score
ALL_TRUSTED 0

but to save resources, I don't want SA even checking the network. how do
I disable it completely?

// George


-- 
George Georgalis, systems architect, administrator Linux BSD IXOYE
http://galis.org/george/ cell:646-331-2027 mailto:[EMAIL PROTECTED]


AWL and ABL Re: trusted_networks and ALL_TRUSTED

2004-11-01 Thread George Georgalis
On Mon, Nov 01, 2004 at 02:03:36PM -0500, George Georgalis wrote:

In any event, how is it disabled? I'm getting false negatives...

-2.8 ALL_TRUSTED  Did not pass through any untrusted hosts

In my setup SA doesn't get _any_ trusted network connections, those
connections are routed beforehand, so my quick fix is to score
ALL_TRUSTED 0

those false negatives are also growing an AWL, which I also don't want.

-1.4 AWL  AWL: From: address is in the auto white-list

How do I disable and purge any AWL and ABL generation, too?

// George


-- 
George Georgalis, systems architect, administrator Linux BSD IXOYE
http://galis.org/george/ cell:646-331-2027 mailto:[EMAIL PROTECTED]


Re: sql config problem with position-dependent config params

2004-11-01 Thread Michael Barnes
On Mon, Nov 01, 2004 at 02:22:39PM -, [EMAIL PROTECTED] wrote:
 I am trying to use sql config, but it seems quite troublesome to
 receive sql data in the correct order.

 Assuming the default config is
 - just tag headers but do not modify message (report_safe 0)
 - no test report in the header (remove_header Report)
 and a user wants to turn the reports on, the user config should say
 add_header all Report _REPORT_
 and be read after the defaults.
 
 If, however, default policy leaves the report header and the user
 wants to disable, the user config should be
 remove_header all Report
 
 The suggested database layout does not ensure that the remove_header
 actually follows the report_safe part.

My only suggestion would be to use the user_scores_sql_custom_query
configuration option to create a custom SQL query that enforces the
order of the configuration directives.

It may not be ideal, but it is a workaround.

Mike

-- 
/-\
| Michael Barnes [EMAIL PROTECTED] |
| UNIX Systems Administrator  |
| College of William and Mary |
| Phone: (757) 879-3930   |
\-/
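Mike's workaround might look like the sketch below. The _TABLE_ and _USERNAME_ macros are from the SpamAssassin 3.0 SQL documentation; the prefid ordering column is a hypothetical auto-increment column you would add to the userpref table:

```
# local.cf -- sketch: make '@GLOBAL' defaults sort before (and thus be
# overridden by) per-user rows, then order deterministically within a user.
# 'prefid' is a hypothetical column in your schema.
user_scores_sql_custom_query SELECT preference, value FROM _TABLE_ WHERE username = _USERNAME_ OR username = '@GLOBAL' ORDER BY username ASC, prefid ASC
```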


Re: CFLAGS

2004-11-01 Thread Michael Barnes
On Mon, Nov 01, 2004 at 12:24:27PM -0500, David Brodbeck wrote:
 On Wed, 22 Sep 2004 09:15:48 -0700, Justin Mason wrote
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
  
  David Brodbeck writes:
   Is there a way to get the SpamAssassin build process to use
   -O instead of -O2 while building spamc?  I run FreeBSD on a
   DEC Alpha, and -O2 triggers optimizer bugs in gcc on that
   architecture.  I've just been editing the configure script before
   building, but it'd be nice if there was an easier way.
  
  yep:
  
  perl Makefile.PL CCFLAGS=-O
 
 Hmm...just tried that with SpamAssassin 3.0.1 and it *still* built
 spamc with -O2.  I had to edit spamc/configure again to force it to
 use -O.

I was able to change the default CFLAGS by putting the CCFLAGS and the
CFLAGS values in my environment before running perl Makefile.PL.

I would guess that this could be considered a bug, because it's not too
uncommon for default CFLAGS to be changed during a compilation.

Mike

-- 
/-\
| Michael Barnes [EMAIL PROTECTED] |
| UNIX Systems Administrator  |
| College of William and Mary |
| Phone: (757) 879-3930   |
\-/


Re: trusted_networks and ALL_TRUSTED

2004-11-01 Thread Justin Mason
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


Jim Maul writes:
 This is exactly how I have my system set up.  I have a 192.168 IP 
 assigned to my server; it has no public IP assigned to it.  However, I 
 have a router/firewall in front of it which has a public IP assigned to 
 its WAN interface and which then does NAT/port forwarding to my qmail 
 server.  It works extremely well for our purposes.  It sounds to me that 
 if I upgraded to 3.0 (still running 2.64) I would then have the same 
 issue with the trusted networks.  It doesn't really sound correct.  Just 
 because my machine doesn't have a public IP does NOT mean that mail 
 passes through a trusted source first... unless you are calling my little 
 SMC Barricade a trusted source.

There's a very easy way to deal with this, and it's what you should
use: set trusted_networks.  That's exactly why there's a parameter
there to set ;)

Basically, SpamAssassin can't know all about your network setup unless
you tell it.  It'll try to guess, but there's only so far guessing
will go, and without information from you, it's pretty much impossible
to guess this.

- --j.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.4 (GNU/Linux)
Comment: Exmh CVS

iD8DBQFBho5AMJF5cimLx9ARAq80AJ9GoNUFpAUjvPb0EorG9c9yyk/RzwCffyEA
gwkTsMQm4eGxoK6ibAzHmP4=
=RRZu
-END PGP SIGNATURE-
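Justin's suggestion amounts to one line in local.cf. A sketch, assuming the NATted LAN sits in 192.168/16 (substitute your real ranges, including the firewall's internal address):

```
# local.cf -- sketch: declare the hops you control as trusted, so SA
# stops guessing from the private IPs in the Received headers
trusted_networks 192.168/16
```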



Whitelist and Blacklist

2004-11-01 Thread Ron Shuck
Hi,
 
I am running 2.63 on Red Hat 9. I am invoking with a modified version of
an Advosys script, but it is using spamc.

The issue is with whitelists and blacklists. On their own they are working
fine. The problem is that I have blacklisted [EMAIL PROTECTED] and have
whitelisted [EMAIL PROTECTED] The purpose was to whitelist only the single
user, but blacklist all others from this domain. What happens is that
USER_IN_WHITELIST and USER_IN_BLACKLIST are both applied and just cancel
each other out.

Is this fixed in 2.64 or 3.00, or is there another way around this? I
thought about just changing the score of USER_IN_WHITELIST to -120 or
something.

Thanks,

 
Ron Shuck, CISSP, GCIA, CCSE
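One way to implement the score idea Ron mentions, sketched with placeholder addresses; whether -120 is safe for your mail flow is a judgment call, since any whitelist hit will then swamp most spam scores:

```
# local.cf -- sketch: whitelist one user, blacklist the rest of the domain
# (example.com addresses are placeholders)
blacklist_from *@example.com
whitelist_from user@example.com
# make USER_IN_WHITELIST decisively outweigh USER_IN_BLACKLIST:
score USER_IN_WHITELIST -120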


Re: CFLAGS

2004-11-01 Thread Justin Mason
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


Michael Barnes writes:
 On Mon, Nov 01, 2004 at 12:24:27PM -0500, David Brodbeck wrote:
  On Wed, 22 Sep 2004 09:15:48 -0700, Justin Mason wrote
   -BEGIN PGP SIGNED MESSAGE-
   Hash: SHA1
   
   David Brodbeck writes:
Is there a way to get the SpamAssassin build process to use
-O instead of -O2 while building spamc?  I run FreeBSD on a
DEC Alpha, and -O2 triggers optimizer bugs in gcc on that
architecture.  I've just been editing the configure script before
building, but it'd be nice if there was an easier way.
   
   yep:
   
   perl Makefile.PL CCFLAGS=-O
  
  Hmm...just tried that with SpamAssassin 3.0.1 and it *still* built
  spamc with -O2.  I had to edit spamc/configure again to force it to
  use -O.
 
 I was able to change the default CFLAGS by putting the CCFLAGS and the
 CFLAGS values in my environment before running perl Makefile.PL.
 
 I would guess that this could be considered a bug, because it's not too
 uncommon for default CFLAGS to be changed during a compilation.

BTW the default CFLAGS are coming from whatever perl was built with;
so I'd be worried about bugs in your perl accordingly ;)

you can also do

perl Makefile.PL CCFLAGS=-O

and that should work.  If it doesn't, it's a bug, because that's pretty
much standard Perl practice for CPAN modules, AFAIK.

- --j.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.4 (GNU/Linux)
Comment: Exmh CVS

iD8DBQFBhpSKMJF5cimLx9ARAhTYAJ4tS5ztgAOj5ZcfsmKcKF5o+qDyRQCeJu6C
JCnXNBYqyXv24LuRtKyrtQY=
=l790
-END PGP SIGNATURE-



Re: SPF

2004-11-01 Thread Matt Kettler
At 08:41 AM 11/1/2004, Sauer, Peter wrote:
has anybody a working config for spamassassin whit spf...i got spf
aktive (plugin in init.pre) but i haven't seen any mails spamassassin
does mark whit spf-rules...i am using spamassassin through amvisd-new...
spelling lesson of the day... repeat after me: with not whit :-)
I'm not using SPF, but you might want to check SA's debug output.
spamassassin --lint -D
Usually SA will give you hints as to why something can't be used.
In particular, you must have Net::DNS (0.34 or higher) and Mail::SPF::Query 
perl modules installed before SA's SPF plugin will work.
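A quick sketch to verify those prerequisites are loadable at all (it only reports presence, not the 0.34 version floor):

```shell
# Sketch: can perl load the SPF prerequisites?
for mod in Net::DNS Mail::SPF::Query; do
  if perl -M"$mod" -e 1 2>/dev/null; then
    echo "$mod ok"
  else
    echo "$mod missing"
  fi
done
```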



Re: AWL and ABL Re: trusted_networks and ALL_TRUSTED

2004-11-01 Thread George Georgalis
On Mon, Nov 01, 2004 at 03:13:50PM -0500, Matt Kettler wrote:
At 02:11 PM 11/1/2004, George Georgalis wrote:
those false negatives are also growing an AWL, which I also don't want.

-1.4 AWL  AWL: From: address is in the auto white-list

how do I disable and purge any AWL and ABL generation, too?

Well, there is no ABL, just one system called AWL, which works as both 
a whitelist and a blacklist.

Disable it with:
use_auto_whitelist 0

You can purge it by removing the database files with rm -f. They should be 
in ~/.spamassassin/. Be sure SA isn't running when you delete them.

Thanks, I've added that:

skip_rbl_checks 1
use_bayes 0

noautolearn 1
use_auto_whitelist 0
score AWL 0.001

trusted_networks 192.168.
score ALL_TRUSTED 0.001


// George


-- 
George Georgalis, systems architect, administrator Linux BSD IXOYE
http://galis.org/george/ cell:646-331-2027 mailto:[EMAIL PROTECTED]
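Matt's disable-and-purge steps as a shell sketch; the auto-whitelist file name is the usual per-user default, but list the directory first in case your version or DB driver names the files differently:

```shell
# Sketch: with SA stopped and use_auto_whitelist 0 already in local.cf,
# list and then remove the per-user AWL database files.
ls ~/.spamassassin/ 2>/dev/null || true
rm -f ~/.spamassassin/auto-whitelist*
```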


Re: AWL and ABL (use of score AWL statements)

2004-11-01 Thread Matt Kettler
At 03:37 PM 11/1/2004, George Georgalis wrote:
Thanks, I've added that:
skip_rbl_checks 1
use_bayes 0
noautolearn 1
use_auto_whitelist 0
score AWL 0.001
I've seen lots of people using the score statement on AWL. However, I 
myself have serious doubts about the validity of doing that. The AWL 
doesn't normally have a score statement, because it's a no-rule system 
implemented entirely in the code.

Any devs who might know for sure care to comment? 



Memory Usage

2004-11-01 Thread Scott Palmer
Is there a reason why the memory usage jumped with 3.0.x? I have two
servers running it and I am thinking I might have to upgrade the RAM
because of it.

Is there anything that can be done to reduce the usage? I thought that
perhaps it was because of Bayes being in SQL. But one server has it in
SQL and the other does not. Both are chewing up the same amount of memory.

Thanks
Scott



Re: Bayes sometimes not used

2004-11-01 Thread Juliano Simões
- Original Message - 
 From: Matt Kettler [EMAIL PROTECTED]
 To: Juliano Simões [EMAIL PROTECTED];
users@spamassassin.apache.org
 Sent: Monday, November 01, 2004 12:58 PM
 Subject: Re: Bayes sometimes not used


 At 06:38 PM 10/31/2004, Juliano Simões wrote:
 See below sample outputs from subsequent executions of
 /usr/bin/spamassassin -tLD  spam_msg_file:

 since you're using -D for debug output, is there anything in the debug
that
 might give some clues?

 Do both use the same score set?
Yes.

 Any complaints about lock failures?
No, locks and unlocks look good.

 Did either trigger autolearning?
Nope.

 Did either trigger a bayes sync (can cause a dramatic change in the bayes
 DB as the journal is integrated)?
Yes, it seems like they did. Please, take a look at the
following debug log scenarios from SA testing the same
message:

** 1. Have bayes hits **
...
debug: bayes: opportunistic call found expiry due
debug: Syncing Bayes and expiring old tokens...
debug: lock: 32684 created /home/spamd/.spamassassin/bayes.mutex
debug: lock: 32684 trying to get lock on /home/spamd/.spamassassin/bayes
with 10 timeout
debug: lock: 32684 link to /home/spamd/.spamassassin/bayes.mutex: link ok
debug: bayes: 32684 tie-ing to DB file R/W
/home/spamd/.spamassassin/bayes_toks
debug: bayes: 32684 tie-ing to DB file R/W
/home/spamd/.spamassassin/bayes_seen
debug: bayes: found bayes db version 3
debug: refresh: 32684 refresh /home/spamd/.spamassassin/bayes.mutex
debug: Syncing complete.
debug: bayes: 32684 untie-ing
debug: bayes: 32684 untie-ing db_toks
debug: bayes: 32684 untie-ing db_seen
debug: bayes: files locked, now unlocking lock

** 2. No bayes hits **
...
debug: bayes: opportunistic call found journal sync due
debug: Syncing Bayes and expiring old tokens...
debug: lock: 4276 created /home/spamd/.spamassassin/bayes.mutex
debug: lock: 4276 trying to get lock on /home/spamd/.spamassassin/bayes with
10 timeout
debug: lock: 4276 link to /home/spamd/.spamassassin/bayes.mutex: link ok
debug: bayes: 4276 tie-ing to DB file R/W
/home/spamd/.spamassassin/bayes_toks
debug: bayes: 4276 tie-ing to DB file R/W
/home/spamd/.spamassassin/bayes_seen
debug: bayes: found bayes db version 3
debug: refresh: 4276 refresh /home/spamd/.spamassassin/bayes.mutex
debug: Syncing complete.
debug: bayes: Not available for scanning, only 0 spam(s) in Bayes DB < 200
debug: bayes: not scoring message, returning undef
debug: bayes: 4276 untie-ing
debug: bayes: 4276 untie-ing db_toks
debug: bayes: 4276 untie-ing db_seen
debug: bayes: files locked, now unlocking lock

So, if the bayes sync is the problem, why does this happen so often?
I run sa-learn --sync many times per day, after training ham
and spam. Is there a way to prevent spamassassin from triggering
a sync every time?

Regards,

Juliano Simões
Technology Manager
Axios Tecnologia e Serviços
http://www.axios.com.br
[EMAIL PROTECTED]
+55 41 324-1993
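Two local.cf knobs from the 3.0 configuration docs bear on those opportunistic runs; the values below are illustrative, not recommendations:

```
# local.cf -- sketch: tame opportunistic Bayes maintenance
bayes_auto_expire 0             # no expiry during scans; run
                                # sa-learn --force-expire from cron instead
bayes_journal_max_size 1048576  # bigger journal = less frequent syncs (bytes)
```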



Re: Memory Usage

2004-11-01 Thread Matt Kettler
At 03:51 PM 11/1/2004, Scott Palmer wrote:
Is there a reason why the memory usage jumped with 3.0.x? I have two
servers running it and I am thinking I might have to upgrade the RAM
because of it.
One thing that springs to mind is that the AWL is on by default, unlike 2.6x. 
If you don't want it, try use_auto_whitelist 0 in your local.cf.

Also be sure you're on 3.0.1 not 3.0.0, there were some memory-hog type 
bugfixes added in 3.0.1


Is there anything that can be done to reduce the usage? I thought that
perhaps it was because of Bayes being in SQL. But one server has it in
SQL and the other does not. Both are chewing up the same amount of memory.
SQL vs local really should only affect speed, not memory consumption.


interesting paper on SoBig's authorship

2004-11-01 Thread Justin Mason
http://authortravis.tripod.com/
http://www.geocities.com/author_travis/

very interesting!

--j.


Re: interesting paper on SoBig's authorship

2004-11-01 Thread Jeff Chan
On Monday, November 1, 2004, 2:28:42 PM, Justin Mason wrote:
 http://authortravis.tripod.com/
 http://www.geocities.com/author_travis/

 very interesting!

 --j.

Nice work, whoever the mystery authors are...

Jeff C.
-- 
Jeff Chan
mailto:[EMAIL PROTECTED]
http://www.surbl.org/



Re: interesting paper on SoBig's authorship

2004-11-01 Thread Tom Collins
Mirror of PDF (from Slashdot): http://wetsexygirl.com/WhoWroteSobig.pdf
Seriously.
On Nov 1, 2004, at 2:57 PM, Dan Barker wrote:
Must be interesting, it's over quota and won't render.
Dan
snip
http://www.geocities.com/author_travis/
very interesting!
--j.
--
Tom Collins  -  [EMAIL PROTECTED]
QmailAdmin: http://qmailadmin.sf.net/  Vpopmail: http://vpopmail.sf.net/
Info on the Sniffter hand-held Network Tester: http://sniffter.com/


Re: interesting paper on SoBig's authorship

2004-11-01 Thread Jeff Chan
On Monday, November 1, 2004, 2:57:02 PM, Dan Barker wrote:
 Must be interesting, it's over quota and won't render.

 Dan

It's up on the tripod site Justin mentioned:

  http://authortravis.tripod.com/


 snip
 http://www.geocities.com/author_travis/

 very interesting!

 --j.



Jeff C.
-- 
Jeff Chan
mailto:[EMAIL PROTECTED]
http://www.surbl.org/



Re: Error after upgrading to 3.0.1

2004-11-01 Thread Theo Van Dinter
On Mon, Nov 01, 2004 at 11:17:02PM -, marti wrote:
 ERROR!  spamassassin script is v3.00, but using modules v3.01!
 
 Any idea what script it's referring to? spamassassin --lint -D worked just
 fine, but I can't fire up spamd.

It's a fail-safe.  It means you are trying to use the spamassassin 3.0.0
script, but it finds the 3.0.1 modules.

-- 
Randomly Generated Tagline:
you might be a sys admin if you see the bumper sticker 'users are losers'
 and not realize it refers to drugs  - Unknown




RE: Error after upgrading to 3.0.1

2004-11-01 Thread marti
|-Original Message-
|From: Theo Van Dinter [mailto:[EMAIL PROTECTED] 
|Sent: 01 November 2004 23:17
|To: Spamassassin
|Subject: Re: Error after upgrading to 3.0.1
|
|On Mon, Nov 01, 2004 at 11:17:02PM -, marti wrote:
| ERROR!  spamassassin script is v3.00, but using modules 
|v3.01!
| 
| Any idea what script it's referring to? spamassassin --lint -D worked 
| just fine, but I can't fire up spamd.
|
|It's a fail-safe.  It means you are trying to use the 
|spamassassin 3.0.0 script, but it finds the 3.0.1 modules.
|
|--

So are you saying it's spamd that's not being updated?
Do I need to copy that in manually then?



FW: Lint fails on latest bogus-virus-warnings.cf

2004-11-01 Thread Alan Munday
 -Original Message-
 From: Mike Zanker [mailto:[EMAIL PROTECTED] 
 Sent: Monday, November 01, 2004 6:43 AM
 To: users@spamassassin.apache.org
 Subject: Lint fails on latest bogus-virus-warnings.cf
 
 
  From RulesDuJour last night:
 
 Lint output: Relative score without previous setting in SpamAssassin 
 configuration, skipping: score VIRUS_WARNING412   Unhelpful 'virus warning' (412)
 
 Thanks,
 
 Mike.
 

Just for clarification, after this update:

Lint fails on 3.0.1 here.

Lint does not fail for 2.6.3 here.

Alan