spamc Load Balancing

2008-04-21 Thread Christoph Petersen
Hi guys,

some days ago I started deploying SpamAssassin in a load-balanced
environment (though no real LB, just round-robin DNS with spamc's -d
switch). Watching the logs, during peaks one server is sometimes under heavy
load (> 90 spamd children) while the other is almost idle.

Maybe it would be a good idea to create something like a spamc daemon which
can distribute the load better among the children, so that both systems are
used more evenly.

Feedback is welcome - maybe I'm on the wrong track here, but the spam is not
getting any less...

BR
Christoph



Re: spamc Load Balancing

2008-04-21 Thread Michael Schwartzkopff
On Sunday, 20 April 2008 22:51, Christoph Petersen wrote:
 Hi guys,

 some days ago I started deploying spamassassin in a load balanced
 environment (though no real lb but round robin DNS lb with spamc's -d
 switch). When I watch the logs it's sometimes with peaks that one server is
 under heavy load (> 90 spamd children) and the other very low.

 Maybe it's a good idea to create something like a spamc deamon wich can
 distribute the load better among the childs so that both systems are better
 used.

 Feedback is welcome - maybe I'm on a very wrong train here  but spam is not
 getting less..

 BR
 Christoph

A really good load balancer is Linux Virtual Server (LVS, 
www.linuxvirtualserver.org). Try the localhost option to distribute load to the 
local node as well, AND use Linux-HA (heartbeat) for high availability.

Hint: run ldirectord as a resource in Linux-HA version 2. Also use pingd inside 
heartbeat to check the availability of the nodes.

It sounds difficult to set up, but once you get used to it, it is quite simple, 
works well and is VERY scalable.

-- 
Dr. Michael Schwartzkopff
MultiNET Services GmbH
Addresse: Bretonischer Ring 7; 85630 Grasbrunn; Germany
Tel: +49 - 89 - 45 69 11 0
Fax: +49 - 89 - 45 69 11 21
mob: +49 - 174 - 343 28 75

mail: [EMAIL PROTECTED]
web: www.multinet.de

Sitz der Gesellschaft: 85630 Grasbrunn
Registergericht: Amtsgericht München HRB 114375
Geschäftsführer: Günter Jurgeneit, Hubert Martens

---

PGP Fingerprint: F919 3919 FF12 ED5A 2801 DEA6 AA77 57A4 EDD8 979B
Skype: misch42


Re: Upgrading

2008-04-21 Thread hiram

Hi Jarif,

No, SpamAssassin is called by procmail. It seems it was a problem with a new
whitelist entry that was not configured properly. I have rolled back the changes
now, but I still receive a couple of bounced emails from the half hour during
which the changed configuration was active.

Thanks for the advice.

Best regards,

/Hiram

--
Is your Spamassassin started via an entry in /etc/postfix/master.cf?

I had such an installation at first, years ago, and it managed to do just as
you described: it bounced all my email back.

I reconfigured it so that spamc is called by maildrop (it could be procmail
too, of course). I think that is the better solution anyway, because I have no
need to send outgoing email to SA. If Postfix calls SA via master.cf, all
mail, including outgoing mail, will be scanned.

Best regards,
jarif

-- 
View this message in context: 
http://www.nabble.com/Upgrading-tp16630332p16806578.html
Sent from the SpamAssassin - Users mailing list archive at Nabble.com.



Re: Dnsbl checks

2008-04-21 Thread Justin Mason

William Taylor writes:
 I'm having some issues getting the dns blacklists to work on a box.
 I have an ip in an email that I have verified manually that its listed in 
 spamcop via dns query and via the webpage. However when I run the message 
 through spamassassin it doesn't produce a hit. When ran with -D I see it 
 queries all the blacklists but I never see anything indicating that it 
 matched them.
 
 Any thoughts on things I can check on to figure this out? 
 DCC,Razor,Pyzor works fine.

hi William --

check the resolv.conf configuration to ensure it's using a good
local nameserver; it may be hitting timeouts in SpamAssassin.

also, post the DNS debug logs... you may have to obscure the
blacklisted domain though.
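
As a rough sketch of both checks from the shell (message.eml stands in for
whatever file holds the sample mail; 127.0.0.2 is the conventional DNSBL test
entry, so the first query should get an answer whenever the local resolver is
handling DNSBL lookups at all, and "-D dns" restricts debug output to the DNS
channel):

  # does the local resolver answer DNSBL queries at all?
  host 2.0.0.127.bl.spamcop.net

  # re-scan the message with DNS debugging and look for spamcop lookups or timeouts
  spamassassin -D dns < message.eml 2>&1 | grep -i spamcop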

--j.


Dnsbl checks

2008-04-21 Thread William Taylor
I'm having some issues getting the DNS blacklists to work on a box.
I have an IP in an email which I have verified manually is listed in SpamCop, 
both via a DNS query and via the web page. However, when I run the message 
through SpamAssassin it doesn't produce a hit. When run with -D I see it query 
all the blacklists, but I never see anything indicating that it matched any of 
them.

Any thoughts on things I can check to figure this out? 
DCC, Razor and Pyzor work fine.

Thanks,
 William
-- 
William Taylor - [EMAIL PROTECTED]   Sonic.net
System Administrator   2260 Apollo Way
707.522.1000 (Voice)   Santa Rosa, CA 95407
707.547.2199 (Fax) http://www.sonic.net


Re: [sa-list] Re: Blogger URLs

2008-04-21 Thread Mark Martinec
On Monday 21 April 2008 06:27:57 Dan Mahoney, System Admin wrote:
 The possibility of catering the reporting protocols to different sites
 (i.e. the major free sites have their own reporting systems that might be
 better used).  It's beyond the scope of this thread, but are there any
 docs on how to write a reporting protocol?

This one is on an e-mail reporting format:

http://www.ietf.org/internet-drafts/draft-shafranovich-feedback-report-04.txt


  Mark


RE: spamc Load Balancing

2008-04-21 Thread Christoph Petersen
Hi Michael,

  Hi guys,
 
  some days ago I started deploying spamassassin in a load balanced
  environment (though no real lb but round robin DNS lb with spamc's -d
  switch). When I watch the logs it's sometimes with peaks that one
 server is
  under heavy load (> 90 spamd children) and the other very low.
 
  Maybe it's a good idea to create something like a spamc deamon wich
 can
  distribute the load better among the childs so that both systems are
 better
  used.
 
  Feedback is welcome - maybe I'm on a very wrong train here  but spam
 is not
  getting less..
 
  BR
  Christoph
 
 Real good loadbalancer is Linux Virtual Server (LVS,
 www.linuxvirtualserver.org). Try localhost option to distribute load to
 local
 node AND use Linux-HA (heartbeat) for high availabliliy.
 
 Hint: ldirectord as a resource in Linux-HA version 2. Also use pingd to
 check
 availability of the nodes inside heartbeat.
 
 Sounds difficult to set up, but when you get used to it it is quite
 simple,
 works good and ist VERY scalable.

I'm using round-robin load balancing via DNS right now, but it simply
switches hosts every time. What I would like is a small daemon or something
similar which keeps track of how many processes are running on each node,
i.e. of how the load is distributed. In my setup the load is sometimes
distributed very unevenly during peaks.

Can I do something like this with LVS? I haven't played around with LVS yet, so
I don't have any experience with it.

BR
Christoph



Re: spamc Load Balancing

2008-04-21 Thread Michael Schwartzkopff
On Monday, 21 April 2008 13:20, Christoph Petersen wrote:
 Hi Michael,
(...)
 I'm using round robin load balancing from DNS right now. But it's simply
 switching the host every time. What I would like to have a small daemon or
 something which keep track how many processes are running on each node so
 how the load is distributed. In my setup with peaks the load is sometimes
 distributed very unequally.

 Can I do something like this with LVS? I didn't play around with LVS yet so
 I haven't any experience yet..

 BR
 Christoph

Hi,

DNS has a failover time of ~60 seconds. Sometimes this is not acceptable.

LVS is just what you want. In a simple setup LVS cannot measure the actual 
load (i.e. uptime) of the nodes in the background and distribute new 
connections according to that number, BUT LVS does know the number of active 
connections to every node and can distribute load according to the least 
number of connections. You can even attach weights to the least-connections 
algorithm (wlc).
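
For illustration only, a minimal ipvsadm sketch of such a wlc-scheduled spamd
service; the virtual IP 192.168.0.10 and the two real-server addresses are
placeholders, and in practice ldirectord would normally maintain these entries
for you:

  # virtual spamd service on the VIP, scheduled by weighted least-connections
  ipvsadm -A -t 192.168.0.10:783 -s wlc
  # the two real spamd nodes, NAT-forwarded, with equal weights
  ipvsadm -a -t 192.168.0.10:783 -r 192.168.0.11:783 -m -w 1
  ipvsadm -a -t 192.168.0.10:783 -r 192.168.0.12:783 -m -w 1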

ldirectord checks every backend server for availability and distributes 
new connections only to available servers. Failover is measured in seconds, 
not in tens of seconds.

You can make the whole setup highly available with Linux-HA. LVS integrates very 
nicely into that framework. See also the Applications chapter (sorry: German!) 
of:
http://www.oreilly.de/catalog/linuxhaclusterger/

Christoph: For further questions please contact me off-list.


-- 
Dr. Michael Schwartzkopff
MultiNET Services GmbH
Addresse: Bretonischer Ring 7; 85630 Grasbrunn; Germany
Tel: +49 - 89 - 45 69 11 0
Fax: +49 - 89 - 45 69 11 21
mob: +49 - 174 - 343 28 75

mail: [EMAIL PROTECTED]
web: www.multinet.de

Sitz der Gesellschaft: 85630 Grasbrunn
Registergericht: Amtsgericht München HRB 114375
Geschäftsführer: Günter Jurgeneit, Hubert Martens

---

PGP Fingerprint: F919 3919 FF12 ED5A 2801 DEA6 AA77 57A4 EDD8 979B
Skype: misch42


Bayes DB growing without bound; expiry not working

2008-04-21 Thread Chris St. Pierre

I have two MXes, both writing Bayes data to a shared MySQL database.
Something is quite awry.

My Bayes database is _huge_:

-rw-rw 1 mysql mysql  145044032 Apr 21 08:09 bayes_seen.MYD
-rw-rw 1 mysql mysql  189879296 Apr 21 08:09 bayes_seen.MYI
-rw-rw 1 mysql mysql 1881960784 Apr 21 08:09 bayes_token.MYD
-rw-rw 1 mysql mysql 4650297344 Apr 21 08:09 bayes_token.MYI

mysql> select count(*) from bayes_token;
+--+
| count(*) |
+--+
| 85544225 |
+--+
1 row in set (0.01 sec)

mysql> select count(*) from bayes_seen;
+--+
| count(*) |
+--+
|  2266388 |
+--+
1 row in set (0.00 sec)

But sa-learn doesn't seem to know that:

# sa-learn --dump magic
0.000  0  3  0  non-token data: bayes db version
0.000  0  0  0  non-token data: nspam
0.000  0  0  0  non-token data: nham
0.000  0  0  0  non-token data: ntokens
0.000  0 2147483647  0  non-token data: oldest atime
0.000  0  0  0  non-token data: newest atime
0.000  0  0  0  non-token data: last journal sync atime
0.000  0 1208783399  0  non-token data: last expiry atime
0.000  0  0  0  non-token data: last expire atime delta
0.000  0  0  0  non-token data: last expire reduction 
count

(Output is the same on both MXes.)

Forcing expiry does nothing; with debugging on, I get:

[13442] dbg: bayes: expiry starting
[13442] dbg: bayes: database connection established
[13442] dbg: bayes: found bayes db version 3
[13442] dbg: bayes: Using userid: 6
[13442] dbg: bayes: expiry check keep size, 0.75 * max: 112500
[13442] dbg: bayes: token count: 0, final goal reduction size: -112500
[13442] dbg: bayes: reduction goal of -112500 is under 1,000 tokens, skipping 
expire
[13442] dbg: bayes: expiry completed

Consequently, my database is growing, apparently without bound.

Any ideas how I can get expiry to work properly again?  (Hopefully
without completely dumping the database?)

Thanks!

Chris St. Pierre
Unix Systems Administrator
Nebraska Wesleyan University


Re: Bayes DB growing without bound; expiry not working

2008-04-21 Thread Michael Parker


On Apr 21, 2008, at 8:17 AM, Chris St. Pierre wrote:


Consequently, my database is growing, apparently without bound.

Any ideas how I can get expiry to work properly again?  (Hopefully
without completely dumping the database?)



select * from bayes_vars;

What user do you run bayes under on your MXs?

Michael



Re: Bayes DB growing without bound; expiry not working

2008-04-21 Thread Chris St. Pierre

On Mon, 21 Apr 2008, Michael Parker wrote:


select * from bayes_vars;


...
2289 rows in set (0.00 sec)


What user do you run bayes under on your MXs?


I think you've found the issue.  We run as spamd.

# sa-learn -u spamd --dump magic
0.000  0  3  0  non-token data: bayes db version
0.000  0 1492123  0  non-token data: nspam
0.000  0 660634  0  non-token data: nham
0.000  0   73178711  0  non-token data: ntokens
0.000  0 1189775610  0  non-token data: oldest atime
0.000  0 1208785034  0  non-token data: newest atime
0.000  0  0  0  non-token data: last journal sync atime
0.000  0  0  0  non-token data: last expiry atime
0.000  0  0  0  non-token data: last expire atime delta
0.000  0  0  0  non-token data: last expire reduction 
count

That leads to two issues:

1.  I need to straighten things out and figure out why I've got a
strange mix of per-user and global data in my Bayes DB.  Whee.

2.  Does this mean that, if I use per-user Bayes, I have to run
expiration as each user individually?

Manual expiration was recommended to me a long time ago as a way to
increase database performance, but it seems like it may not be worth
it if I have to run N forced expirations, for potentially large values
of N.

Thanks for your help.

Chris St. Pierre
Unix Systems Administrator
Nebraska Wesleyan University



Re: gpg failure on sa-update due to non-cross-certified key

2008-04-21 Thread Vivek Khera


On Apr 18, 2008, at 11:30 AM, McDonald, Dan wrote:


http://wiki.apache.org/spamassassin/SaUpdateKeyNotCrossCertified?highlight=%28update%29

I had the same thing happen and all is well now.


Ah, thank you.  I dug around the wiki for an hour last night and didn't
find this article...



I cut/pasted the error message that gpg issued from the sa-update -D  
output, and this page was the first or second link in google.




Re: Bayes DB growing without bound; expiry not working

2008-04-21 Thread Michael Parker


On Apr 21, 2008, at 8:40 AM, Chris St. Pierre wrote:

On Mon, 21 Apr 2008, Michael Parker wrote:


select * from bayes_vars;


...
2289 rows in set (0.00 sec)


What user do you run bayes under on your MXs?


I think you've found the issue.  We run as spamd.

# sa-learn -u spamd --dump magic
0.000  0  3  0  non-token data: bayes db  
version

0.000  01492123  0  non-token data: nspam
0.000  0 660634  0  non-token data: nham
0.000  0   73178711  0  non-token data: ntokens
0.000  0 1189775610  0  non-token data: oldest atime
0.000  0 1208785034  0  non-token data: newest atime
0.000  0  0  0  non-token data: last journal  
sync atime
0.000  0  0  0  non-token data: last expiry  
atime
0.000  0  0  0  non-token data: last expire  
atime delta
0.000  0  0  0  non-token data: last expire  
reduction count


That leads to two issues:

1.  I need to straighten things out and figure out why I've got a
strange mix of per-user and global data in my Bayes DB.  Whee.



You should use the bayes override username (bayes_sql_override_username) if  
you want a global DB, and then just sa-learn -u username --clear everything  
else (PITA, I know).  I personally don't believe individual Bayes DBs are an  
issue, if you've got the space and CPU on your database machine.  See below  
for some solutions.
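
A rough sketch of that approach (the names are made up; bayes_sql_override_username
is the local.cf option meant by "the bayes override username", and the stray
per-user entries would be the extra rows showing up in bayes_vars):

  # local.cf:  bayes_sql_override_username globalbayes
  # then wipe each stray per-user database in turn, e.g.:
  sa-learn -u someolduser --clear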





2.  Does this mean that, if I use per-user Bayes, I have to run
expiration as each user individually?

Manual expiration was recommended to me a long time ago as a way to
increase database performance, but it seems like it may not be worth
it if I have to run N forced expirations, for potentially large values
of N.



This is true for DBM-based Bayes databases, but generally (with an  
exception I'll talk about in a second) MySQL-based Bayes expiration is  
very fast (just a few seconds).  I would go ahead and turn auto-expire  
on, after running a manual expire to clear out the current backlog.


One reason that expiration slows down is an unoptimized DB.  I've found,  
for my small setup, that if I run optimization every couple of weeks I  
get much better performance. It looks like you get a lot more traffic,  
so I would recommend running it more often.  With frequent optimizations  
and auto-expire your database will stay in much better shape.
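
Putting those suggestions together as a rough sketch; the database name
"sa_bayes" and the table names are assumptions based on the stock SQL schema,
and bayes_auto_expire is the local.cf switch for auto-expiry:

  # one-off: clear out the current backlog for the shared user
  sa-learn -u spamd --force-expire
  # local.cf should then carry:  bayes_auto_expire 1
  # periodically (e.g. from cron) optimize the Bayes tables so expiry stays fast
  mysql sa_bayes -e 'OPTIMIZE TABLE bayes_token, bayes_seen;'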


Michael



Thanks for your help.

Chris St. Pierre
Unix Systems Administrator
Nebraska Wesleyan University





Re: flooded with undetected spam

2008-04-21 Thread Benny Pedersen

On Mon, April 21, 2008 04:10, Spamassassin List wrote:
 My inbox is flooded by some new spams. Any idea how do I block it?
 http://202.42.86.77/1.eml
 http://202.42.86.77/2.eml

both hit on Spamhaus


Benny Pedersen
Need more webspace ? http://www.servage.net/?coupon=cust37098



S-P-A-M Extra long domain names rule?

2008-04-21 Thread Bookworm

I'm starting to see some new phishing/scam attempts.

What I was thinking is that it might be worthwhile to add a rule that doesn't 
so much check links as count periods. 


Here's the example that just came in my email -

(removing http:// ) - 
connect.colonialbank.webbizcompany.c6b5r64whf623lx426xq.secureserv.onlineupdatemirror81105.colonial.certificate.update.65tw.com/logon.htm


Notice that there are ten periods.  That makes it an eleventh-level 
domain name? :)


In general, you see fewer than four periods in a domain name - but I've 
seen this sort of behavior in spams before. 


Thoughts?

(I'm just a general administrator.  I use other people's rules; I 
haven't had time to learn to write my own.)


BW



Re: S-P-A-M Extra long domain names rule?

2008-04-21 Thread Randy Ramsdell

Bookworm wrote:

I'm starting to see some new phishing/scam attempts.

What I was thinking was that it might be worthwhile to add a rule to 
not so much check links, but count periods.

Here's the example that just came in my email -

(removing http:// ) - 
connect.colonialbank.webbizcompany.c6b5r64whf623lx426xq.secureserv.onlineupdatemirror81105.colonial.certificate.update.65tw.com/logon.htm 



Notice that there are ten periods.  That makes it be an eleventh level 
domain name? :)


In general, you see fewer than four periods in a domain name - but 
I've seen this sort of behavior in spams before.

Thoughts?

(I'm just a general administrator.  I use other people's rules, I 
haven't had time to learn to make my own)


BW

I haven't, but I think a rule for this would be a good idea. I always 
write rules and then check them every so often with a custom Perl script.


Re: S-P-A-M Extra long domain names rule?

2008-04-21 Thread Benny Pedersen

On Mon, April 21, 2008 19:51, Bookworm wrote:

 Notice that there are ten periods.  That makes it be an eleventh level
 domain name? :)

the URI is just a domain with a long tracking subdomain; it's still a domain

see 20_uri_tests.cf for examples of how to make your own rules against it :-)

 Thoughts?

http://uribl.com/



Benny Pedersen
Need more webspace ? http://www.servage.net/?coupon=cust37098



Re: S-P-A-M Extra long domain names rule?

2008-04-21 Thread Benny Pedersen

On Mon, April 21, 2008 19:59, Randy Ramsdell wrote:

 I haven't, but I think a rule for this would be a good idea. I always
 write rules then check them every so often with a custom perl script.

body LOGIN_RULE /\.com\/logon\./i
score LOGIN_RULE 0.1
describe LOGIN_RULE apache does not use that as default index file

a start :)


Benny Pedersen
Need more webspace ? http://www.servage.net/?coupon=cust37098



Re: flooded with undetected spam

2008-04-21 Thread Evan Platt

1.eml hits a 12.7 on my system:

 -- 
--

 1.3 RCVD_IN_BL_SPAMCOP_NET RBL: Received via a relay in bl.spamcop.net
 [Blocked - see 
http://www.spamcop.net/bl.shtml?201.233.220.168]

 3.1 RCVD_IN_XBL        RBL: Received via a relay in Spamhaus XBL
[201.233.220.168 listed in 
sbl-xbl.spamhaus.org]
 2.6 NO_DNS_FOR_FROM    DNS: Envelope sender has no MX or A DNS records

 0.5 RCVD_IN_PBL        RBL: Received via a relay in Spamhaus PBL
[201.233.220.168 listed in zen.spamhaus.org]
 5.0 BOTNET Relay might be a spambot or virusbot
  
[botnet0.7,ip=201.233.220.168,maildomain=crochan.com,nordns]

 0.0 HTML_MESSAGE   BODY: HTML included in message
 0.1 RDNS_NONE  Delivered to trusted network by a host with 
no rDNS


2.eml hits  a 9.9

Content analysis details:   (9.9 points, 5.0 required)

 pts rule name  description
 -- 
--
 2.0 RCVD_IN_SORBS_DUL  RBL: SORBS: sent directly from dynamic IP 
address

[201.229.148.211 listed in dnsbl.sorbs.net]
 0.5 RCVD_IN_PBL        RBL: Received via a relay in Spamhaus PBL
[201.229.148.211 listed in zen.spamhaus.org]
 0.7 DATE_IN_PAST_06_12 Date: is 6 to 12 hours before Received: date
 5.0 BOTNET Relay might be a spambot or virusbot
[botnet0.7,ip=201.229.148.211,hostname=tdev148-211.codetel.net.do,maildomain=smogexpressbelmont.com,baddns,client,ipinhostname] 


 0.0 HTML_MESSAGE   BODY: HTML included in message
 1.6 HTML_FONT_SIZE_LARGE   BODY: HTML font size is large
 0.1 RDNS_NONE  Delivered to trusted network by a host with 
no rDNS





Spamassassin List wrote:

Hi,

My inbox is flooded by some new spams. Any idea how do I block it?

http://202.42.86.77/1.eml
http://202.42.86.77/2.eml

Best regards



  




subscribe

2008-04-21 Thread Chris
 



Re: subscribe

2008-04-21 Thread mouss

Chris wrote:
 



  


http://wiki.apache.org/spamassassin/MailingLists


is this list open?


Re: S-P-A-M Extra long domain names rule?

2008-04-21 Thread Jack Pepper



Maybe try these:

describe SILLYLONGDOMAINURI  Includes a very long domain name (> 8 levels)
uri SILLYLONGDOMAINURI  /^http?\:\/\/([a-z0-9_\-A-Z]+\.){8,}/
score SILLYLONGDOMAINURI  1.8

describe SILLYDOTSDOMAINURI  Includes a multiple dots domain name
body SILLYDOTSDOMAINURI   /^http?\:\/\/([a-z0-9_\-A-Z]+\.)+\./
score SILLYDOTSDOMAINURI 1.8

jp


Quoting Bookworm [EMAIL PROTECTED]:


I'm starting to see some new phishing/scam attempts.

What I was thinking was that it might be worthwhile to add a rule to  
not so much check links, but count periods. Here's the example that  
just came in my email -


(removing http:// ) -  
connect.colonialbank.webbizcompany.c6b5r64whf623lx426xq.secureserv.onlineupdatemirror81105.colonial.certificate.update.65tw.com/logon.htm


Notice that there are ten periods.  That makes it be an eleventh  
level domain name? :)


In general, you see fewer than four periods in a domain name - but  
I've seen this sort of behavior in spams before. Thoughts?


(I'm just a general administrator.  I use other people's rules, I  
haven't had time to learn to make my own)


BW




--
Framework?  I don't need no steenking framework!


@fferent Security Labs:  Isolate/Insulate/Innovate  
http://www.afferentsecurity.com




Re: subscribe

2008-04-21 Thread Benny Pedersen

On Mon, April 21, 2008 21:52, mouss wrote:
 Chris wrote:
 http://wiki.apache.org/spamassassin/MailingLists
 is this list open?

or Chris wanted to be, or is, or was, only owner and Chris now knows :-)



Benny Pedersen
Need more webspace ? http://www.servage.net/?coupon=cust37098



Re: S-P-A-M Extra long domain names rule?

2008-04-21 Thread Jack Pepper

OOpsie - typo:

body should have been uri in the second one.


describe SILLYDOTSDOMAINURI  Includes a multiple dots domain name
uri SILLYDOTSDOMAINURI   /^http?\:\/\/([a-z0-9_\-A-Z]+\.)+\./
score SILLYDOTSDOMAINURI 1.8


jp
Quoting Jack Pepper [EMAIL PROTECTED]:




Maybe try these:

describe SILLYLONGDOMAINURI  Includes a very long domain name gt 8 levels
uri SILLYLONGDOMAINURI  /^http?\:\/\/([a-z0-9_\-A-Z]+\.){8,}/
score SILLYLONGDOMAINURI  1.8

describe SILLYDOTSDOMAINURI  Includes a multiple dots domain name
body SILLYDOTSDOMAINURI   /^http?\:\/\/([a-z0-9_\-A-Z]+\.)+\./
score SILLYDOTSDOMAINURI 1.8

jp


Quoting Bookworm [EMAIL PROTECTED]:


I'm starting to see some new phishing/scam attempts.

What I was thinking was that it might be worthwhile to add a rule  
to not so much check links, but count periods. Here's the example  
that just came in my email -


(removing http:// ) -  
connect.colonialbank.webbizcompany.c6b5r64whf623lx426xq.secureserv.onlineupdatemirror81105.colonial.certificate.update.65tw.com/logon.htm


Notice that there are ten periods.  That makes it be an eleventh  
level domain name? :)


In general, you see fewer than four periods in a domain name - but  
I've seen this sort of behavior in spams before. Thoughts?


(I'm just a general administrator.  I use other people's rules, I  
haven't had time to learn to make my own)


BW




--
Framework?  I don't need no steenking framework!


@fferent Security Labs:  Isolate/Insulate/Innovate  
http://www.afferentsecurity.com




--
Framework?  I don't need no steenking framework!


@fferent Security Labs:  Isolate/Insulate/Innovate  
http://www.afferentsecurity.com




Re: S-P-A-M Extra long domain names rule?

2008-04-21 Thread Benny Pedersen

On Mon, April 21, 2008 21:59, Jack Pepper wrote:
 Maybe try these:
 describe SILLYLONGDOMAINURI  Includes a very long domain name gt 8 levels
 uri SILLYLONGDOMAINURI  /^http?\:\/\/([a-z0-9_\-A-Z]+\.){8,}/
 score SILLYLONGDOMAINURI  1.8

 describe SILLYDOTSDOMAINURI  Includes a multiple dots domain name
 body SILLYDOTSDOMAINURI   /^http?\:\/\/([a-z0-9_\-A-Z]+\.)+\./
 score SILLYDOTSDOMAINURI 1.8

X-Spam-Status: No, score=-1.224 tagged_above=-20 required=5
 tests=[ADJ_URIBL_BLACK=-1, ADJ_URIBL_JP_SURBL=-1, AWL=-1.361,
 GAPPY_SUBJECT=2.001, MAILLISTS=-2.5, MIME_QP_LONG_LINE=1.819,
 RCVD_IN_DNSWL_MED=-4, SPF_PASS=-0.001, URIBL_BLACK=1.961,
 URIBL_JP_SURBL=2.857]


so SURBL and URIBL now hit that domain


Benny Pedersen
Need more webspace ? http://www.servage.net/?coupon=cust37098



Re: S-P-A-M Extra long domain names rule?

2008-04-21 Thread John Hardin

On Mon, 21 Apr 2008, Jack Pepper wrote:


OOpsie - typo:

body should have been uri in the second one.

describe SILLYDOTSDOMAINURI  Includes a multiple dots domain name
uri SILLYDOTSDOMAINURI   /^http?\:\/\/([a-z0-9_\-A-Z]+\.)+\./
score SILLYDOTSDOMAINURI 1.8


Plus, you probably meant /^https?

--
 John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
 [EMAIL PROTECTED]FALaholic #11174 pgpk -a [EMAIL PROTECTED]
 key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
  Vista is at best mildly annoying and at worst makes you want to
  rush to Redmond, Wash. and rip somebody's liver out.  -- Forbes
---
 34 days until the Mars Phoenix lander arrives at Mars


Re: S-P-A-M Extra long domain names rule?

2008-04-21 Thread mouss

Bookworm wrote:

I'm starting to see some new phishing/scam attempts.

What I was thinking was that it might be worthwhile to add a rule to 
not so much check links, but count periods.

Here's the example that just came in my email -

(removing http:// ) - 
connect.colonialbank.webbizcompany.c6b5r64whf623lx426xq.secureserv.onlineupdatemirror81105.colonial.certificate.update.65tw.com/logon.htm 





it doesn't resolve from here at this time, so I wonder what's the goal...


untested yet:

uri   URI_LONGISH m|https?://[\w\.-]{65}|
score   URI_LONGISH   3.0

uri  URI_GRDNSX m|https?://[^/]*[x\d]{7}|
score   URI_GRDNSX  1.5

uri  URI_LONGLABEL m|http?://[^/]*\w{16}|
score   URI_LONGLABEL0.5

uri  URI_DEEP5   m|https?://[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.|
score  URI_DEEP5   0.1

uri  URI_DEEP6   m|https?://[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.|
score  URI_DEEP6   1.0

uri  URI_DEEP7   
m|https?://[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.|

score  URI_DEEP7   2.0

Notice that there are ten periods.  That makes it be an eleventh level 
domain name? :)


In general, you see fewer than four periods in a domain name - but 
I've seen this sort of behavior in spams before.

Thoughts?

(I'm just a general administrator.  I use other people's rules, I 
haven't had time to learn to make my own)


BW





Re: S-P-A-M Extra long domain names rule?

2008-04-21 Thread Jack Pepper

Quoting John Hardin [EMAIL PROTECTED]:



Plus, you probably meant /^https?



right you are, sir.  thx

--
Framework?  I don't need no steenking framework!


@fferent Security Labs:  Isolate/Insulate/Innovate  
http://www.afferentsecurity.com




Re: flooded with undetected spam

2008-04-21 Thread mouss

Benny Pedersen wrote:

On Mon, April 21, 2008 04:10, Spamassassin List wrote:
  

My inbox is flooded by some new spams. Any idea how do I block it?
http://202.42.86.77/1.eml
http://202.42.86.77/2.eml



both hits on spamhaus

  


but the question I would have is what is the '0' in

Received: from unknown (HELO tdev148-211.codetel.net.do) (201.229.148.211)
 by 0 with SMTP; 20 Apr 2008 16:27:31 -

is this a new MTA?




Re: flooded with undetected spam

2008-04-21 Thread Benny Pedersen

On Mon, April 21, 2008 23:13, mouss wrote:

 Received: from unknown (HELO tdev148-211.codetel.net.do) (201.229.148.211)
   by 0 with SMTP; 20 Apr 2008 16:27:31 -

 is this a new MTA?

in that case nobody will want to use it :-)

but the body also has a fuzzy dot-TLD that is listed in SURBL and URIBL; maybe
the spammer needs to get some fresh air to be smart :-)


Benny Pedersen
Need more webspace ? http://www.servage.net/?coupon=cust37098



Re: S-P-A-M Extra long domain names rule?

2008-04-21 Thread Karsten Bräckelmann
On Mon, 2008-04-21 at 22:16 +0200, mouss wrote:
 untested yet:

 uri  URI_DEEP5   m|https?://[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.|
 score  URI_DEEP5   0.1
 
 uri  URI_DEEP6   m|https?://[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.|
 score  URI_DEEP6   1.0
 
 uri  URI_DEEP7   
 m|https?://[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.|
 score  URI_DEEP7   2.0

Beware, those are adding up. Since you didn't anchor the end of the RE
to ($|/), whatever hits URI_DEEP7 hits the previous ones, too. Effective
score: 3.1

They don't work anyway. ;)  You are testing for single chars between the
dots. And the '-' should be first in a char class, if it is to represent
itself. Also, I'd prefer to keep them cleaner and more readable using
quantifiers, rather than copying parts 7 times...

uri  URI_DEEP7  m,https?://([-\w]+\.){6},

The above forces 6 dots, and thus 7 levels. Hits on even longer URIs,
too -- the same constraint of adding scores applies here.

Oh, and yes -- this one is untested, too. :)
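
If you want to sanity-check such a pattern before turning it into a rule, a
quick shell one-liner will do; the hostname below is a made-up eight-level
example, not taken from any real spam:

  perl -le 'print "hit" if "http://a.b.c.d.e.f.g.example.com/x" =~ m,https?://([-\w]+\.){6},'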

  guenther


-- 
char *t=[EMAIL PROTECTED];
main(){ char h,m=h=*t++,*x=t+2*h,c,i,l=*x,s=0; for (i=0;il;i++){ i%8? c=1:
(c=*++x); c128  (s+=h); if (!(h=1)||!t[s+h]){ putchar(t[s]);h=m;s=0; }}}



Re: S-P-A-M Extra long domain names rule?

2008-04-21 Thread Karsten Bräckelmann
On Mon, 2008-04-21 at 14:59 -0500, Jack Pepper wrote:
 Maybe try these:
 
 describe SILLYLONGDOMAINURI  Includes a very long domain name gt 8 levels
 uri SILLYLONGDOMAINURI  /^http?\:\/\/([a-z0-9_\-A-Z]+\.){8,}/
 score SILLYLONGDOMAINURI  1.8
 
 describe SILLYDOTSDOMAINURI  Includes a multiple dots domain name
 body SILLYDOTSDOMAINURI   /^http?\:\/\/([a-z0-9_\-A-Z]+\.)+\./

The latter won't hit on correct URIs. The first part in parenthesis ends
with a dot -- followed by a dot.

  guenther


-- 
char *t=[EMAIL PROTECTED];
main(){ char h,m=h=*t++,*x=t+2*h,c,i,l=*x,s=0; for (i=0;il;i++){ i%8? c=1:
(c=*++x); c128  (s+=h); if (!(h=1)||!t[s+h]){ putchar(t[s]);h=m;s=0; }}}



Re: S-P-A-M Extra long domain names rule?

2008-04-21 Thread Karsten Bräckelmann
On Tue, 2008-04-22 at 01:29 +0200, Karsten Bräckelmann wrote:
 On Mon, 2008-04-21 at 14:59 -0500, Jack Pepper wrote:
  Maybe try these:
  
  describe SILLYLONGDOMAINURI  Includes a very long domain name gt 8 levels
  uri SILLYLONGDOMAINURI  /^http?\:\/\/([a-z0-9_\-A-Z]+\.){8,}/
  score SILLYLONGDOMAINURI  1.8
  
  describe SILLYDOTSDOMAINURI  Includes a multiple dots domain name
  body SILLYDOTSDOMAINURI   /^http?\:\/\/([a-z0-9_\-A-Z]+\.)+\./
 
 The latter won't hit on correct URIs. The first part in parenthesis ends
 with a dot -- followed by a dot.

Oops. Upon re-reading the "silly" in the rule name and the "multiple
dots" in the description, this might actually have been intentional. :)

Have you ever seen these? Would it work -- does any MUA or browser
silently collapse multiple dots?

  guenther


-- 
char *t=[EMAIL PROTECTED];
main(){ char h,m=h=*t++,*x=t+2*h,c,i,l=*x,s=0; for (i=0;il;i++){ i%8? c=1:
(c=*++x); c128  (s+=h); if (!(h=1)||!t[s+h]){ putchar(t[s]);h=m;s=0; }}}



Re: S-P-A-M Extra long domain names rule?

2008-04-21 Thread Theo Van Dinter
I haven't run any real statistics about this, but it's worth realizing
that unless there's a significant number of spams that have this behavior,
a rule probably costs more in resource use than it provides in hits.

A quick:

pcregrep -ri 'http://(?:[^/.]+\.){7}'

in my corpus shows about 20 spam hits in some 245000 mails.  There could be
reasons this RE wouldn't hit, but in general I wouldn't bother.

On Tue, Apr 22, 2008 at 01:24:37AM +0200, Karsten Bräckelmann wrote:
 On Mon, 2008-04-21 at 22:16 +0200, mouss wrote:
  untested yet:
 
  uri  URI_DEEP5   m|https?://[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.|
  score  URI_DEEP5   0.1
  
  uri  URI_DEEP6   m|https?://[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.|
  score  URI_DEEP6   1.0
  
  uri  URI_DEEP7   
  m|https?://[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.[\w-]\.|
  score  URI_DEEP7   2.0
 
 Beware, those are adding up. Since you didn't anchor the end of the RE
 to ($|/), whatever hits URI_DEEP7 hits the previous ones, too. Effective
 score: 3.1
 
 They don't work anyway. ;)  You are testing for single chars between the
 dots. And the '-' should be first in a char class, if it is to represent
 itself. Also, I'd prefer to keep them cleaner and more readable using
 quantifiers, rather than copying parts 7 times...
 
 uri  URI_DEEP7  m,https?://([-\w]+\.){6},
 
 The above forces 6 dots, and thus 7 levels. Hits on even longer URIs,
 too -- the same constraint of adding scores applies here.
 
 Oh, and yes -- this one is untested, too. :)
 
   guenther
 
 
 -- 
 char *t=[EMAIL PROTECTED];
 main(){ char h,m=h=*t++,*x=t+2*h,c,i,l=*x,s=0; for (i=0;il;i++){ i%8? c=1:
 (c=*++x); c128  (s+=h); if (!(h=1)||!t[s+h]){ putchar(t[s]);h=m;s=0; }}}

-- 
Randomly Selected Tagline:
Hear Me, California!  Tomorrow you vote.  Again.  Good luck, and I hope
 you get the Governor you deserve.  I think it was Adlai Stevenson who said
 that there's nothing more inspiring in human society than the spectacle
 of the democratic process being bizarrely subverted by a well-funded
 partisan exploitation of a constitutional loophole.  How true that is.
 - Adam Felber, http://www.felbers.net/mt/archives/001654.html




Re: S-P-A-M Extra long domain names rule?

2008-04-21 Thread Karsten Bräckelmann
On Mon, 2008-04-21 at 19:35 -0400, Theo Van Dinter wrote:
 I haven't run any real statistics about this, but it's worth realizing
 that unless there's a significant number of spams that have this behavior,
 a rule probably costs more in resource use than it provides in hits.

Yeah. I didn't say anything about this being useful or not. Merely
pointing out issues with the already posted rules.

FWIW, I explicitly mentioned the rule to be untested, because I am not
running it. I can't recall ever having seen something like this in low
scoring spam. I occasionally do see 5 levels in *phishing* mail, which
gets caught without SA even touching 'em.

  guenther


 A quick:
 
 pcregrep -ri 'http://(?:[^/.]+\.){7}'
 
 in my corpus shows about 20 spam hits in some 245000 mails.  There could be
 reasons this RE wouldn't hit, but in general I wouldn't bother.

-- 
char *t=[EMAIL PROTECTED];
main(){ char h,m=h=*t++,*x=t+2*h,c,i,l=*x,s=0; for (i=0;il;i++){ i%8? c=1:
(c=*++x); c128  (s+=h); if (!(h=1)||!t[s+h]){ putchar(t[s]);h=m;s=0; }}}



Re: S-P-A-M Extra long domain names rule?

2008-04-21 Thread Jack Pepper

Quoting Karsten Bräckelmann [EMAIL PROTECTED]:




 describe SILLYDOTSDOMAINURI  Includes a multiple dots domain name
 body SILLYDOTSDOMAINURI   /^http?\:\/\/([a-z0-9_\-A-Z]+\.)+\./


Have you ever seen these? Would it work, does any MUA or browser
silently collapse multiple dots?



I saw one of these in a phishing email.  I didn't know if it was  
supposed to be that way or not, but I was quite curious.  Firefox  
tries to connect to http://www..google.com . (click it and see)


Firefox will also try to connect to http://www.*.google.com .  On the  
blackhole DNS discussion boards, there were users reporting seeing  
wildcard (*) DNS entries in phishing emails.  Additionally, Yahoo and  
Flash both use wildcard DNS entries in their generated URLs. Is this  
SA evasion?


So as I pondered it, it seemed plausible that a phisher could create a  
zero-length subdomain which would evade scanning by regex processors  
(like SA) because it would not parse out as a valid URL.  But the  
browser will still try to connect.  Is this SA evasion?  Seems quite  
plausible.


Next up:  an SA rule to detect "http://" followed by an invalid URL!

jp



--
Framework?  I don't need no steenking framework!


@fferent Security Labs:  Isolate/Insulate/Innovate  
http://www.afferentsecurity.com




Re: S-P-A-M Extra long domain names rule?

2008-04-21 Thread Theo Van Dinter
On Mon, Apr 21, 2008 at 10:26:02PM -0500, Jack Pepper wrote:
 I saw one of these in a phishing email.  I didn't know if it was  
 supposed to be that way or not, but I was quite curious.  Firefox  
 tries to connect to http://www..google.com . (click it and see)

Firefox can't find the server at www..google.com.

Doesn't seem like a good tactic.

 Firefox will also try to connect to http://www.*.google.com .

Firefox can't find the server at www.*.google.com.

 So as I pondered it, it seemed plausible that a phisher could create a  
 zero-length subdomain which would evade scanning by regex processors  
 (like SA) because it would not parse out as a valid URL.  But the  
 browser will still try to connect.  Is this SA evasion?  Seems quite  
 plausible.

Doesn't work.  I put "http://www..google.com" in both text/plain and
text/html, and SA finds it and parses out google.com.

SA found "http://www.*.google.com", domain of google.com, as a text/html href.
It doesn't find it as a parsed URL.

-- 
Randomly Selected Tagline:
 Zoidberg: So many memories, so many strange fluids gushing out 
of patients' bodies




Re: can we make AWL ignore mail from self to self?

2008-04-21 Thread Jo Rhett

Matt Kettler wrote:
There's 
nothing in trusted networks, I don't trust anything...


Jo, that's impossible in SpamAssassin. You cannot have an empty trust; it 
doesn't make any logical sense, and it would cause SpamAssassin to fail 
miserably.


I should rather have said trust is only localhost.

If you don't declare a trusted_networks, SA will auto-guess for you. 
(And the auto-guesser is notorious for failing if your MX is NAT mapped)


And please, understand that trust here means trusted to never forge a 
received header not trusted to never relay any spam.


I know this.

In SpamAssassin, under-trusting is BAD. It is just as bad as 
over-trusting. SA needs at least one trustworthy Received header to work 
with.


How and why?  Are you saying I *must* have a 2nd-level MX host for SA to 
work?  That's not my experience, and 2-layer relays are backscatter 
sources.  Milter from the local MTA works just fine.


Also, to work properly, SA needs to be able to determine what is a part 
of your network, and what isn't. Unless you declare internal_networks 
separately, it bases internal vs external on the trust.


There is no network.  There is only a single host.  I don't control any 
other host on the subnet.


"Trust no-one" is NOT a valid option, and would actually result in the
problem you're suffering from. After all, if no headers are trusted, all 
email comes from no server, so SA would never be able to tell the 
difference between an email you really sent and a forgery from the outside.


This statement parses as nonsense.  SA can't parse an e-mail because it 
doesn't trust the source?  Isn't that all e-mail?


If your trust path is working properly, SA knows the difference. If it's 
not working, you get a broken AWL, broken RBLs, broken ALL_TRUSTED, and 
dozens of other broken things.


Okay, seriously I think you're both underestimating my understanding of 
this and further confusing the matter by making all sorts of unclear 
claims that don't reflect in reality.


I get trust paths.  This issue I reported is not related to trust paths. 
 It's not a broken trust path problem.  The e-mail came from an 
untrusted source, but was given a negative AWL score based on the sender 
name.  That has nothing to do with trust.


Re: can we make AWL ignore mail from self to self?

2008-04-21 Thread Jo Rhett

John Hardin wrote:
I'm only suggesting bypassing SA for mail that originates on the local 
network and is destined to the local network.


No.   I don't trust every user who can authenticate to this host to run 
active anti-virus on their hosts.  I scan all mail, everywhere.


And again, this isn't about local mail marked as spam.  It's about 
non-local mail being marked as ham.




Re: can we make AWL ignore mail from self to self?

2008-04-21 Thread Jo Rhett

Bob Proulx wrote:

Who to forge?  The answer is Everyone!  Any address that can be
obtained from a spam-virus infected PC and any address that can be
harvested from a web page.  Forge them all.  They are (mostly) valid
email addresses and will pass sender verification.  Send To: and From:
all of them.


You're going out of your way to miss the point.  That's hard work

Yes, a spammer can forge anyone.  Can they forge the exact e-mail 
addresses used by people I correspond with regularly?  Not in my 
experience.  Can they forge my e-mail to me?  Easily.


Re: can we make AWL ignore mail from self to self?

2008-04-21 Thread Jo Rhett

Justin Mason wrote:

hmm, I'm not sure.  It depends on your trusted_networks setting.
try running spamassassin -D and see what it logs...


I'm sorry -- feeling dense, how is this supposed to help?  From the  
headers quoted below you know what spamassassin is seeing.  There's  
nothing in trusted networks, I don't trust anything...



No, I don't know.  I'd have to run SpamAssassin to find out.  Since you're
asking, you can run it ;)


I would, but I can't find the exact situation that made this work nor 
the original message.  My other testing doesn't reproduce anything near 
a -10 score.


Is there any useful way to query the AWL database to find how this might 
have occurred?


trusted networks is just localhost, which is what Darryl recommended for 
single hosts without any trusted hosts.


Re: can we make AWL ignore mail from self to self?

2008-04-21 Thread Theo Van Dinter
On Mon, Apr 21, 2008 at 09:56:39PM -0700, Jo Rhett wrote:
 Yes, a spammer can forge anyone.  Can they forge the exact e-mail 
 addresses used by people I correspond with regularly?  Not in my 
 experience.  Can they forge my e-mail to me?  Easily.

Actually I don't think it's that hard, at least for conversations on public
lists.

Also, I've had spammers forge my email address from work to mail my personal
account.

fwiw.

-- 
Randomly Selected Tagline:
It's not you Bernie.  I guess I'm just not used to being chased around
 a mall at night by killer robots. - Linda from the movie Chopping Mall




Perl/SA permissions problem?

2008-04-21 Thread JLG

Problem: I can run sa-learn as root, but not as any other user.

I'm using SpamAssassin version 3.2.4, running on Perl version 5.8.6,  
running on Mac OS X Server 10.4.11. All of this was working before I  
updated to SpamAssassin 3.2.4, but I updated a lot of other Perl  
modules at the same time, so I can't be sure that SA itself is the  
culprit. As best as I can tell, this is some sort of permissions problem,  
but it's a real bugger because it has broken all of my sa-learn tools,  
most of which execute sa-learn as clamav.


Logged in as root:

# /usr/local/bin/sa-learn --dbpath /var/amavis/.spamassassin --sync
bayes: synced databases from journal in 1 seconds: 1565 unique entries  
(3164 total entries)


However, it doesn't work when executed as another user:

# sudo -u clamav  /usr/local/bin/sa-learn --dbpath /var/ 
amavis/.spamassassin --sync
Can't locate Pod/Simple.pm in @INC (@INC contains: /System/Library/ 
Perl/5.8.6/darwin-thread-multi-2level /System/Library/Perl/5.8.6 / 
Library/Perl/5.8.6/darwin-thread-multi-2level /Library/Perl/5.8.6 / 
Library/Perl /Network/Library/Perl/5.8.6/darwin-thread-multi-2level / 
Network/Library/Perl/5.8.6 /Network/Library/Perl /System/Library/Perl/ 
Extras/5.8.6/darwin-thread-multi-2level /System/Library/Perl/Extras/ 
5.8.6 /Library/Perl/5.8.1/darwin-thread-multi-2level /Library/Perl/ 
5.8.1) at /System/Library/Perl/5.8.6/Pod/Text.pm line 34.
BEGIN failed--compilation aborted at /System/Library/Perl/5.8.6/Pod/ 
Text.pm line 34.
Compilation failed in require at /System/Library/Perl/5.8.6/Pod/ 
Usage.pm line 436.
BEGIN failed--compilation aborted at /System/Library/Perl/5.8.6/Pod/ 
Usage.pm line 443.

Compilation failed in require at /usr/local/bin/sa-learn line 26.
BEGIN failed--compilation aborted at /usr/local/bin/sa-learn line 26.

The @INC values are identical:

# perl -le 'print @INC'
/System/Library/Perl/5.8.6/darwin-thread-multi-2level /System/Library/ 
Perl/5.8.6 /Library/Perl/5.8.6/darwin-thread-multi-2level /Library/ 
Perl/5.8.6 /Library/Perl /Network/Library/Perl/5.8.6/darwin-thread- 
multi-2level /Network/Library/Perl/5.8.6 /Network/Library/Perl /System/ 
Library/Perl/Extras/5.8.6/darwin-thread-multi-2level /System/Library/ 
Perl/Extras/5.8.6 /Library/Perl/5.8.1/darwin-thread-multi-2level / 
Library/Perl/5.8.1 .


# sudo -u clamav perl -le 'print @INC'
/System/Library/Perl/5.8.6/darwin-thread-multi-2level /System/Library/ 
Perl/5.8.6 /Library/Perl/5.8.6/darwin-thread-multi-2level /Library/ 
Perl/5.8.6 /Library/Perl /Network/Library/Perl/5.8.6/darwin-thread- 
multi-2level /Network/Library/Perl/5.8.6 /Network/Library/Perl /System/ 
Library/Perl/Extras/5.8.6/darwin-thread-multi-2level /System/Library/ 
Perl/Extras/5.8.6 /Library/Perl/5.8.1/darwin-thread-multi-2level / 
Library/Perl/5.8.1 .


Permissions on the Bayes stuff are OK:

# ls -sla /var/amavis/.spamassassin
total 50024
0 drwx--6 clamav  amavis   204 Apr 22 00:23 .
0 drwxr-x---   10 clamav  amavis   340 Apr 21 21:18 ..
19760 -rw---1 clamav  amavis  10117120 Apr 22 00:23 auto-whitelist
   32 -rw-rw-rw-1 clamav  amavis     14976 Apr 22 00:23 bayes_journal
20264 -rw---1 clamav  amavis  10375168 Apr 22 00:22 bayes_seen
 9968 -rw-rw-rw-1 clamav  amavis   5103616 Apr 22 00:22 bayes_toks

Here's an interesting bit: if I start amavisd-new as the clamav user,  
it gives a similar error (it complains about a different Perl module,  
but has the same error syntax). If I start amavisd-new as root, it  
works perfectly -- even though its configuration is such that it  
switches to the clamav user internally as it starts up! So this  
problem is not limited to SA; I'm thinking it's something with  
Perl... but all the modules in question are properly installed; I've  
even done a force install on them to be sure. Everything compiles,  
tests, and installs without a hitch.


Any ideas? I'm at a loss.

Thanks,
Jon
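
Since @INC is identical for both users, the usual suspect is filesystem
permissions on the freshly installed module files rather than the search path;
a quick way to check, as a sketch:

  # as root: where does Pod::Simple actually live?
  perl -MPod::Simple -le 'print $INC{"Pod/Simple.pm"}'
  # can the clamav user load it at all?
  sudo -u clamav perl -MPod::Simple -e 1
  # if not, compare permissions on that file and each of its parent directories
  # (modules installed by CPAN as root with a restrictive umask are a common cause)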





Re: can we make AWL ignore mail from self to self?

2008-04-21 Thread Bob Proulx
Jo Rhett wrote:
 Bob Proulx wrote:
 Who to forge?  The answer is Everyone!  Any address that can be
 
 You're going out of your way to miss the point.  That's hard work

It is you who are missing the point.  When spammers generate mail
from and to every possible combination they will eventually hit a
combination that you will see.  The distributed spamming engines of
the botnets are quite powerful and can generate this volume of
traffic.

Bob