Re: Lycos Make Love Not Spam screensaver
Noel K Hall II wrote: Fighting spam with a virus-like attack... let's think about this one... not only will your ISP end up shutting down your connection for a violation of their TOS, you could possibly face charges in court. Makes complete sense to me. My initial thought is: isn't the Internet slow enough without this clogging the pipes?
Re: Brightmail
I enabled it. And as noted it seemed remarkably ineffective, particularly compared to SpamAssassin. I typically get 200 to 225 spams a day. Of these, a few get through with SpamAssassin because I do not use SURBL - so about 3 or 4 a day sneak through. (It would be more without my custom jd_mangy_mortgage rules I've been building up for "Mor t gage R at e" spam.) Somebody needs to adapt the antidrug rules technology for anti-mortgage spam. I'm not clever enough to do that one, I fear. Anyway, I was seeing 20-30 with the spam blocker turned up to the low setting that is not stupid enough to send "Are you really OK?" messages to would-be email senders. (I write those addresses into my procmailrc file and dump them to /dev/null. I don't care to correspond with such cretins, so I don't inflict it on others. They are trying to shift their personal wasted time to someone else. That ain't right. It's rude.) There may have been something I missed setting it all up. But I was on a dialup from Hell at Howard Johnson's outside the Orlando OCCC. Do NOT go there. Their phone lines appear to work for modem access only during the day, about 0830 to 1730. Outside those hours connections drop in a matter of seconds, something I found HIGHLY suspicious. Anyway, that made investigating the setup in detail rather difficult. {^_^} - Original Message - From: [EMAIL PROTECTED] On Tue, 30 Nov 2004 22:37:53 -0800, jdow [EMAIL PROTECTED] wrote: I think Earthlink uses Brightmail. If that is so, then Brightmail's statistics are VERY bad. I doubt I had any false alarms - lost email. But it only got about 75% or less of the spam. I used it while on the road the last two weeks. I might comment that I was very unimpressed. Did you enable your filtering? I think that you have to enable it first from the Spamminator address. It's been ages since I had an Earthlink account, not sure if things are the same now.
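The procmail trick jdow mentions - writing challenge-response senders into a procmailrc and dumping them to /dev/null - looks roughly like this (a sketch; the addresses in the pattern are hypothetical placeholders, not real challenge-response systems):

```
# Hypothetical recipe: silently drop mail from known
# challenge-response ("Are you really OK?") senders.
:0
* ^From:.*(challenge-bot@example\.com|verify@example\.net)
/dev/null
```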
Re: [Dshield] fingerprinting servers before accepting
Hi Joe, At 15:51 01-12-2004, Joe Emenaker wrote: That was the first thought through my mind when I read the original post. No need for a full-blown fingerprint... just see if they look server-ish or not. Try connecting to 25... and then maybe telnet, ssh, http, and imap. You cannot assume that any of these other services are running or accessible. There'd be some overhead involved in this, initially, but this could be mitigated by keeping a cache of previous call-backs. I imagine this would act like a sieve, where the hosts who send you the most mail (and, hence, would cause the greatest call-back load) would appear in the cache the soonest, and that would cut down on the call-back load the most. After a week or so, I imagine that the call-back load would be tapering off to those few odd hosts which connect. There are some sites which implement the above. Good thing they're well-known. We can add them to a file of known outgoing-only servers and can further cut down on the call-back load. Your users will scream while you determine which sites to whitelist. :) Regards, -sm
Test and Keep spam
Been getting a bunch of these lately, and they're falling on either side of the 5.0 margin. Two that came in under 5.0 today have unusual characteristics: the one with a 60% Bayes score actually scores higher overall than the one with an 80% Bayes score. You can see my current uncaught corpus here: http://home.sewingwitch.com:8000/Stuff/UncaughtSpam.mbox
Re: Test and Keep spam
Kenneth Porter wrote: Been getting a bunch of these lately, and they're falling on either side of the 5.0 margin. Two that came in under 5.0 today have unusual characteristics: The Bayes score on one is 60% and scores higher than one with an 80% Bayes score. You can see my current uncaught corpus here: http://home.sewingwitch.com:8000/Stuff/UncaughtSpam.mbox Kenneth, I've noticed with my corpus that BAYES_95 and BAYES_99 score less than, say, BAYES_80... which has been a little discouraging for me, since most of the mail I'm filtering is Japanese and other tests don't hit often, so I have to rely heavily on my (manually trained) Bayes database... having items that hit BAYES_99 only scoring 1.8 and change, compared to the 2 and change that BAYES_80 scores, has been a little frustrating. I'm tempted to change the scores for BAYES_95 and BAYES_99, but I'm concerned about what other effects that might have... not sure if this information will be helpful or not, but thought I'd share anyways. alan p.s. I'm using SA 3.0.1 with MIMEDefang 2.49 on this machine. No 3rd party rulesets installed.
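If you do decide to override the Bayes scores locally so the higher buckets never rank below BAYES_80, the change is a few score lines in local.cf. The values below are purely illustrative, not recommendations:

```
# Illustrative local overrides: keep the Bayes buckets monotonic.
# Pick values to suit your own mail mix.
score BAYES_80 2.0
score BAYES_95 2.5
score BAYES_99 3.5
```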
RE: Rules still not hitting right
Found the first problem: a typo in a rule that --lint wasn't catching. I took all the rules out of my local.cf except for those two rules, and the rules started to hit. I'm thinking it's the header rule I have that's causing the problem, so I'm going to take a closer look at it. -Original Message- From: Johnson, S Sent: Wed 12/1/2004 2:56 PM To: Yackley, Matt; users@spamassassin.apache.org Cc: Subject: RE: Rules still not hitting right Gosh, I'm not the one that set this up. How do I know if this is the case? It looks like the local.cf only exists in the /etc/mail/spamassassin directory. _ From: Yackley, Matt [mailto:[EMAIL PROTECTED] Sent: Wednesday, December 01, 2004 2:39 PM To: users@spamassassin.apache.org Subject: RE: Rules still not hitting right Quick question: are you running amavisd in chroot mode? If so, make sure that your chroot location for /etc/mail/spamassassin also has the updated files, then restart amavis. -matt _ From: Johnson, S [mailto:[EMAIL PROTECTED] Sent: Wednesday, December 01, 2004 2:20 PM To: Chris Santerre; users@spamassassin.apache.org Subject: RE: Rules still not hitting right Yes, I restarted the whole server. The configs are in the local.cf file. _ From: Chris Santerre [mailto:[EMAIL PROTECTED] Sent: Wednesday, December 01, 2004 2:03 PM To: Johnson, S; users@spamassassin.apache.org Subject: RE: Rules still not hitting right Yes, it does require spamd to be restarted. If you've restarted it and it still isn't working, it must be something else. Where exactly are these rules located? --Chris -Original Message- From: Johnson, S [mailto:[EMAIL PROTECTED] Sent: Wednesday, December 01, 2004 2:40 PM To: Chris Santerre; users@spamassassin.apache.org Subject: RE: Rules still not hitting right Does adding a rule require restarting? From what I was reading, all you need to do is run spamassassin --lint. But I'm not 100% sure on that... Anyway, I did reboot yesterday afternoon just to be sure and it's still missing the messages... 
_ From: Chris Santerre [mailto:[EMAIL PROTECTED] Sent: Wednesday, December 01, 2004 1:41 PM To: Johnson, S; users@spamassassin.apache.org Subject: RE: Rules still not hitting right I believe you have to restart Amavis? --Chris -Original Message- From: Johnson, S [mailto:[EMAIL PROTECTED] Sent: Wednesday, December 01, 2004 2:31 PM To: users@spamassassin.apache.org Subject: Rules still not hitting right I'm now running SpamAssassin 3.1. I've got two rules next to each other in the config:

body REPLICA_WATCH /replicas of rolex/i
score REPLICA_WATCH 5
body SAVE_UP_TO /save up to/i
score SAVE_UP_TO2

When a message that I've been trying to stop with Rolex watches comes in, I get the following response from Amavis:

Dec 1 06:37:36 fw amavis[29225]: (29225-10) spam_scan: hits=0.267 tests=SAVE_UP_TO

The text of the message contains the following: In our online store you can buy replicas of Rolex watches. They look and feel exactly like the real thing. - Save up to 40% compared to the
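For comparison, the same two rules written with unambiguous whitespace would look like this (assuming the intended score for SAVE_UP_TO was 2; everything else as posted):

```
body  REPLICA_WATCH  /replicas of rolex/i
score REPLICA_WATCH  5
body  SAVE_UP_TO     /save up to/i
score SAVE_UP_TO     2
```

After editing local.cf, run `spamassassin --lint` and restart spamd (and amavisd, if it embeds SA) so the change takes effect.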
Re: Image Composition Analysis
On Wednesday, December 1, 2004, 3:25:42 PM, John Hardin wrote: On Wed, 2004-12-01 at 14:35, Chris Santerre wrote: We are seeing an increase in throwaway domains being used to reroute to other domains that will NEVER show up directly in a spam. All in attempts to get past SURBL. I'm going to bring up this idea again, in a slightly different context this time: Perhaps it would be useful to have a SURBL list that is automatically generated daily from the registrars' notifications of domains that have been recently created. This information is available for free download - I'm pretty sure I posted the location here a while ago. The definition of "recently" might require some testing to set properly; perhaps a starting point would be one week. Granted, this SURBL would be more subject to FPs than a hand-maintained list, so it should have a correspondingly lower default score. And it wouldn't help much if spammers don't start using their throwaway domains immediately after registering them. We still want SURBLs to be lists of domains (and a few IPs) that have actually occurred in spams. A list of all new registrations could perhaps be used as an internal data source, but I think it would have way too many false positives to use alone. The Outblaze data in ob.surbl.org somewhat fulfills your suggestion, since it contains only domains that have been registered within the last 90 days and which have appeared in a lot of spams lately. It tends to work well. Jeff C. -- Jeff Chan mailto:[EMAIL PROTECTED] http://www.surbl.org/
URIBL_JP_SURBL not working with 3.0.1 - but works with 3.0.0?
Hi there. I have a few Fedora Core 2 SA servers - the live ones with 3.0.1 (from tar), and my workstation running 3.0.0 (from rpm). A few spams got to my INBOX (shock! horror!), and just fer kicks I ran them through my local SA - and got 6.2/5. So 3.0.0 scored them as 6.2/5 - but the original SA server (running 3.0.1) scored them as 2.2/5... I then checked /etc/mail/spamassassin/local.cf - they are the same. Both had

urirhssub URIBL_JP_SURBL multi.surbl.org. A 64
header URIBL_JP_SURBL eval:check_uridnsbl('URIBL_JP_SURBL')
describe URIBL_JP_SURBL Has URI in JP at http://www.surbl.org/lists.html
tflags URIBL_JP_SURBL net
score URIBL_JP_SURBL 4.0

at the bottom. And yet spamassassin -D spam.eml on both servers reports different results. Both succeed in doing the URIBL_JP_SURBL lookup - but one gets 4 points, and 3.0.1 scored it as 0... Both SA 3.0.0 and 3.0.1 showed:

debug: URIDNSBL: domain platinumprodirect.com listed (URIBL_WS_SURBL): 127.0.0.68
debug: URIDNSBL: domain platinumprodirect.com listed (URIBL_JP_SURBL): 127.0.0.68

But the URIBL_JP_SURBL score shows up on the 3.0.0 system - and not the 3.0.1. (BTW: it might help debugging if the score assigned to these RBL lookups showed up in the debug output. The 4.0 score is only mentioned in the final report - not the debug lines.) Any ideas? -- Cheers Jason Haar Information Security Manager, Trimble Navigation Ltd. Phone: +64 3 9635 377 Fax: +64 3 9635 417 PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
Re: URIBL_JP_SURBL not working with 3.0.1 - but works with 3.0.0?
On Thu, Dec 02, 2004 at 04:17:05PM +1300, Jason Haar wrote: header URIBL_JP_SURBL eval:check_uridnsbl('URIBL_JP_SURBL') Any ideas? This has come up before: they're body rules now. -- Randomly Generated Tagline: I'm nothing ... I'm navel lint ... - From the movie True Lies
RE: spamd performance problems - again
Although I'd discounted the possibility that this could be the problem, after your suggestion I did some testing, and yes, in fact, my MTA (postfix2) seems to be the problem here. It seems to only be able to relay about 70 msgs/min, which is extremely slow. Thanks for putting me on the right track; looking into my MTA problem now. Dimitry -Original Message- From: Gavin Cato [mailto:[EMAIL PROTECTED] Sent: Wednesday, 1 December 2004 7:27 PM To: Dimitry Peisakhov; users@spamassassin.apache.org Subject: Re: spamd performance problems - again Could your MTA be the bottleneck? On 1/12/04 5:14 PM, Dimitry Peisakhov [EMAIL PROTECTED] wrote: Hi Guys, I wrote to the list a few weeks ago asking for advice on spamd performance. I got some, and have implemented it, but I don't know if I'm seeing a performance improvement. The performance I'm getting is far from what other people are reporting, it seems. I'm running SpamAssassin 3.0.1 with postfix 2.0.11-4 on Red Hat Enterprise Linux, kernel 2.4.21-4.ELsmp. The box this runs on is a dual-Xeon 1.5GHz with 1.5GB of memory. I've done lots of performance-enhancing tricks, but haven't had very noticeable success. I've changed the locale from the Red Hat default UTF-8 to regular en-US. I'm using spamc/spamd. I'm using a caching-only DNS server on localhost. I've also played around with the max-children and max-conn-per-child settings for spamd. Since I have lots of memory available, I've set max-children to 30 and max-conn-per-child to 250. Doing this hasn't increased performance though, I think. The box is able to process about 1 msg/sec and no better, although I've been told that a box of this configuration should be able to do about 8 msgs/sec. I know that SpamAssassin 3 pre-spawns all its children. I've noticed, however, that when a mail queue builds up, only a max of 15 or so are working at any one time (looking via 'top'), and sometimes none are working at all.
I've got another scanner-type app running on the machine as well, Anomy Sanitizer, an attachment deleter which is launched from the same script as spamc and doesn't actually run daemonized, but disabling it completely doesn't give much of a performance boost. Currently the system is in testing, so all the mail coming in is actually 100% spam. This shouldn't affect spamd performance though? Can anyone give me some advice about this? At peak spam times I currently get mail queue build-ups of 2000+ messages, which results in about a 15-20 min delay of message delivery, which is somewhat unacceptable. Thanks in advance. Regards, Dimitry Peisakhov Systems Administrator HENRY WALKER ELTIN 02 8875 4721
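For reference, the spamd settings described above correspond to roughly this invocation (a sketch based on SA 3.0's spamd options; adjust to your init script):

```
spamd -d --max-children=30 --max-conn-per-child=250
```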
Re: URIBL_JP_SURBL not working with 3.0.1 - but works with 3.0.0?
Theo Van Dinter wrote: On Thu, Dec 02, 2004 at 04:17:05PM +1300, Jason Haar wrote: header URIBL_JP_SURBL eval:check_uridnsbl('URIBL_JP_SURBL') Any ideas? This has come up before: they're body rules now. Sheesh - two seconds of looking on Google for "URIBL_JP_SURBL body" tells me what you say is true. For the record, what was (in 3.0.0):

header URIBL_JP_SURBL eval:check_uridnsbl('URIBL_JP_SURBL')

in 3.0.1 is:

body URIBL_JP_SURBL eval:check_uridnsbl('URIBL_JP_SURBL')

Problem fixed. Thanks! -- Cheers Jason Haar Information Security Manager, Trimble Navigation Ltd. Phone: +64 3 9635 377 Fax: +64 3 9635 417 PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
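If you have several eval:check_uridnsbl rules to migrate from the 3.0.0 header form to the 3.0.1 body form, a quick GNU sed sketch (run against a copy of your local.cf; the /tmp filename here is illustrative):

```shell
# Build a sample file with a 3.0.0-style rule, then rewrite
# 'header NAME eval:check_uridnsbl' to 'body NAME eval:check_uridnsbl'.
cat > /tmp/jp_surbl.cf <<'EOF'
header URIBL_JP_SURBL eval:check_uridnsbl('URIBL_JP_SURBL')
EOF
sed -i 's/^header \(URIBL_[A-Z_]*\) eval:check_uridnsbl/body \1 eval:check_uridnsbl/' /tmp/jp_surbl.cf
cat /tmp/jp_surbl.cf
```

Re-run `spamassassin --lint` afterwards to confirm the rules still parse.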
Re: URIBL_JP_SURBL not working with 3.0.1 - but works with 3.0.0?
Theo Van Dinter wrote: This has come up before: they're body rules now. Maybe we should parse either form in the URIBL module until 3.1? Daniel -- Daniel Quinlan http://www.pathname.com/~quinlan/
Re: RulesDuJour web site?
Hello, can anyone tell me what this can be used for? I have SA 3.0.1 set up and running fine now - what does this script actually update? Thanks, Chris - Original Message - From: Rob [EMAIL PROTECTED] To: users@spamassassin.apache.org Sent: Wednesday, December 01, 2004 9:46 PM Subject: Re: RulesDuJour web site? On Wed, Dec 01, 2004 at 02:25:08PM -0600, Brian Byers wrote: Not sure what happened, but I have version 1.18 of the script. I've been attempting to look for an update, but, as you mentioned, the site is broken. http://www.cbbyers.net/rules_du_jour.txt The URL that rules_du_jour checks for an update to itself is http://sandgnat.com/rdj/rules_du_jour and the file is still available there for download. The version there is still 1.18. Rob
How to block rolex spam
I have been getting bombarded by spam trying to sell me Rolex watches of one variety or another. I have had no experience writing rules as yet, but may need to start. The following is the SMTP header from one of the messages. cleta is my server running WebShield before SA gets hold of it on another system. What are my options for blocking this? Thanks, Ron

Received: From datamar.com.ar ([61.248.28.182]) by cleta (WebShield SMTP v4.5 MR1a P0803.345); id 1101847291937; Tue, 30 Nov 2004 15:41:31 -0500
Received: from 138.153.251.19 by smtp.prim.is; Tue, 30 Nov 2004 20:45:02 +0000
Message-ID: [EMAIL PROTECTED]
From: Patty Leblanc [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Order Rolex or other Swiss watches online
Date: Wed, 01 Dec 2004 00:45:00 +0400
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
X-Spam-Status: No, hits=5.562 tagged_above=2 required=6.31 tests=BAYES_50, RAZOR2_CF_RANGE_51_100, RAZOR2_CHECK, RCVD_IN_BL_SPAMCOP_NET, RCVD_IN_DSBL, RCVD_IN_SORBS_HTTP, RCVD_IN_SORBS_MISC
X-Spam-Level: *
Return-Path: [EMAIL PROTECTED]
X-OriginalArrivalTime: 30 Nov 2004 20:41:39.0400 (UTC) FILETIME=[FE04CC80:01C4D71C]

Ron Nutter [EMAIL PROTECTED] Network Manager, Information Technology Services (502) 863-7002 Georgetown College Georgetown, KY 40324-1696
Re: How to block rolex spam
On Thursday 02 December 2004 13:12, Ronald I. Nutter might have typed: I have been getting bombarded by spam trying to sell me Rolex watches of one variety or another. I have had no experience writing rules as yet, but may need to start. The following is the SMTP header from one of the messages. cleta is my server running WebShield before SA gets hold of it on another system. What are my options for blocking this? Is mejc.com the URL, by any chance? If so, it should be in SURBL at least.
Re: How to block rolex spam
These are working for me. They haven't been mass-checked yet, and the scoring may be aggressive.

body LW_ROLEX /\broll?ex\b/i
score LW_ROLEX 1
describe LW_ROLEX Mentions Rolex

body __LW_OBREPLICA /\brepIicas?\b/i
body __LW_REPLICA /\breplicas?\b/i
body __LW_WATCHES /\bwatch(?:es)?\b/i

meta LW_ROLEXWATCH (LW_ROLEX && __LW_WATCHES)
score LW_ROLEXWATCH 1
describe LW_ROLEXWATCH Mentions rolex watches

meta LW_FAKEROLEX (LW_ROLEX && __LW_REPLICA)
score LW_FAKEROLEX 5
describe LW_FAKEROLEX Talks about rolex and replicas

body LW_WANTAROLEX /Want a (?:\w+ ){0,3}Rolex(?: Watch)?\?/i   # Want a cheap Rolex Watch?
score LW_WANTAROLEX 5
describe LW_WANTAROLEX Asks if you want a rolex watch

meta LW_ROLEXOBFU (__LW_OBREPLICA && LW_ROLEX)
score LW_ROLEXOBFU 5
describe LW_ROLEXOBFU Obfuscating replica rolexes!

Loren
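As a quick sanity check of the rule logic above, the LW_FAKEROLEX combination (Rolex mention plus replica mention) can be emulated with GNU grep against the sample spam text from this thread. This is a sketch of the regexes only, not how SA itself evaluates meta rules:

```shell
# Test both component regexes against a sample body line;
# the meta rule fires only if both sub-rules match.
msg='In our online store you can buy replicas of Rolex watches.'
if printf '%s' "$msg" | grep -Eiq '\broll?ex\b' &&
   printf '%s' "$msg" | grep -Eiq '\breplicas?\b'; then
  echo 'LW_FAKEROLEX would fire'
fi
```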
Trashed Bayes DBase
The Bayes DBase we use here was set to the default autolearn levels. It was first set up about a year ago, and the first time anyone looked at it was when I started checking how it was performing. Basically, it was tagging about 75% of mail as BAYES_00 (we receive about 70% spam here), so Bayes was way off. I reset the autolearn threshold for ham to -4, as I suspected that it had been learning things that weren't really ham. The autolearn threshold for spam stayed the same. I hoped this would be a temporary fix to the problem until I could come up with something more permanent. What this actually did was make the Bayes DB see more and more spam messages with our mail header in them. Hence, over time, it became more and more convinced that anything with our mail header in it was definitely spam. This has finally come to a head, and we have had to disable Bayes altogether until we can iron this out. So, my question is: how on earth do I go about repairing this mess? --- This email from dns has been validated by dnsMSS Managed Email Security and is free from all known viruses. For further information contact [EMAIL PROTECTED]
SA only doing bayes every second mail
Hello, I have been running SA successfully for quite some time now, but lately I am experiencing a strange problem: only every second mail that is checked by SpamAssassin is being scored by the Bayes rules. I started noticing this when I saw that there was no BAYES_XX score in the SpamCheck header for some unmarked spam. When I run spamassassin -D --lint, everything seems fine. So I set Debug = yes and Debug SpamAssassin = yes in my MailScanner configuration (I run SA from MailScanner), and it confirmed my suspicion. You can look at the output of a batch run at http://iua-mail.upf.es/mailscanner.txt If you look at the debug: tests= lines, you can see that only every second mail is Bayes-checked:

$ grep 'debug: tests=' mailscanner.txt
debug: tests=ALL_TRUSTED,MISSING_HEADERS,MISSING_SUBJECT,NO_REAL_NAME
debug: tests=ALL_TRUSTED,AWL,BAYES_00
debug: tests=ALL_TRUSTED,AWL
debug: tests=ALL_TRUSTED,AWL,BAYES_00
debug: tests=ALL_TRUSTED,AWL
debug: tests=ALL_TRUSTED,AWL,BAYES_00
debug: tests=ALL_TRUSTED,AWL
debug: tests=ALL_TRUSTED,AWL,BAYES_00
debug: tests=AWL,HTML_80_90,HTML_MESSAGE,HTML_TAG_EXIST_TBODY

And what is very strange as well is that it says both:

debug: bayes: Not available for scanning, only 0 spam(s) in Bayes DB 200

and

debug: bayes corpus size: nspam = 6040, nham = 20334

Obviously, the "only 0 spam(s)" line is wrong. Note that it always comes in combination with a database sync:

debug: refresh: 10537 refresh /var/lib/MailScanner/bayes.mutex
debug: synced Bayes databases from journal in 0 seconds: 74 unique entries (74 total entries)
debug: Syncing complete.
debug: bayes: Not available for scanning, only 0 spam(s) in Bayes DB 200

I am not sure when this problem started, but I don't think it was like this from the beginning. I updated to 3.0.1, but this did not help. I hope somebody has an idea of what is happening... I can provide more information if needed. Kind regards, Maarten
SA with qmail not working remotely
Hi, I have qmail running along with SpamAssassin, but I am having a peculiar problem. Mail that is sent locally to any destination, local or remote, is picked up and analyzed by SA, but SA does NOT process any mail sent from a remote host to a local address; it goes through normally, as if SA were not there. This is the second server that I have put this combination on, and the first works correctly. I have searched through both qmail's and SA's list archives, but have seen nothing that has helped. I am hoping that someone can help, or at least point me in the right direction for figuring this out. Thanks, Thomas Whitney

Following are the details of the installation. OS: Debian Linux. Qmail: installed and running under daemontools; the qmail-queue patch has been applied.
- In /etc/profile is: export QMAILQUEUE='/usr/bin/qmail-spamc'
- In /usr/bin/ is:
lrwxrwxrwx 1 root root 26 Nov 26 15:30 /usr/bin/qmail-queue -> /var/qmail/bin/qmail-queue
lrwxrwxrwx 1 root root 26 Nov 26 11:12 /usr/bin/qmail-spamc -> /var/qmail/bin/qmail-spamc
- In /var/qmail/bin is:
-rws--x--x 1 qmailq qmail 12924 Nov 26 14:39 qmail-queue
-rwxr-xr-x 1 root qmail 5711 Nov 26 15:29 qmail-spamc
- The env directory in /var/qmail/supervise/qmail-smtpd looks like this:
drwxr-xr-x 2 root root 4096 Nov 26 14:54 env
- In env is:
-rw-r--r-- 1 root root 21 Nov 26 14:54 QMAILQUEUE
- % less QMAILQUEUE (yields) /usr/bin/qmail-spamc
- In /service is:
lrwxrwxrwx 1 root root 32 Aug 15 00:00 qmail-pop3d -> /var/qmail/supervise/qmail-pop3d
lrwxrwxrwx 1 root root 31 Nov 25 07:24 qmail-send -> /var/qmail/supervise/qmail-send
lrwxrwxrwx 1 root root 32 Aug 15 00:00 qmail-smtpd -> /var/qmail/supervise/qmail-smtpd
- tcp-smtp looks like this:
127.:allow,RELAYCLIENT=
10.0.0.:allow,RELAYCLIENT=
:allow
- run in /var/qmail/supervise/qmail-smtpd:
#!/bin/sh
QMAILDUID=`id -u qmaild`
NOFILESGID=`id -g qmaild`
exec \
envdir /var/qmail/relay-ctrl \
relay-ctrl-chdir \
/usr/local/bin/softlimit -m 300 \
/usr/local/bin/tcpserver -H -R -v -p \
-x /var/qmail/rules/tcp.smtp.cdb \
-u $QMAILDUID -g $NOFILESGID 0 smtp \
relay-ctrl-check \
/var/qmail/bin/qmail-smtpd 2>&1
Re: Is it not recommended to learn a message already flagged as spam?
At 10:17 AM 12/2/2004 +0100, Nicolas wrote: With mutt, I'd like to define a macro which learns the mail as spam, reports it to Razor, and deletes it. I'd like to know whether it is not recommended to learn a mail as spam when it has already been flagged as spam by SA? It IS recommended to learn mail that's already been flagged. Even if it's flagged BAYES_99, SA can still learn worthwhile tokens from the message. sa-learn recognizes SA's own spam tags, and will automatically strip those out before learning. The only thing I'd avoid in training is intentionally training on the same message twice. But even that is only a minor waste of time: SA will just ignore the duplicate, no harm done, but it's pointless to go out of your way to retrain the same message. Also, if you use spamassassin -r on the message, it will strip the tags, learn it as spam, and report it to Razor, SpamCop, and any other hash systems you have installed (i.e. DCC or Pyzor). So all your macro needs to do is call spamassassin -r message.txt and then delete the message.
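A sketch of such a muttrc macro along the lines described here (the S key binding is an arbitrary choice):

```
# In ~/.muttrc: pipe the message to 'spamassassin -r'
# (learn as spam + report to Razor etc.), then delete it.
macro index S "<pipe-message>spamassassin -r<enter><delete-message>" "report spam and delete"
```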
Re: Is it not recommended to learn a message already flagged as spam?
On Thu, Dec 02, 2004 at 10:17:55AM +0100, Nicolas wrote: With mutt, I'd like to define a macro which learns the mail as spam, reports it to Razor, and deletes it. I'd like to know whether it is not recommended to learn a mail as spam when it has already been flagged as spam by SA? That's because I see some of my mails which are flagged as spam but aren't listed in Razor. I don't want to use several keys (one to learn spam, one to report to Razor). Reporting spam via spamassassin -r will report it to Razor (also Pyzor, DCC and SpamCop if they are configured) and learn the mail as spam. No need for multiple macros. Yes, I would recommend it. Michael
Re: Image Composition Analysis
On Tue, Nov 30, 2004 at 04:27:14PM -0600, Smart,Dan wrote: Catching image-only E-mail with pornographic images is really difficult. My users are offended when they get one, and wonder how I could not catch it. Explaining that the document was text filled with Bayes poison, and that the one porn image had no porn words in the document, doesn't seem to make much of an impression on them. Tell your users - or set it up for them - not to view HTML email, or at the very least not to load images by default. Both of these are unnecessarily enabled by default in most GUI email clients, and they are a privacy and security issue for the user. BTW, I get Bayes-poisoned, image-only mails and they score above 10 on my system all the time. Mike -- /-\ | Michael Barnes [EMAIL PROTECTED] | | UNIX Systems Administrator | | College of William and Mary | | Phone: (757) 879-3930 | \-/
Re: Image Composition Analysis
On Tue, Nov 30, 2004 at 07:25:45PM -0500, Matt Kettler wrote: Yes, but DCC is still more reliable and faster. (I use both) I had to score DCC with 0.1 because it has way too many false positives. My local.cf section dealing with this:

# too many false positives with this guy, meta corrected below
score DCC_CHECK 0.1
meta  DCC_PYZOR (DCC_CHECK && PYZOR_CHECK)
score DCC_PYZOR 2.9

DCC seems to have a large amount of _solicited_ bulk email in its database, and my users get very upset when they sign up for junk email and it gets marked anywhere near spam. Just in my experience, I've noticed a high correlation between DCC positive hits plus Pyzor positive hits and real spam, so I scored it that way. Mike -- /-\ | Michael Barnes [EMAIL PROTECTED] | | UNIX Systems Administrator | | College of William and Mary | | Phone: (757) 879-3930 | \-/
Re: URIBL_JP_SURBL not working with 3.0.1 - but works with 3.0.0?
On Thu, Dec 02, 2004 at 12:12:14AM -0800, Dan Quinlan wrote: Maybe we should parse either in the URIBL module until 3.1? I don't think it's worth it, but wouldn't be opposed to a patch. (it should be pretty trivial iirc) -- Randomly Generated Tagline: Anyone who thinks UNIX is intuitive should be forced to write 5000 lines of code using nothing but vi or emacs. ACK! (Discussion in comp.os.linux.misc on the intuitiveness of commands, especially Emacs.)
Re: Trashed Bayes DBase
Gray, Richard wrote: Basically, it was tagging about 75% of mail as BAYES_00 (we receive about 70% SPAM here), so the BAYES was way off [snip] This has finally reached a head and we have had to disable Bayes altogether until we can iron this out. So, my question is, how on earth do I go about repairing this mess? 1) Wipe your existing bayes_* files. Given what you're saying here, they contain totally incorrect data; if anything, wiping them completely should *improve* your spam detection rate. <g> 2) Enable Bayes and autolearn. Leave the autolearn-ham threshold low, although you might want to bring it up to -0.1 or so to learn higher-scoring hams. 3) Collect some hand-classified ham and spam. Feed both to Bayes. Watch for messages that get misclassified - ignore the scores. Feed the misclassified messages back into Bayes as appropriate. Note that the feedback process is ongoing, to keep up with the changing flow of spam! I'm still feeding misclassified mail into the Bayes DBs on several systems in various configurations - although not nearly as often, nor as many messages, as when I started. 4) Make sure you're using SURBL - this will significantly help separate spam scores from ham scores, and allow more spam to be autolearned correctly. Bayes (and any other learning system) needs fairly close attention for the first little while; after a few weeks it should be working much more smoothly. -kgd -- Get your mouse off of there! You don't know where that email has been!
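A minimal command sketch of steps 1-3 (the mailbox paths are illustrative; run as the user that owns the Bayes files):

```
# 1) Wipe the poisoned databases:
sa-learn --clear
# 2) Retrain from hand-classified mbox corpora (paths illustrative):
sa-learn --ham  ~/Mail/ham-corpus
sa-learn --spam ~/Mail/spam-corpus
# 3) Check the resulting token/message counts:
sa-learn --dump magic
```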
Latest spamassassin install problem
Hi all, I'm trying to install the latest SpamAssassin (3.0.1) on FreeBSD 4.x (latest) with perl 5.8.5, and it is not working:

cpan: /usr/bin/perl spamc/configure.pl --prefix=/usr/local --sysconfdir=/etc/mail/spamassassin --datadir=/usr/local/share/spamassassin --enable-ssl=no
cd spamc
/usr/local/bin/perl version.h.pl
version.h.pl: creating version.h
spamc/configure.pl: version.h.pl: Failed to get the version from Mail::SpamAssassin. Please use the --with-version= switch to specify it manually. The error was:
version.h.pl: Digest::SHA1 object version 2.01 does not match bootstrap parameter 2.10 at /usr/local/lib/perl5/5.8.5/mach/DynaLoader.pm line 253.
version.h.pl: Compilation failed in require at ../lib/Mail/SpamAssassin/EvalTests.pm line 33.
version.h.pl: BEGIN failed--compilation aborted at ../lib/Mail/SpamAssassin/EvalTests.pm line 33.
version.h.pl: Compilation failed in require at ../lib/Mail/SpamAssassin/PerMsgStatus.pm line 56.
version.h.pl: BEGIN failed--compilation aborted at ../lib/Mail/SpamAssassin/PerMsgStatus.pm line 56.
version.h.pl: Compilation failed in require at ../lib/Mail/SpamAssassin.pm line 74.
version.h.pl: BEGIN failed--compilation aborted at ../lib/Mail/SpamAssassin.pm line 74.
version.h.pl: Compilation failed in require at version.h.pl line 27.
*** Error code 2
Stop in /root/.cpan/build/Mail-SpamAssassin-3.0.1.
/usr/bin/make -- NOT OK
Running make test
Can't test without successful make

That's all... I don't know much about Perl. Thanks, Casper
Re: Latest spamassassin install problem
On Thu, Dec 02, 2004 at 06:03:32PM +0200, Kaspars wrote: version.h.pl: Digest::SHA1 object version 2.01 does not match bootstrap parameter 2.10 at /usr/local/lib/perl5/5.8.5/mach/DynaLoader.pm line 253. Compilation failed in require at ../lib/Mail/SpamAssassin/EvalTests.pm line 33. Your installation of Digest::SHA1 is broken: you have one version of the perl module and a different version of the compiled XS library. This is the exact issue discussed at: http://wiki.apache.org/spamassassin/RazorCantLocateNew -- Randomly Generated Tagline: I cannot have an aide who will not look up. You will be forever walking into things. - Dukhat on Babylon 5
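A couple of diagnostic one-liners that may help here (a sketch: the first prints the version and file path of whichever Digest::SHA1 perl actually loads - or the mismatch error itself, which names the offending path; the second reinstalls the module cleanly):

```
perl -MDigest::SHA1 -le 'print $Digest::SHA1::VERSION; print $INC{"Digest/SHA1.pm"}'
perl -MCPAN -e 'install Digest::SHA1'
```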
Re: wich is the best milter interface for spamassassin?
Kelson wrote: Matias Lopez Bergero wrote: I'm also running clamav with clamav-milter, and I would like to get the best performance; that's why I was asking for comments :) MIMEDefang will tie into both SpamAssassin and ClamAV, plus any of a dozen other virus scanners, plus run its own tests. I read about MIMEDefang a couple of months ago, but for some reason I chose not to use it. I'm going to have another look at it. Thanks for the info :) BR, Matías.
Re: Image Composition Analysis
Michael Barnes wrote: Matt Kettler wrote: Yes, but DCC is still more reliable and faster. (I use both) I had to score DCC with 0.1 because it has way too many false positives. [...] DCC seems to have a large number of _solicited_ bulk email in its database, and my users get very upset when they sign up for junk email and it gets marked anywhere near spam. Of course DCC will contain solicited bulk email in the database! You *completely* misunderstand the entire purpose of DCC. Please read along with me the first few paragraphs of the documentation. man dcc DESCRIPTION The Distributed Checksum Clearinghouse or DCC is a cooperative, distributed system intended to detect bulk mail or mail sent to many people. It allows individuals receiving a single mail message to determine that many other people have received essentially identical copies of the message and so reject or discard the message. How the DCC Is Used The DCC can be viewed as a tool for end users to enforce their right to opt-in to streams of bulk mail by refusing bulk mail except from sources in a whitelist. Whitelists are the responsibility of DCC clients, since only they know which bulk mail they solicited. DCC is not about spam. DCC is about bulk email. Those are two completely different things. DCC is a tool to determine that other people have received the same message that you just received. If you have subscribed to a mailing list then of course the mailing list messages will be in DCC. [Although it turns out that most don't because most people follow the rules and whitelist subscribed mailing lists thereby avoiding logging legitimate mailing list messages to the database. And since well behaved mailing lists are not the problem there is no reason to log them there.] Bob
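The whitelisting the man page describes lives in the DCC client's whiteclnt file; a sketch (addresses invented, and the exact entry types should be checked against the dcc man page for your version):

```
# whiteclnt - bulk mail these users solicited; DCC's bulk counts
# are then ignored for these senders (example addresses only)
ok      env_from        announce@lists.example.com
ok      from            newsletter@example.net
```

With subscribed lists whitelisted this way, the client never even logs their checksums to the shared database.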
Bayes databases losing file ownership
spamd runs as mail and that's what the bayes_ files are owned as. A few days ago we started seeing an increase in spam and looking into the problem today, I found that the bayes_toks file (but not bayes_seen) was owned as root. Anyone have any ideas what could cause this? We do run a script that runs as root that calls sa-learn occasionally. Could that interaction somehow cause the file ownership to be changed? If so, any recommendations for the proper locking between spamd and sa-learn? Thanks, Chris
Re: which is the best milter interface for spamassassin?
Michael W Cocke wrote: On Wed, 01 Dec 2004 11:12:27 -0800, you wrote: Matias Lopez Bergero wrote: I'm also running clamav with clamav-milter, and I would like to hit the best performance, that's why I was asking for comments :) The absolute top performance improvement that you can do to your mail server is to get off sendmail. I switched from sendmail to postfix last year. WHOOSH! :-D I'll keep that in mind. How are you measuring the performance of the MTA? I would like to know how fast my server is running, and how fast it could be running. BR, Matías.
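One rough way to compare MTAs is to time a fixed batch of deliveries before and after the switch. A sketch, with `true` standing in for the real delivery command (on an actual mail host you would substitute something like `spamc < sample.eml > /dev/null` or a full SMTP transaction):

```shell
# Crude throughput check: time N runs of a delivery command.
# `true` is a placeholder; swap in the command being benchmarked.
runs=50
start=$(date +%s)
i=0
while [ "$i" -lt "$runs" ]; do
  true   # <- replace with the real delivery command
  i=$((i+1))
done
end=$(date +%s)
echo "$runs runs in $((end - start)) seconds" > /tmp/mta-bench.txt
cat /tmp/mta-bench.txt
```

Run the same batch against both MTAs on the same hardware and the wall-clock difference is your answer; anything fancier (qshape, mailgraph) builds on the same idea.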
Re: Bayes databases losing file ownership
Chris Blaise wrote: spamd runs as mail and that's what the bayes_ files are owned as. A few days ago we started seeing an increase in spam and looking into the problem today, I found that the bayes_toks file (but not bayes_seen) was owned as root. Anyone have any ideas what could cause this? Running spamd as 'root' instead of 'mail', most likely. When it writes the files they will be owned by the current user. If they are owned by root then the current user at that moment was the root user. Since the superuser has permission to take over any file but the reverse is not allowed, this is a one-way street. That is, a mistake can latch into this mode and running as the 'mail' user can't fix it. We do run a script that runs as root that calls sa-learn occasionally. Could that interaction somehow cause the file ownership to be changed? If so, any recommendations for the proper locking between spamd and sa-learn? That is probably your problem. Run that script as the 'mail' user. The root user can do this easily enough. su mail -c 'sa-learn --options-here' Bob
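Bob's su trick works from cron too; a sketch (the script path and schedule here are made up for illustration):

```
# root's crontab - run the periodic training as the 'mail' user so
# the bayes_* files never end up owned by root
# (the script name /usr/local/bin/sa-train.sh is hypothetical)
30 3 * * * su mail -c '/usr/local/bin/sa-train.sh'
```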
Re: Bayes databases losing file ownership
On Thu, Dec 02, 2004 at 09:32:13AM -0700, Chris Blaise wrote: spamd runs as mail and that's what the bayes_ files are owned as. A few days ago we started seeing an increase in spam and looking into the problem today, I found that the bayes_toks file (but not bayes_seen) was owned as root. Anyone have any ideas what could cause this? Running sa-learn/spamassassin as any other user besides mail can cause this. Bayes will occasionally run expire or a journal sync that will recreate the files. We do run a script that runs as root that calls sa-learn occasionally. Could that interaction somehow cause the file ownership to be changed? If so, any recommendations for the proper locking between spamd and sa-learn? Look at bayes_file_mode, then it won't matter if the owner changes. Michael
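A minimal local.cf sketch of that option (0777 is the permissive setting often suggested for shared setups; see the Mail::SpamAssassin::Conf man page for details):

```
# local.cf - have SA (re)create the bayes files with open permissions
# so a run under the wrong user doesn't lock the 'mail' user out
bayes_file_mode 0777
```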
RE: Bayes databases losing file ownership
Thanks. I'll modify the script to run as mail. Can you think of why it wouldn't always happen? I'd expect running the script would always change the ownership, but it's been well over a month (the script is run every couple of days) and this is the first time it's happened. And why not both files? Chris -Original Message- From: Bob Proulx [mailto:[EMAIL PROTECTED] Sent: Thursday, December 02, 2004 9:37 AM To: [EMAIL PROTECTED] Subject: Re: Bayes databases losing file ownership Chris Blaise wrote: spamd runs as mail and that's what the bayes_ files are owned as. A few days ago we started seeing an increase in spam and looking into the problem today, I found that the bayes_toks file (but not bayes_seen) was owned as root. Anyone have any ideas what could cause this? Running spamd as 'root' instead of 'mail', most likely. When it writes the files they will be owned by the current user. If they are owned by root then the current user at that moment was the root user. Since the superuser has permissions to take over any file but the reverse is not allowed this is a one-way street. That is, a mistake can latch into this mode and running as the 'mail' user can't fix it. We do run a script that runs as root that calls sa-learn occasionally. Could that interaction somehow cause the file ownership to be changed? If so, any recommendations for the proper locking between spamd and sa-learn? That is probably your problem. Run that script as the 'mail' user. The root user can do this easily enough. su mail -c sa-learn --options-here Bob
Re: How to block rolex spam
Ronald I. Nutter wrote: I have been getting bombarded by spam trying to sell me Rolex watches of X-Spam-Status: No, hits=5.562 tagged_above=2 required=6.31 tests=BAYES_50, RAZOR2_CF_RANGE_51_100, RAZOR2_CHECK, RCVD_IN_BL_SPAMCOP_NET, RCVD_IN_DSBL, RCVD_IN_SORBS_HTTP, RCVD_IN_SORBS_MISC X-Spam-Level: * That email hit enough blacklists that it should have been marked, but you raised your required score to 6.31 - and you didn't raise the blacklist scoring to match. I'd add .3 to .5 to SpamCop, DSBL and each of the SORBS rules. I see a few Rolex spams in my quarantine but I've never had one in my inbox yet.
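In local.cf that could look something like this (the numbers are illustrative only; `score` lines replace the shipped defaults outright, so look up the current values in 50_scores.cf and add your 0.3-0.5 to those):

```
# local.cf - compensate for the raised required score by bumping
# the RBL rules that fired on the Rolex spam (example values)
score RCVD_IN_BL_SPAMCOP_NET  1.8
score RCVD_IN_DSBL            1.4
score RCVD_IN_SORBS_HTTP      1.1
score RCVD_IN_SORBS_MISC      0.9
```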
RE: Is it not recommended to learn a message already flagged as spam?
Cool thanks, I was getting kind of confused. So I guess my next task will be to add razor. thanks -Original Message- From: Matt Kettler [mailto:[EMAIL PROTECTED] Sent: Thursday, December 02, 2004 7:04 AM To: Nicolas; spamassassin-users mailing-list Subject: Re: Is it not recommanded to learn a message already flaged as spam? At 10:17 AM 12/2/2004 +0100, Nicolas wrote: With mutt, I'd like to define a macro which learn the mail as spam, report it to razor, and delete it. I'd like to know if it is not recommanded to learn a mail as spam, while it's already flaged as spam by SA? It IS recommended to learn mail that's already been flagged. Even if it's flagged BAYES_99, SA can still learn worthwhile tokens from a message. sa-learn recognizes SA's own spam tags, and will automatically strip those out before learning it. The only thing I'd avoid in training messages is I'd not intentionally train the same message twice. But even this is only because it's a minor waste of time. SA will just ignore them, no harm done, but it's pointless to go out of your way to retrain the same message. Also, if you use spamassassin -r on the message, it will strip tags, learn as spam, and report it to razor, spamcop and any other hash systems you have installed (i.e. DCC or Pyzor). So all your macro needs to do is call spamassassin -r message.txt and then delete the message.
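For the mutt side, a macro along these lines should do it (a sketch; 'S' is an arbitrary key choice, and the function names should be checked against your mutt version's manual):

```
# ~/.muttrc - pipe the current message to spamassassin -r
# (strip tags, learn as spam, report to razor etc.), then delete it
macro index S "<pipe-message>spamassassin -r<enter><delete-message>" "report spam and delete"
```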
Re: Image Composition Analysis
At 11:29 AM 12/2/2004, Bob Proulx wrote: DCC seems to have a large number of _solicited_ bulk email in its database, and my users get very upset when they sign up for junk email and it gets marked anywhere near spam. Of course DCC will contain solicited bulk email in the database! You *completely* misunderstand the entire purpose of DCC. Please read along with me the first few paragraphs of the documentation. Actually, in my experience, DCC contains very little solicited bulk. It also contains much less solicited bulk mail than razor does. This is of course completely contrary to Razor's goal of not containing solicited email, and DCC's claim of not caring. This experience is also consistent with the mass-check results in STATISTICS-set3.txt for SA 3.0. DCC has a noticeably higher S/O ratio than Razor does:
   4.936  10.1125  0.0301  0.997  0.79  2.17  DCC_CHECK
  35.260  71.0900  1.2980  0.982  0.38  1.51  RAZOR2_CHECK
When DCC fired in this test, 99.7% of the matches were really spam. For razor, 98.2% of its matches were really spam. Razor's total spam hit rate is MUCH higher, but its accuracy is worse. I'd treat the DCC and Razor design goals with a huge grain of salt compared to their real-world behaviors. Both have some FPs, but then again, so does every rule. Most of my FPs on either Razor or DCC are solicited bulk mail. Also if most of your DCC problems are based on a particular sender, or only a few senders, you can configure DCC to not match that sender's mail using the whiteclnt file. I've not needed to do this, but it's easy to set up.
Re: [Dshield] fingerprinting servers before accepting
On Wed, 1 Dec 2004, Robert LeBlanc wrote: One workaround might be to use a local DNSBL (e.g. rbldnsd), and create a new IP address entry in the DNSBL based on the p0f results. A script This actually sounds like it would be a good public DNSBL. Rather than have everyone fingerprint, the central DNSBL would perform fingerprinting of IPs that are requested and not in the cache, then cache the results. Otherwise, everyone running the fingerprints could add up to a good amount of traffic. == Chris Candreva -- [EMAIL PROTECTED] -- (914) 967-7816 WestNet Internet Services of Westchester http://www.westnet.com/
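If the results were served through rbldnsd, the dataset the fingerprinting script appends to might look like this (zone layout and addresses are invented for illustration; check the rbldnsd man page for the exact ip4set syntax):

```
# rbldnsd ip4set data file - a default A/TXT value line, then the
# IPs the hypothetical p0f-driven script has listed
:127.0.0.2:OS fingerprint suggests a consumer desktop stack
192.0.2.15
192.0.2.99
```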
Re: Bayes databases losing file ownership
Chris Blaise wrote: spamd runs as mail and that's what the bayes_ files are owned as. A few days ago we started seeing an increase in spam and looking into the problem today, I found that the bayes_toks file (but not bayes_seen) was owned as root. Anyone have any ideas what could cause this? Yeah, when you run sa-learn your script changes the file permissions. Just put a chmod at the end of your script to change them back...
suggestion
I am new to the list but did search the archive for related topics... A suggestion for the architects of Spamassassin: The _SCORE_ variable is great, but it would be even better if it had leading zeros/spaces so that my users could sort on the subject field, which would ease the task of visually assessing the emails' likelihood of not being spam. Thanks Stephen Moss
SURBLS
What tests can I do to make sure SURBL is working? I haven't seen any scores for SURBL in any of the caught or uncaught emails. Thanks, --Mike
SA 3.x install problem
I'm trying to build a new box with SA 3.0.1, but a missing perl module won't install. HTML::Parser keeps throwing: undefined symbol: sv_catpvn_utf8_upgrade So obviously, there's something about UTF-8 encoding/decoding I'm missing, but Google searches, the SA Wiki, and other resources are not helping. Can anyone offer any thoughts? Oh... This box runs RH9. Steve
Re: suggestion
At 01:53 PM 12/2/2004, Stephen Moss wrote: I am new to the list but did search the archive for related topics... A suggestion for the architects of Spamassassin: The _SCORE_ variable is great but it would be even better if it had leading zeros/spaces so that my users could sort on the subject field which would ease the task of visually assessing the emails' likelyhood of not being spam. Hmm, is this a limitation of SA, or is your client suffering from persistent design flaw number four: http://www.asktog.com/Bughouse/10MostPersistentBugs.html (Yes, I do point that out mostly for humor reasons.. however, the article does make a good point about really dumb sorts)
Re: Bayes databases losing file ownership
On Thu, Dec 02, 2004 at 12:35:53PM -0600, Martin McWhorter wrote: Chris Blaise wrote: spamd runs as mail and that's what the bayes_ files are owned as. A few days ago we started seeing an increase in spam and looking into the problem today, I found that the bayes_toks file (but not bayes_seen) was owned as root. Anyone have any ideas what could cause this? Yeah, when you run sa-learn your script changes the file permisons. Just put a chmod at the end of your script to change them back... You should just let SA handle this for you. Look at the bayes_file_mode config option, this will tell SA what permissions to keep the bayes files. Michael
Re: suggestion
On Thu, Dec 02, 2004 at 02:53:58PM -0400, Stephen Moss wrote: I am new to the list but did search the archive for related topics... A suggestion for the architects of Spamassassin: The _SCORE_ variable is great but it would be even better if it had leading zeros/spaces so that my users could sort on the subject field which would ease the task of visually assessing the emails' likelyhood of not being spam. Assuming you are running 3.0 perldoc Mail::SpamAssassin::Conf search for _SCORE(PAD)_ Michael
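Putting the padded tag into a header everyone can sort on might look like this in local.cf (the header name Level-Num is arbitrary; see the Mail::SpamAssassin::Conf docs for the _SCORE(PAD)_ template tag):

```
# local.cf - add X-Spam-Level-Num with a zero-padded score to all
# mail, spam and ham alike, so a plain text sort orders correctly
add_header all Level-Num _SCORE(0)_
```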
spamd process using too much cpu (again)
Hello, A couple of days ago, I posted a msg asking for help with SA because it was too slow and the spamd processes were using too much resources. Well, I fixed the speed problem that I was having with SA, but I still have the CPU consumption problem. From time to time, some spamd process sticks at the top of the top listing with ~90% CPU utilization, like this: 27639 mselig39 19 30104 29M 2472 R N 105.2 1.9 60:04 0 spamd Here the spamd process is using 105.2% of my CPU resources, and 29MB of memory. Normally the spamd process uses between 10 and 30% of CPU resources; the processes that use around 90% stay there until I kill them. Is there a way to prevent this? What can be causing this high CPU usage? Any help will be most welcome. BR, Matías.
Re: SURBLS
At 02:01 PM 12/2/2004, Mike Carlson wrote: What tests can I do to make sure SURBLS is working? I havent seen any scores for SURBLS in any of the caught or uncaught emails. Send yourself an email with the surbl test point in it: http://www.surbl-org-permanent-test-point.com/ That should trigger the spamcop URI list from surbl.
Re: suggestion (already solved)
Brilliant, Thank you On 2 Dec 2004 at 13:16, Michael Parker wrote: On Thu, Dec 02, 2004 at 02:53:58PM -0400, Stephen Moss wrote: I am new to the list but did search the archive for related topics... A suggestion for the architects of Spamassassin: The _SCORE_ variable is great but it would be even better if it had leading zeros/spaces so that my users could sort on the subject field which would ease the task of visually assessing the emails' likelyhood of not being spam. Assuming you are running 3.0 perldoc Mail::SpamAssassin::Conf search for _SCORE(PAD)_ Michael *** The St. Thomas University IT Department SERVICE - RESPECT - QUALITY Stephen Moss Information Technology St Thomas University Sir James Dunn Hall Room 202 Fredericton, NB E3B 5G3 (506) 452-0484 [EMAIL PROTECTED] www.stu.ca Route junk mail to: [EMAIL PROTECTED]
Re: SA 3.x install problem
Steve Bondy wrote: I'm trying to build a new box with SA 3.0.1, but a perl missing perl module won't install. HTML::Parser keeps throwing: undefined symbol: sv_catpvn_utf8_upgrade So obviously, there's something about UTF-8 encoding/decoding I'm missing, but Google searches, the SA Wiki, and other resources are not helping. Can anyone offer any thoughts? Oh... This box runs RH9. Steve I really have no idea if this is going to help, but have you tried changing LANG= to en_US in /etc/sysconfig/i18n? Mine looks like: [EMAIL PROTECTED] config]# cat /etc/sysconfig/i18n LANG=en_US SUPPORTED=en_US.UTF-8:en_US:en SYSFONT=latarcyrheb-sun16 There have been tons of problems with perl on rh9 because of the default LANG setting. I'm not sure if this is related or not, but I guess it's worth a shot. -Jim
RE: SURBLS
I sent an email with that URL in it and it didn't get tagged. --Mike From: Matt Kettler [mailto:[EMAIL PROTECTED] Sent: Thu 12/2/2004 1:25 PM To: Mike Carlson; users@spamassassin.apache.org Subject: Re: SURBLS At 02:01 PM 12/2/2004, Mike Carlson wrote: What tests can I do to make sure SURBLS is working? I havent seen any scores for SURBLS in any of the caught or uncaught emails. Send yourself an email with the surbl tespoint in it: snip That should trigger the spamcop URI list from surbl.
Re: spamd process using too much cpu (again)
On Thu, Dec 02, 2004 at 04:25:46PM -0300, Matias Lopez Bergero wrote: Well, I fix the speed problem that I was having with SA, but I still have the CPU consumption problem. From time to time, some spamd process sticks on top of the top listing with an ~90% CPU utilization, like this: 27639 mselig39 19 30104 29M 2472 R N 105.2 1.9 60:04 0 spamd Here the spamd process is using 105.2% of my CPU resources, and 29MB of memory. Normally the spamd process uses between 10 and 30% of CPU resources, the processes that use around 90% stay there until I kill them. There is a way to prevent this? Possibly. What can be causing this hi CPU usage? I can't be 100% sure, but I'd put money on bayes expiration, which can be very CPU and IO intensive while it runs. Depending on your setup you could turn off bayes_auto_expire and do the expiration manually (ie sa-learn --force-expire), at a time that you control. Unfortunately, unless you are running a sitewide bayes config, this may not work well with a lot of individual bayes dbs. If you're running 3.0, you could move your bayes databases to SQL which has a much faster expiration time (roughly 7 times faster). This would also allow you to offload some of the CPU and IO consumption to a separate machine. Hope that helps. Michael
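A sketch of the manual-expiry setup for a sitewide install (the schedule and user name are examples):

```
# local.cf - stop spamd from running opportunistic expiry mid-scan
bayes_auto_expire 0

# root's crontab - expire nightly as the user that owns the bayes db
15 4 * * * su mail -c 'sa-learn --force-expire'
```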
RE: SA 3.x install problem
Yes, that's one of the first things I change on a RH9 box. I also tried: unset LANG before I started my CPAN session, but no joy. Steve -Original Message- From: Jim Maul [mailto:[EMAIL PROTECTED] Sent: Thursday, December 02, 2004 1:31 PM To: Steve Bondy Cc: users@spamassassin.apache.org Subject: Re: SA 3.x install problem Steve Bondy wrote: I'm trying to build a new box with SA 3.0.1, but a perl missing perl module won't install. HTML::Parser keeps throwing: undefined symbol: sv_catpvn_utf8_upgrade So obviously, there's something about UTF-8 encoding/decoding I'm missing, but Google searches, the SA Wiki, and other resources are not helping. Can anyone offer any thoughts? Oh... This box runs RH9. Steve I really have no idea if this is going to help, but have you tried changing LANG= to en_US in /etc/sysconfig/i18n? mine looks like: [EMAIL PROTECTED] config]# cat /etc/sysconfig/i18n LANG=en_US SUPPORTED=en_US.UTF-8:en_US:en SYSFONT=latarcyrheb-sun16 There have been tons of problems with perl on rh9 because of the default LANG setting. Im not sure if this is related or not but i guess its worth a shot. -Jim
RE: SURBLS
At 02:35 PM 12/2/2004, Mike Carlson wrote: I sent an email with that URL in it and it didnt get tagged. Well, it won't get tagged by SA based on that alone. There are very few sure fire spam rules in SA, and none of the SURBLs are on that small list. What you need to do is look at your hits list for the message, and see if it matched the SC SURBL rule. If you're using SA 3.x it should show up as URIBL_SC_SURBL in your hits. If your setup doesn't put the hits list in some message header even for nonspam, you'll probably have to do it manually using spamassassin -t on the command line. You can also send that URL in an email that also contains a GTUBE string. The GTUBE will force a spam tag, and presumably then your setup, no matter how strange, should list out all the hits. You should make sure it hits both the GTUBE rule and the SURBL rule. http://spamassassin.apache.org/gtube/
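One way to get a full hit list no matter how the glue is set up: build a message containing both the GTUBE string and the SURBL permanent test point, then feed it to `spamassassin -t` on the mail host (the file path is arbitrary):

```shell
# Compose a test message carrying the GTUBE string plus the SURBL
# permanent test point, ready to scan in test mode.
gtube='XJS*C4JDBQADN1.NSBN3*2IDNEN*GTUBE-STANDARD-ANTI-UBE-TEST-EMAIL*C.34X'
{
  printf 'From: test@example.invalid\n'
  printf 'Subject: surbl + gtube check\n\n'
  printf 'http://www.surbl-org-permanent-test-point.com/\n'
  printf '%s\n' "$gtube"
} > /tmp/surbl-test.eml
cat /tmp/surbl-test.eml
# On the SA host, both GTUBE and a URIBL rule should show in the report:
# spamassassin -t < /tmp/surbl-test.eml | grep -E 'GTUBE|URIBL'
```

If GTUBE fires but the URIBL rule doesn't, the problem is DNS or the URIDNSBL plugin, not your milter glue.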
RE: SURBLS
It wasn't in the hits either. --Mike From: Matt Kettler [mailto:[EMAIL PROTECTED] Sent: Thu 12/2/2004 1:36 PM To: Mike Carlson; users@spamassassin.apache.org Subject: RE: SURBLS At 02:35 PM 12/2/2004, Mike Carlson wrote: I sent an email with that URL in it and it didnt get tagged. Well, it won't get tagged by SA based on that alone. There are very few sure fire spam rules in SA, and none of the SURBLs are on that small list. What you need to do is look at your hits list for the message, and see if it matched the SC SURBL rule. If you're using SA 3.x it should show up as URIBL_SC_SURBL in your hits. If your setup doesn't put the hits list in some message header even for nonspam, you'll probably have to do it manually using spamassassin -t on the command line. You can also send that URL in an email that also contains a GTUBE string. The GTUBE will force a spam tag, and presumably then your setup, no matter how strange, should list out all the hits. You should make sure it hits both the GTUBE rule and the SURBL rule. http://spamassassin.apache.org/gtube/
RE: SURBLS
Nah, but you could put it in debug mode and tail your log to see what's wrong. Run it on a different port for a temporary test. CC'ing to users@ in case anyone wants to chime in regarding your setup. Console #1 on Server running SA # spamd -D -p 800 2>&1 | grep postcard Should show... After you run the echo on Console #2 below... debug: uri: uri found: http://postcards.com debug: uridnsbl: domains to query: postcards.com debug: uridnsbl: domain postcards.com listed (URIBL_WS_SURBL): 127.0.0.4 debug: uridnsbl: query for postcards.com took 0 seconds to look up (multi.surbl.org.:postcards.com) debug: uridnsbl: query for postcards.com took 0 seconds to look up (sbl.spamhaus.org.:90.64.69.64) debug: uridnsbl: query for postcards.com took 0 seconds to look up (sbl.spamhaus.org.:61.64.69.64) Console #2 on Server running SA # echo -e "From: blah\n\nhttp://postcards.com\n" | spamc -p 800 Should show... * 2.5 URIBL_WS_SURBL Contains an URL listed in the WS SURBL blocklist * [URIs: postcards.com] Dallas -Original Message- From: Mike Carlson [mailto:[EMAIL PROTECTED] Sent: Thursday, December 02, 2004 1:35 PM To: Dallas L. Engelken Subject: RE: SURBLS Ok, I got the same results you did. Is there a spamd switch I can use to test it? --Mike From: Dallas L. Engelken [mailto:[EMAIL PROTECTED] Sent: Thu 12/2/2004 1:24 PM To: Mike Carlson Subject: RE: SURBLS You can always do manual tests from the box you run SA on... [EMAIL PROTECTED] etc]# host -tA protosoft.org.multi.surbl.org protosoft.org.multi.surbl.org has address 127.0.0.84 [EMAIL PROTECTED] etc]# host -tA postcards.com.multi.surbl.org postcards.com.multi.surbl.org has address 127.0.0.4 [EMAIL PROTECTED] etc]# host -tA nmgi.com.multi.surbl.org Host nmgi.com.multi.surbl.org not found: 3(NXDOMAIN) First 2 are listed, nmgi.com is not :) Dallas -Original Message- From: Mike Carlson [mailto:[EMAIL PROTECTED] Sent: Thursday, December 02, 2004 1:13 PM To: Dallas L. Engelken Subject: RE: SURBLS Thanks for the tips.
I do have dns_available yes coded in my local.cf and the host command does return the name servers for Google. I am wondering if there is a specific test I can use with SA to make sure that SURBLs are working with SA. I am firing SA from MIMEDefang so I am trying to eliminate SA from my troubleshooting process if I can. --Mike From: Dallas L. Engelken [mailto:[EMAIL PROTECTED] Sent: Thu 12/2/2004 1:04 PM To: Mike Carlson Subject: RE: SURBLS Thx.. The raptor firewalls screw with NS resource record resolution, and cause SURBL's not to fire. You might test NS resolution to make sure it's working... # host -tNS google.com Or on windows C:\ nslookup set query=NS google.com If that works, then it's something else. You are not running the -L|--local flag on spamd right? Have you tried hard-coding dns_available yes in your local.cf? Maybe spamd doesn't think your dns is working... Assuming you are running spamc/spamd. Dallas -Original Message- From: Mike Carlson [mailto:[EMAIL PROTECTED] Sent: Thursday, December 02, 2004 1:07 PM To: Dallas L. Engelken Subject: RE: SURBLS No, we are using a linux distro for our firewall. --Mike From: Dallas L. Engelken [mailto:[EMAIL PROTECTED] Sent: Thu 12/2/2004 1:00 PM To: Mike Carlson Subject: RE: SURBLS What tests can I do to make sure SURBLS is working? I havent seen any scores for SURBLS in any of the caught or uncaught emails. Curious, do you have a symantec firewall?
Re: [Dshield] fingerprinting servers before accepting
Christopher X. Candreva wrote: On Wed, 1 Dec 2004, Robert LeBlanc wrote: This actually sounds like it would be a good public DNSBL. Rather than have everyone fingerprint, the central DNSBL would perform fingerprinting of IPs that are requested and not in the cache, then cache the results. Otherwise, everyone running the fingerprints could add up to a good amount of traffic. ... especially on spammers' connections. :) The only problem with having a central fingerprint server would be DoS attacks by the spammers. - Joe
RE: SURBLS
Ok, I tried the command, but I am SSH'd in so the output redirection didn't work. I did do spamd -D -p 800 | grep postcard and this is the plethora of stuff I got. Sorry for all the crap, but I figured I would just send it all. I did run this as root instead of spamd so the bayes error I understand. hades# spamd -D -p 800 | grep postcard trying to connect to syslog/unix... no error connecting to syslog/unix logging enabled: facility: mail socket: unix output: syslog creating INET socket: Listen: 128 LocalAddr: 127.0.0.1 LocalPort: 800 Proto: 6 ReuseAddr: 1 Type: 1 debug: SpamAssassin version 3.0.1 debug: Score set 0 chosen. debug: Storable module v2.13 found debug: Preloading modules with HOME=/tmp/spamd-2121-init debug: ignore: test message to precompile patterns and load modules debug: using /usr/local/etc/mail/spamassassin/init.pre for site rules init.pre debug: config: read file /usr/local/etc/mail/spamassassin/init.pre debug: using /usr/local/share/spamassassin for default rules dir debug: config: read file /usr/local/share/spamassassin/10_misc.cf debug: config: read file /usr/local/share/spamassassin/20_anti_ratware.cf debug: config: read file /usr/local/share/spamassassin/20_body_tests.cf debug: config: read file /usr/local/share/spamassassin/20_compensate.cf debug: config: read file /usr/local/share/spamassassin/20_dnsbl_tests.cf debug: config: read file /usr/local/share/spamassassin/20_drugs.cf debug: config: read file /usr/local/share/spamassassin/20_fake_helo_tests.cf debug: config: read file /usr/local/share/spamassassin/20_head_tests.cf debug: config: read file /usr/local/share/spamassassin/20_html_tests.cf debug: config: read file /usr/local/share/spamassassin/20_meta_tests.cf debug: config: read file /usr/local/share/spamassassin/20_phrases.cf debug: config: read file /usr/local/share/spamassassin/20_porn.cf debug: config: read file /usr/local/share/spamassassin/20_ratware.cf debug: config: read file /usr/local/share/spamassassin/20_uri_tests.cf debug: 
config: read file /usr/local/share/spamassassin/23_bayes.cf debug: config: read file /usr/local/share/spamassassin/25_body_tests_es.cf debug: config: read file /usr/local/share/spamassassin/25_hashcash.cf debug: config: read file /usr/local/share/spamassassin/25_spf.cf debug: config: read file /usr/local/share/spamassassin/25_uribl.cf debug: config: read file /usr/local/share/spamassassin/30_text_de.cf debug: config: read file /usr/local/share/spamassassin/30_text_fr.cf debug: config: read file /usr/local/share/spamassassin/30_text_nl.cf debug: config: read file /usr/local/share/spamassassin/30_text_pl.cf debug: config: read file /usr/local/share/spamassassin/50_scores.cf debug: config: read file /usr/local/share/spamassassin/60_whitelist.cf debug: using /usr/local/etc/mail/spamassassin for site rules dir debug: config: read file /usr/local/etc/mail/spamassassin/70_sare_adult.cf debug: config: read file /usr/local/etc/mail/spamassassin/70_sare_bayes_poison_nxm.cf debug: config: read file /usr/local/etc/mail/spamassassin/70_sare_genlsubj.cf debug: config: read file /usr/local/etc/mail/spamassassin/70_sare_genlsubj_eng.cf debug: config: read file /usr/local/etc/mail/spamassassin/70_sare_header.cf debug: config: read file /usr/local/etc/mail/spamassassin/70_sare_header_eng.cf debug: config: read file /usr/local/etc/mail/spamassassin/70_sare_highrisk.cf debug: config: read file /usr/local/etc/mail/spamassassin/70_sare_html.cf debug: config: read file /usr/local/etc/mail/spamassassin/70_sare_html4.cf debug: config: read file /usr/local/etc/mail/spamassassin/70_sare_html_eng.cf debug: config: read file /usr/local/etc/mail/spamassassin/70_sare_oem.cf debug: config: read file /usr/local/etc/mail/spamassassin/70_sare_random.cf debug: config: read file /usr/local/etc/mail/spamassassin/70_sare_ratware.cf debug: config: read file /usr/local/etc/mail/spamassassin/70_sare_specific.cf debug: config: read file /usr/local/etc/mail/spamassassin/70_sare_spoof.cf debug: config: read 
file /usr/local/etc/mail/spamassassin/70_sare_unsub.cf debug: config: read file /usr/local/etc/mail/spamassassin/70_sare_uri.cf debug: config: read file /usr/local/etc/mail/spamassassin/70_sc_top200.cf debug: config: read file /usr/local/etc/mail/spamassassin/72_sare_bml_post25x.cf debug: config: read file /usr/local/etc/mail/spamassassin/72_sare_redirect_post3.0.0.cf debug: config: read file /usr/local/etc/mail/spamassassin/99_sare_fraud_post25x.cf debug: config: read file /usr/local/etc/mail/spamassassin/evilnumbers.cf debug: config: read file /usr/local/etc/mail/spamassassin/local.cf debug: plugin: loading Mail::SpamAssassin::Plugin::URIDNSBL from @INC debug: plugin: registered Mail::SpamAssassin::Plugin::URIDNSBL=HASH(0x8c7c6d0) debug: plugin: loading Mail::SpamAssassin::Plugin::Hashcash from @INC debug: plugin: registered Mail::SpamAssassin::Plugin::Hashcash=HASH(0x8c3587c) debug: plugin:
RE: SURBLS
Ok, I tried the command, but I am SSH'd in so the output redirection didn't work. I did do spamd -D -p 800 | grep postcard # spamd -D -p 800 2>&1 | grep postcard Notice the stderr to stdout redirector?? d
Re: spamd process using too much cpu (again)
Michael Parker wrote: On Thu, Dec 02, 2004 at 04:25:46PM -0300, Matias Lopez Bergero wrote: From time to time, some spamd process sticks on top of the top listing with an ~90% CPU utilization There is a way to prevent this? Possibly. What can be causing this hi CPU usage? I can't be 100% sure, but I'd put money on bayes expiration, which can be very CPU and IO intensive while it runs. Yes, I forgot to say that I was also having increasing iowait usage. Sometimes I see about 150 to 190% of CPU time wasted in iowait. Depending on your setup you could turn off bayes_auto_expire and do the expiration manually (ie sa-learn --force-expire), at a time that you control. Unfortunately, unless you are running a sitewide bayes config, this may not work well with a lot of individual bayes dbs. I am running a sitewide installation. But now I have another question: do I need to run sa-learn by hand? Is there no way to configure spamd to do that? If you're running 3.0, you could move your bayes databases to SQL which has a much faster expiration time (roughly 7 times faster). This would also allow you to offload some of the CPU and IO consumption to a separate machine. That would be nice :) Unfortunately I cannot put the bayes db on another machine, but I have a local mysql service, so if that improves the performance I could move the bayes db there. What would be your suggestion? Hope that helps. It really helps, Thank you! Matías.
RE: SURBLS
If I run that command I get this: hades# spamd -D -p 800 2>&1 | grep postcard Ambiguous output redirect. hades# --Mike From: Dallas L. Engelken [mailto:[EMAIL PROTECTED] Sent: Thu 12/2/2004 2:05 PM To: Mike Carlson Cc: users@SpamAssassin.apache.org Subject: RE: SURBLS OK, I tried the command, but I'm SSH'd in, so the output redirection didn't work. I did: spamd -D -p 800 | grep postcard
# spamd -D -p 800 2>&1 | grep postcard
Notice the stderr-to-stdout redirector? d
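The "Ambiguous output redirect" error is csh/tcsh rejecting the Bourne-style `2>&1`. The point matters because spamd -D writes its debug trace to stderr, so a plain pipe to grep sees nothing. A minimal sketch of the difference in a Bourne shell, using a stand-in function (the debug line is made up for illustration):

```shell
#!/bin/sh
# spamd's debug output goes to stderr, e.g.:
#   spamd -D -p 800 2>&1 | grep postcard     (sh/bash)
#   spamd -D -p 800 |& grep postcard         (csh/tcsh equivalent)
# Stand-in demonstration: a command that writes only to stderr.
emit() { echo "debug: rules: postcard rule loaded" >&2; }

# Plain pipe: grep gets nothing on stdin, so the match is missed.
emit 2>/dev/null | grep postcard || echo "plain pipe missed it"
# With 2>&1, stderr is duplicated onto stdout before the pipe.
emit 2>&1 | grep postcard
```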
Problems with upgrade.
I just upgraded my SpamAssassin engine cluster from 2.61 to 3.0.1, and now I'm getting some spam slipping through because it didn't get processed. I did some testing, running GTUBE manually with spamc: sometimes it kicks the message out immediately, unprocessed; other times it just hangs; and other times it works correctly. What's going on? Thanks, Billy
Billy Huddleston, Senior Systems Administrator
Net-Express, http://www.nxs.net
114 Sherway Rd., Knoxville, TN 37922
Voice: 865-691-2011 Fax: 865-691-9894
[EMAIL PROTECTED]
Re: spamd process using to much cpu (again)
On Thu, Dec 02, 2004 at 05:07:48PM -0300, Matias Lopez Bergero wrote: Depending on your setup, you could turn off bayes_auto_expire and do the expiration manually (i.e. sa-learn --force-expire), at a time that you control. Unfortunately, unless you are running a sitewide bayes config, this may not work well with a lot of individual bayes dbs. I am running a sitewide installation. But now I have another question: do I need to run sa-learn by hand? Is there no way to configure spamd to do that? You can script it, or run it via cron; spamd doesn't have this ability. If you're running 3.0, you could move your bayes databases to SQL, which has a much faster expiration time (roughly 7 times faster). This would also allow you to offload some of the CPU and IO consumption to a separate machine. That would be nice :) Unfortunately I cannot put the bayes db on another machine, but I have a local mysql service, so if that improves the performance I could move the bayes db there. What would be your suggestion? I'm a little biased, but I suggest the mysql route. http://www.apache.org/~parker/presentations/ Michael
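For reference, the manual-expiry setup Michael describes might look like this (paths and timing are illustrative, not from the thread):

```
# local.cf: stop SpamAssassin doing opportunistic Bayes expiry during scans
bayes_auto_expire 0

# /etc/crontab entry: expire the sitewide Bayes db nightly at a quiet hour
30 3 * * * root /usr/bin/sa-learn --force-expire --dbpath /var/lib/spamassassin/bayes >/dev/null 2>&1
```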
What is up with surbl.org?
How come surbl.org has a Bad status at rating.cloudmark.com? (see below). This is not good. - Mark System Administrator Asarian-host.org --- If you were supposed to understand it, we wouldn't call it code. - FedEx

asarian-host: {root} % dig surbl.org.rating.cloudmark.com txt

; <<>> DiG 8.4 <<>> surbl.org.rating.cloudmark.com txt
;; res options: init recurs defnam dnsrch
;; got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1760
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 1, ADDITIONAL: 1
;; QUERY SECTION:
;;      surbl.org.rating.cloudmark.com, type = TXT, class = IN

;; ANSWER SECTION:
surbl.org.rating.cloudmark.com. 1M IN TXT "Service Policy: http://rating.cloudmark.com/sidtoc/"
surbl.org.rating.cloudmark.com. 1M IN TXT "Rating: 62"
surbl.org.rating.cloudmark.com. 1M IN TXT "Confidence: 11"
surbl.org.rating.cloudmark.com. 1M IN TXT "Status: Bad"
surbl.org.rating.cloudmark.com. 1M IN TXT "Cloudmark Rating Version: 1.0"

;; AUTHORITY SECTION:
rating.cloudmark.com. 9m42s IN NS dignity.cloudmark.com.

;; ADDITIONAL SECTION:
dignity.cloudmark.com. 9m42s IN A 66.151.150.36

;; Total query time: 81 msec
;; FROM: asarian-host.net to SERVER: 127.0.0.1
;; WHEN: Thu Dec 2 21:21:32 2004
;; MSG SIZE sent: 48 rcvd: 296
Re: [Dshield] fingerprinting servers before accepting
On Thu, 2004-12-02 at 11:53, Joe Emenaker wrote: Christopher X. Candreva wrote: On Wed, 1 Dec 2004, Robert LeBlanc wrote: This actually sounds like it would be a good public DNSBL. Rather than have everyone fingerprint, the central DNSBL would perform fingerprinting of IPs that are requested and not in the cache, then cache the results. Otherwise, everyone running the fingerprints could add up to a good amount of traffic. ... especially on spammers' connections. :) Unfortunately p0f is passive so no DDoS on the spammers. :( :) The only problem with having a central fingerprint server would be DoS attacks by the spammers. Distributed DNS is well understood. Might be spendy, tho... A DNSRBL of Windows desktop OS (W98, WME, W2kPro, WXPPro) SMTP sources with a fairly short expiry might be useful. Have the trusted spamtraps run p0f and collect the data, and update the distributed DNS in realtime. -- John Hardin Internal Systems Administrator (Seattle) CRS Retail Systems, Inc. 3400 188th Street SW, Suite 185 Lynnwood, WA 98037 voice: (425) 672-1304 fax: (425) 672-0192 email: [EMAIL PROTECTED] web: http://www.crsretail.com --- If you smash a computer to bits with a mallet, that appears to count as encryption in the state of Nevada. - CRYPTO-GRAM 12/2001 ---
Re: spamd process using to much cpu (again)
Michael Parker wrote: On Thu, Dec 02, 2004 at 05:07:48PM -0300, Matias Lopez Bergero wrote: If you're running 3.0, you could move your bayes databases to SQL which has a much faster expiration time (roughly 7 times faster). This would also allow you to offload some of the CPU and IO consumption to a separate machine. That would be nice :) Unfortunately I cannot put the bayes db on another machine, but I have a local mysql service, so if that improves the performance I could move the bayes db there. What would be your suggestion? I'm a little biased, but I suggest the mysql route. I'm going to do that. Thanks a lot Michael! BR, Matías.
Re: What is up with surbl.org?
On Thu, Dec 02, 2004 at 08:42:34PM +, Mark wrote: How come surbl.org has a Bad status at rating.cloudmark.com? (see below). This is not good. That's really a question for the Cloudmark people. My guess is that domains that have mailing lists which talk about spam get a bad rating since people still incorrectly scan those lists through anti-spam software. -- Randomly Generated Tagline: Meanwhile the US military officials are looking for their next target in the war on terrorism. Today President Bush restated his commitment to the war on terror, saying, You're either with us, or against us, or, in the case of Saudi Arabia, both.- Bill Maher
RE: SURBLS
At 02:46 PM 12/2/2004, Mike Carlson wrote: It wasn't in the hits either. --Mike Hmm. Do you have Net::DNS installed? Do any normal RBLs work?
Re: What is up with surbl.org?
At 03:50 PM 12/2/2004, Theo Van Dinter wrote: On Thu, Dec 02, 2004 at 08:42:34PM +, Mark wrote: How come surbl.org has a Bad status at rating.cloudmark.com? (see below). This is not good. That's really a question for the Cloudmark people. My guess is that domains that have mailing lists which talk about spam get a bad rating since people still incorrectly scan those lists through anti-spam software. Another guess would be that rating is using data from razor reports. If someone reports a SA tagged message to razor without stripping the tags, the surbl.org domain is going to appear in the message body as a byproduct of the rule descriptions.
RE: What is up with surbl.org?
-Original Message- From: Chris Santerre [mailto:[EMAIL PROTECTED] Sent: Thursday, 2 December 2004 21:56 To: 'Mark'; SURBL Discussion list (E-mail) Subject: RE: What is up with surbl.org? How come surbl.org has a Bad status at rating.cloudmark.com? (see below). This is not good. Jealousy? :) Maybe they are related to that fifth dentist? Perhaps Jeff should contact them with a What up, dude? email? Perhaps he should. Seriously, I heavily rely on the rating.cloudmark.com data; surbl.org should *never* have been in there. The folks at Cloudmark really need to manually inspect their entries; next thing this list has a bad rep too. - Mark System Administrator Asarian-host.org --- If you were supposed to understand it, we wouldn't call it code. - FedEx
RE: SA 3.x install problem
I've been working backward through the HTML::Parser versions and have discovered that all versions after 3.38 (Nov 11, 2004) fail with this error. Apparently a patch was applied starting with 3.39 that breaks my build. I have this working with 3.38, and I'll be sending a note to the maintainer of HTML::Parser. Steve Steve Bondy wrote: Yes, that's one of the first things I change on a RH9 box. I also tried: unset LANG before I started my CPAN session, but no joy. Steve Jim Maul wrote: Steve Bondy wrote: I'm trying to build a new box with SA 3.0.1, but a missing perl module won't install. HTML::Parser keeps throwing: undefined symbol: sv_catpvn_utf8_upgrade So obviously, there's something about UTF-8 encoding/decoding I'm missing, but Google searches, the SA Wiki, and other resources are not helping. Can anyone offer any thoughts? Oh... This box runs RH9. Steve I really have no idea if this is going to help, but have you tried changing LANG to en_US in /etc/sysconfig/i18n? Mine looks like: [EMAIL PROTECTED] config]# cat /etc/sysconfig/i18n LANG=en_US SUPPORTED=en_US.UTF-8:en_US:en SYSFONT=latarcyrheb-sun16 There have been tons of problems with perl on RH9 because of the default LANG setting. I'm not sure if this is related or not, but I guess it's worth a shot. -Jim
Upgrade to 3.0.1 results in false positives
I've been using SA 2.6x to block spam at our site for some time. With the release of 3.0.1 I decided to upgrade. Unfortunately, once the upgrade was complete I found that my test e-mails were marked as spam. I also received several false-positive reports from end-users. I could understand some of the FPs, but when a simple test mail (subject "test test", body "test test") is marked, I'm in trouble. The test I sent was from my Comcast account, and ended up with the following header: X-Spam-Status: Yes, hits=17.8 tagged_above=2.0 required=3.0 tests=AWL, BAYES_00, DNS_FROM_RFC_POST, DNS_FROM_RFC_WHOIS, NO_REAL_NAME, RCVD_BY_IP, RCVD_DOUBLE_IP_LOOSE, RCVD_NUMERIC_HELO I added up the scores for these rules, and they didn't seem to add up to 17.anything. I'm using a Postfix MTA setup with amavisd-new to call SA, and the only thing that has changed is SA, so I don't think this is due to amavisd-new. Why would a simple test mail from my Comcast account generate this?
whitelist_from override all the spam check?
If I use whitelist_from [EMAIL PROTECTED], it should not classify the email as spam no matter what the score is, right? -Andrew
Blacklist one address
Hello, I read that adding black_list [EMAIL PROTECTED] to my local.cf file would block mail from coming from that person but after doing spamassassin --lint I got: config: SpamAssassin failed to parse line, skipping: black_list [EMAIL PROTECTED] He is not necessarily a spammer he just refuses to stop sending one of our employees e-mails. Thanks for any help. Brian O'Neill
Re: whitelist_from override all the spam check?
Andrew Xiang wrote: If I use whitelist_from [EMAIL PROTECTED] , it should not classify the email as spam no matter what the score is, right? No, 'whitelist_from' is a -100 score. -- Regards, Marco.
Re: Blacklist one address
Brian O'Neill wrote: I read that adding black_list [EMAIL PROTECTED] to my local.cf file would block mail from coming from that person but after doing spamassassin --lint I got: config: SpamAssassin failed to parse line, skipping: black_list [EMAIL PROTECTED] The syntax is 'blacklist_from address', like this: blacklist_from [EMAIL PROTECTED] -- Regards, Marco.
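For completeness, here are both directives side by side as they might appear in local.cf. The addresses are placeholders; if I recall the 3.0 default scores correctly, these fire USER_IN_BLACKLIST (+100) and USER_IN_WHITELIST (-100) respectively:

```
# local.cf -- addresses below are illustrative placeholders
blacklist_from spammer@example.com     # fires USER_IN_BLACKLIST (+100)
whitelist_from friend@example.com      # fires USER_IN_WHITELIST (-100)
# glob patterns work too:
blacklist_from *@spammy-domain.example
```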
RE: Blacklist one address
I thought the format was blacklist_from -Original Message- From: Brian O'Neill [mailto:[EMAIL PROTECTED] Sent: Thursday, December 02, 2004 2:10 PM To: users@spamassassin.apache.org Subject: Blacklist one address Hello, I read that adding black_list [EMAIL PROTECTED] to my local.cf file would block mail from coming from that person but after doing spamassassin --lint I got: config: SpamAssassin failed to parse line, skipping: black_list [EMAIL PROTECTED] He is not necessarily a spammer he just refuses to stop sending one of our employees e-mails. Thanks for any help. Brian O'Neill
Re: Blacklist one address
Marco van den Bovenkamp wrote: Brian O'Neill wrote: I read that adding black_list [EMAIL PROTECTED] to my local.cf file would block mail from coming from that person but after doing spamassassin --lint I got: config: SpamAssassin failed to parse line, skipping: black_list [EMAIL PROTECTED] The syntax is 'blacklist_from address', like this: blacklist_from [EMAIL PROTECTED] Great! Thank you very much Brian O'Neill
Re: Blacklist one address
At 02:10 PM 12/2/2004, you wrote: Hello, I read that adding black_list [EMAIL PROTECTED] to my local.cf file would block mail from coming from that person but after doing spamassassin --lint I got: config: SpamAssassin failed to parse line, skipping: black_list [EMAIL PROTECTED] He is not necessarily a spammer he just refuses to stop sending one of our employees e-mails. As someone pointed out, that's the wrong syntax. Also note, SpamAssassin will _not_ block mail. If you want to _block_ mail, that's better done at the MTA level.
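As a sketch of what "block at the MTA level" can mean, here is a hypothetical Postfix sender access map (other MTAs have equivalents; the address and paths are placeholders):

```
# main.cf:
#   smtpd_sender_restrictions = check_sender_access hash:/etc/postfix/sender_access
#
# /etc/postfix/sender_access:
unwanted.sender@example.com   REJECT Mail from this address is not accepted
#
# then rebuild the map and reload:
#   postmap /etc/postfix/sender_access && postfix reload
```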
Re: whitelist_from override all the spam check?
guess that should be good enough. -100. - Original Message - From: Marco van den Bovenkamp [EMAIL PROTECTED] To: users@spamassassin.apache.org Sent: Thursday, December 02, 2004 5:11 PM Subject: Re: whitelist_from override all the spam check? Andrew Xiang wrote: If I use whitelist_from [EMAIL PROTECTED] , it should not classify the email as spam no matter what the score is, right? No, 'whitelist_from' is a -100 score. -- Regards, Marco.
Re: Blacklist one address
At 05:10 PM 12/2/2004, Brian O'Neill wrote: I read that adding black_list [EMAIL PROTECTED] to my local.cf file would block mail from coming from that person but after doing spamassassin --lint I got: config: SpamAssassin failed to parse line, skipping: black_list [EMAIL PROTECTED] that's blacklist_from not black_list. http://spamassassin.apache.org/full/3.0.x/dist/doc/Mail_SpamAssassin_Conf.html
lint failed
spamassassin --lint
config: SpamAssassin failed to parse line, skipping: urirhssub URIBL_JP_SURBL multi.surbl.org. A 64
Failed to run URIBL_JP_SURBL SpamAssassin test, skipping: (Can't locate object method check_uridnsbl via package Mail::SpamAssassin::PerMsgStatus (perhaps you forgot to load Mail::SpamAssassin::PerMsgStatus?) at /usr/local/lib/perl5/site_perl/5.6.1/Mail/SpamAssassin/PerMsgStatus.pm line 2296.)
lint: 2 issues detected. please rerun with debug enabled for more information.

local.cf:
urirhssub URIBL_JP_SURBL multi.surbl.org. A 64
body      URIBL_JP_SURBL eval:check_uridnsbl('URIBL_JP_SURBL')
describe  URIBL_JP_SURBL Has URI in JP at http://www.surbl.org/lists.html
tflags    URIBL_JP_SURBL net
score     URIBL_JP_SURBL 4.0
Re: Bayes question
By the way - are the bayes databases on disk portable (in the sense I could import or copy them to another server and use them accordingly)? Thanks in advance
Re: Bayes question
On Thu, 2 Dec 2004 22:27:05 +, Ricardo Oliveira [EMAIL PROTECTED] wrote: By the way - are the bayes databases on disk portable (in the sense I could import or copy them to another server and use them accordingly)? Thanks in advance I haven't had a problem doing that, moving from one Sparc to another. Mike
FW: SA 3.x install problem
I heard from Gisle Aas, the maintainer of the HTML::Parser module, and he writes: This problem has been fixed for upcoming 3.42. The current released versions require 5.8.1 or better to get Unicode support. So for now I'm using 3.38, and will go to 3.42 when it shows up on CPAN. Thanks to everyone for their input. Steve Steve Bondy wrote: I'm trying to build a new box with SA 3.0.1, but a missing perl module won't install. HTML::Parser keeps throwing: undefined symbol: sv_catpvn_utf8_upgrade So obviously, there's something about UTF-8 encoding/decoding I'm missing, but Google searches, the SA Wiki, and other resources are not helping. Can anyone offer any thoughts? Oh... This box runs RH9. Steve The default version of Perl on RH9 is quite broken. The specific issue is well documented (but I can't for the life of me remember what it is). RedHat has even acknowledged it but for some reason never released an update to fix the problem. Upgrading Perl to any newer version should resolve your problem. Daryl
Re: lint failed
What version of SA are you using? That rule will only work under SA 3.0.x. It's not compatible with earlier versions, including SA 2.6x with the SpamCopURI patch. If you're using 3.x, make sure you've got the URIDNSBL plugin loaded and working in the first place before you add the JP rule. For safety you can surround the rule with an ifplugin statement:

ifplugin Mail::SpamAssassin::Plugin::URIDNSBL
urirhssub URIBL_JP_SURBL multi.surbl.org. A 64
body      URIBL_JP_SURBL eval:check_uridnsbl('URIBL_JP_SURBL')
describe  URIBL_JP_SURBL Has URI in JP at http://www.surbl.org/lists.html
tflags    URIBL_JP_SURBL net
score     URIBL_JP_SURBL 4.0
endif # Mail::SpamAssassin::Plugin::URIDNSBL

That way, if the plugin isn't loaded, the rule will be skipped entirely, just like the other URIBL rules.

At 05:24 PM 12/2/2004, Andrew Xiang wrote:
spamassassin --lint
config: SpamAssassin failed to parse line, skipping: urirhssub URIBL_JP_SURBL multi.surbl.org. A 64
Failed to run URIBL_JP_SURBL SpamAssassin test, skipping: (Can't locate object method check_uridnsbl via package Mail::SpamAssassin::PerMsgStatus (perhaps you forgot to load Mail::SpamAssassin::PerMsgStatus?) at /usr/local/lib/perl5/site_perl/5.6.1/Mail/SpamAssassin/PerMsgStatus.pm line 2296.)
lint: 2 issues detected. please rerun with debug enabled for more information.

local.cf:
urirhssub URIBL_JP_SURBL multi.surbl.org. A 64
body      URIBL_JP_SURBL eval:check_uridnsbl('URIBL_JP_SURBL')
describe  URIBL_JP_SURBL Has URI in JP at http://www.surbl.org/lists.html
tflags    URIBL_JP_SURBL net
score     URIBL_JP_SURBL 4.0
Understanding the AWL (was Upgrade to 3.0.1 results in false positives)
Thanks to Mark Martinec on the amavisd-new list I've managed to narrow this down to my comcast address being assigned about 18 points by the AWL. Now I just have to figure out why. I read the (not very expansive) POD docs on the AWL, and it's clear that I don't really understand how it works. Why doesn't it like my address? Also, why does it like me so much less under 3.0.1 than it did under 2.64? -Original Message- From: Aaron Grewell [mailto:[EMAIL PROTECTED] Sent: Thursday, December 02, 2004 1:55 PM To: users@spamassassin.apache.org Subject: Upgrade to 3.0.1 results in false positives I've been using SA 2.6x to block spam at our site for some time. With the release of 3.0.1 I decided to upgrade. Unfortunately, once the upgrade was complete I found that my test e-mails were marked as spam. I also received several false positives from end-users as well. I could understand some of the FP's, but when a simple test mail (subject test test body test test) is marked I'm in trouble. The test I sent was from my Comcast account, and ended up with the following header: X-Spam-Status: Yes, hits=17.8 tagged_above=2.0 required=3.0 tests=AWL, BAYES_00, DNS_FROM_RFC_POST, DNS_FROM_RFC_WHOIS, NO_REAL_NAME, RCVD_BY_IP, RCVD_DOUBLE_IP_LOOSE, RCVD_NUMERIC_HELO I added up the scores for these rules, and they didn't seem to add up to 17.anything. I'm using a Postfix MTA setup with Amavisd-new to call SA, and the only thing that has changed is SA so I don't think this is due to Amavisd-new. Why would a simple test mail from my Comcast account generate this?
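For what it's worth, my understanding of the 3.0 AWL is that it is an averaging mechanism rather than a whitelist: the message's score is pulled toward the sender's historical mean score by auto_whitelist_factor (default 0.5). A sketch with illustrative numbers (not taken from Aaron's actual AWL data):

```python
# Sketch of the SA 3.0 AWL adjustment as I understand it: the final
# score is nudged toward the sender's historical mean by a fixed factor.
def awl_adjusted(pre_awl_score, historical_mean, factor=0.5):
    """Return the message score after the auto-whitelist adjustment."""
    return pre_awl_score + factor * (historical_mean - pre_awl_score)

# A near-zero test message from an address whose past mail averaged
# ~35.8 points ends up at 17.8 -- consistent with the header above,
# where the AWL alone contributed ~18 points.
print(awl_adjusted(-0.2, 35.8))
```

So a sender whose prior mail scored high (e.g. because earlier spammy-looking mail was seen from that address) drags even an innocuous test message over the threshold.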
RE: SURBLS
I have Net::DNS installed. It's a FreeBSD 4.9 box with SA being called by MIMEDefang. I am not sure if any of the RBL stuff is working; I figured I would work on one thing at a time. --Mike -Original Message- From: Matt Kettler [mailto:[EMAIL PROTECTED] Sent: Thursday, December 02, 2004 2:55 PM To: Mike Carlson; users@spamassassin.apache.org Subject: RE: SURBLS At 02:46 PM 12/2/2004, Mike Carlson wrote: It wasn't in the hits either. --Mike Hmm. Do you have Net::DNS installed? Do any normal RBLs work?
RE: Understanding the AWL (was Upgrade to 3.0.1 results in false positives)
Nevermind. I found it in the list archives. -Original Message- From: Aaron Grewell Sent: Thursday, December 02, 2004 3:03 PM To: Aaron Grewell; users@spamassassin.apache.org Subject: Understanding the AWL (was Upgrade to 3.0.1 results in false positives) Thanks to Mark Martinec on the amavisd-new list I've managed to narrow this down to my comcast address being assigned about 18 points by the AWL. Now I just have to figure out why. I read the (not very expansive) POD docs on the AWL, and it's clear that I don't really understand how it works. Why doesn't it like my address? Also, why does it like me so much less under 3.0.1 than it did under 2.64? -Original Message- From: Aaron Grewell [mailto:[EMAIL PROTECTED] Sent: Thursday, December 02, 2004 1:55 PM To: users@spamassassin.apache.org Subject: Upgrade to 3.0.1 results in false positives I've been using SA 2.6x to block spam at our site for some time. With the release of 3.0.1 I decided to upgrade. Unfortunately, once the upgrade was complete I found that my test e-mails were marked as spam. I also received several false positives from end-users as well. I could understand some of the FP's, but when a simple test mail (subject test test body test test) is marked I'm in trouble. The test I sent was from my Comcast account, and ended up with the following header: X-Spam-Status: Yes, hits=17.8 tagged_above=2.0 required=3.0 tests=AWL, BAYES_00, DNS_FROM_RFC_POST, DNS_FROM_RFC_WHOIS, NO_REAL_NAME, RCVD_BY_IP, RCVD_DOUBLE_IP_LOOSE, RCVD_NUMERIC_HELO I added up the scores for these rules, and they didn't seem to add up to 17.anything. I'm using a Postfix MTA setup with Amavisd-new to call SA, and the only thing that has changed is SA so I don't think this is due to Amavisd-new. Why would a simple test mail from my Comcast account generate this?
RE: SURBLS
Make sure your mimedefang is configuring SA to use the RBL. In the mimedefang-filter file make sure this is there: $SALocalTestsOnly = 0; In the sa-mimedefang.cf file, whatever it's named in FBSD: skip_rbl_checks 0 .. -Original Message- From: Mike Carlson [mailto:[EMAIL PROTECTED] Sent: Thursday, December 02, 2004 5:14 PM To: Matt Kettler; users@spamassassin.apache.org Subject: RE: SURBLS I have Net::DNS installed. It's a FreeBSD 4.9 with SA being called by MIMEDefang. I am not sure if any of the RBL stuff is working. I figured I would work on one thing at a time. --Mike
RE: SURBLS
I have both those options set. --Mike -Original Message- From: Guyang Mao [mailto:[EMAIL PROTECTED] Sent: Thursday, December 02, 2004 5:13 PM To: users@spamassassin.apache.org Subject: RE: SURBLS Make sure your mimedefang is configuring SA to use the RBL. In the mimedefang-filter file make sure this is there: $SALocalTestsOnly = 0; In the sa-mimedefang.cf file, whatever it's named in FBSD: skip_rbl_checks 0 .. -Original Message- From: Mike Carlson [mailto:[EMAIL PROTECTED] Sent: Thursday, December 02, 2004 5:14 PM To: Matt Kettler; users@spamassassin.apache.org Subject: RE: SURBLS I have Net::DNS installed. It's a FreeBSD 4.9 with SA being called by MIMEDefang. I am not sure if any of the RBL stuff is working. I figured I would work on one thing at a time. --Mike