Re: FW: List of 700,000 IP addresses of virus infected computers
On Thursday 13 September 2007, jdow wrote: And you just fed the troll-chain, yourself, silly person. {^_-} At least I trim my replies... -- Phil Barnett AI4OF SKCC #600
Re: FW: List of 700,000 IP addresses of virus infected computers
On Wednesday 12 September 2007, Jason Bertoch wrote: Is there any chance we can get a moderator on this, please? This is clearly not a SA topic and I'm weary of insults, flames, and advertisements from Marc. You guys are almost as good as smurf amplifiers. Don't feed the trolls and instead of 30 off topic posts we'd have 3. This is not a new concept. -- Phil Barnett AI4OF SKCC #600
Re: How do I temporarily disable SpamAssassin?
On Saturday 18 August 2007, [EMAIL PROTECTED] wrote: I have a FreeBSD machine running qmail, SpamAssassin and ClamAV. The machine is receiving 200,000 e-mail messages per day, courtesy of Rumpelstiltskin attacks from thousands of different IP addresses each day, and SpamAssassin appears to be overwhelmed. I have about 50,000 e-mail messages in my qmail queue and the queue is growing by more than 1,000 e-mail messages per hour. I want to temporarily disable SpamAssassin to free up enough resources to let the mail queue clear. How do I do that? If anyone knows how to temporarily disable ClamAV too, I'd be ecstatic to learn how to do that as well. I've read Life with qmail and the SpamAssassin documentation at http://spamassassin.apache.org/ but I'm not connecting the dots. Unfortunately, I didn't set up this machine and I don't have a good grasp of qmail, SpamAssassin and ClamAV. Thanks in advance for any guidance and all practical suggestions you can offer. I can direct you to a person who probably knows qmail as well as anyone on the planet, John Simpson. He doesn't run SA, but he really knows qmail inside and out, so any questions to him about qmail are well directed. You can view his website here and probably find a link to his email addy if you can read and interpret what's in that box in the 'Other useful pages' section... http://www.jms1.net/ -- Phil Barnett AI4OF SKCC #600
Re: hallmark greeting card spam and broken spf records.
On Friday 03 August 2007, Michael Scheidell wrote: (yes, spf is broken) especially when companies like hallmark, who know they are being used as 'phishing' targets list the whole world as authoritative mail servers. I say damn them all, blacklist hallmark till they at least fix their spf records: (i suspect its the :12 9 )? shb a period? I have a good friend who patches his qmail so that if it sees a spf record that is extra wide, he reverses its meaning. - Quoting from qmail.jms1.net Some people are improperly treating SPF pass as a strong non-spam flag when evaluating the spam level of a message. Spammers ARE taking advantage of this by placing +all in the SPF records of the domains that they purchase for the purposes of sending spam. What this does is tell the receiving server that ANY IP ADDRESS is allowed to send messages claiming to be From: that domain. Obviously this is not a good thing, for two reasons. First, spammers are bypassing the filtering that SPF should be offering. Second, people are placing a lot more trust in SPF than they should. An SPF failure result can be used to place a lower trust value on a particular message, but as long as spammers are able to purchase their own domain names and create their own SPF records, an SPF pass result should not be used to place any higher trust value on a message. I have added an option to treat a +all term found within an SPF record as if it said -all. This can be enabled by creating an SPF_BLOCK_PLUS_ALL environment variable with a value other than 0. Note that this variable is checked at the time the SPF check itself is done, which means if you want to add, change, or delete this variable using the AUTH_SET variables, you can. Linky here: http://qmail.jms1.net/patches/combined-details.shtml -- Phil Barnett AI4OF SKCC #600
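The idea in the quoted patch, evaluating a `+all` term as if it said `-all`, can be sketched in a few lines. This is an illustrative Python rewrite, not the actual jms1.net patch (which lives inside qmail's code and is toggled with the SPF_BLOCK_PLUS_ALL variable); `harden_spf` is a hypothetical helper name:

```python
def harden_spf(record: str) -> str:
    """Rewrite an SPF record so that '+all' (or a bare 'all', which
    defaults to '+all' in SPF syntax) is evaluated as '-all'.
    A sketch of the idea only, not the real qmail patch."""
    return " ".join("-all" if term in ("+all", "all") else term
                    for term in record.split())
```

A spammer record like `v=spf1 +all` now fails for every sending IP instead of passing everything, while a normal record such as `v=spf1 ip4:192.0.2.0/24 -all` is left untouched.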
Re: Rulesemporium
On Thursday 12 July 2007, Justin Mason wrote: Phil Barnett writes: On Wednesday 11 July 2007, SARE Webmaster wrote: There has been discussion of taking down the public site, opening something new ( private access, invite only, acl by ip, etc), in hopes to avoid ddos and provide better services, more frequent rule updates, and so on. We are trying our best to keep it alive, but there is only so much we can do with the limited time and resources we have. How about releasing the ruleset via torrent or something similar. Anything that you could do to distribute the load and location would make a ddos attack less effective. While there might not be a lot of people on this list who can use their server to take on the entire DDOS for you, there are a LOT of servers here that could participate in a pool. If you're going to be looking into new methods to distribute rulesets, may I suggest sa-update? ;) Is it DDOS resistant already? -- Phil Barnett AI4OF SKCC #600
Re: Rulesemporium
On Wednesday 11 July 2007, SARE Webmaster wrote: There has been discussion of taking down the public site, opening something new ( private access, invite only, acl by ip, etc), in hopes to avoid ddos and provide better services, more frequent rule updates, and so on. We are trying our best to keep it alive, but there is only so much we can do with the limited time and resources we have. How about releasing the ruleset via torrent or something similar. Anything that you could do to distribute the load and location would make a ddos attack less effective. While there might not be a lot of people on this list who can use their server to take on the entire DDOS for you, there are a LOT of servers here that could participate in a pool. Maybe a DNS round robin? Just some ideas. -- Phil Barnett AI4OF SKCC #600
Re: OT: Motivating good behavior from negligent ISP's
On Wednesday 11 July 2007, Philip Prindeville wrote: Michele Neylon :: Blacknight wrote: Philip Prindeville wrote: No joy. How long ago did you report it? Which time? It happens regularly, and it's been going on over a month. Ok. That changes things, but you didn't say anything in your post about it going on for a month I note also that they aren't using exponential back-off with a 2 hour maximum retry interval as suggested by the RFC's: Jul 11 00:08:19 mail mimedefang.pl[26738]: filter_relay rejected host 194.250.131.236 (smtp-wifi.orange.fr) (snip) We've started to take defensive measures... That would earn them a rule in my firewall. -- Phil Barnett AI4OF SKCC #600
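For reference, the RFC-suggested retry behavior (exponential back-off with a 2-hour ceiling) produces a schedule like the following sketch. The starting interval is an illustrative choice, since the RFCs leave exact timings to the implementation:

```python
def retry_schedule(first_delay: int, cap: int, attempts: int) -> list:
    """Delays (in seconds) between successive delivery attempts:
    double each time, never exceeding the cap."""
    delays = []
    delay = first_delay
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * 2, cap)
    return delays

# e.g. start at 15 minutes, cap at the 2-hour maximum
schedule = retry_schedule(900, 7200, 6)  # [900, 1800, 3600, 7200, 7200, 7200]
```

A sender that instead hammers the server every few seconds, as in the log excerpt above, never backs off at all.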
Re: Rulesemporium
On Wednesday 11 July 2007, Yet Another Ninja wrote: On 7/12/2007 12:50 AM, Phil Barnett wrote: On Wednesday 11 July 2007, SARE Webmaster wrote: There has been discussion of taking down the public site, opening something new ( private access, invite only, acl by ip, etc), in hopes to avoid ddos and provide better services, more frequent rule updates, and so on. We are trying our best to keep it alive, but there is only so much we can do with the limited time and resources we have. How about releasing the ruleset via torrent or something similar. Anything that you could do to distribute the load and location would make a ddos attack less effective. While there might not be a lot of people on this list who can use their server to take on the entire DDOS for you, there are a LOT of servers here that could participate in a pool. Maybe a DNS round robin? Just some ideas. hey great ideas - who volunteers to set up the Torrent stuff and manage it all ? Thinking further, torrent is not exactly what is needed. Torrents need to be reseeded for every change, so that's a maintenance nightmare. RSS has some of the pieces, but I'm not sure if it can be just a file delivery method. rsync has obvious benefits in reducing bandwidth, but doesn't have any security built into it. I think some brainstorming to come up with a peer distributed subscription service is the starting point. If there isn't one, that's the next battle. We can't be the first people to come up against this problem. How have others solved it? -- Phil Barnett AI4OF SKCC #600
Re: Spoofed URI's or fake websites ?
On Thursday 05 July 2007 06:47, Samuel Krieg wrote: Thanks for your answer. You confirm my thoughts. By the way, I contacted ThePlanet some time ago about such websites. The redirection has been cleaned up and the websites are still online. PS: I'm not talking about my servers. They are healthy and running Linux :-) Don't think that this can't happen to a Linux based server. I've had both Coppermine and Geeklog compromised in the last month with phish sites. Fortunately, it was simple to see and secure the path on the Coppermine, which was letting new users have picture posting rights, but I never did figure out how they got in on Geeklog, so it's now banned from my server. -- Phil Barnett AI4OF SKCC #600
Re: Patch for rules_du_jour
On Thursday 28 June 2007 15:22, Lindsay Haisley wrote: Attached is a proposed patch for /var/lib/spamassassin/rules_du_jour which addresses the problem of the refresh URL which Rules Emporium sometimes sends out instead of a valid cf file. Basically, this patch greps the downloaded file for the string META HTTP-EQUIV, which should never occur in a valid rules file, but is part of the refresh URL. If the downloaded file is a refresh URL, it's deleted, the script waits 1 second and tries again, up to 3 times. If the download fails after 3 tries, the bad file is deleted and the script moves on. You might try running rules_du_jour from a cron job with the -D option and redirecting the output to a /tmp file and see if you get any notices about Download of FAILED after 3 tries, in which case I've mis-diagnosed the problem somewhat. In any event, the problem file should be deleted rather than causing a --lint failure in spamassassin. I'm going to try this, but with a 5 minute wait. I run it in the middle of the night anyway, who cares how long it takes. Actually, the proper response might be a random wait. -- Phil Barnett AI4OF SKCC #600
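The core of the patch, rejecting a download that is really an HTML refresh page and retrying, can be sketched like this in Python (function names are hypothetical; the actual patch is shell code inside rules_du_jour):

```python
import time
import urllib.request

def is_rules_file(data: bytes) -> bool:
    """A valid .cf rules file never contains 'META HTTP-EQUIV';
    the bogus refresh page the server sometimes returns does."""
    return b"META HTTP-EQUIV" not in data.upper()

def fetch_ruleset(url: str, tries: int = 3, wait: float = 1.0):
    """Download a ruleset, retrying when the refresh page comes back;
    returns None after the final failure so the caller keeps the old rules."""
    for _ in range(tries):
        data = urllib.request.urlopen(url).read()
        if is_rules_file(data):
            return data
        time.sleep(wait)  # got the refresh page: pause and retry
    return None
```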
Re: Patch for rules_du_jour
On Thursday 28 June 2007 17:02, Lindsay Haisley wrote: I don't know what would be gained by a random wait. The idea of a random wait for contention resolution is long standing. It's built into the TCP/IP protocol, for example. For example, say my cron job runs at 3 am. Lots of them probably do. This causes the congestion. Waiting a random time makes the peaks gradually level out on the second retry. -- Phil Barnett AI4OF SKCC #600
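The desynchronizing effect is easy to sketch: instead of every 3 AM cron job retrying at exactly the same second, each client adds a random offset to its wait (the values below are illustrative):

```python
import random

def jittered_delay(base: float, spread: float) -> float:
    """Base retry delay plus a uniformly random offset, so a crowd of
    cron jobs that all fired at 3 AM doesn't retry in lock-step."""
    return base + random.uniform(0, spread)

# e.g. each client retries somewhere between 5 and 10 minutes later
delay = jittered_delay(300, 300)
```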
Re: Fwd: RulesDuJour Run Summary on taz5.fiberhosting.net
On Friday 22 June 2007 12:32, jdow wrote: Take a quick look at tripwire and its newer equivalent. They should be about the same thing. Loading both will result in the rules that may share a name between the files having the newer version superseded by the older version because files load in alphabetical order. I checked. RDJ is pulling the new one and naming it tripwire.cf in the working rule directory. At least they have the same date/time stamp and identical content. So I think I'm only using the newer one. Thanks. -- Phil Barnett AI4OF SKCC #600
Fwd: RulesDuJour Run Summary on taz5.fiberhosting.net
Is anyone else getting these failed messages on their tripwire.cf updates? I've been getting this message for several days now. It looks to me like the new tripwire.cf is very broken. -- Forwarded Message -- Subject: RulesDuJour Run Summary on taz5.fiberhosting.net Date: Thursday 21 June 2007 02:26 From: To: RulesDuJour Run Summary on taz5.fiberhosting.net: TripWire has changed on taz5.fiberhosting.net. Version line: ***WARNING***: spamassassin --lint failed. Rolling configuration files back, not restarting SpamAssassin. Rollback command is: mv -f /usr/share/spamassassin/tripwire.cf /usr/share/spamassassin/RulesDuJour/99_FVGT_Tripwire.cf.2; mv -f /usr/share/spamassassin/RulesDuJour/tripwire.cf.20070621-0225 /usr/share/spamassassin/tripwire.cf; Lint output: [24363] warn: config: failed to parse line, skipping: HTMLHEADMETA HTTP-EQUIV=Refresh CONTENT=0.1 [24363] warn: config: failed to parse line, skipping: META HTTP-EQUIV=Pragma CONTENT=no-cache [24363] warn: config: failed to parse line, skipping: META HTTP-EQUIV=Expires CONTENT=-1 [24363] warn: config: failed to parse line, skipping: /HEAD/HTML [24363] warn: lint: 4 issues detected, please rerun with debug enabled for more information -- Phil Barnett AI4OF SKCC #600
Re: Fwd: RulesDuJour Run Summary on taz5.fiberhosting.net
On Thursday 21 June 2007 03:38, Matthias Keller wrote: Just try to delete the downloaded files in your rules_du_jour folder (for example /etc/mail/spamassassin/rules_du_jour/* ), or just the rule(s) that went wrong. It then redownloads the rules correctly and you're good to go with RDJ again Did that two days ago. And everything came in fine and worked. I linted it then and tonight and the current ruleset lints fine. The error messages are from the RDJ script pulling in a new file. It does look like the RDJ script is pulling the wrong file because the lint error shows html tags and there aren't any in my current tripwire.cf file. If it is true that there are no updates, then why is the RDJ script trying to update anything? Is the RDJ server still being DOS'd? -- Phil Barnett AI4OF SKCC #600
Re: Fwd: RulesDuJour Run Summary on taz5.fiberhosting.net
On Thursday 21 June 2007 08:47, jdow wrote: Unless something has changed with the most recent versions of SpamAssassin I see two configuration errors present. 1) You do NOT use /usr/share/spamassassin to store rules. They belong in /etc/mail/spamassassin or some other such place. This is a configurable option in SA, so you can put it anywhere you want. Why the people at Plesk decided to put it there is beyond me, but it works, so I'm leaving it alone. Also, /usr/share/spamassassin may be wiped out on each upgrade, but /usr/share/spamassassin/rulesdujour is not, and the next run of RDJ repopulates the former. 2) Why are you running tripwire.cf (obsolete) and 99_FVGT_Tripwire.cf at the same time? When RDJ downloads 99_FVGT_Tripwire.cf, it renames it to tripwire.cf when it moves it. I had no idea that tripwire was obsolete. Where is this information distributed? -- Phil Barnett AI4OF SKCC #600
Re: Fwd: RulesDuJour Run Summary on taz5.fiberhosting.net
On Friday 22 June 2007 00:54, jdow wrote: I think it was mentioned around these precincts about the time tripwire was converted to 99_FVGTTripWire.cf and added to the SARE repositories as a SARE rule set. I also note that I don't use it here anymore. The return on CPU cycles investment was not sufficient to run that set anymore. When I'm looking for a place to shed load, I'll remember. Right now, this is a quad processor box, so I'll take all the rules I can get. We have a pretty good spam marking rate right now. Not many things hit tripwire, but all the ones that do are spam, so I find it useful to drive the score up. -- Phil Barnett AI4OF SKCC #600
Re: Spamasssassin 3.2.1 fun
On Tuesday 12 June 2007 00:51, Robert - eLists wrote: Yeah, real sweet, can't even pull up a simple web page... When stuff doesn't work at all, free does not matter. It appears things are so tight now, you can hear the pucker. Why do people cron things that could screw up something that is fairly mission critical anyways? At one time, the Top 100 list was being updated pretty regularly, and it was more effective when it was updated more often. Also, the original instructions for the RDJ included suggestions to set up a cron job to automate the process. Where would you suggest a person should listen to find out that it's time to manually intervene and get any changes? I don't have a problem removing the cron job, but I don't want it to turn into an unmaintained appendage. -- Phil Barnett AI4OF SKCC #600
Re: Spamasssassin 3.2.1 fun
On Tuesday 12 June 2007 01:09, Robert - eLists wrote: I seem to recall that they (those in authority and inner knowledge) said there would be announcements about any updates and to not cron anymore cause of the attack problems. My question was: Where to listen for the announcements? Here on this list? Some other channel? I've been getting RDJ updates via cron once a week for a while now. I don't see how that can be construed as abusive, but I'm game to unhook it while they figure out what to do. I'm not wanting to be a burden, but I loathe unmaintained systems. -- Phil Barnett AI4OF SKCC #600
Spamassassin debug test
I recently saw this happening when testing. Is this stuff left over from some older version, or something not installed? What should I do with the undefined dependencies? [29724] info: rules: meta test DIGEST_MULTIPLE has undefined dependency 'DCC_CHECK' [29724] info: rules: meta test SARE_SPEC_PROLEO_M2a has dependency 'MIME_QP_LONG_LINE' with a zero score [29724] info: rules: meta test SARE_HEAD_SUBJ_RAND has undefined dependency 'SARE_XMAIL_SUSP2' [29724] info: rules: meta test SARE_HEAD_SUBJ_RAND has undefined dependency 'SARE_HEAD_XAUTH_WARN' [29724] info: rules: meta test SARE_HEAD_SUBJ_RAND has dependency 'X_AUTH_WARN_FAKED' with a zero score [29724] info: rules: meta test SARE_HEAD_8BIT_NOSPM has undefined dependency '__SARE_HEAD_8BIT_DATE' [29724] info: rules: meta test SARE_HEAD_8BIT_NOSPM has undefined dependency '__SARE_HEAD_8BIT_RECV' [29724] info: rules: meta test SARE_MULT_RATW_03 has undefined dependency '__SARE_MULT_RATW_03E' [29724] info: rules: meta test SARE_RD_SAFE has undefined dependency 'SARE_RD_SAFE_MKSHRT' [29724] info: rules: meta test SARE_RD_SAFE has undefined dependency 'SARE_RD_SAFE_GT' [29724] info: rules: meta test SARE_RD_SAFE has undefined dependency 'SARE_RD_SAFE_TINY' [29724] info: rules: meta test SARE_MSGID_LONG40 has undefined dependency '__SARE_MSGID_LONG50' [29724] info: rules: meta test SARE_MSGID_LONG40 has undefined dependency '__SARE_MSGID_LONG55' [29724] info: rules: meta test SARE_MSGID_LONG40 has undefined dependency '__SARE_MSGID_LONG65' [29724] info: rules: meta test SARE_MSGID_LONG40 has undefined dependency '__SARE_MSGID_LONG75' [29724] info: rules: meta test SARE_MSGID_LONG45 has undefined dependency '__SARE_MSGID_LONG50' [29724] info: rules: meta test SARE_MSGID_LONG45 has undefined dependency '__SARE_MSGID_LONG55' [29724] info: rules: meta test SARE_MSGID_LONG45 has undefined dependency '__SARE_MSGID_LONG65' [29724] info: rules: meta test SARE_MSGID_LONG45 has undefined dependency '__SARE_MSGID_LONG75'
Re: razor and pyzor
On Sunday 13 May 2007 23:25, Gary V wrote: On Sunday 13 May 2007 12:28, Gary V wrote: Thanks for the excellent notes! Then run 'pyzor discover'. This creates /root/.pyzor/servers which is a file that contains the IP address and port to the main pyzor server. Don't use that server. Edit and change to 82.94.255.100:24441 Why? -- Phil Barnett Pyzor is not actively maintained. It has not been for a while. All new pyzor installations use the main pyzor server. That server is overloaded and queries will often timeout (5 seconds wasted). Some generous person (Milton?) created a mirror a while ago and it responds much quicker. The mailing list archives tell the tale: https://sourceforge.net/mailarchive/forum.php?forum_name=pyzor-users Gary V Do you mind if I include your notes with attribution to my document on building a MailServer applicance? -- Phil Barnett AI4OF SKCC #600
Re: razor and pyzor
On Monday 14 May 2007 06:20, Mikael Syska wrote: Will your notes be available online ? Yes. -- Phil Barnett AI4OF SKCC #600
Re: razor and pyzor
On Monday 14 May 2007 09:48, Gary V wrote: Do you mind if I include your notes with attribution to my document on building a MailServer applicance? -- Phil Barnett No, of course I don't mind, and credit isn't necessary. But thanks. Great, now if I can learn how to properly spell applicance, I'll be all set... -- Phil Barnett AI4OF SKCC #600
Re: razor and pyzor
On Sunday 13 May 2007 12:28, Gary V wrote: Thanks for the excellent notes! Then run 'pyzor discover'. This creates /root/.pyzor/servers which is a file that contains the IP address and port to the main pyzor server. Don't use that server. Edit and change to 82.94.255.100:24441 Why? -- Phil Barnett AI4OF SKCC #600
Re: Spamassassin: Best Practices
On Monday 23 April 2007 03:35, Pradeep Mishra wrote: Hello Friends I am a newbie on spamassassin and would like to know.. 1) How can we train the spamassassin using bayesian to FILTER ALL OUTGOING AS WELL AS INCOMING messages from my server. What do you figure your outbound spam to ham ratio is? -- Phil Barnett AI4OF SKCC #600
Re: sa-learn: have i seen this before?
On Tuesday 17 April 2007 00:17, Faisal N Jawdat wrote: On Apr 16, 2007, at 9:34 PM, Matt Kettler wrote: Try to learn it, if it comes back with something to the affect of: learned from 0 messages, processed 1.. then it's already been learned. this seems to be the common suggestion. it has a couple drawbacks, as i see it: 1. it's relatively cpu-intensive if i want to do it all the time (e.g. scan my spam folder to learn only the messages which haven't already been learned) Move the messages to a different folder after you learn them. -- Phil Barnett AI4OF SKCC #600
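The move-after-learning idea combines nicely with the "learned from 0 messages" check quoted above: a small wrapper can learn each message and move it out of the to-learn folder either way, so nothing is rescanned on the next run. A sketch; the maildir paths are hypothetical, and the summary-line parsing is based on sa-learn's usual "Learned tokens from N message(s)" output, so check it against your version:

```python
import re
import shutil
import subprocess
from pathlib import Path

# Hypothetical maildir layout: messages waiting to be learned, and a
# folder they move to afterward so the next run skips them entirely.
LEARN_DIR = Path("Maildir/.Spam.tolearn/cur")
DONE_DIR = Path("Maildir/.Spam.learned/cur")

def learned_count(output: str) -> int:
    """Parse sa-learn's summary line, e.g.
    'Learned tokens from 1 message(s) (1 message(s) examined)'."""
    m = re.search(r"Learned tokens from (\d+) message", output)
    return int(m.group(1)) if m else 0

def learn_and_move() -> None:
    for msg in LEARN_DIR.iterdir():
        result = subprocess.run(["sa-learn", "--spam", str(msg)],
                                capture_output=True, text=True)
        n = learned_count(result.stdout)
        # n == 0 means it was already in the Bayes db; either way,
        # move it so it doesn't burn CPU on the next pass
        shutil.move(str(msg), str(DONE_DIR / msg.name))
```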
Re: Messages receiving High Score but still getting through
On Sunday 01 April 2007 22:06, kiwidesign wrote: I am relatively new to SpamAssassin and I am having a bit of difficulty tracking down the reason for some spam messages getting through. When I test the message it comes up with a score of, say, 23 points with 5 required. To me this indicates that it should have been stopped as spam, however it is still getting through. Spamassassin doesn't delete spam or block it. It simply marks it as spam. It's up to you to decide what to do with it after that. -- Ballmer is basically saying: We know there's a problem but we're not going to tell you what it is because we want to ambush you in the future. http://blogs.zdnet.com/hardware/?p=154
Re: SpamAssassin as a filter, without running a mail server?
On Thursday 29 March 2007 12:03, Chris Rouffer wrote: I've been given the job of adding an Internet Content filter, firewall, and spam filter to a small network in a non-profit organization. Right now there are about 5 email accounts, and their mail server is at their web-host. Is it possible for me to run SpamAssassin as a filter on the firewall box, so that it simply filters email when the user retrieves it from the mail server: I'd start with IPCop for the firewall. http://www.ipcop.org Then I'd add CopPlus (which is DansGuardian packaged for IPCop). http://home.earthlink.net/~copplus/ Then I'd add CopFilter to finish it off. http://www.copfilter.org That would get you a nice appliance that has all the administration controls available via web pages. An alternative to CopFilter would be to build a second box and build up a MailScanner box for the email. For a small place, that would probably be overkill. For a large outfit, I'd probably go that route and split mail scanning off from the firewall. I have a recipe for a MailScanner box here: http://www.leap-cf.org/presentations/MailScanner/ I believe that ClarkConnect can also do all the things you mention, but I think you have to purchase some of the modules. -- Ballmer is basically saying: We know there's a problem but we're not going to tell you what it is because we want to ambush you in the future.
Re: comprehensive perl module site like cpan or other for SA needs ???
On Sunday 14 January 2007 12:51, R Lists06 wrote: I don't know at any given time what is the most stable version of any perl module in relation to SA use. Generally, perl modules only do one thing, and the parameters seldom change. When a bug is found, it's fixed and made available. This means that the latest one available is generally the one you want. This is not windows. -- Men occasionally stumble over the truth, but most of them pick themselves up and hurry off as if nothing had happened. Winston Churchill
Re: spamhaus' PBL is now *active* (in beta ... but still active). now what?
On Sunday 07 January 2007 13:00, John Rudd wrote: Have you put your own server into your trusted networks? It's a Plesk install and I generally don't edit their configuration files. I'll look into it. Have you put your own server into any of the various configs in Botnet.cf (the skip or pass lists)? Haven't touched these at all. I'll also look at this when time permits. Thanks for the feedback. -- My other computer is your Windows machine
Re: spamhaus' PBL is now *active* (in beta ... but still active). now what?
On Sunday 07 January 2007 08:22, Sander Holthaus wrote: But, to get back on topic, the new PBL in ZEN marks mail originating from ip's and netblocks which should not be running (mail-sending) mailservers, such as dynamic ip-ranges for cable/dsl/dialup-access (at least, that is my understanding). So unless your customers try to connect to mailservers directly to deliver mail (which is something most ISP's block btw) you shouldn't be in trouble. For example, I send myself a mail and I see this: *** Received: (qmail 20532 invoked from network); 7 Jan 2007 11:24:43 -0500 Received: from fl-69-34-131-91.dyn.embarqhsd.net (HELO ?192.168.100.209?) (69.34.131.91) by vhost.fiberhosting.com with SMTP; 7 Jan 2007 11:24:43 -0500 From: Phil Barnett philb at philb.us To: philb at philb.us Subject: test Date: Sun, 7 Jan 2007 11:24:46 -0500 Now, to me, this certainly looks like the mail originated from my machine, not the server, and it's from a DSL high speed network. And, I typically send mail directly to my server, not the earthlink servers. What keeps this mail from being marked? Botnet has been marking these mails. -- My other computer is your Windows machine
Re: spamhaus' PBL is now *active* (in beta ... but still active). now what?
On Saturday 06 January 2007 23:05, Theo Van Dinter wrote: On Sat, Jan 06, 2007 at 05:24:35PM -0800, snowcrash+spamassassin wrote: i regularly run updates via cron on the hour. :) : running it again, or at all, will change what/where? The recent 3.1 updates include the ZEN rules. If you're asking what files are changed by sa-update, please see man sa-update and the other documentation referenced therein. i'm asking what *specifically* needs to change, if anything, in SA ... i'd prefer NOT to be blind about it. Nothing needs to be changed, the update has everything necessary. From what I read, I have to be concerned with my setup. I provide mailboxes, but I'm not an ISP, so no mail originates on my server. From what I have read, the new ZEN rules will negatively impact my scores on all legitimate mail coming from my server. Is that really the case? -- My other computer is your Windows machine
Re: SA-UPDATE and recent branches/3.1 rules?
On Monday 01 January 2007 01:23, Theo Van Dinter wrote: Generally, updates get put in, and then whenever someone feels like pushing it, they can. I usually put in small commits for specific sets of rules, and could do multiple edits before I want an update to occur. So, does that mean that sa-update brings the update to my machine and then I have to do something else or that I have to run sa-update to bring them and install them? -- My other computer is your Windows machine
Re: SA-UPDATE and recent branches/3.1 rules?
On Monday 01 January 2007 01:46, Theo Van Dinter wrote: On Mon, Jan 01, 2007 at 01:41:33AM -0500, Phil Barnett wrote: So, does that mean that sa-update brings the update to my machine and then I have to do something else or that I have to run sa-update to bring them and install them? That's basically the same thing. man sa-update and reading http://wiki.apache.org/spamassassin/RuleUpdates is a good start. In short, if you run sa-update, and it downloads a new set of rules (exit code 0), you should restart your daemon if you're using one to get the new rules. That's what I thought, but you mentioned some manual step without stating that sa-update was the manual step you were referring to. I thought I had that right in my head. I'm running it on a cron job. By the way, thanks for everything and Happy New Year. -- My other computer is your Windows machine
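The cron job's logic boils down to: run sa-update, and restart the daemon only when the exit code says new rules actually arrived. Per the sa-update documentation, exit code 0 means an update was downloaded, 1 means no update was available, and higher codes indicate errors. A sketch, with a typical init script path:

```python
import subprocess

def should_restart(exit_code: int) -> bool:
    """sa-update exit codes: 0 = updates downloaded, 1 = no updates
    available, >1 = an error occurred."""
    return exit_code == 0

def cron_update() -> None:
    code = subprocess.run(["sa-update"]).returncode
    if should_restart(code):
        # new rules are on disk; restart the daemon so it loads them
        subprocess.run(["/etc/init.d/spamassassin", "restart"])
    elif code > 1:
        raise RuntimeError(f"sa-update failed with exit code {code}")
```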
Re: sa-learn explained
On Friday 29 December 2006 08:23, Vernon Webb wrote: These guys are beginning to drive me nuts and obviously I have something wrong as others are telling me these are being caught as SPAM on their systems. My first question would be: Have you installed Rules Du Jour and set it up to have comprehensive coverage? Are pyzor, razor and DCC running? Are you using an RBL? -- My other computer is your Windows machine
Re: sa-learn explained
On Friday 29 December 2006 14:50, Vernon Webb wrote: What are you using? Right now, I'm using sbl-xbl. -- My other computer is your Windows machine
Re: sa-learn explained
On Friday 29 December 2006 16:23, Duane Hill wrote: Phil Barnett wrote: On Friday 29 December 2006 14:50, Vernon Webb wrote: What are you using? Right now, I'm using sbl-xbl. I could be mistaken. sbl-xbl is being replaced by zen.spamhaus.org. That is what I'm currently using. Their web site currently states this: http://www.spamhaus.org/sbl/howtouse.html -- My other computer is your Windows machine
Re: sa-learn explained
On Friday 29 December 2006 23:55, snowcrash+spamassassin wrote: and this, http://www.spamhaus.org/zen Caution: zen.spamhaus.org replaces sbl-xbl.spamhaus.org. If you are currently using sbl-xbl.spamhaus.org you can now replace 'sbl-xbl' with 'zen' (sbl-xbl.spamhaus.org will eventually become obsolete and may in the future be withdrawn from service). zen.spamhaus.org should now be the only spamhaus.org DNSBL in your configuration. You should not use ZEN together with other Spamhaus blocklists or you will simply be wasting DNS queries and slowing your mail queue. It makes me wonder why there are no links to it from anywhere on their front page, from any FAQ or from any menu from the front page. Perhaps it's not ready for prime time. If it were, I can't imagine they wouldn't be making it headline news. How did everyone hear about this when there is no apparent attempt at the spamhaus.org website to let anyone know that there is a change coming? -- My other computer is your Windows machine
Re: SA not catching apostrophes in sender's addressess?
On Tuesday 26 December 2006 12:13, Luis Hernán Otegui wrote: OK, I'm using sa-update AND Rules Du Jour. However, I'm not sure about which rulesets are the most convenient to download. Could somebody pass a config file for RDJ? The ruleset you want will vary based on how strict or loose you want the rules to be. Here's the one I use: (fix the email address) # cat /etc/rulesdujour/config SA_DIR=/usr/share/spamassassin MAIL_ADDRESS=[EMAIL PROTECTED] SINGLE_EMAIL_ONLY=true SA_RESTART=/etc/init.d/spamassassin restart # Ruleset descriptions found at http://www.rulesemporium.com TRUSTED_RULESETS=TRIPWIRE SARE_EVILNUMBERS0 SARE_EVILNUMBERS1 SARE_EVILNUMBERS2 RANDOMVAL SARE_ADULT SARE_FRAUD SARE_BML SARE_SPOOF SARE_BAYES_POISON_NXM SARE_OEM SARE_RANDOM SARE_OBFU0 SARE_SPAMCOP_TOP200 SARE_GENLSUBJ SARE_HTML SARE_UNSUB SARE_URI SARE_REDIRECT_POST300 SARE_STOCKS SARE_WHITELIST SARE_SPECIFIC SARE_HEADER -- My other computer is your Windows machine
Re: Botnet 0.7 soon
On Monday 18 December 2006 20:16, John Rudd wrote: New things: Snippo of neat things that were added I think that's everything... Just need another day or two of testing before I release it. One thing I noticed about the previous version was that there was no mention of version numbers anywhere in the package. Not in the name, not in the files. That makes it difficult to determine whether we need to upgrade, because there's no way to tell which version we're on. Also, did you fix the over-50-character description that was breaking the spamassassin --lint command? I changed it from the text you had to:

describe BOTNET Orig mail server looks like part of a Botnet

That stopped spamassassin --lint from complaining. I really like what BOTNET is doing for my SA installation. Thanks and keep up the good work! -- My other computer is your Windows machine
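[Editor's note: spamassassin --lint warns when a rule's describe text exceeds 50 characters. A minimal sketch of checking a replacement description against that limit before reloading, using only the text quoted in the post above:]

```shell
# Verify the shortened "describe BOTNET ..." text fits SpamAssassin's
# 50-character limit before putting it in a .cf file.
desc="Orig mail server looks like part of a Botnet"
len=$(printf '%s' "$desc" | wc -c)
echo "description length: $len"
if [ "$len" -le 50 ]; then
  echo "OK: within the 50-character limit"
else
  echo "TOO LONG: spamassassin --lint will warn"
fi
```

The replacement text above is 44 characters, so the lint warning goes away.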
Re: Breaking up the Bot army - we need a plan
On Tuesday 12 December 2006 07:28, JamesDR wrote: There is nothing in SPF to keep a spammer with a botnet from putting 0.0.0.0/0 as their approved domain limit. Sounds like a good spam sign to me. Let the spammers put 0.0.0.0/0 in their SPF records, I'll pop in 3 points for good measure. But you are making some assumptions at this point, and that is the crux of why SPF can't work very well. Say you give points for that one. So, where do you draw the line? Do you give points for (for example) 123.0.0.0/8? What if that is someone's legitimate address space? Bot masters can easily publish SPF records that encompass giant subnets of bots. You'll never know where to draw the line. -- My other computer is your Windows machine
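[Editor's note: the point above can be illustrated with a quick, hedged check. The SPF string here is a made-up example, not a real record, and the "suspicious" prefix-length cutoff of /8 is exactly the kind of arbitrary line the post is warning about:]

```shell
# Hypothetical SPF record a botnet operator might publish: it "authorizes"
# every IPv4 address, which defeats the purpose of SPF entirely.
spf='v=spf1 ip4:0.0.0.0/0 -all'

# Flag any ip4: mechanism whose prefix length is very short (/0 through /8).
# A legitimate /8 owner would be flagged too -- that's the line-drawing problem.
if printf '%s\n' "$spf" | grep -Eq 'ip4:[0-9.]+/[0-8]([^0-9]|$)'; then
  echo "suspiciously broad SPF range: score it"
fi
```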
Re: Breaking up the Bot army - we need a plan
On Monday 11 December 2006 16:50, JamesDR wrote: Would you care to elaborate on why SPF doesn't work for sender verification? It's pretty simple; it doesn't get much more simple than what SPF does... If SPF doesn't work, nothing will. There is nothing in SPF to keep a spammer with a botnet from putting 0.0.0.0/0 as their approved domain limit. -- My other computer is your Windows machine
Re: New advice spam
On Sunday 10 December 2006 16:31, John Rudd wrote: It can be downloaded from: http://people.ucsc.edu/~jrudd/spamassassin/Botnet.tar Thanks, John. I downloaded it and installed it earlier today. It appears to be working fine, but I got this tonight when RulesDuJour ran:

RulesDuJour Run Summary on taz5.fiberhosting.net:
***NOTICE***: spamassassin --lint failed. This means that you have an error somewhere in your SpamAssassin configuration. To determine what the problem is, please run 'spamassassin --lint' from a shell and notice the error messages it prints. For more (debug) information, add the -D switch to the command. Usually the problem will be found in local.cf, user_prefs, or some custom ruleset found in /usr/share/spamassassin. Here are the errors that 'spamassassin --lint' reported:
warning: description for BOTNET is over 50 chars
lint: 1 issues detected. please rerun with debug enabled for more information.

-- My other computer is your Windows machine
Re: SpamAssassin in Plesk
On Thursday 16 November 2006 07:30, twofers wrote: 1. I have tried putting some canned .cf files into /etc/mail/spamassassin/ and have discovered that there are limits on the size of the file(s) that SA will work with. I have 512 MB of memory, and it seems large .cf files filled with rules and blacklist_from entries will prevent SA from even starting until I reduce the file size by deleting entries. I can't imagine this is normal. What is it that I need to be aware of? I don't believe all this information needs to be inside the local.cf file, just that it needs an extension of .cf. Am I S-O-L because of file size?

I've been running Plesk since they started up.

1. You need more RAM. If you are running a server and want to pull out the big guns, you need a big playfield. No amount of trimming will fix this.
2. I had a machine running Plesk for years. It had a gig of RAM and ran 70 sites and around 350 mailboxes. It never ran out of RAM. I just decommissioned it for a machine with 4 GB of RAM.
3. I never added any huge block lists. I use Rules Du Jour.
4. Consider putting your blocking lists in at the firewall level instead of in SA.

-- My other computer is your Windows machine
Re: spam filter working but not...
On Saturday 09 September 2006 12:06, Poohba wrote: There are more messages in my spam folder (file) than what shows in Evolution. Same goes for almost... Procmail shows it's sending emails there but I don't see them. If I open the file using a text editor I see the emails, but not in Evolution. Is something wrong with the rule?

:0fw: spamassassin.lock
* < 256000
| spamassassin

:0:
* ^X-Spam-Level: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*
almost-certainly-spam

:0:
* ^X-Spam-Status: Yes
spam

The whole point of shunting mail to the almost-certainly-spam file is that those messages won't end up in your mailbox. -- My other computer is your Windows machine
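[Editor's note: the X-Spam-Level condition in that recipe matches 15 or more stars. A quick sanity check of the pattern with grep, which is close enough to procmail's regex flavor for this case (using {15} as shorthand for the 15 escaped stars):]

```shell
# 15 stars: matches, so the message is shunted to almost-certainly-spam.
printf 'X-Spam-Level: ***************\n' | grep -Ec '^X-Spam-Level: \*{15}'
# 5 stars: no match, so the message stays on the normal delivery path.
printf 'X-Spam-Level: *****\n' | grep -Ec '^X-Spam-Level: \*{15}'
```

The first command prints 1 and the second prints 0.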
Re: Regex help...confused about spaces.
On Sunday 22 January 2006 12:14, wrote: All, I'm confused as to how to block words with spaces. For example, V ia G ra M o r t g a g e This seems to be very effective:

v.?[|[EMAIL PROTECTED]@]

I also like and use the SARE rulesets, which pretty much catch all of this stuff. But if you insist on rolling your own, then the above works for most instances. .? = skip 0 or 1 character. -- Outlook not so good. That magic 8-ball knows everything! I'll ask about Exchange Server next.
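[Editor's note: the ".? = skip 0 or 1 character" trick can be tried at the shell. The pattern below is a simplified stand-in with plain letters, since the character classes in the original post are partially redacted:]

```shell
# Each .? optionally swallows one filler character (space, dot, dash, ...).
pat='v.?i.?a.?g.?r.?a'
printf 'V ia G ra\n' | grep -Eiq "$pat" && echo "spaced obfuscation: match"
printf 'viagra\n'    | grep -Eiq "$pat" && echo "plain word: match"
printf 'vitae\n'     | grep -Eiq "$pat" || echo "innocent word: no match"
```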
Re: Anyone ever see this?
On Tuesday 30 August 2005 05:40 pm, [EMAIL PROTECTED] wrote: Got a nasty spam with an extremely oversized Thread-Index header. (I set my word wrap to 72 characters; I don't know if it will hold up when I hit send, however.) Does anyone know if it is exploiting a known Outlook/Exchange security hole? There was something about an elm vuln today. Probably that one. -- Don't think that a small group of dedicated individuals can't change the world. it's the only thing that ever has.
Re: ANNOUNCE: SpamAssassin 3.1.0-rc2 release candidate available!
On Monday 29 August 2005 11:57 pm, John Rudd wrote: Does this fix the problem with SIGCHLD? Do you really need to quote the entire message? -- Don't think that a small group of dedicated individuals can't change the world. it's the only thing that ever has.
Re: sa-learn - bayes training...
On Friday 15 April 2005 08:03 am, Jean Caron wrote: Again, how can I tell for sure ? Look in the header and see what the bayes score was on the FN. -- In the beginning of a change, the patriot is a brave and scarce man, hated and scorned. When the cause succeeds, however, the timid join him...for then it costs nothing to be a patriot. -Mark Twain
Re: Obfuscation (was: Millions and Billions)
On Sunday 27 February 2005 10:35 am, Kenneth Porter wrote: --On Thursday, February 24, 2005 6:07 PM -0500 Phil Barnett [EMAIL PROTECTED] wrote: i or l = [|ííiil1] a = [EMAIL PROTECTED] e = [eé3] o = [o0] It seems like this is getting overly-complicated. Are there any libraries for doing fuzzy string matching and obfuscation detection that could be used instead of Perl regex's? I believe there are three premises here: 1: Never make software more complex than it needs to be. 2: Always make software as complex as is needed to get the job done. 3: Spammers aren't necessarily all stupid. All you have requested here is for someone else to do the complicated stuff and make it easy for you. Someone has to get the code as complex as it needs to be. If not you, then the guy that makes the library you seek. -- The significant problems we face cannot be solved at the same level of thinking we were at when we created them Albert Einstein
Re: Obfuscation (was: Millions and Billions)
On Sunday 27 February 2005 06:31 pm, Kenneth Porter wrote: --On Sunday, February 27, 2005 11:48 AM -0500 Phil Barnett [EMAIL PROTECTED] wrote: All you have requested here is for someone else to do the complicated stuff and make it easy for you. Someone has to get the code as complex as it needs to be. If not you, then the guy that makes the library you seek. I just question whether regex's are the right complicated solution. How does Google or one of the dictionary sites guess the correct spelling for a misspelled word? Great, why don't you go see if google can guess the correct spelling for c l @ L i @ s -- The significant problems we face cannot be solved at the same level of thinking we were at when we created them Albert Einstein
Re: Millions and Billions
On Thursday 24 February 2005 05:42 pm, [EMAIL PROTECTED] wrote: Stuart Johnston wrote: [EMAIL PROTECTED] wrote: Stuart Johnston wrote:

body L_MILLBILL /[mb]i(?:\|l|l\||\|\|)ions?/i
body L_MILLBILL /[mb]i[l|][l|]ions?/i

I started with something similar to that, but it will also match "millions", which we don't want. Touché! OK, how about

body L_MILLBILL /[mb]il?\|+l?ions?/i

Also catches mi|ions, mil||ions Matthew.van.Eerde (at) hbinc.com 805.964.4554 x902 Hispanic Business Inc./HireDiversity.com Software Engineer perl -emap{y/a-z/l-za-k/;print}shift Jjhi pcdiwtg Ptga wprztg,

Over a period of time, here are some of the character groups I've seen substituted. Feel free to use them wherever:

i or l = [|ííiil1]
a = [EMAIL PROTECTED]
e = [eé3]
o = [o0]

And don't forget that a .? means either one character or none is ignored, so [EMAIL PROTECTED] matches prozac, p r o z a c, [EMAIL PROTECTED], pmrwomawzmawc (not that you'd want to...), etc. -- The significant problems we face cannot be solved at the same level of thinking we were at when we created them Albert Einstein
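[Editor's note: the substitution classes above combine with .? into patterns like this hedged sketch. The "a" class in the original post is redacted, so [a@4] below is an assumed plausible class, not the author's exact one, and this is a demo matcher, not an official SARE rule:]

```shell
# Character classes from the list above ([o0] for o, assumed [a@4] for a)
# plus .? gaps between every letter to absorb spacing/filler obfuscation.
pat='p.?r.?[o0].?z.?[a@4].?c'
for s in 'prozac' 'p r o z a c' 'pr0z@c'; do
  printf '%s\n' "$s" | grep -Eiq "$pat" && echo "matched: $s"
done
```

All three variants match, which is exactly why spammers' leetspeak and letter spacing buy them little against rules built this way.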
Re: upgrading methods
On Thursday 13 January 2005 03:44 pm, Thomas Arend wrote: Because SuSE stores spamd in /usr/sbin/spamd and the tarball stores it in /usr/bin/spamd, SA does not run. You could have put a symlink in /usr/bin:

ln -s /usr/sbin/spamd /usr/bin/spamd

-- Top ten reasons to procrastinate. 1.
Re: upgrading methods
On Thursday 13 January 2005 07:19 pm, [EMAIL PROTECTED] wrote: Phil Barnett wrote: I'm feeling puckish today so I'll say it. Or even symlink /usr/sbin to /usr/bin (shock, horror) :-) Gasp, You've gone too far, now... ;-) -- Top ten reasons to procrastinate. 1.