Re: SA filter load: massive increase
Garry Glendown writes:
> Justin Mason wrote:
> > for what it's worth, I would suggest an iterative search -- remove all
> > extra rulesets, and re-add them gradually until you spot one or two
> > that are causing the load issues.
>
> I'll try, but I still can't see that the rulesets are actually causing
> this ...

Well, there are several in the past that certainly did -- bigevil, for
instance.  It's a known issue -- SpamAssassin does not limit rules'
memory/CPU load itself, so you have to do it yourself.

> I just did another check of what else might have changed since the
> beginning of October ... all I can see is
>
> - ClamAV update from 0.88.4 via 0.88.5 to now 0.88.6
> - SpamAssassin update from 3.1.5 to 3.1.6
>
> Any idea if either of those might be causing it?

Dunno about ClamAV, but that SpamAssassin upgrade, I very much doubt it.

--j.
RE: SA filter load: massive increase
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, November 08, 2006 5:00 AM
> To: Garry Glendown
> Cc: Matt Kettler; users@spamassassin.apache.org
> Subject: Re: SA filter load: massive increase
>
> Garry Glendown writes:
> > Matt Kettler wrote:
> > > In general I'd take a look at the sizes of the rule files themselves..
> > > Look for ones that are significantly larger than 128k or so.
> >
> > Of those, there are only a few:
> >
> > -rw-r--r--  1 root root 384645 Oct 30  2005 70_sare_header.cf
> > -rw-r--r--  1 root root 158513 Oct  1  2005 70_sare_obfu.cf
> >
> > Given both are significantly older than the occurrence of the
> > performance decrease, neither should be the cause ... in fact, the
> > only SARE rules that have dates newer than Oct 1st are sare_stocks
> > and sc_top200 ...
>
> for what it's worth, I would suggest an iterative search -- remove all
> extra rulesets, and re-add them gradually until you spot one or two
> that are causing the load issues.
>
> --j.

I'm shocked!!  As you are a good coder!  Why not follow the rule of
halves?  Take half the rules out; if the problem goes away, it's in the
other half.  Put half of those back in, and so on, always cutting the
candidates in half.  Far less time than adding them back one by one.

Hmm ... Theo, go check all of JM's code ;)

--Chris
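[Editor's note: Chris's "rule of halves" is a binary search over the ruleset list. A minimal sketch, assuming a hypothetical `triggers_load_problem(subset)` stand-in for the manual step of installing only that subset and watching the server load; the file names below are illustrative, not from the thread.]

```python
def find_culprit(rulesets, triggers_load_problem):
    """Binary-search a list of rulesets for the one causing load problems.

    triggers_load_problem(subset) represents the manual test: install only
    `subset`, run the mail flow, and return True if the load spikes again.
    Assumes exactly one ruleset is the culprit.
    """
    candidates = list(rulesets)
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        if triggers_load_problem(half):
            candidates = half                    # culprit is in this half
        else:
            candidates = candidates[len(half):]  # culprit is in the other half
    return candidates[0]


# Fake checker for illustration: pretend 70_sare_header.cf is the culprit.
rules = ["70_sare_adult.cf", "70_sare_header.cf", "70_sare_obfu.cf",
         "70_sare_stocks.cf", "70_sc_top200.cf"]
print(find_culprit(rules, lambda subset: "70_sare_header.cf" in subset))
# → 70_sare_header.cf, after log2(5) ≈ 3 test rounds instead of 5
```

With N rulesets this takes about log2(N) test rounds instead of N, which matters when each "test" means running a day's worth of mail.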
Re: SA filter load: massive increase
Garry Glendown writes:
> Matt Kettler wrote:
> > In general I'd take a look at the sizes of the rule files themselves..
> > Look for ones that are significantly larger than 128k or so.
>
> Of those, there are only a few:
>
> -rw-r--r--  1 root root 384645 Oct 30  2005 70_sare_header.cf
> -rw-r--r--  1 root root 158513 Oct  1  2005 70_sare_obfu.cf
>
> Given both are significantly older than the occurrence of the
> performance decrease, neither should be the cause ... in fact, the only
> SARE rules that have dates newer than Oct 1st are sare_stocks and
> sc_top200 ...

for what it's worth, I would suggest an iterative search -- remove all
extra rulesets, and re-add them gradually until you spot one or two
that are causing the load issues.

--j.
Re: SA filter load: massive increase
Matt Kettler wrote:
> In general I'd take a look at the sizes of the rule files themselves..
> Look for ones that are significantly larger than 128k or so.

Of those, there are only a few:

-rw-r--r--  1 root root 384645 Oct 30  2005 70_sare_header.cf
-rw-r--r--  1 root root 158513 Oct  1  2005 70_sare_obfu.cf

Given both are significantly older than the occurrence of the
performance decrease, neither should be the cause ... in fact, the only
SARE rules that have dates newer than Oct 1st are sare_stocks and
sc_top200 ...

-gg
Re: SA filter load: massive increase
Garry Glendown wrote:
> Hi,
>
> after fixing some lint errors that had gone unnoticed for some time, our
> MailScanner/SA filter server has started bogging down under the daily
> flood of mail (~100k mails per day) - a load that had not done anything
> to the box before ... As the only change had been fixing the lint
> errors, followed by an RDJ update, I suspect one or more of the rules
> have caused the load increase ... here's the list of rules I use:
>
> TRUSTED_RULESETS="SARE_REDIRECT_POST300 SARE_EVILNUMBERS2
> SARE_BAYES_POISON_NXM SARE_HTML0 SARE_HTML1 SARE_HTML2 SARE_HTML3
> SARE_HTML0 SARE_HTML1 SARE_HTML2 SARE_HTML3 SARE_SPECIFIC SARE_ADULT
> SARE_BML SARE_FRAUD SARE_SPOOF SARE_RANDOM SARE_SPAMCOP_TOP200 SARE_OEM
> SARE_GENLSUBJ0 SARE_GENLSUBJ1 SARE_GENLSUBJ2 SARE_GENLSUBJ3 SARE_UNSUB
> SARE_URI0 SARE_URI1 SARE_URI3 SARE_WHITELIST_SPF SARE_WHITELIST_RCVD
> SARE_OBFU SARE_STOCKS EVILNUMBERS SARE_ADULT SARE_BAYES_POISON_NXM
> SARE_BML SARE_CODING SARE_FRAUD SARE_HEADER SARE_OEM SARE_RANDOM
> SARE_REDIRECT_POST300 SARE_SPECIFIC SARE_SPOOF TRIPWIRE ZMI_GERMAN";
>
> Anything that could cause massive backlog and should be dropped?

Nothing jumps out at me as causing your problem.

However, if you have network tests enabled, ditch SARE_SPAMCOP_TOP200.
It is really only intended as a tool for folks who can't use network
tests, and it is 100% redundant with the network tests built into SA
versions higher than 3.0.0.  And given that you're using
SARE_WHITELIST_SPF, you have network tests enabled and are using a
recent version of SA.

In general I'd take a look at the sizes of the rule files themselves.
Look for ones that are significantly larger than 128k or so.  The files
should be in /etc/mail/spamassassin, /etc/spamassassin, or
/usr/local/etc/mail/spamassassin, depending on what platform, package,
and build options were used.

> Thanks!
>
> -garry
SA filter load: massive increase
Hi,

after fixing some lint errors that had gone unnoticed for some time, our
MailScanner/SA filter server has started bogging down under the daily
flood of mail (~100k mails per day) - a load that had not done anything
to the box before ... As the only change had been fixing the lint
errors, followed by an RDJ update, I suspect one or more of the rules
have caused the load increase ... here's the list of rules I use:

TRUSTED_RULESETS="SARE_REDIRECT_POST300 SARE_EVILNUMBERS2
SARE_BAYES_POISON_NXM SARE_HTML0 SARE_HTML1 SARE_HTML2 SARE_HTML3
SARE_HTML0 SARE_HTML1 SARE_HTML2 SARE_HTML3 SARE_SPECIFIC SARE_ADULT
SARE_BML SARE_FRAUD SARE_SPOOF SARE_RANDOM SARE_SPAMCOP_TOP200 SARE_OEM
SARE_GENLSUBJ0 SARE_GENLSUBJ1 SARE_GENLSUBJ2 SARE_GENLSUBJ3 SARE_UNSUB
SARE_URI0 SARE_URI1 SARE_URI3 SARE_WHITELIST_SPF SARE_WHITELIST_RCVD
SARE_OBFU SARE_STOCKS EVILNUMBERS SARE_ADULT SARE_BAYES_POISON_NXM
SARE_BML SARE_CODING SARE_FRAUD SARE_HEADER SARE_OEM SARE_RANDOM
SARE_REDIRECT_POST300 SARE_SPECIFIC SARE_SPOOF TRIPWIRE ZMI_GERMAN";

Anything that could cause massive backlog and should be dropped?

Thanks!

-garry