Jeff Tucker writes:
> Michael Parker wrote:
> > On Tue, Oct 05, 2004 at 10:22:42AM -0700, Morris Jones wrote:
> > 
> >>I watched a spamd child grow to 250MB yesterday on a single message.  I
> >>have a suspicion that the memory-usage growth is happening during a
> >>whitelist or bayes database maintenance event of some sort.
> >>
> > 
> > 
> > For folks that are seeing huge jumps in memory, instead of gradual
> > growth, how are you calling SA?
> > 
> > Thanks,
> > Michael
> > 
> 
> I, too, am seeing the 250MB spamd processes. I've been doing some 
> research on this and I'll share what I know.
> 
> I'm using SA 3.0.0 with spamd and the qmail-spamc wrapper, which really 
> just pipes the email into spamc. It's pretty common that one of the 
> machines doing the filtering will have a spamd process jump to 250MB or 
> more of memory usage. I have several machines doing filtering and each 
> will have a problem several times per hour. I am not using Bayes at all, 
> so this doesn't have anything to do with that. Since I'm using spamc, 
> I've got its default behavior, which is not to scan any message over 250KB.
> 
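
As an aside on that size cap: spamc takes a -s option, in bytes, for the
largest message it will hand to spamd; anything bigger is passed through
unscanned.  A minimal sketch of making the cap explicit, assuming the
250KB figure above (the path and exact wrapper differ per site):

  spamc -s 256000 < message > filtered-message
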
> First, I've been told that perl never really frees memory, even when it 
> isn't using it any more. So, if a single message causes a spamd to jump 
> up to hundreds of MB, it will stay that way until the process exits 
> entirely. Setting --max-conn-per-child to a lower number will help 
> make sure these don't stick around too long. With the default of 200 and 
> a bunch of spamd processes, it could be hours before a spamd dies on a 
> server that isn't busy. Pick a time limit that seems reasonable to you, 
> say five minutes, and adjust --max-conn-per-child to make the processes 
> die about that often.
> 
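
To put rough numbers on that tuning: if a box handles, say, 20,000
messages a day spread across 10 children, each child sees about 1.4
messages a minute, so --max-conn-per-child=10 would recycle a child
roughly every seven minutes.  A hypothetical invocation (the volume
figures are made up; plug in your own traffic):

  spamd -d -m 10 --max-conn-per-child=10
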
> I captured an exact copy of one of the messages that was being scanned 
> when this happened. This particular message took something like 10 
> minutes to scan. We deliver to maildirs, so it was easy to get a copy of 
> it, although it's possible there was an extra header line by the time I 
> had it. Rescanning the same message by calling spamc didn't cause the 
> problem. The scan completed in just a couple of seconds.
> 
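
For anyone wanting to repeat that kind of rescan by hand, something along
these lines works (the captured-message path is hypothetical):

  time spamc < Maildir/cur/captured-message > /dev/null
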
> One interesting thing is that I have some machines that do this and a 
> couple that don't. The ones that don't were built slightly later than 
> the ones that do. Actually, the SpamAssassin installs were done at the 
> same time and are identical, but other system software isn't exactly the 
> same vintage. The machines that are having problems all have perl 5.8.0. 
> The machines that don't all run perl 5.8.4. There might be small 
> differences in the versions of other libraries, too. Maybe this is 
> exercising a perl bug?
> 
> This extreme memory usage does cause real problems for us. If two 
> spamds get messed up at the same time, the machine starts thrashing to 
> swap. (The machines all have 1 GB of RAM.) When that happens, mail 
> processing slows way down, which unfortunately means it takes even 
> longer for the spamds to process enough messages to go ahead and die.

Bear in mind, BTW, that it's perfectly safe to kill off one of the
child spamds.  What will happen is:

  - spamc will get a "connection closed" and pass the message through as
    unscanned, therefore nonspam;
  - spamd master will restart a new spamd child.

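A rough way to do that by hand, if you'd rather clear the worst offender
than wait for --max-conn-per-child to catch up (ps flags vary a bit by
platform; RSS is in KB here):

  # list spamd processes, largest resident size last
  ps -eo pid,rss,args | grep '[s]pamd' | sort -n -k2
  # kill the bloated child (not the master); the default SIGTERM is fine
  kill <pid>

The master notices the dead child and forks a replacement.
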
> One last note: when a spamd is processing a trouble message, it seems 
> like it is usually running at 99% CPU. It might drop CPU usage after that 
> message, but like I said earlier it doesn't drop memory usage until it 
> dies. Why a small message would cause spamd to go to 99% CPU for 
> minutes, I don't know.

me neither. :(   annoying problem!

The 5.8.0 perls -- are there any differences in perl -V output compared
to the 5.8.4s?   In particular, threading or MULTIPLICITY?
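
A quick way to pull out just those two settings on each box:

  perl -V:usethreads
  perl -V:usemultiplicity

or diff the full perl -V output from a 5.8.0 machine against a 5.8.4 one
and see what else differs.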

--j.
