On Sat, Jul 28, 2001 at 11:20:37AM -0400, Philip Mak wrote:
> On 28 Jul 2001, MarkD wrote:
> 
> > By volume I meant how many emails per hour. Number of users is largely
> > irrelevant.
> 
> Okay... I just did "grep | wc" (count lines matching a pattern) on my
> qmail log directory.
> 
> In the last 10 minutes (I only have logs going back that far, because the
> logs are limited to 1 MB total), there were 1300 deliveries to local
> users. It's a quiet time of the day right now, so I suspect it might get
> even more heavily loaded later.
> 
> > If you're doing this per delivery, I'm not surprised. But it should be
> > easy to measure for sure with vmstat/top/acct, etc.
> 
> Yes, it's per delivery. The forwarding program tends to take up around 5%
> of CPU according to top.


We have a similar script.  When a mailing list hits, the load average
goes way up.  We only use it where WYSIWYG changes to forwarding addresses
are required; the per-email compilation cost is way out of line.

I've always assumed the "right" way would be some sort of UNIX socket
connection to a persistent daemon backed by a local database server, with
a backup cache or an exit 111 (qmail's temporary-failure code, so the
message gets deferred) when the database is unreachable.  I wonder whether
one could put a filter in front of some app server to do just that....
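
Just to make the idea concrete, a rough sketch (not our script; the socket
path, the one-address-in / newline-separated-addresses-out protocol, and
the use of fastforward's "forward" for the actual re-injection are all my
own assumptions) of the .qmail-side helper:

    #!/usr/bin/env python3
    # forward-lookup: hypothetical .qmail delivery program (a sketch).
    # Asks a persistent daemon over a UNIX socket for the forwarding
    # addresses of $RECIPIENT, then hands the message on stdin to
    # fastforward's `forward'.  If the daemon is down, exit 111 so
    # qmail defers the message instead of bouncing it.
    import os
    import socket
    import sys

    SOCKET_PATH = "/var/run/fwd-lookup.sock"   # assumed daemon location

    def lookup(address):
        """Send the address, read back newline-separated destinations."""
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.settimeout(5)
        s.connect(SOCKET_PATH)
        s.sendall(address.encode() + b"\n")
        data = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
        s.close()
        return [d for d in data.decode().split("\n") if d]

    def main():
        recipient = os.environ.get("RECIPIENT", "")
        try:
            destinations = lookup(recipient)
        except OSError:
            sys.exit(111)   # temporary failure: daemon/db unreachable
        if not destinations:
            sys.exit(100)   # permanent failure: no such forwarding address
        # Hand the message on stdin to `forward', which re-injects it
        # to the looked-up destinations.
        os.execvp("forward", ["forward"] + destinations)

    if __name__ == "__main__":
        main()

The only part that really matters is the error handling: if the daemon or
the database is down, exit 111 and let qmail retry later.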

If changes happen infrequently relative to the time it takes to rebuild the
cdb file, then rebuilding on each change sounds like it would be simplest
AND most efficient.  The only complexity is triggering that rebuild on a
database change.  Not so easy with MySQL, but maybe your mechanism for
updating could do it.
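
Something like this is what I have in mind for the rebuild hook (again a
sketch; the table and column names, the cdb path, and the pymysql client
are placeholders): dump the table and pipe it through djb's cdbmake, which
builds into a temp file and renames it into place, so qmail never reads a
half-written cdb.

    #!/usr/bin/env python3
    # rebuild-cdb: hypothetical "rebuild on every change" hook (a sketch).
    # Dumps the forwarding table from MySQL and pipes it to cdbmake in its
    # "+klen,dlen:key->data" record format; cdbmake writes the temp file
    # and renames it over the old cdb only on success.
    import subprocess

    import pymysql   # any MySQL client library would do

    CDB = "/var/qmail/aliases/forward.cdb"
    TMP = CDB + ".tmp"

    def rows():
        conn = pymysql.connect(host="localhost", user="qmail",
                               password="secret", database="mail")
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT alias, destination FROM forwards")
                for alias, destination in cur.fetchall():
                    yield alias, destination
        finally:
            conn.close()

    def rebuild():
        proc = subprocess.Popen(["cdbmake", CDB, TMP],
                                stdin=subprocess.PIPE)
        for key, value in rows():
            k, v = key.encode(), value.encode()
            proc.stdin.write(b"+%d,%d:%s->%s\n" % (len(k), len(v), k, v))
        proc.stdin.write(b"\n")      # blank line terminates cdbmake input
        proc.stdin.close()
        if proc.wait() != 0:
            raise RuntimeError("cdbmake failed; old cdb left untouched")

    if __name__ == "__main__":
        rebuild()

Whatever updates the MySQL table would just run this afterward; since the
rename is atomic, there is no window where deliveries see a partial cdb.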

At least it solves MY problem; that will work nicely for us.  :-)

cfm


-- 

Christopher F. Miller, Publisher                               [EMAIL PROTECTED]
MaineStreet Communications, Inc           208 Portland Road, Gray, ME  04039
1.207.657.5078                                         http://www.maine.com/
Content/site management, online commerce, internet integration, Debian linux


