Where would I start in the code to modify the QMQP servers list so that it
would load balance between all of the servers in the list instead of just
using the first one it can contact?  This would be very useful to me.  I
assume qmail-qmqpc.c is one of the files I'd need to change; are there
others I would need to play around with?
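
To make it concrete, here's a rough standalone sketch of the selection
change I have in mind: pick a random starting offset into the server list,
then fall back in order from there, so load spreads across every reachable
server instead of always landing on the first.  The IP strings and the
try_server() stub are placeholders of mine, not qmail's actual internals:

/* sketch.c: random-offset server selection (illustrative only) */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static const char *servers[] = { "10.0.0.1", "10.0.0.2", "10.0.0.3" };
#define NSERVERS (sizeof(servers) / sizeof(servers[0]))

/* stand-in for the per-server connect-and-send attempt */
static int try_server(const char *ip)
{
        printf("trying %s\n", ip);
        return 0;  /* pretend the connection failed so we walk the list */
}

int main(void)
{
        unsigned int start, i;

        srand((unsigned int) (time(NULL) ^ getpid()));
        start = rand() % NSERVERS;

        /* same "first one that answers" loop, but from a random offset */
        for (i = 0; i < NSERVERS; i++)
                if (try_server(servers[(start + i) % NSERVERS]))
                        return 0;

        return 111;  /* every server failed; 111 means "try again later" */
}

Presumably the real change would go wherever qmail-qmqpc.c walks the
qmqpservers list, but I haven't traced that code yet.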

Jay

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 14, 2000 3:55 PM
To: '[EMAIL PROTECTED]'
Subject: Re: questions about performance and setup


On Fri, Jul 14, 2000 at 02:29:06PM -0500, Austad, Jay wrote:
> I already have Mandrake Linux 7.0 and 7.1 running on multiple Dell boxes
> with no trouble, some of them took work to get going, but it runs well.  I
> have a few Crystal PC's here also that I may use instead, dual PIII 550's
> with 512MB ram and 9 or 18GB 10000rpm drives.  I'll probably use these for
> testing.

I agree with the earlier poster that more spindles for your queue
(by way of RAID) is a good thing in general.

> The bulk of the messages will be the same content to many rcpt's.
> However, once in awhile we'll have 100,000 different messages go out to
> 100,000 different people.
> 
> Since the QMQP support under mini-qmail doesn't load balance, can I feed
> it a hostname with multiple dns entries (round-robin dns)?  Or better yet,
> how easy would it be to modify the qmail code to just load balance
> between them?

The manpage for qmail-qmqpc tells us that they have to be IP addresses
in qmqpservers, so round-robin DNS won't help. If all of the messages are
generated
on one machine, then I'd be inclined to go for a much simpler solution
than modifying qmail. I'd have an instance of qmail for each outbound
server with the appropriate qmqpservers entry, then have your queue
insertion script do a round-robin itself by simply cycling thru
the qmail-inject command associated with each instance.

for instance in 1 2 3 4 5
do
        getnext_message_details   # placeholder for your own lookup step
        /var/qmail${instance}/bin/qmail-inject currentmessage .... details
done

Or some such.


Alternatively, if you have money to burn, maybe a layer four switch
with load-balancing skills.


Mark.


> 
> Jay
> 
> 
> 
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: Friday, July 14, 2000 2:09 PM
> To: '[EMAIL PROTECTED]'
> Subject: Re: questions about performance and setup
> 
> 
> > Here's what I need to know:
> > 
> > 1.  How well does qmail take advantage of multiple processors?  How much
> 
> Indirectly, quite well, since it forks many processes; thus if the OS
> takes good advantage of your CPUs, qmail inherits that advantage.
> 
> > memory and disk will I need?  (we're at 50 million messages per month
> > now,
> 
> Are these messages unique per target address, or the same? If unique, your
> requirements are vastly different and very queue/disk intensive. If they
> are the same and you take advantage of VERP support in qmail, then
> your load will mainly be sending-related, which will benefit from
> more memory, multiple instances, etc.
> 
> > and we only send out monday-friday, so that's over 2 million messages
> > per day, and it's only going up)
> > 
> > 2.  How many messages per day would one estimate that each of these
> > servers could do?
> > 
> > 3. I read about mini-qmail and how it's about 100 times faster blasting
> > out email to QMQP servers.  Since you can specify multiple QMQP servers,
> > if I have a fourth machine running mini-qmail and managing the actual
> > mailing list, can I add the other 3 as QMQP servers and have it load
> > balance between all 3 for sending out mail?  (this way I could add more
> > servers easily if I needed to)
> 
> The qmqp support doesn't load balance. It simply takes the first one
> it can connect to.
>  
> > 4. Can I easily make qmail run an external script for each bounced mail?
> 
> Absolutely.
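
To expand on that: the usual mechanism is a .qmail file whose delivery
line starts with "|", which pipes each delivered message (here, each
bounce) to a program on stdin; qmail-local also exports SENDER and
RECIPIENT into the environment.  A minimal sketch of such a handler (the
log path is my own placeholder):

/* bounce-log.c: invoked once per bounce via a .qmail line like
   |/var/qmail/bin/bounce-log
   Exit codes follow qmail-command(8): 0 = ok, 111 = retry later. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        const char *sender = getenv("SENDER");       /* usually empty for bounces */
        const char *recipient = getenv("RECIPIENT"); /* with VERP, encodes who bounced */
        FILE *log = fopen("/var/log/bounces.log", "a");
        char line[1024];

        if (!log)
                return 111;  /* can't record it now; have qmail retry */

        fprintf(log, "bounce: sender=%s recipient=%s\n",
                sender && *sender ? sender : "(empty)",
                recipient ? recipient : "(unset)");

        /* log the first To: header as a hint about the failed address */
        while (fgets(line, sizeof(line), stdin))
                if (strncmp(line, "To:", 3) == 0) {
                        fputs(line, log);
                        break;
                }

        fclose(log);
        return 0;
}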
> 
> > 5.  Anything else I should know?
> 
> That all hinges on whether your emails are unique for each recipient or
> not, or more importantly, on the average number of recipients per unique
> email.
> 
> 
> Regards.
