Over the last few weeks, I've been setting up relaying for a
few high-traffic lists of over 1K recipients. As that has moved
into production, it struck me that there is quite a
demarcation when looking at xdelay stats.
I grabbed the approx. 10MB of logs I had on hand and ran them
through qmailanalog's matchup and zrxdelay.
Of the 345 recipients listed, 299 had an xdelay of less than 10.
The distribution is even more interesting:
0 <= x <  1    6
1 <= x <  2   35
2 <= x <  3   63
3 <= x <  4   60
4 <= x <  5   55
5 <= x <  6   26
6 <= x <  7   20
7 <= x <  8   12
8 <= x <  9   13
9 <= x < 10    2
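For reference, that kind of bucketing is easy to reproduce with a short
script. This is only a sketch: it assumes the xdelay values (in seconds)
have already been pulled out of matchup's output, and the parsing step
is omitted; the sample values in the comment are hypothetical, not my
log data.

```python
from collections import Counter

def xdelay_histogram(xdelays, width=1):
    """Bucket xdelay values (seconds) into integer-width bins,
    zrxdelay-style: bucket k counts deliveries with k <= x < k+1."""
    counts = Counter(int(x // width) for x in xdelays)
    for k in sorted(counts):
        print(f"{k * width:2d} <= x < {k * width + width:2d}  {counts[k]:4d}")
    return counts

# Hypothetical sample, not real log data:
# xdelay_histogram([0.5, 1.2, 1.9, 3.3])
```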
What strikes me is this:
I've already segmented the server's regular queue from its
relay queue... My goal is to run with the lowest concurrency
possible on the relay queue while maintaining decent throughput.
The key to successfully lowering concurrencyremote is to
eliminate hosts with a large xdelay from the queue.
I don't mind a larger concurrencyremote on a queue whose
messages go to hosts with a large xdelay and have a single
recipient each.
The goal, I guess, is to create a queue which can efficiently
handle large numbers of recipients, but smooth out the bursts
which characterize a large-recipient mailing list.
The advantage of large concurrencies is the ability to deliver
quickly over a high-latency protocol, right?
So what if I were to have a high-concurrency secondary queue for
the hosts with a historically measured xdelay above N, and a
lower-concurrency primary queue for hosts with a historically
measured xdelay below N?
Anyone thought about doing something like this? I guess you'd
have to abuse the loopback address.
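One way that wiring could be sketched: keep a table of historically
measured per-host xdelays and emit control/smtproutes lines that point
the slow hosts at a second qmail instance bound to a loopback address.
Everything here is an assumption for illustration (the threshold N, the
127.0.0.2 relay, the helper name, and the input dict); real per-host
xdelays would come out of matchup's output, and this is not tested
qmail configuration.

```python
# Sketch: route hosts with historically large xdelay to a secondary,
# higher-concurrency qmail instance on a loopback address. N, the
# relay address, and the host->xdelay dict are illustrative assumptions.

N = 10.0  # seconds: the cutoff between "fast" and "slow" hosts

def smtproutes_lines(host_xdelay, threshold=N, slow_relay="127.0.0.2"):
    """Return control/smtproutes entries ("domain:relay") that send
    hosts whose measured xdelay exceeds the threshold to the secondary
    instance; fast hosts stay on the default route (no entry)."""
    return [f"{host}:{slow_relay}"
            for host, xd in sorted(host_xdelay.items())
            if xd > threshold]

# e.g. smtproutes_lines({"fast.example": 2.0, "slow.example": 42.0})
```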
--
John White
[EMAIL PROTECTED]
PGP Public Key: http://www.triceratops.com/john/public-key.pgp