Patrick Chemla put forth on 1/10/2010 3:00 PM:
> Wietse,
>>> Please try the following, as asked half a week ago:
>>>
>>>      postconf -e smtp_connection_cache_on_demand=no
>>>      postfix reload
>>>
>>> and report if this makes a difference.
>>>     Wietse
>>>      
> I have been testing this since last night.
> 
> I ran into problems with the Linux per-user process limit, which I
> fixed. I also increased some delivery concurrency settings, and now I
> can see up to 1300 processes delivering emails to the qmail servers.
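> 
> For reference, the kind of changes involved look roughly like this
> (illustrative values only, not my exact figures):
> 
>      # /etc/security/limits.conf -- raise the per-user process limit
>      postfix   soft   nproc   4096
>      postfix   hard   nproc   4096
> 
>      # main.cf -- allow more parallel deliveries (defaults: 100 and 20)
>      default_process_limit = 2000
>      default_destination_concurrency_limit = 50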
> 
> Today I hit a rate of 6300 emails per minute for a few minutes, and I
> ran a full hour at 180,000 emails per hour. The outbound line was
> saturated.
> 
> CPU load is about 30%, there is no I/O wait, no swapping, and plenty
> of memory.
> 
> I think I will reach about 600,000 emails per hour once I fix some
> timeouts on the qmail servers (or replace them with Postfix?). Maybe I
> could even reach 1 million?
> 
> The full architecture I am planning will include 2 to 3 clustered
> Postfix relays and 50 second-level qmail (or Postfix) delivery servers,
> each with 3 to 5 IP addresses, plus an upgraded outbound internet
> connection.
> 
> With your help, I now better understand the impact of the timeout and
> concurrency parameters. Delivery was blocked because Postfix was trying
> to reuse connections, so it waited for each email to complete before
> sending the next one. Also, because hundreds of processes were created
> at startup to handle inbound messages, there were no slots left to fork
> processes for outbound deliveries. The same problem caused very slow
> DNS and EHLO responses, because there were no free slots to fork.
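> 
> To make that concrete, the knobs involved are something like the
> following (illustrative values, not my exact configuration):
> 
>      # main.cf -- stop holding an idle connection open and waiting on
>      # each message before sending the next one
>      smtp_connection_cache_on_demand = no
> 
>      # master.cf -- cap inbound smtpd(8) so it cannot take every
>      # process slot away from outbound smtp(8) deliveries
>      # service  type  private unpriv  chroot  wakeup  maxproc command
>      smtp       inet  n       -       n       -       200     smtpd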
> 
> Of course, if you want me to post my configuration, I will do so with
> pleasure.
> 
> Many thanks to you, Victor, and Stan.
> 
> Patrick

On a technical level I'm happy you got it working.  Just please tell us you're
not sending mass spam with this setup.

--
Stan
