Stefano Bagnara wrote:
Bernd Fondermann wrote:
Hi,
Finally, I managed to run the same scenario as described below with
2.3.0a1. 80603 mails sent, 0 lost. Performance seems to be basically
identical, which is good news.
How did you find that the performance is the same?
I'm not sure I understood the test...
You always send 52 mails per minute, so you should spool 52*1440 (minutes
per day) = 74880 messages.
Why are your results around 80600 for both servers?
Oops, a typo in my write-up: 56 mails per minute were sent.
Probably due to timer glitches, the full number of 56*1440 = 80640
was not reached.
To find the performance you should send as many mails as James is able
to handle and slow down only when the spool starts increasing in size.
A good way would be to set a number of messages to keep "in queue" and
send new messages only after receiving some of them back from the
remote delivery (mail gatewaying) or reading them from the POP3 server
(not sure I explained what I mean).
Scenario:
1. queue size = 100
2. postage starts sending 100 messages to the server, checking POP3
folders and checking incoming SMTP messages.
3. after a message is received by the remote delivery or read from the
POP3 server, postage sends a new message to the server.
This way the total number of messages sent depends on the spooling
speed and not on the messages-per-minute rate you set.
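
Just to illustrate what I mean, here is a minimal sketch of that
feedback loop in Java. sendMessage() and waitForDelivery() are
hypothetical stand-ins for postage's SMTP sender and its remote
delivery / POP3 checkers, not real postage APIs; QUEUE_SIZE maps to
the queue size above:

import java.util.concurrent.Semaphore;

public class AdaptiveLoad {

    private static final int QUEUE_SIZE = 100;

    // Permits model the messages currently "in flight" in James' spool.
    private final Semaphore inFlight = new Semaphore(QUEUE_SIZE);

    public void run() throws InterruptedException {
        // Receiver thread: every message seen at the remote delivery
        // target or fetched via POP3 frees one slot.
        Thread receiver = new Thread(() -> {
            while (true) {
                waitForDelivery();  // blocks until one message arrives
                inFlight.release(); // allow one more send
            }
        });
        receiver.setDaemon(true);
        receiver.start();

        // Sender loop: blocks once QUEUE_SIZE messages are unaccounted
        // for, so the send rate is driven by James' spooling speed.
        while (true) {
            inFlight.acquire();
            sendMessage();
        }
    }

    // Hypothetical helpers, to be backed by postage's actual senders
    // and checkers.
    private void sendMessage() { /* hand one test mail to James via SMTP */ }

    private void waitForDelivery() { /* poll remote delivery or POP3 inbox */ }
}

The Semaphore starts with 100 permits, so sending stalls exactly when
100 messages are unaccounted for; from then on James' spooling speed
dictates the throughput.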
To stay closer to your "scenario" configuration, we can treat your
numbers not as "per minute" but as "per send unit". You know you have
52 messages "per send unit" in your scenario. You start by sending a
number of units (1 to 10 units seems reasonable), then you send a new
unit only after every 52 messages you receive from the remote delivery
or POP3 server.
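
A sketch of that "per send unit" variant, again with hypothetical
helper names; the only change is that permits are counted in units of
52 messages instead of single messages:

import java.util.concurrent.Semaphore;

public class UnitPacedLoad {

    private static final int UNIT = 52;         // messages per send unit
    private static final int INITIAL_UNITS = 5; // "1 to 10 seems reasonable"
    private final Semaphore units = new Semaphore(INITIAL_UNITS);

    public void run() throws InterruptedException {
        Thread receiver = new Thread(() -> {
            int received = 0;
            while (true) {
                waitForDelivery();
                // every completed unit of 52 received messages
                // unlocks one new unit of 52 sends
                if (++received % UNIT == 0) {
                    units.release();
                }
            }
        });
        receiver.setDaemon(true);
        receiver.start();

        while (true) {
            units.acquire();
            for (int i = 0; i < UNIT; i++) {
                sendMessage();
            }
        }
    }

    private void sendMessage() { /* SMTP send to James */ }

    private void waitForDelivery() { /* remote delivery or POP3 */ }
}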
Does it make sense? Is this possible with the postage architecture?
Makes perfect sense to me. This would be a stress test application.
What I intended to achieve in the first place is to
* put a steady, reproducible load on James
* gather as much data about successful delivery and errors as possible
* try to mimic real-life load
thus making two runs of the same configuration for different James
builds comparable in terms of: "Is the build I have here still capable
of delivering all messages, as the last major version was, or is
something broken?"
The result of the whole test run would be a single boolean (pass/fail).
This could be executed daily by continuous integration tools.
But of course, the "adaptive" behavior you are describing would still be
an option to see when we "max out" the server. Seems to be more fun, too :-)
Bernd