Jacob Vickers wrote:
Jake Vickers wrote:
Eric Shubert wrote:
Phil Leinhauser wrote:
Is there anything like a load tester for QMT
to mimic the 500 user limit?

I don't know of one. (Doesn't mean there isn't one)

It'd be nice to have a testing harness of some sort for QMT. Anyone up
for writing one?

One of the setups I was working on was Xen, but this one is running
ESX. They have 500 users, all using IMAP, and some users experience 8-10
minute lags accessing their IMAP stores (or webmail).
The number of IMAP processes has been increased, and the softlimit has
been raised as well.
I do agree we could use some testing tools (this should be on the devel
list, and can easily be added to the "tools" section of the subversion
repo).
I used to have a script around here somewhere that dumped 10K emails to
load test, but haven't used it in a long time and would have to really
dig to find it.
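A script along those lines is easy to recreate. Here's a minimal sketch (the paths and filename scheme are my assumptions, not the original script): it drops COUNT plain test messages straight into a Maildir's new/ directory so the IMAP server has something to scan.

```shell
#!/bin/sh
# Hypothetical load-test sketch, not the original script.
# Usage: loadtest.sh [maildir] [count]
MAILDIR=${1:-/tmp/loadtest-maildir}
COUNT=${2:-100}
mkdir -p "$MAILDIR/new" "$MAILDIR/cur" "$MAILDIR/tmp"
i=0
while [ "$i" -lt "$COUNT" ]; do
    # Maildir delivery filenames just need to be unique; time.pid.seq works.
    printf 'From: load@example.test\nTo: user@example.test\nSubject: load test %s\n\ntest body\n' "$i" \
        > "$MAILDIR/new/$(date +%s).$$.$i"
    i=$((i + 1))
done
echo "wrote $COUNT messages to $MAILDIR/new"
```

Point it at a real test mailbox (as the vpopmail user) and crank the count up to 10K to approximate the old script.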
I just wonder if the IMAP load isn't hitting a limitation in the VMware
storage structure. The system has enough horsepower and lots of RAM, and
'sar' shows the CPU is not working very hard. Running a 'du -sh *'
on /home/vpopmail/domains takes as long as 7 minutes on a couple of
domains, which is what points me toward a VMware filesystem suspicion,
since that "test" doesn't even use IMAP, just straight file access.
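That "straight file access" check can be scripted so it's easy to rerun during peak hours. A small sketch (the vpopmail path is from the post; the fallback is just so the snippet runs anywhere):

```shell
#!/bin/sh
# Time a recursive disk-usage walk as a rough filesystem-latency check.
DIR=/home/vpopmail/domains
[ -d "$DIR" ] || DIR=/tmp   # fallback so the sketch runs on any box
time du -sh "$DIR"
```

Comparing wall-clock times between the VMware guest and a bare-metal box with similar data would isolate the storage layer from IMAP entirely.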
That's a good guess. I'll think about it a bit.

Courier or Dovecot? (Not that it matters a whole lot if filesystem
access is slow)

Which elevator type is running on the guest? (noop appears to be best)
Any idea which elevator ESX is configured to use?
Which ESX version?
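For reference, checking the active elevator on a Linux guest is just a sysfs read (device names vary per system; the entry shown in [brackets] is the scheduler in use):

```shell
#!/bin/sh
# List the active i/o scheduler for every block device the kernel exposes.
for q in /sys/block/*/queue/scheduler; do
    [ -r "$q" ] && printf '%s: %s\n' "$q" "$(cat "$q")"
done
```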

As a side thought - anyone ever looked at freshmeat on some pop/imap
testing tools?



Courier is running. I may switch them to Dovecot since it handles
larger mailboxes better (some boxes are 7+ GB in total size). I still
think it's a filesystem issue though, given how long a 'du -sh' takes.

I agree, but I think Dovecot will relieve the strain a bit. Courier doesn't do well with mailboxes that big even on bare iron.

I don't have much more information about the ESX setup, however. They did
have the mailstore mapped via NFS, and switching that to a "local"
filesystem reduced the issues by 50% or more. They moved a busy host to
another ESX server, which reduced the issues further, but they still
see 1-3 incidents per day, always during peak times.

Did you check the kernel I/O scheduler settings on the guest(s)? noop scheduling makes a lot of sense for guests. You can change it on the fly to test it out.
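A sketch of that on-the-fly change (sda is an example device name; run it as root on the guest, and adjust the device to match):

```shell
#!/bin/sh
# Switch the i/o elevator at runtime; no reboot needed. Guarded so it
# fails gracefully when not root or the device name differs.
DEV=sda                                  # example device name
SCHED=/sys/block/$DEV/queue/scheduler
if [ -w "$SCHED" ]; then
    echo noop > "$SCHED"                 # takes effect immediately
    cat "$SCHED"                         # noop should now be in [brackets]
else
    echo "cannot write $SCHED (need root, or device name differs)"
fi
```

Note the change doesn't survive a reboot; to make it permanent, add elevator=noop to the kernel line in the bootloader config.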

And when I compare that against a 1500+ user system I have on a bare-metal
server (mixed POP3 and IMAP) that has no issues at all (on half the
hardware horsepower), the only other thing I can point a finger at, at
this point, is VMware.

Certainly. VMware guests definitely need to be tuned. Check the wiki page for other settings. Having the tmpDirectory go to disk instead of a tmpfs will kill performance, so I'm guessing that's been taken care of already. This can be set per guest in the vmx, or globally in the host configuration (not sure where in ESX).
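For illustration, the setting looks something like this (the tmpfs path is an example, and the exact config file location varies by VMware product and version, so check the docs for yours):

```
# Per guest, in the .vmx file (example value pointing at a tmpfs mount):
tmpDirectory = "/dev/shm"

# Or globally, in the host's VMware config file:
tmpDirectory = "/dev/shm"
```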

--
-Eric 'shubes'


---------------------------------------------------------------------------------
Qmailtoaster is sponsored by Vickers Consulting Group 
(www.vickersconsulting.com)
   Vickers Consulting Group offers Qmailtoaster support and installations.
     If you need professional help with your setup, contact them today!
---------------------------------------------------------------------------------
    Please visit qmailtoaster.com for the latest news, updates, and packages.
To unsubscribe, e-mail: qmailtoaster-list-unsubscr...@qmailtoaster.com
    For additional commands, e-mail: qmailtoaster-list-h...@qmailtoaster.com

