At 13:02 10-2-00 +0200, Stas Bekman wrote:
>On Thu, 10 Feb 2000, Leslie Mikesell wrote:
>
>> According to Ryan, Aaron:
>> 
>> > We found that we are quickly using up the max connections to the
>> > MySQL database, and when we raise the max connections the
>> > performance gets worse. What was MySQL designed to handle? Should
>> > it be able to handle 2000 connections, or is that outside the scope
>> > of the design of the server?
>> > 
>> > Does anyone have any suggestions or similar experiences with scaling?
>> 
>> Have you already taken the step of setting up a non-mod_perl proxy
>> front end and avoiding sending any unnecessary requests (images,
>> static html, etc.) to the database-connected backend?  If not, you
>> may be able to reduce the number of connections you need by a
>> factor of 10 or so.
>
>Plus using Mason to cache components to reduce the number of queries or
>any other method to cache the query results. I think you can also run more
>than one mysql server on the same DB... 

There are a couple of things to look out for.

For one, if you have multiple databases and use persistent
database connections, you'll quickly run out of connections.
With 4 different databases and 50 Apache processes, it
doesn't take many requests to reach 200 open database connections.
Disable persistent connections in that case; MySQL doesn't take
much of a performance hit without them anyway.
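
For example, a rough, untested sketch of the non-persistent variant
(handler, DSN and credentials are made up). With Apache::DBI loaded,
every distinct DSN gets cached per Apache child, so 50 children times
4 databases means 200 connections held open; with plain DBI the
connection only lives for the duration of one request:

  use strict;
  use DBI;
  use Apache::Constants qw(OK SERVER_ERROR);

  sub handler {
      my $r = shift;

      # Without "PerlModule Apache::DBI" in httpd.conf, this connection
      # is not cached across requests, so the child doesn't hold it open.
      my $dbh = DBI->connect("DBI:mysql:database=sessions;host=dbhost",
                             "www", "secret")
          or return SERVER_ERROR;

      # ... run your queries here ...

      $dbh->disconnect;   # handed back right away instead of sitting idle
      return OK;
  }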

The number of connections your MySQL server can handle depends
heavily on a lot of things, such as the number of tables you're
using. From memory, I believe it needs a couple of file
descriptors for every n tables that a connection opens (see the
docs for specific details), so at some point you will simply run
out of file descriptors. I vaguely recall seeing a properly
configured server handle some 200 simultaneous connections (not
queries, mostly idle connections). But you really do want to get
that number down as far as possible.
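
If you want to see where you stand, DBI can show you the server
variables and status counters involved. A sketch (hypothetical
credentials; the exact variable names differ a bit between MySQL
versions):

  use DBI;

  my $dbh = DBI->connect("DBI:mysql:database=mysql;host=dbhost",
                         "root", "secret", { RaiseError => 1 });

  # SHOW VARIABLES lists settings such as max_connections and table_cache;
  # SHOW STATUS lists counters such as Max_used_connections and Open_files.
  for my $stmt ("SHOW VARIABLES", "SHOW STATUS") {
      for my $row (@{ $dbh->selectall_arrayref($stmt) }) {
          my ($name, $value) = @$row;
          print "$name = $value\n"
              if $name =~ /connect|table_cache|open_file/i;
      }
  }
  $dbh->disconnect;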

You can gain quite a bit of performance by (a) bringing the
number of queries down and (b) making sure all your queries
use proper indexes. Try not to do a crazy number of updates
either, since every update means your indexes have to be
maintained as well.
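
EXPLAIN is your friend there: it tells you whether MySQL can use an
index or has to scan the whole table. A sketch, given a connected
$dbh as above (table and column names made up):

  # Hypothetical sessions table that gets looked up by user_id on
  # every request.
  $dbh->do("CREATE INDEX idx_sessions_user ON sessions (user_id)");

  # EXPLAIN shows the key MySQL picks and how many rows it examines;
  # "ALL" in the type column means a full table scan.
  my $plan = $dbh->selectall_arrayref(
      "EXPLAIN SELECT data FROM sessions WHERE user_id = 42");
  print join("\t", map { defined $_ ? $_ : "NULL" } @$_), "\n" for @$plan;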

If your database needs vary widely (say, one database with
sessions, updated very frequently but holding tiny amounts of
data, and another with large sets of data that are hardly ever
updated), you might gain a lot by running two differently
configured MySQL servers on one machine. One would benefit from
lots of caching; the other would probably reduce the efficiency
of that cache but would gain from fast disk access. Different
usage profiles call for vastly different tuning.
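
Roughly, that means two mysqld instances, each with its own config
file, data directory, socket and tuning. Very much a sketch; the
exact option syntax and sensible values depend on your MySQL version
and hardware:

  # /etc/my.sessions.cnf -- small, hot data; give it memory to cache
  # (started with something like:
  #  safe_mysqld --defaults-file=/etc/my.sessions.cnf)
  [mysqld]
  port    = 3306
  socket  = /tmp/mysql-sessions.sock
  datadir = /var/lib/mysql-sessions
  set-variable = key_buffer=64M

  # /etc/my.archive.cnf -- big, rarely updated data; rely on disk speed
  [mysqld]
  port    = 3307
  socket  = /tmp/mysql-archive.sock
  datadir = /var/lib/mysql-archive
  set-variable = key_buffer=8M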

Basically, consider every SQL query you have to do as one too
many. :) A module like HTML::Mason, for example, lets you cache
the output of a page for a specified amount of time. If a page
needs nothing more than an hourly update, but it takes 15 queries
to build and is accessed often, Mason's caching can be a lot
faster. It will also take some load off your database, which may
or may not matter depending on your situation.
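
With Mason that can be as simple as caching the whole component. A
sketch; the exact cache parameters (expire_in vs. expires_in, for
instance) depend on your Mason version:

  <%init>
  # If a cached copy of this component's output exists, Mason sends it
  # and we skip the expensive queries below entirely for another hour.
  return if $m->cache_self(expire_in => '1 hour');

  # ... the 15 queries that build the page go here ...
  </%init>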

Grtz,
Pascal
