I've gone through the mailing list archives, read the appropriate parts of
the manual and searched the internet and still have questions.  What I'm
running into is a serious performance problem with a fairly large (by my
standards) database.  My system setup is as follows:

Dual processor Sun E250 w/1.5GB RAM running Solaris 8 (with all current
patches)
18GB dedicated drive/17GB data partition for the database

I'm running MySQL 4.0.12 standard with the large sample my.cnf (my-large.cnf) in
place.  I'm not running the huge version because I'm running other applications
on the server.
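For reference, the knobs I'm looking at are along these lines - the values
below are illustrative (roughly what the large sample ships with, adjusted
for my box), not a tested recommendation:

```
# my.cnf excerpt -- illustrative values, not a recommendation
[mysqld]
key_buffer = 384M             # MyISAM index cache; biggest lever for insert-heavy MyISAM
table_cache = 512             # 682 tables, so keep most of them open
sort_buffer_size = 2M         # per-connection sort buffer for large retrievals
read_buffer_size = 1M
myisam_sort_buffer_size = 64M
query_cache_size = 16M        # 4.0 query cache; may help repeated web-page retrievals
thread_concurrency = 4        # 2 CPUs x 2
```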

Other software running on the server consists of:
Apache
120 instances of MRTG - one for each of the devices I have to monitor.  The
MRTG RRD databases are on a separate drive from MySQL, and MRTG doesn't
contribute seriously to the system load.
Various shell scripts and Perl scripts triggered by cron to automate damn
near everything. :)

My db application is a program called RTG - it's a replacement for MRTG that
uses MySQL on the backend - http://rtg.sourceforge.net.  BTW, both of these
applications (if you're not familiar with them) are designed to monitor
traffic through routers and switches.  MRTG uses a program called RRD (Round
Robin Database) on the backend that averages any data more than 24 hours
old.  RTG scales better for large networks and doesn't do data averaging -
it keeps raw data to the limit of your drive space.

Since starting up RTG not quite three months ago, my database has grown to
over 10GB.  If I can get the drive space to keep the two years' worth of
data I want, my database will edge upward toward 100GB.  There are currently
682 tables, ranging in size from 2 KB/1 record to around 310 MB/7,570,511
records, depending on the number of ports on a router and how much traffic
flows through it.

I'm running around 2500 queries/minute - 99+% of them inserts.  My CPU load
is minimal - around 0.40 nominally, though it may go as high as 1.80 or so
when handling multiple large retrievals.  If I look at the iostat output for
my server, drive controller bandwidth utilization is around 30-40% during
normal operation but immediately jumps to 100% during retrievals.  Retrieval
is done via Perl scripts or from Perl/PHP web pages.
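For context, the workload itself is simple: per-port counter inserts, plus
retrievals that sweep a time range.  The table and column names below are
illustrative, not necessarily RTG's exact schema:

```sql
-- One counter sample per port per polling interval (names illustrative)
INSERT INTO ifInOctets_1234 (id, dtime, counter)
VALUES (42, NOW(), 9876543210);

-- Batching several samples into one statement cuts per-insert overhead;
-- INSERT DELAYED (MyISAM, available in MySQL 4.0) queues rows so pollers
-- don't block behind a long retrieval holding the table lock:
INSERT DELAYED INTO ifInOctets_1234 (id, dtime, counter) VALUES
  (42, NOW(), 9876543210),
  (43, NOW(), 1234567890);

-- A typical retrieval sweeps a time window for one port:
SELECT dtime, counter
  FROM ifInOctets_1234
 WHERE id = 42
   AND dtime BETWEEN '2003-06-01 00:00:00' AND '2003-06-02 00:00:00'
 ORDER BY dtime;
```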

I'm trying to get a RAID array of striped drives to hang off the server,
but until I can talk someone into signing the check, I need to do anything
I can to improve performance.

Does anyone have any suggestions?  Preferably ones that don't involve money?
If it would help, I _might_ be able to coerce the IT group into giving me
more RAM, but an E250 will only hold 2GB of RAM and I'm already at 1.5GB.
I've modified the RTG table structure to use different indexes, modified the
Perl/PHP scripts to suit those changes, and gotten significant performance
boosts from that.  What I really need are system tweaks and configuration
changes to improve performance.
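(The index change I mentioned was along these lines - a composite index so
a range scan on one port reads a contiguous slice of the index.  The table
and index names here are illustrative, not RTG's exact schema:)

```sql
-- Replace a single-column time index with a composite (id, dtime) index
-- so "WHERE id = ? AND dtime BETWEEN ? AND ?" walks one index range
-- instead of filtering every row in the time window:
ALTER TABLE ifInOctets_1234
  DROP INDEX dtime_idx,
  ADD INDEX id_dtime (id, dtime);
```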

Jack

Jack Coxen
IP Network Engineer
TelCove
712 North Main Street
Coudersport, PA 16915
814-260-2705
