By default MySQL flushes keys to disk with every INSERT, hence the
performance degradation when performing several single INSERTs one after
the other. The following extract from the MySQL documentation hints at
one way of changing this on a per-table basis:

* Declaring a MyISAM table with the DELAY_KEY_WRITE=1 table option makes
index updates faster because they are not flushed to disk until the table
is closed. The downside is that if something kills the server while such a
table is open, you should ensure that the table is okay by running the
server with the --myisam-recover option, or by running myisamchk before
restarting the server. (However, even in this case, you should not lose
anything by using DELAY_KEY_WRITE, because the key information can always
be generated from the data rows.)
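In practice it looks something like this (the table and column names here
are made up for illustration; the original post doesn't give a schema):

```sql
-- Hypothetical MyISAM table for the SNMP samples, with delayed key
-- writes enabled at creation time (TYPE= is the 3.23-era syntax):
CREATE TABLE element_samples (
    device_id  INT          NOT NULL,
    element    VARCHAR(64)  NOT NULL,
    sampled_at DATETIME     NOT NULL,
    value      DOUBLE,
    INDEX (device_id, sampled_at)
) TYPE=MyISAM DELAY_KEY_WRITE=1;

-- An existing table can be switched over in place:
ALTER TABLE element_samples DELAY_KEY_WRITE=1;
```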

There is also a way of getting MySQL to do "lazy writing" of indexes on a
global basis, but I couldn't find a quick reference to that.
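Jeremy's multi-row INSERT suggestion (quoted below) would look something
like this, assuming a hypothetical element_samples table with columns
(device_id, element, sampled_at, value):

```sql
-- Slow: one statement per row; each INSERT is parsed, locks the
-- table, and (without DELAY_KEY_WRITE) flushes index blocks:
INSERT INTO element_samples VALUES (1, 'ifInOctets',  NOW(), 42);
INSERT INTO element_samples VALUES (1, 'ifOutOctets', NOW(), 17);

-- Faster: the same rows bulked into a single multi-row INSERT, so
-- parsing, locking and index maintenance are amortised across rows:
INSERT INTO element_samples VALUES
    (1, 'ifInOctets',  NOW(), 42),
    (1, 'ifOutOctets', NOW(), 17);
```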

Cheers

Andrew.

----- Original Message ----- 
From: "Jeremy Zawodny" <[EMAIL PROTECTED]>
To: "Aram Mirzadeh" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Monday, June 28, 2004 7:24 PM
Subject: Re: Performance issues


> On Tue, Jun 22, 2004 at 01:34:39PM -0400, Aram Mirzadeh wrote:
> >
> > We have an internal SNMP monitoring system that is monitoring about
> > 10,000 devices.  Each device is pinged, then polled for an average
> > of 25-30 elements.  The ping results and elements are stored in a
> > text file; another system then picks them up over NFS and inserts
> > them into a MyISAM (3.23.54) database.  The data is kept for 13
> > weeks.
> >
> > The database system is a Xeon 4 way, 12GB of ram with a striped raid
> > array dedicated to the database files and its indexes and such.
> >
> > Every 5 minutes another process goes through the last set of inserts
> > and compares them for any threshold breaches, so the entire last set
> > of data is looked at.
> >
> > We're falling behind on the inserts because the system can't seem to
> > handle the volume of inserts, and the front end that generates the
> > web pages based on the previous records is bogging down.
> >
> > I have read the regular optimization papers and have done as much as
> > I felt safe.  Are there any papers on optimizing very large
> > databases?  Anything I should be looking at?
>
> I'd consider bulking up the INSERTs, performing multi-row INSERTs
> rather than doing them one by one.  That can speed things up quite a
> bit in my experience.
>
> Jeremy
> -- 
> Jeremy D. Zawodny     |  Perl, Web, MySQL, Linux Magazine, Yahoo!
> <[EMAIL PROTECTED]>  |  http://jeremy.zawodny.com/
>
> [book] High Performance MySQL -- http://highperformancemysql.com/
>
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
>
>


