I've tried tweaking the structure of the schema -- for example, a PRIMARY
KEY index on email, no other indexes, and then inserting in sorted order --
with no improvement whatsoever.  Another clue that leads me to believe this
may be an OS issue: starting a large cp on the same box (from a local
filesystem other than the one holding the InnoDB data pool, to NFS) caused
MySQL to become COMPLETELY backlogged; we went from ~15-20 connections at
any given instant to 750, our max_connections setting.
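
For reference, a rough sketch of the variant I tried -- table, column, and
file names below are simplified placeholders, not our actual schema:

  -- placeholder schema; the real table has more columns
  CREATE TABLE addresses (
      email VARCHAR(255) NOT NULL,
      PRIMARY KEY (email)
  ) ENGINE=InnoDB;

  -- input file pre-sorted on email before loading
  LOAD DATA INFILE '/tmp/emails_sorted.txt' INTO TABLE addresses (email);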

-JF


> -----Original Message-----
> From: Heikki Tuuri [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, October 12, 2005 8:15 AM
> To: mysql@lists.mysql.com
> Subject: Re: Non-linear degradation in bulk loads?
> 
> Jon,
> 
> hmm... maybe one of the indexes is inevitably receiving its rows in a 
> random order.
> 
> Please post a typical
> 
> SHOW INNODB STATUS\G
> 
> when the inserts happen slowly.
> 
> What is your my.cnf like?
> 
> Regards,
> 
> Heikki
> Innobase/Oracle
> 
> ----- Original Message -----
> From: "Jon Frisby" <[EMAIL PROTECTED]>
> To: "Heikki Tuuri" <[EMAIL PROTECTED]>; 
> <mysql@lists.mysql.com>
> Sent: Wednesday, October 12, 2005 3:08 AM
> Subject: RE: Non-linear degradation in bulk loads?
> 
> 
> > Two solutions: 1) sort the rows to be inserted on the key 'email' 
> > before inserting.
> >
> > 2) Or:
> >
> > http://dev.mysql.com/doc/mysql/en/innodb-tuning.html
> > "
> > If you have UNIQUE constraints on secondary keys, starting 
> from MySQL
> > 3.23.52 and 4.0.3, you can speed up table imports by temporarily 
> > turning off the uniqueness checks during the import session:
> > SET UNIQUE_CHECKS=0;
> >
> > For big tables, this saves a lot of disk I/O because InnoDB can use 
> > its insert buffer to write secondary index records in a batch.
> > "
> >
> > But make sure you do not have any duplicates in the rows!
> 
> After sending my mail, I discovered SET UNIQUE_CHECKS=0, and 
> after that it also occurred to me to try inserting the data 
> in sorted order.  Unfortunately, SET UNIQUE_CHECKS=0 alone did 
> not help, and neither did combining it with sorted input.  The 
> first chunk (3.4m rows) took ~1.5 minutes, the second ~5 
> minutes...
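> 
> (Roughly what each chunk's load session looked like -- the 
> file and table names here are placeholders, not the real ones:)
> 
>   SET UNIQUE_CHECKS=0;
>   -- each chunk file pre-sorted on email
>   LOAD DATA INFILE '/data/chunk01_sorted.txt' INTO TABLE addresses (email);
>   SET UNIQUE_CHECKS=1;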
> 
> At this point I'm inclined to believe that something is very 
> wrong with the disk subsystem, given this and other symptoms 
> (for example, a large cp from the datapool filesystem to 
> another filesystem brought the database to a near-halt).
> 
> As a stop-gap solution, I created the table with no indexes, 
> loaded all the data (that load ran in linear time), and plan 
> to do a CREATE UNIQUE INDEX on the table afterwards.  Will 
> that happen in linear time, or near-linear time?
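> 
> (Sketch of the stop-gap, again with placeholder names:)
> 
>   CREATE TABLE addresses (email VARCHAR(255) NOT NULL) ENGINE=InnoDB;
>   LOAD DATA INFILE '/data/all_emails.txt' INTO TABLE addresses (email);
>   -- build the unique index only after the bulk load finishes
>   CREATE UNIQUE INDEX idx_email ON addresses (email);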
> 
> *sigh*
> 
> -JF 
> 
> 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
