I believe we've both already figured out that the index was at least part of
the problem, since MySQL has to make more disk writes per insert when an index
is present. I don't need fast search capability, so I never really needed the
index, and dropping it helped.
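
For anyone following along, the change was basically a one-liner along these
lines (the index and column names are only illustrative - the real tables now
carry nothing but the primary key):

ALTER TABLE soldierMain DROP INDEX idx_email;        -- fewer index blocks to update per insert
-- ... run the bulk inserts ...
ALTER TABLE soldierMain ADD INDEX idx_email (email); -- only worth re-adding if lookups need it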

I've heard a bit from the group about the variable-length field (varchar)
behavior when building the table. Is the suggestion here that I should solve
that problem by simply making them all char, so MySQL doesn't magically
convert the field types (thanks to Jennifer for bringing up that valuable
tidbit)? Is this why the table is getting slower and slower - because it's
fragmenting the db?
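
If I'm following the suggestion, that would mean declaring the table something
like this. As I understand it, the varchar timeadded column is what pulls the
row into dynamic format and makes MySQL silently turn the long char columns
back into varchar, so every string column would have to be fixed-length (this
is just my reading of it, based on the structure dump below):

CREATE TABLE soldierMain (
  id int(20) NOT NULL auto_increment,
  timeadded char(14) NOT NULL default '',
  lastupdate timestamp(14) NOT NULL,
  name char(50) default NULL,
  email char(40) NOT NULL default '',
  status tinyint(1) NOT NULL default '0',
  PRIMARY KEY  (id)
) TYPE=MyISAM;

With no varchar/text/blob columns left, nothing should get converted and the
rows stay a fixed length on disk.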

If that's the case, then why doesn't myisamchk -r -f or optimize table fix
the problem? Since dumping and reloading it fixes the problem, I'm thinking
the table is becoming fragmented.
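
For what it's worth, something like this ought to show whether fragmentation
really is the culprit (Data_free is the allocated-but-unused space MyISAM
reports; the table name is from the structure dump below):

SHOW TABLE STATUS LIKE 'soldierMain';  -- check the Data_free and Row_format columns
OPTIMIZE TABLE soldierMain;            -- supposed to defragment the data file in place
SHOW TABLE STATUS LIKE 'soldierMain';  -- Data_free should drop back toward 0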

Thanks again to all. This is very educational.

Dan

> -----Original Message-----
> From: Brian McCain [mailto:[EMAIL PROTECTED]
> Sent: Thursday, March 27, 2003 7:22 PM
> To: Dan Wright; [EMAIL PROTECTED]
> Subject: Re: Slow Inserts
>
>
> I had a very similar problem a couple of weeks ago, although in that instance
> I was using MySQL 3.23. But in any case, I had a Perl script that was
> inserting thousands of records in chunks, looking up an id based on a name
> for each record. It would get progressively slower and slower the longer it
> ran. I found that adding a key to the name used in the lookup did, indeed,
> slow down inserts a little bit, but it sped up the lookup exponentially, and
> as it turns out, that's what was causing the slowdown.
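>
> The change itself was just one statement, something along these lines (the
> table and index names are placeholders for what I really had):
>
> ALTER TABLE records ADD INDEX idx_name (name);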
>
> Brian McCain
>
> ----- Original Message -----
> From: "Dan Wright" <[EMAIL PROTECTED]>
> To: "Victor Pendleton" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
> Sent: Thursday, March 27, 2003 1:50 PM
> Subject: RE: Slow Inserts
>
>
> > I've tried it both as fixed (char) and variable (varchar). Interestingly,
> > when I set it as char when building the table, MySQL changes it to varchar
> > sometimes (but not always).
> >
> > Here's a structure dump:
> > CREATE TABLE soldierMain (
> >   id int(20) NOT NULL auto_increment,
> >   timeadded varchar(14) NOT NULL default '',
> >   lastupdate timestamp(14) NOT NULL,
> >   name varchar(50) default NULL,
> >   email varchar(40) NOT NULL default '',
> >   status tinyint(1) NOT NULL default '0',
> >   PRIMARY KEY  (id)
> > ) TYPE=MyISAM;
> >
> > Here's what it looked like when I ran the import:
> >
> > CREATE TABLE soldierMain (
> >   id int(20) NOT NULL auto_increment,
> >   timeadded varchar(14) NOT NULL default '',
> >   lastupdate timestamp(14) NOT NULL,
> >   name char(50) default NULL,
> >   email char(40) NOT NULL default '',
> >   status tinyint(1) NOT NULL default '0',
> >   PRIMARY KEY  (id)
> > ) TYPE=MyISAM;
> >
> >
> > Incidentally - I waited a long time to post my own issue to this list and
> > I'm quite pleased by the responsiveness and ideas I'm getting. Thanks to
> > all who are considering the issues I'm having.
> >
> > Dan
> >
> > -----Original Message-----
> > From: Victor Pendleton [mailto:[EMAIL PROTECTED]
> > Sent: Thursday, March 27, 2003 4:40 PM
> > To: 'Dan Wright'; [EMAIL PROTECTED]
> > Subject: RE: Slow Inserts
> >
> >
> > What does the table DDL look like? Is the table a fixed or dynamic format?
> >
> > -----Original Message-----
> > From: Dan Wright [mailto:[EMAIL PROTECTED]
> > Sent: Thursday, March 27, 2003 11:08 AM
> > To: [EMAIL PROTECTED]
> > Subject: Slow Inserts
> >
> >
> > I've been having trouble with some large tables that seem to be getting
> > corrupted.
> >
> > Here's the situation:
> > I have several tables that have 3 million to as many as 7 million records.
> > I have a process that I run against those tables that pulls out a record
> > based on specific criteria (select id,name from table where name !='' and
> > status=0), does something in Perl, and then changes the record it just
> > pulled to a status of 9. So basically - every time I run this process, every
> > record is scanned and many (90%) are changed to a status of 9.
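> >
> > In SQL terms the cycle is roughly this (the Perl step is omitted and the id
> > value is just an example):
> >
> > SELECT id, name FROM soldierMain WHERE name != '' AND status = 0;
> > -- ... per-record work happens in Perl ...
> > UPDATE soldierMain SET status = 9 WHERE id = 42;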
> >
> > Well - on a newly imported list, it screams through and I can get upwards
> > of 3 million per second. Each time I run the process, however, it gets
> > slower.
> >
> > I've used myisamchk and optimize table and neither seems to have any effect
> > on the performance. The only thing that seems to work is mysqldumping the
> > whole table, dropping it, and reimporting it.
> >
> > I've read up on the site and have found a lot about what could be causing
> > this and have tried many things. Now that I've found what's wrong and how
> > to fix it, I'm happy, but I'd rather not have to dump and reimport.
> >
> > I'm running MySQL 4.0.11 on RHL 7.2. I'm using the "huge" my.cnf file with
> > some minor tweaks. The tables I'm speaking of have no indexes on them. They
> > had them, but I dropped them, and that gave me a huge insert performance
> > gain, but I'm still seeing slowdowns the more I run the process on the table.
> >
> > Thanks in advance,
> > Dan
> >

