> I've tried it both as fixed (char) and variable (varchar). Interestingly,
> when I set it as char when building the table, MySQL changes it to
> varchar sometimes (but not always).
>
> Here's a structure dump:
> CREATE TABLE soldierMain (
>   id int(20) NOT NULL auto_increment,
>   timeadded varchar(14) NOT NULL default '',
>   lastupdate timestamp(14) NOT NULL,
>   name varchar(50) default NULL,
>   email varchar(40) NOT NULL default '',
>   status tinyint(1) NOT NULL default '0',
>   PRIMARY KEY  (id)
> ) TYPE=MyISAM;
>
> Here's what it looked like when I ran the import:
>
> CREATE TABLE soldierMain (
>   id int(20) NOT NULL auto_increment,
>   timeadded varchar(14) NOT NULL default '',
>   lastupdate timestamp(14) NOT NULL,
>   name char(50) default NULL,
>   email char(40) NOT NULL default '',
>   status tinyint(1) NOT NULL default '0',
>   PRIMARY KEY  (id)
> ) TYPE=MyISAM;
>
>
> Incidentally - I waited a long time to post my own issue to this list,
> and I'm quite pleased by the responsiveness and ideas I'm getting.
> Thanks to all who are considering the issues I'm having.
>
> Dan
>
> -----Original Message-----
> From: Victor Pendleton [mailto:[EMAIL PROTECTED]
> Sent: Thursday, March 27, 2003 4:40 PM
> To: 'Dan Wright'; [EMAIL PROTECTED]
> Subject: RE: Slow Inserts
>
>
> What does the table DDL look like? Is the table a fixed or dynamic
> format?
>
> -----Original Message-----
> From: Dan Wright [mailto:[EMAIL PROTECTED]
> Sent: Thursday, March 27, 2003 11:08 AM
> To: [EMAIL PROTECTED]
> Subject: Slow Inserts
>
>
> I've been having trouble with some large tables that seem to be getting
> corrupted.
>
> Here's the situation:
> I have several tables with anywhere from 3 million to as many as 7
> million records. I have a process that I run against those tables that
> pulls out records matching specific criteria (select id,name from table
> where name != '' and status=0), does something in Perl, and then changes
> each record it just pulled to a status of 9. So basically, every time I
> run this process, every record is scanned and most (90%) are changed to
> a status of 9.
>
> Well - on a newly imported list, it screams through, and I can process
> upwards of 3 million records per second. Each time I run the process,
> though, it gets slower.
>
> I've used myisamchk and OPTIMIZE TABLE, and neither seems to have any
> effect on the performance. The only thing that seems to work is
> mysqldumping the whole table, dropping the table, and reimporting the
> table.
>
> I've read up on the site, found a lot about what could be causing this,
> and tried many things. Now that I've found what's wrong and how to fix
> it, I'm happy, but I'd rather not have to dump and reimport.
>
> I'm running MySQL 4.0.11 on RHL 7.2. I'm using the "huge" my.cnf file
> with some minor tweaks. The tables I'm speaking of have no indexes on
> them. They had indexes, but I dropped them, and that gave me a huge
> insert performance gain; I'm still seeing slowdowns the more I run the
> process on the file, though.
>
> Thanks in advance,
> Dan
>
If a table contains any variable-length column (varchar, text, or blob),
MySQL silently converts all char columns longer than three characters to
varchar. That may be why you see it "sometimes" converting the fields.
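
As a quick sketch (the table and column names here are made up, not from
your schema), you can see both the silent conversion and the resulting
row format like this:

```sql
-- Hypothetical table: the presence of a varchar column makes the row
-- format dynamic, so MySQL silently converts char(10) to varchar(10).
CREATE TABLE demo (
  code char(10) NOT NULL default '',   -- silently becomes varchar(10)
  note varchar(100) default NULL
) TYPE=MyISAM;

-- See what MySQL actually stored, and whether the table ended up as
-- Fixed or Dynamic (the Row_format column answers Victor's question):
SHOW CREATE TABLE demo;
SHOW TABLE STATUS LIKE 'demo';
```

If every column in the table is fixed-length, the table stays in Fixed
format and no conversion happens, which is why the behavior looks
inconsistent across tables.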

William R. Mussatto, Senior Systems Engineer
Ph. 909-920-9154 ext. 27
FAX. 909-608-7061



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
