Slow Inserts

2003-03-27 Thread Dan Wright
I've been having trouble with some large tables that seem to be getting
corrupted.

Here's the situation:
I have several tables with anywhere from 3 million to 7 million records. I
have a process that I run against those tables that pulls out a record based
on specific criteria (select id,name from table where name !='' and
status=0), does something in Perl, and then changes the record it just pulled
to a status of 9. So basically, every time I run this process, every record
is scanned and most of them (about 90%) are changed to status 9.
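
For reference, a minimal SQL sketch of that loop, using the soldierMain
structure posted later in this thread (the per-id write-back is an assumption
about how the Perl side marks rows done):

  -- pull the unprocessed rows
  SELECT id, name FROM soldierMain WHERE name != '' AND status = 0;

  -- after the Perl work, mark each handled row as processed
  UPDATE soldierMain SET status = 9 WHERE id = 42;   -- 42 is a placeholder id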

Well, on a newly imported list it screams through that, and I can get
upwards of 3 million records per second. Each time I run the process,
however, it gets slower.

I've used myisamchk and OPTIMIZE TABLE, and neither seems to have any effect
on the performance. The only thing that seems to work is mysqldumping the
whole table, dropping it, and reimporting it.
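
For readers landing on this thread, the SQL-level equivalents of that
maintenance are roughly the following; as noted above they did not cure this
particular slowdown, but they are the usual first stop:

  -- verify the table is not actually corrupted
  CHECK TABLE soldierMain;

  -- rewrite the data file and sort the index (same idea as myisamchk -r)
  OPTIMIZE TABLE soldierMain;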

I've read up on the site, found a lot about what could be causing this, and
tried many things. Now that I've found a way to fix it, I'm happy, but I'd
rather not have to dump and reimport every time.

I'm running MySQL 4.0.11 on RHL 7.2. I'm using the huge my.cnf file with
some minor tweaks. The tables I'm speaking of have no indexes on them. They
used to, but dropping the indexes gave me a huge insert performance gain. I'm
still seeing slowdowns, though, the more I run the process on the file.
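
A middle ground between keeping the indexes and dropping them outright, on a
MyISAM table in 4.0, would be to switch the non-unique keys off during the
heavy insert/update pass and rebuild them in one go afterwards; a sketch
(this does not touch the primary key):

  -- skip non-unique index maintenance during the bulk work
  ALTER TABLE soldierMain DISABLE KEYS;

  -- ... run the import / update pass here ...

  -- rebuild the non-unique indexes in a single pass
  ALTER TABLE soldierMain ENABLE KEYS;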

Thanks in advance,
Dan


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow Inserts

2003-03-27 Thread Dan Wright
I've tried it both as fixed (char) and variable (varchar). Interestingly,
when I set it as char when building the table, MySQL sometimes (but not
always) changes it to varchar.

Here's a structure dump:
CREATE TABLE soldierMain (
  id int(20) NOT NULL auto_increment,
  timeadded varchar(14) NOT NULL default '',
  lastupdate timestamp(14) NOT NULL,
  name varchar(50) default NULL,
  email varchar(40) NOT NULL default '',
  status tinyint(1) NOT NULL default '0',
  PRIMARY KEY  (id)
) TYPE=MyISAM;

Here's what it looked like when I ran the import:

CREATE TABLE soldierMain (
  id int(20) NOT NULL auto_increment,
  timeadded varchar(14) NOT NULL default '',
  lastupdate timestamp(14) NOT NULL,
  name char(50) default NULL,
  email char(40) NOT NULL default '',
  status tinyint(1) NOT NULL default '0',
  PRIMARY KEY  (id)
) TYPE=MyISAM;
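
The sometimes-but-not-always behavior matches MySQL's documented silent
column changes: if any column in the table is variable length (here the
timeadded varchar(14)), CHAR columns longer than three characters are quietly
converted to VARCHAR and the whole row becomes dynamic. A sketch of one way
to keep the table in fixed (static) row format, at the cost of a larger data
file since each row always uses the full declared width, is to declare every
column with a fixed length:

CREATE TABLE soldierMain (
  id int(20) NOT NULL auto_increment,
  timeadded char(14) NOT NULL default '',  -- fixed length instead of varchar
  lastupdate timestamp(14) NOT NULL,
  name char(50) default NULL,
  email char(40) NOT NULL default '',
  status tinyint(1) NOT NULL default '0',
  PRIMARY KEY  (id)
) TYPE=MyISAM;

SHOW TABLE STATUS LIKE 'soldierMain';   -- Row_format should now say Fixed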


Incidentally, I waited a long time before posting my own issue to this list,
and I'm quite pleased by the responsiveness and ideas I'm getting. Thanks to
all who are considering the issues I'm having.

Dan

-Original Message-
From: Victor Pendleton [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 27, 2003 4:40 PM
To: 'Dan Wright'; [EMAIL PROTECTED]
Subject: RE: Slow Inserts


What does the table DDL look like? Is the table using a fixed or dynamic row
format?



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow Inserts

2003-03-27 Thread Dan Wright
I believe we have both already figured out that the index was at least part
of the problem, since MySQL needs to make more disk writes in that case. I
don't need fast search capability, so I didn't feel the need for the index,
and dropping it helped.
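
(If the repeated SELECT pass ever becomes the bottleneck instead, one hedged
middle ground would be a single small key on the column the query filters on,
accepting that every status flip then also writes to that index; the index
name below is made up:)

  -- lets SELECT ... WHERE status = 0 skip the rows already set to 9,
  -- at the cost of one extra index write per UPDATE
  ALTER TABLE soldierMain ADD INDEX idx_status (status);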

I've heard a bit from the group about the variable length field (varchar)
behavior when building the table. Is the suggestion here that I need to
solve that problem by simply making them all char so MySQL doesn't magically
convert the field types (thanks to Jennifer for bringing up that valuable
tidbit)? Is this why the table is getting slower and slower? Because it's
fragmenting the db?

If that's the case, then why doesn't myisamchk -r -f or OPTIMIZE TABLE fix
the problem? Since dumping and reloading fixes it, I'm thinking the table is
becoming fragmented.
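
One cheap way to test the fragmentation theory, for what it's worth, is to
compare what the server reports before and after a reload:

  -- Row_format shows Fixed vs Dynamic; Data_free is the dead space inside
  -- the data file that a reload (or a working OPTIMIZE TABLE) should reclaim
  SHOW TABLE STATUS LIKE 'soldierMain';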

Thanks again to all. This is very educational.

Dan

 -Original Message-
 From: Brian McCain [mailto:[EMAIL PROTECTED]
 Sent: Thursday, March 27, 2003 7:22 PM
 To: Dan Wright; [EMAIL PROTECTED]
 Subject: Re: Slow Inserts


 I had a very similar problem a couple of weeks ago, although in that
 instance I was using MySQL 3.23. But in any case, I had a Perl script that
 was inserting thousands of records in chunks, looking up an id based on a
 name for each record. It would get progressively slower the longer it ran.
 I found that adding a key to the name used in the lookup did, indeed, slow
 down inserts a little bit, but it sped up the lookup enormously, and as it
 turns out, the slow lookup was what was causing the slowdown.

 Brian McCain

 - Original Message -
 From: Dan Wright [EMAIL PROTECTED]
 To: Victor Pendleton [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Sent: Thursday, March 27, 2003 1:50 PM
 Subject: RE: Slow Inserts
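
A sketch of the kind of key Brian describes, indexing the column the
per-record lookup filters on (the table, column, and index names here are
placeholders, not from his actual schema):

  -- give the name lookup an index to use instead of a full table scan
  ALTER TABLE records ADD INDEX idx_name (name);

  SELECT id FROM records WHERE name = 'some name';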



RE: completely resetting root account

2003-03-25 Thread Dan Wright
There are many things short of wiping the hard drive that you can try.

I just had this happen and was able to find the answer quickly on
www.mysql.com

For example: http://www.mysql.com/doc/en/Resetting_permissions.html

Just read, step back, think, then try something.

You shouldn't need to rebuild everything though... this is a good learning
experience for you!
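
The general recipe behind that page is to start the server with
--skip-grant-tables so you can connect, put root's privileges back, and then
reload the grant tables. For the situation described below (privileges
stripped rather than a lost password), the SQL would look roughly like this;
the exact privilege columns vary a little between versions, so treat it as an
outline rather than a paste-and-run fix:

  -- restore full privileges for root in the grant tables
  UPDATE mysql.user
     SET Select_priv='Y', Insert_priv='Y', Update_priv='Y', Delete_priv='Y',
         Create_priv='Y', Drop_priv='Y', Reload_priv='Y', Shutdown_priv='Y',
         Process_priv='Y', File_priv='Y', Grant_priv='Y', References_priv='Y',
         Index_priv='Y', Alter_priv='Y'
   WHERE User='root';

  -- make the server re-read the grant tables (this also turns
  -- privilege checking back on)
  FLUSH PRIVILEGES;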

Dan

 Okay, so this question may not indicate I'm smart enough to administer
 MySQL and I should slowly back away from the keyboard, but here goes...

 When using phpMyAdmin, I attempted to duplicate the root account, which had
 access to everything. I accidentally selected the wrong radio button, which
 didn't make a duplicate root account; it just took away all privileges for
 the root user. Now I can't give myself any rights to any database.

 I've tried completely removing and reinstalling from the RPMs, but I still
 get the same errors. I set it to replace all files and packages, but I
 still don't have access to anything. I can't even change the root password
 because I don't have the privileges. I even receive the errors when using
 the command 'mysqld --skip-grant-tables -uroot'. It complains even worse if
 I don't add the '-uroot'.

 So my question is: short of wiping the partition and doing a completely
 clean install of EVERYTHING, what can I do to reset the privileges for the
 root user?

 I'm running Slackware 8.0 with the MySQL 3.23.49 RPMs from the website, and
 I have full access to everything (it's my home computer) on a Pentium III.
 I have never had a problem doing anything with MySQL on this machine, and
 it seems to run fine; I just can't do anything with it.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: mysqlimport: Error: Can't get stat of

2003-03-24 Thread Dan Wright
You're having a permissions problem.

From the Load data infile section of the manual:
For security reasons, when reading text files located on the server, the
files must either reside in the database directory or be readable by all.
Also, to use LOAD DATA INFILE on server files, you must have the FILE
privilege on the server host.

Try chmod'ing it to 666, or move it into the database directory the server
reads from.
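
Since mysqlimport is just a command-line wrapper around LOAD DATA INFILE,
another way around the server-side permission check is to have the client
read the file instead. A sketch mirroring the options in the command below,
assuming the server permits LOCAL (the -d / --delete flag from the original
command, which empties the table first, is left out here):

  -- LOCAL makes the client read the file, so the server never needs
  -- filesystem access to it (no more Errcode 13)
  LOAD DATA LOCAL INFILE '/fullpathto/thefile.SQL'
    INTO TABLE thefile
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\'';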

Dan


-Original Message-
From: James E Hicks III [mailto:[EMAIL PROTECTED]
Sent: Monday, March 24, 2003 2:48 PM
To: Mysql
Subject: mysqlimport: Error: Can't get stat of


Why am I getting this error? I have looked on google and it said to use
the full path when naming the import file. I have done this and still
get the error. What else could I be doing wrong?

#
mysqlimport -d --fields-optionally-enclosed-by=\' --fields-terminated-by=, \
    --user=userid --password=pword DB_2update /fullpathto/thefile.SQL
mysqlimport: Error: Can't get stat of '/fullpathto/thefile.SQL'
(Errcode: 13), when using table: thefile





James E Hicks III
Noland Company
2700 Warwick Blvd
Newport News, VA 23607
757-928-9000 ext 435
[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: where drink is not equal to pepsi

2003-03-20 Thread Dan Wright

 Hi guys,
 Hopefully an easy question for you guys.
 What's wrong with this query?

 Select * from Tablename where drinks is not = 'pepsi';

 Driving me batty...

select * from CaseSensitiveTablename where drink != 'pepsi';
or
select * from CaseSensitiveTablename where drink not like '%pepsi%';
(the second form is a looser, wildcard-style match)
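
One wrinkle worth noting: if the drink column can be NULL, neither form above
returns those rows, because a comparison with NULL is neither true nor false.
To include them as well:

select * from CaseSensitiveTablename where drink != 'pepsi' or drink is null;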

If that doesn't work - what kind of error are you getting?

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php