slow connections with 4.1.9

2005-01-26 Thread Keith Thompson
Since upgrading my Solaris9 server to MySQL 4.1.9 (previously 4.1.3), 
remote connections to it from my WinXP laptop have become very slow.

Remote connections using the mysql command-line tool (4.1.7 client), 
MySQL Query Browser (1.1.5) or JDBC connections with Connector/J 
(3.0.16) used to all connect immediately.  Now they take anywhere from 
10-30 seconds.  The Query Browser has also become very frustrating
because every query I execute freezes for 10-30 seconds before it
finally runs.

I have noticed that on the server during this long wait, my new 
connection appears immediately, but mysqladmin processlist shows the 
user as 'unauthenticated user' until the connection finally completes 
(and the user updates to the correct username).  So, I'm not 
having network problems getting to the server or anything like that.  
Also, the server is not low on memory, low on connections, is not 
producing any errors, etc.  Via Google I found a few occurrences of 
others with 'unauthenticated user' issues, but they all seem to involve 
lots of connections in this state.  In my case it's only one--the user 
very slowly connecting.
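
The delay can be measured from both ends while it happens; for example
(hostname and credentials here are placeholders):

   $ time mysql -h db3 -uxx -pyy -e "select 1"   # returns after 10-30s
   $ mysqladmin -h db3 -uroot -p processlist     # shows 'unauthenticated user'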

This has been happening since a recent server update to 4.1.9 and never 
happened previously with 4.1.3.  Also, it does not occur when making 
remote connections from the same PC to a 4.1.7 server.

Any suggestions?
-keith
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]


4.1.3 InnoDB corruption problems

2004-08-19 Thread Keith Thompson
We have been having constant index corruption problems since moving
from 4.0.17 to 4.1.3.  All tables are InnoDB and we're using
the file-per-table setting on Solaris9.

Every couple days another few tables will show up corrupt
and queries start missing things.  The database has never
gone down, there have never been any hard errors (drive failures,
etc.), no crashes, no messages in the .err file, etc.  It stays
up the whole time and just keeps getting corrupt tables.  I then
run CHECK TABLE, which reports them corrupt and puts messages in
the .err file saying that the indexes don't contain enough entries.
It is not always the same tables, although there are a few that
just seem to be doomed and are constantly having the problem.

The only way I know to fix the failing tables is to dump and
reload them.  Since many of these tables have several million
rows and are growing quickly, this problem is getting more and more
miserable every day.
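
For reference, the dump-and-reload cycle for one table looks roughly
like this (database and table names are placeholders):

   $ mysqldump --opt mydb badtable > badtable.sql   # --opt includes DROP/CREATE
   $ mysql mydb < badtable.sql
   $ mysql mydb -e "CHECK TABLE badtable"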

This never happened once with the same tables in 4.0.17, where
there was never a table corruption problem.  So, I'm having a hard
time believing it would have something to do with how we're using
our tables.

Has this happened to anyone else, or does anyone have a suggestion?

Thanks -keith







active new data with inactive old data

2004-08-12 Thread Keith Thompson
I have some large tables (hundreds of millions of rows) that are extremely
active.  They have heavy read and heavy modify activity.  But, all
modifications are only on recent records (data added within the last month
is very volatile, but data earlier than that never changes).

We use the InnoDB engine for these tables because of the high concurrency
(as well as the desire for transactions [potential rollback] on data changes
and inserts).

But, for all the data that's older than a month and never changes, having
it in huge InnoDB tables is very cumbersome (constantly re-backing up all
the old data that never changes, etc.).  Plus, the data from the past six
months is heavily accessed, but older data gets little use and is only
necessary as historic information, so it would be nice to pack it away
using MyISAM compressed tables.

Ideally, what I'd like to do is create packed MyISAM tables of older data,
possibly separated by quarter years, keep the last couple months in an InnoDB
table, and use a single Merge table to access them.  Obviously I can't do
that because Merge tables only span MyISAM tables.

My current plan is to put old data in packed MyISAM tables made accessible
with a single Merge table, put the new data in an InnoDB table, and put my
own frontend on it in my code to do UNIONs between the InnoDB and Merge
tables as necessary.  This is going to be a significant change in my code
(plus a bunch of work to setup the job of transferring data from InnoDB
to next MyISAM table at quarter-rollover time, etc.), so I'd like to know
if anyone else has done something similar and has some suggestions from
their experience.  Or, if there is simply a better approach in general,
then I'd love to hear your suggestions.
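
A sketch of that plan with hypothetical table names (the MERGE table
and its underlying MyISAM tables must share identical definitions):

   CREATE TABLE hist_2004q1 (id INT, ts DATETIME, val INT, KEY (ts)) TYPE=MyISAM;
   CREATE TABLE hist_2004q2 (id INT, ts DATETIME, val INT, KEY (ts)) TYPE=MyISAM;
   CREATE TABLE hist (id INT, ts DATETIME, val INT, KEY (ts))
       TYPE=MERGE UNION=(hist_2004q1, hist_2004q2);
   CREATE TABLE recent (id INT, ts DATETIME, val INT, KEY (ts)) TYPE=InnoDB;

   -- the frontend's cross-engine "merge" is then a UNION:
   SELECT * FROM hist   WHERE ts >= '2004-06-01'
   UNION ALL
   SELECT * FROM recent WHERE ts >= '2004-06-01';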

Better yet, is there a plan to be able to create Merge tables spanning
different engine types in the future?

-keith






Re: InnoDB 4.1.3: count(*) and number of rows does not match

2004-08-01 Thread Keith Thompson
Thanks Mike,

I've always ignored CHECK TABLE because I always thought it was
just for MyISAM.

I ran CHECK TABLE.  It told me that my table was corrupt.  I then
dumped the table with mysqldump and recreated it.  After that
CHECK TABLE said it was OK (and in comparing values with the master
server it appears to have all data intact as well).  Unlike CHECK
TABLE, REPAIR TABLE only works on MyISAM tables, so I can't use that.

Then, I decided to run CHECK TABLE on all my tables (which for the
ones with 125 million rows will probably be running for a while).
The problem now is that all of my larger tables are reported as
being corrupt--every single table with more than say 500,000
records is reported as corrupt.  Wow!  Could this be true?  The
tables all access fine and only these two smaller tables had these
count(*) mismatch problems (and were the only two smaller tables
that came up corrupt).

How did this happen?  I've never gotten an error in my .err file,
never had a hardware access failure in the system logs, and have
done very little with this server beyond initially loading it
(by replaying mysqldump output in the first place) and letting it
stay up to date with replication.

It's going to take me a week to reload these tables if they are
all corrupt.  Based on what little this system has done so far,
I wonder how long it will be before they're corrupt again.

Is there a faster way to fix these corruptions than to dump and
reload the tables?

-keith

Hi Marc,

Thanks for your response.  In answer to your questions, there are no
embedded newlines and I did look at index issues.  I did not try
rebuilding the index, which would be easy to do in this case since
the tables are small (unlike a couple of my other tables that have
125+ million rows and changing indexes is measured in days instead
of minutes).

I forgot to mention in my first message that the select count(*)
is the one that is wrong.  The full row count isn't too high; the
count(*) is too low.  Doing the count(*) on the exact same table
on a 4.0.17 system (the replication master) gives the correct
count.

-keith

Keith,
 Try doing a Check Table tablename.
See http://dev.mysql.com/doc/mysql/en/CHECK_TABLE.html

Mike







Re: InnoDB 4.1.3: count(*) and number of rows does not match

2004-08-01 Thread Keith Thompson
Hi Sergei,

I did come from 4.0.17 to 4.1.3 with a complete dump/reload.
Then, all changes after that forward came via replication.
I didn't read anything about corruption problems as a replication
slave.

-keith

Hi!

On Aug 01, Keith Thompson wrote:
 Thanks Mike,
 
 I've always ignored CHECK TABLE because I always thought it was
 just for MyISAM.
 
 Then, I decided to run CHECK TABLE on all my tables (which for the
 ones with 125 million rows will probably be running for a while).
 The problem now is that all of my larger tables are reported as
 being corrupt--every single table with more than say 500,000
 records is reported as corrupt.  Wow!  Could this be true?  The
 tables all access fine and only these two smaller tables had these
 count(*) mismatch problems (and were the only two smaller tables
 that came up corrupt).
 
 How did this happen?  I've never gotten an error in my .err file,
 never had a hardware access failure in the system logs, and have
 done very little with this server beyond initially loading it
 (by replaying mysqldump output in the first place) and letting it
 stay up to date with replication.

Just a thought - if you upgraded, be sure to read all changelog entries
carefully; there were a few bugfixes that require a dump/reload of
InnoDB tables (otherwise they'll be corrupted).
 
Regards,
Sergei

-- 
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /   Sergei Golubchik [EMAIL PROTECTED]
 / /|_/ / // /\ \/ /_/ / /__  MySQL AB, Senior Software Developer
/_/  /_/\_, /___/\___\_\___/  Osnabrueck, Germany
   ___/  www.mysql.com







Re: InnoDB 4.1.3: count(*) and number of rows does not match

2004-08-01 Thread Keith Thompson
Hey Mike,

Thanks again for your suggestions.

I understand (and agree) with your comments about a clean shutdown.
I'm always careful to do that.  In this case, the mysql server has
only been shut down a couple of times, and it was a clean shutdown in
each case.

You suggest running table checks daily.  That would be impossible
here.  The tables are large, there are several of them, and data
loads around the clock.  It takes several hours to do a CHECK TABLE
on any of the larger tables alone.  It would require quite a bit
of application change to load data in (and make it available from)
a staging area while the tables are being checked.  It may come down to
that, but it's certainly not a picture I like.

Thanks -keith

At 01:21 AM 8/1/2004, Keith Thompson wrote:
Thanks Mike,

I've always ignored CHECK TABLE because I always thought it was
just for MyISAM.

I ran CHECK TABLE.  It told me that my table was corrupt.  I then
dumped the table with mysqldump and recreated it.  After that
CHECK TABLE said it was OK (and in comparing values with the master
server it appears to have all data intact as well).  Unlike CHECK
TABLE, REPAIR TABLE only works on MyISAM tables, so I can't use that.

Then, I decided to run CHECK TABLE on all my tables (which for the
ones with 125 million rows will probably be running for a while).
The problem now is that all of my larger tables are reported as
being corrupt--every single table with more than say 500,000
records is reported as corrupt.  Wow!  Could this be true?

Yup. But it could just be that the table counts are off.

The tables all access fine and only these two smaller tables had these
count(*) mismatch problems (and were the only two smaller tables
that came up corrupt).

How did this happen?  I've never gotten an error in my .err file,
never had a hardware access failure in the system logs, and have
done very little with this server beyond initially loading it
(by replaying mysqldump output in the first place) and letting it
stay up to date with replication.

In my case, I can corrupt tables by not shutting down the server 
properly. For example, if the server crashes it can cause problems 
with the files, because the tables are not flushed prior to closing, 
so the counts can be off. Killing a process that is running REPAIR TABLE 
or OPTIMIZE TABLE can also lead to a corrupt table. And using 
--delay-key-write in your .cnf file can cause problems if you're not careful.

There is a Shutdown command you can execute to shut down the server that 
automatically flushes the tables and closes everything in an orderly 
fashion. http://dev.mysql.com/doc/mysql/en/Server_Shutdown.html
http://dev.mysql.com/doc/mysql/en/Crashing.html


It's going to take me a week to reload these tables if they are
all corrupt.  Based on what little this system has done so far,
I wonder how long it will be before they're corrupt again.

A week? That must be a lot of large tables. (Of course InnoDB takes longer 
than MyISAM tables to load.) I think everyone needs to write a check and 
repair script for their MySQL database. Run the check script daily to see 
if and when the error occurs, or twice a day if you're really paranoid. Then 
have a recovery procedure written down so someone can follow it. And of 
course log the error report in a file so you can try to determine what 
caused it and how often it occurs.

Is there a faster way to fix these corruptions than to dump and
reload the tables?

I'm not sure because I don't use InnoDB. I'm sure Heikki would know.


Mike


-keith

 Hi Marc,
 
 Thanks for your response.  In answer to your questions, there are no
 embedded newlines and I did look at index issues.  I did not try
 rebuilding the index, which would be easy to do in this case since
 the tables are small (unlike a couple of my other tables that have
 125+ million rows and changing indexes is measured in days instead
 of minutes).
 
 I forgot to mention in my first message that the select count(*)
 is the one that is wrong.  The full row count isn't too high; the
 count(*) is too low.  Doing the count(*) on the exact same table
 on a 4.0.17 system (the replication master) gives the correct
 count.
 
 -keith

 Keith,
  Try doing a Check Table tablename.
 See http://dev.mysql.com/doc/mysql/en/CHECK_TABLE.html

 Mike







InnoDB 4.1.3: count(*) and number of rows does not match

2004-07-31 Thread Keith Thompson
I just discovered that two of my tables (out of about 300)
show a very unusual behavior.  This is that select count(*) ...
and selecting all the rows and counting them do not produce
the same number.

This is on MySQL 4.1.3 on Solaris9.  Look at this:

$ mysql -e "select count(*) from pstat.plist"
+----------+
| count(*) |
+----------+
|    15315 |
+----------+
$ mysql -e "select * from pstat.plist" | wc -l
   15372

Actually, these counts shouldn't quite be the same.  The second
produces a header line that's getting counted, so it should be
one more than the count(*).  But, it's off by 56!  The other bad
table is off by 3.

My tables are all InnoDB.  The other 300 or so tables produce
counts that are correct.  These tables are all replicated from
a 4.0.17 server.  On that server all counts are correct and this
is not an issue.

This server has never crashed, had any replication problems
from the master, or had any hardware glitches that I've learned
about, so I see no reason to believe that a corruption
should've occurred at any time.  There are no errors in the
mysql .err log.  Stopping and restarting the server had no effect.
I will probably dump and reload the two tables later to see if that
corrects it, but no time for that now.

-keith






Re: InnoDB 4.1.3: count(*) and number of rows does not match

2004-07-31 Thread Keith Thompson
Hi Marc,

Thanks for your response.  In answer to your questions, there are no
embedded newlines and I did look at index issues.  I did not try
rebuilding the index, which would be easy to do in this case since
the tables are small (unlike a couple of my other tables that have
125+ million rows and changing indexes is measured in days instead
of minutes).

I forgot to mention in my first message that the select count(*)
is the one that is wrong.  The full row count isn't too high; the
count(*) is too low.  Doing the count(*) on the exact same table
on a 4.0.17 system (the replication master) gives the correct
count.

-keith

 I just discovered that two of my tables (out of about 300)
 show a very unusual behavior.  This is that select count(*) ...
 and selecting all the rows and counting them do not produce
 the same number.
 
 This is on MySQL 4.1.3 on Solaris9.  Look at this:
 
 $ mysql -e "select count(*) from pstat.plist"
 +----------+
 | count(*) |
 +----------+
 |    15315 |
 +----------+
 $ mysql -e "select * from pstat.plist" | wc -l
    15372
 
 Actually, these counts shouldn't quite be the same.  The second
 produces a header line that's getting counted, so it should be
 one more than the count(*).  But, it's off by 56!  The other bad
 table is off by 3.

First, have you verified there is no data in the table with embedded
newlines or some such?

Perhaps there is some index corruption.  Do an explain on the
count(*); it is likely doing an index scan.

Then try a select column_in_index_that_is_being_used from pstat.plist
and see if that returns the same as the count(*), or try doing the
select count(*) with an ignore index of whichever index it is using.

If it seems to be related to that one index, you could try dropping
and rebuilding the index.
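
Concretely, the checks above might look like this (the index name is
hypothetical):

   EXPLAIN SELECT COUNT(*) FROM pstat.plist;
   -- count via a column in whichever index EXPLAIN showed (assuming id is NOT NULL):
   SELECT COUNT(id) FROM pstat.plist;
   -- or bypass that index entirely:
   SELECT COUNT(*) FROM pstat.plist IGNORE INDEX (suspect_idx);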






Re: 4.0.17 to 4.1.3 connection problem

2004-07-29 Thread Keith Thompson
Wes,

I don't think this is the issue as my passwords are all still in
the old format.  Plus, the 4.0.16 client has no problem connecting.

Thanks -keith

Keith:

I don't know if it's the same problem, but I recently had issues where 
I had a similar setup with only two MySQL servers, one 4.0.20, the 
other 4.1.3, same usernames/passwords on each.

My solution, and I don't remember where in the manual I saw this 
(possibly the FAQ), was to do an update to the 4.1.3 mysql user table:
UPDATE user SET Password = OLD_PASSWORD( Password ) WHERE ... fill in 
with username/host/whatever

I believe this was because I was trying to connect to MySQL 4.1.x with 
a 4.0.x client. I don't know if that's your case as well.
http://dev.mysql.com/doc/mysql/en/Old_client.html

Wes
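
Note that OLD_PASSWORD() expects the plaintext password rather than the
stored hash, so the per-account form usually shown for 4.1 is
(user/host/password here are placeholders):

   SET PASSWORD FOR 'xx'@'%' = OLD_PASSWORD('yy');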


On Jul 28, 2004, at 6:21 PM, Keith Thompson wrote:

 I have a mysql connection problem that I'm trying to understand.

 The three servers and the version of mysql they are running (all under
 Solaris9) are:
   db1 - 4.0.16
   db2 - 4.0.17
   db3 - 4.1.3

 All three systems have the same mysql user and passwords setup.  I'll
 use the mythical user xx with password yy to explain here.
 Connections with the mysql client (using -uxx -pyy) from system to
 system all work except this one on db2:

 mysql -hdb3 -uxx -pyy
 ERROR 1045: Access denied for user 'xx'@'db2' (using password: YES)

 The mysql.user table entry has host=% and user=xx, so it's not simply
 an issue of a system-specific entry allowing one and not the other.
 Since db1 has no problem getting to db3, I wouldn't expect db2 to struggle.
 This same problem occurs with all users, so it is also not something
 specific to how this user is setup.

 Does anyone know why this would be happening?

 Is there something different in 4.0.17 (compared to 4.0.16) that
 prevents it from connecting to the 4.1.3 server?  I don't see anything in the
 4.0.17 change list specific to this.

 Thanks -keith
















4.0.17 to 4.1.3 connection problem

2004-07-28 Thread Keith Thompson
I have a mysql connection problem that I'm trying to understand.

The three servers and the version of mysql they are running (all under
Solaris9) are:
  db1 - 4.0.16
  db2 - 4.0.17
  db3 - 4.1.3

All three systems have the same mysql user and passwords setup.  I'll
use the mythical user xx with password yy to explain here.
Connections with the mysql client (using -uxx -pyy) from system to
system all work except this one on db2:

mysql -hdb3 -uxx -pyy
ERROR 1045: Access denied for user 'xx'@'db2' (using password: YES)

The mysql.user table entry has host=% and user=xx, so it's not simply
an issue of a system-specific entry allowing one and not the other.  Since
db1 has no problem getting to db3, I wouldn't expect db2 to struggle.
This same problem occurs with all users, so it is also not something
specific to how this user is setup.
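
One diagnostic worth running on db3 (standard statements, not a known
fix) is to check which account entries and password formats the clients
could be matching:

   SELECT user, host, LENGTH(password) FROM mysql.user WHERE user='xx';
   -- LENGTH 16 = old-format hash, 41 = new 4.1-format hash
   SHOW GRANTS FOR 'xx'@'%';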

Does anyone know why this would be happening?

Is there something different in 4.0.17 (compared to 4.0.16) that prevents
it from connecting to the 4.1.3 server?  I don't see anything in the
4.0.17 change list specific to this.

Thanks -keith






innodb_buffer_pool_size limit

2004-04-02 Thread Keith Thompson
Is there a limit on what innodb_buffer_pool_size can be set in 4.0.17
(64-bit Solaris 9)?

I tried setting it to 4G in my.cnf and SHOW VARIABLES says it's 0.

-keith






innodb_buffer_pool_size limit

2004-03-23 Thread Keith Thompson
Is there a limit on what innodb_buffer_pool_size can be set in 4.0.17
(64-bit Solaris 9)?

I tried setting it to 4G in my.cnf and SHOW VARIABLES says it's 0.

-keith






Variables for InnoDB only

2004-03-17 Thread Keith Thompson
All of my tables are now InnoDB only.  So, what I'd like to do is
reduce MySQL tunable variables down as much as possible for things
that don't affect InnoDB so that I'm not wasting memory on buffers
that will get little or no use.

It's obvious which variables are only for InnoDB (they start with
innodb_).  But, which of the others are never used by InnoDB?

For example, are the big ones like key_buffer_size, sort_buffer_size,
etc. for both InnoDB and MyISAM, or just MyISAM?

Thanks -keith






Two indexing questions

2004-02-26 Thread Keith Thompson
Given these two tables:

create table t1 (
   id int unsigned auto_increment,
   a int,
   ... [other fields]
   primary key (id),
   index aid (a,id)
) type=innodb;

create table t2 (
   id int unsigned,
   b int,
   ... [other fields]
   index id (id),
   index bid (b,id)
) type=innodb;

Using searches of the form:
   select *
   from t1, t2
   where t1.id = t2.id
   and t1.a = somevalue
   and t2.b = somevalue

Now, let's say that the data is such that the driving table is t2
(order of tables with EXPLAIN is t2, t1).

Can MySQL take advantage of the bid index to retrieve the id
for the join out of the index rather than pulling the data row,
or is there no advantage to using index bid (b,id) over just
using index bid (b) for this query?

Similarly, can MySQL use aid for this query to satisfy both the
join and the t1.a = somevalue comparison together when t1 is
not the driving table like this?  It appears to only want to use
the primary key for t1 for this query, which leads me to believe
that on non-driving tables the only index it can use is one to do
the join and that it can't use an index that could satisfy both
the join and another field comparison at the same time.
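
One way to probe both questions is to compare EXPLAIN output with and
without an index hint (the literal values are placeholders):

   EXPLAIN SELECT * FROM t1, t2
   WHERE t1.id = t2.id AND t1.a = 1 AND t2.b = 2;

   -- force consideration of the compound index on the non-driving table:
   EXPLAIN SELECT * FROM t1 USE INDEX (aid), t2
   WHERE t1.id = t2.id AND t1.a = 1 AND t2.b = 2;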

Thanks -keith






dollar amounts

2004-02-25 Thread Keith Thompson
What is the preferred way of storing a dollar amount in the range
0.00 - 9.99?

   double
   decimal(11,2)
   bigint  (storing value*100)
   ...?

I'm more interested in speed of use as an indexed column (especially
for range searches) than in disk space usage.
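
For reference, the three candidates as column definitions (table and
column names hypothetical):

   CREATE TABLE p1 (amt DOUBLE,          KEY (amt))   TYPE=InnoDB;
   CREATE TABLE p2 (amt DECIMAL(11,2),   KEY (amt))   TYPE=InnoDB;
   CREATE TABLE p3 (cents BIGINT,        KEY (cents)) TYPE=InnoDB;  -- value*100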

-keith






Changing innodb_log_file_size

2004-02-21 Thread Keith Thompson
Can I just do a clean shutdown of MySQL, change my configured
innodb_log_file_size, then restart?  Or is there more I need
to do to make sure the current log files are flushed to the
data files before changing their size?
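
For what it's worth, the procedure usually described for this (worth
double-checking against the manual for your version) is a clean
shutdown, then moving the old log files aside so InnoDB creates new
ones at the configured size:

   $ mysqladmin -uroot -p shutdown        # clean shutdown flushes the logs
   $ mv ib_logfile0 ib_logfile1 /backup/  # keep the old files until restart succeeds
   $ vi /etc/my.cnf                       # change innodb_log_file_size
   $ mysqld_safe &                        # InnoDB builds new log files on startup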

Thanks -keith






Interrupting a query?

2004-02-05 Thread Keith Thompson
I have a Java application that is connecting to a MySQL database
via JDBC (Connector/J 3.0.10 connecting to MySQL 4.0.17 server).
I would like to implement a time-limit on certain queries (all
SELECTs) and kill the query if it exceeds a given limit.  Is there
a convenient way to do this?

Maybe there is something obvious that I've overlooked (certainly
wouldn't be the first time), but I haven't hit on the right approach yet.

I have tried interrupting the thread doing the query from another
thread (Thread.interrupt()), but the InterruptedException doesn't
happen until the query eventually finishes (or until I do a mysqladmin
kill on the server).

I have tried closing the JDBC connection from another thread.  This
makes my JDBC connection drop out like I had hoped, but the query
remains running on the server.
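
Since closing the connection client-side leaves the query running, one
workaround (a sketch, not a Connector/J feature) is a watchdog that
kills the server-side thread:

   SELECT CONNECTION_ID();   -- record this when the worker connects
   -- later, from a separate watchdog connection, after the timeout:
   KILL 1234;                -- 1234 is the recorded id (placeholder)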

Suggestions?

Thanks -keith






InnoDB rollback

2004-02-02 Thread Keith Thompson
I just had a script fail unexpectedly and now I've got a couple
hundred thousand undo entries rolling back.  Yes, I realize that
I should be committing smaller groups--my bad.

I was really hoping to restart my server tonight to change some
server parameters, but a rollback like this will take many, many
hours (my only major pet peeve in InnoDB--rollbacks are WAY too slow).

Can I shut down the server during a big rollback, or will the
shutdown wait for the rollback to finish before completing
successfully?

-keith






InnoDB loading: add keys before or after

2004-01-22 Thread Keith Thompson
Hello all,

I need to load a new InnoDB table with about 80 million rows.
With MyISAM I have often found that it is faster to create
the table without indexes, add all the data, then add the
indexes.  Is this true with InnoDB as well, or should I put
my indexes in before loading?

Similarly, I have another large table that is currently MyISAM
that I'd like to move to InnoDB.  What would be the fastest
steps towards accomplishing this (as far as indexes, etc.)?

Thanks -keith







Re: InnoDB loading: add keys before or after

2004-01-22 Thread Keith Thompson
Heikki,

Thanks for your help.

I have another very large table to convert to InnoDB from MyISAM.
I also have the same table saved in a file suitable for 'load data'.
Which would be faster:

ALTER TABLE xxx TYPE=InnoDB;

or

CREATE TABLE newxxx ...; INSERT INTO newxxx select * from xxx;

or

CREATE TABLE newxxx ...; LOAD DATA INFILE ...

Thanks -keith


Hi!

You should always create the indexes BEFORE adding the data to an InnoDB
table.

In CREATE INDEX ... ON ..., MySQL rebuilds the whole table. Thus, it
will be much slower to add the indexes afterwards.

Many databases have an optimized index build procedure where adding an
index afterwards is faster, but that is not the case for InnoDB.

DISABLE KEYS has no effect on InnoDB.

It is in the TODO to speed up index creation. Maybe in 2005 it will be
faster to add the indexes afterwards :).

Best regards,

Heikki Tuuri
Innobase Oy
http://www.innodb.com
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables

Order MySQL technical support from https://order.mysql.com/



- Original Message - 
From: Mirza [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Thursday, January 22, 2004 11:04 AM
Subject: Re: InnoDB loading: add keys before or after


 In theory it is fastest to add indexes first, then disable them (ALTER
 TABLE x DISABLE KEYS), then add data and, at the end, re-enable keys
 (ALTER TABLE x ENABLE KEYS).
 mirza

 Keith Thompson wrote:
  Hello all,
 
  I need to load a new InnoDB table with about 80 million rows.
  With MyISAM I have often found that it is faster to create
  the table without indexes, add all the data, then add the
  indexes.  Is this true with InnoDB as well, or should I put
  my indexes in before loading?
 
  Similarly, I have another large table that is currently MyISAM
  that I'd like to move to InnoDB.  What would be the fastest
  steps towards accomplishing this (as far as indexes, etc.)?
 
  Thanks -keith








InnoDB key adding slowness

2004-01-20 Thread Keith Thompson
Hi all,

I decided I needed another index in an InnoDB table that has about
25 million rows (each 80 bytes long).  As I've done dozens of times
with MyISAM tables, I did a simple alter table:
alter table WMH_CHK_a add key JoinTrans (TransID)

This has been running for 14 hours and I have no idea how much
longer it's going to take, but it's getting darned frustrating
as it's preventing lots of other work.

This is not a complicated table and it is running on a 4-CPU Sun
server with a high-speed disk setup.  The server has 4GB memory
and I've got all of the InnoDB parameters set up at a moderate
level (such as a 1GB buffer_pool_size), etc.  It is otherwise not
very busy on other activity, so it should have most of the system's
resources available to it.

Any ideas what is taking this so long?  This would not have
taken more than a couple hours with a similar MyISAM table on
the same server (based on prior experience).

Thanks -keith






Re: InnoDB key adding slowness

2004-01-20 Thread Keith Thompson
I forgot to mention too that this is the 64-bit MySQL 4.0.17
running on Solaris9.

Hi all,

I decided I needed another index in an InnoDB table that has about
25 million rows (each 80 bytes long).  As I've done dozens of times
with MyISAM tables, I did a simple alter table:
   alter table WMH_CHK_a add key JoinTrans (TransID)

This has been running for 14 hours and I have no idea how much
longer it's going to take, but it's getting darned frustrating
as it's preventing lots of other work.

This is not a complicated table and it is running on a 4-CPU Sun
server with a high-speed disk setup.  The server has 4GB memory
and I've got all of the InnoDB parameters set up at a moderate
level (such as a 1GB buffer_pool_size), etc.  It is otherwise not
very busy on other activity, so it should have most of the system's
resources available to it.

Any ideas what is taking this so long?  This would not have
taken more than a couple hours with a similar MyISAM table on
the same server (based on prior experience).

Thanks -keith


