Hi Frank,

You actually got me doubting here. We don't use the mysql client, but I made sure that autocommit was turned off.

I double-checked (with select count(*) from smalltest) that the inserts were in fact committed in chunks of 100,000 and not one by one, and they were.
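
(For anyone wanting to repeat that check: a minimal sketch, assuming the MySQLdb driver; host and credentials are made up. Run it from a second connection while the import is going; with chunked commits the count jumps in steps of 100,000 instead of growing row by row.)
______________________________________________________

import time
import MySQLdb

# Watch the row count from a second connection while the import runs.
conn = MySQLdb.connect(host="localhost", user="test",
                       passwd="secret", db="test")
conn.autocommit(True)   # fresh snapshot for every query
cur = conn.cursor()
while True:
    cur.execute("SELECT COUNT(*) FROM smalltest")
    print(cur.fetchone()[0])
    time.sleep(5)
______________________________________________________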

We still see exactly the same results. And yes, innodb_flush_log_at_trx_commit is set to 0.

We ran new tests committing every 100,000 rows instead of every 10,000, and it made no major change in behaviour.

Thanks for the hint anyway.

Luc

PS:
Still, I'm wondering whether anybody else has seen this. Surely this isn't the biggest table anyone has imported into InnoDB?



Dr. Frank Ullrich wrote:

Luc,

do you use the mysql client for the insert operations?
And is autocommit set to "yes"?
Then the answer is:
turn off autocommit mode and commit every <N> rows, where N is high, but not so high that InnoDB's transaction handling resources grow too large.

Commit every 100,000 rows, for example.

This speeds up the whole thing a lot because there is no need for a disk flush after every single insert. "commit" means "the data are on disk for sure now" (or within a few seconds, if you set innodb_flush_log_at_trx_commit to 0 or 2 instead of 1).
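
In Python, with the MySQLdb driver, the pattern looks roughly like this (a sketch only; generate_rows() is a made-up stand-in for your real data source):
______________________________________________________

import MySQLdb

def generate_rows():
    # hypothetical stand-in for the real data source
    for i in range(1000000):
        yield (i, "name%d" % i, "model%d" % i, "desc", "2004-01-01", 1.0, 2.0)

conn = MySQLdb.connect(host="localhost", user="test",
                       passwd="secret", db="test")
conn.autocommit(False)        # make sure autocommit is off
cur = conn.cursor()

CHUNK = 100000
for i, row in enumerate(generate_rows()):
    cur.execute("INSERT INTO smalltest (id, name, model, description,"
                " lastupdate, price, cost)"
                " VALUES (%s, %s, %s, %s, %s, %s, %s)", row)
    if (i + 1) % CHUNK == 0:
        conn.commit()         # one disk flush per 100,000 rows
conn.commit()                 # commit the last partial chunk
______________________________________________________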



Regards, Frank.


Luc Charland wrote:

We are evaluating the replacement of a Sybase database with MySQL. The databases are 60+GB, containing more than 100 tables.

Since we need transactions, that implies InnoDB. We were happy with the early results, but we hit a major roadblock when trying to import the biggest table (20+GB, with 4 indexes).

We have reproduced the problem with a simpler table on many different servers and MySQL versions (4.X).

At first, we easily insert 1,600+ rows per second. As the number of rows grows, performance deteriorates (which I can understand), but eventually it gets so slow that the import would take weeks.

Running vmstat on the server shows that after a certain limit is reached (indexes bigger than total memory?), mysqld starts reading as much as it writes, and CPU usage goes down as I/O eventually reaches the maximum the server can sustain.

If you wait long enough, you get fewer than 50 rows per second (30+ times slower than the first few million inserts).

We have done the same tests on Sybase and another database on the same machines and have not seen this behavior, so it is not hardware related.

We have done the same import into a MyISAM table and have not seen any slowdown (all the data was imported very fast, even though we then had to wait a very long time, 5+ hours, for the indexes to be rebuilt afterwards).

We have tried converting the MyISAM table to InnoDB (same problem occurs). We have tried importing from the MyISAM table into an empty InnoDB table (same problem occurs).

SETUP:
We have of course changed the following:
innodb_buffer_pool_size = (50% to 80% of total RAM)
innodb_log_file_size = (20% to 40% of total RAM)
we have tried different innodb_flush_method settings
we have tried innodb_flush_log_at_trx_commit set to 0 and 1
we have tried ibdata1:1G:autoextend, and also making the data file big enough that all the data fits without autoextending
we have tried creating the indexes after the inserts instead of before, but as the documentation says, it is no better
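
For reference, the relevant section of my.cnf might look roughly like this (a sketch only; the values are illustrative, for a machine with 4 GB of RAM, and not the exact ones from our tests):
______________________________________________________

[mysqld]
# illustrative values for a 4 GB machine
innodb_buffer_pool_size = 2048M          # 50% of total RAM
innodb_log_file_size = 1024M             # 25% of total RAM
innodb_flush_log_at_trx_commit = 0       # flush the log about once a second
innodb_flush_method = O_DSYNC            # one possible setting
innodb_data_file_path = ibdata1:1G:autoextend
______________________________________________________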


Is there an upper limit to the size of the indexes of a single table in InnoDB?

Has anybody else seen this kind of slowdown with big InnoDB tables?

Here is a small table that reproduces the problem (if you do 5 to 15 million inserts). We wrote a couple of programs (one in C++, one in Python) that generate random data and insert it into the database; a sketch along those lines follows the table definition below.
______________________________________________________


"create table smallest ( id int primary key, name varchar(80), model char(20)
, description varchar(255), lastupdate date, price decimal(8,2), cost decimal(8,2))
type=innodb"


"create unique index smallcomplex on smalltest (model, id, name)"
"create index smallprice on smalltest (price)"
"create index smallcost on smalltest (cost)"
"create index smallname on smalltest (name)"
______________________________________________________
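
A sketch of the random-data loader (an illustration written after the fact, assuming the MySQLdb driver, not the exact program we used):
______________________________________________________

import random
import string
import MySQLdb

def random_string(n):
    # random fixed-length ASCII string
    return "".join(random.choice(string.ascii_letters) for _ in range(n))

conn = MySQLdb.connect(host="localhost", user="test",
                       passwd="secret", db="test")
cur = conn.cursor()

CHUNK = 100000
for i in range(15000000):        # 5 to 15 million rows trigger the slowdown
    cur.execute("INSERT INTO smalltest (id, name, model, description,"
                " lastupdate, price, cost)"
                " VALUES (%s, %s, %s, %s, CURDATE(), %s, %s)",
                (i,
                 random_string(40),                  # name
                 random_string(20),                  # model
                 random_string(100),                 # description
                 round(random.uniform(1, 9999), 2),  # price
                 round(random.uniform(1, 9999), 2))) # cost
    if (i + 1) % CHUNK == 0:
        conn.commit()
conn.commit()
______________________________________________________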

Thanks for any help.

Luc Charland




