Greg,

I think mysqldump and import work the same way for all table handlers
in MySQL. But to get maximum performance, there are some optimizations
available.

Andreas Vierengel measured that setting autocommit=0 makes the import
much faster for Innobase. You then have to issue a COMMIT after you have
imported the table.
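A minimal sketch of such an import session in the mysql client (the dump
file name here is hypothetical):

```sql
-- Run the whole import as one transaction instead of one per INSERT
SET autocommit=0;

-- Hypothetical dump file produced by mysqldump
SOURCE /tmp/mytable_dump.sql;

-- Commit once at the end of the import
COMMIT;
```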

You should also make the Innobase log files very big, say 150 MB, to
reduce checkpointing and disk i/o during the import.
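For example, in my.cnf the log size is set with the Innobase startup
options; a sketch (check the exact option names against the manual for
your MySQL version):

```ini
[mysqld]
innobase_log_file_size=150M
innobase_log_files_in_group=2
```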

Delayed index creation is technically not very difficult to implement,
but it probably comes after several other items on the TODO list.

Innobase already has an optimization, called insert buffering, which
speeds up insertions into secondary indexes when the insertions would
otherwise cause disk i/o. If a secondary index is non-unique, Innobase
does not need to read the index page in to check a uniqueness
criterion. Instead, it inserts the secondary index record into a
main-memory cache, from which the insertions are flushed to disk in
batches, saving a lot of disk i/o.
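To illustrate with a hypothetical table definition: insert buffering can
be used for the non-unique index on customer below, but not for the
unique index on email, whose page must be read to check uniqueness:

```sql
CREATE TABLE orders (
    id       INT NOT NULL PRIMARY KEY,
    customer INT,
    email    VARCHAR(100),
    INDEX (customer),       -- non-unique secondary index: insert buffering applies
    UNIQUE INDEX (email)    -- unique index: the page must be read, no buffering
) TYPE = INNOBASE;
```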

If the insertions are disk-bound, I have measured that insert buffering
can speed them up by a factor of about 15. It would be nice if someone
measured dump and import speed for real-world tables. Andreas measured
an import speed of about 1000 rows per second for a 150,000-row table
(see his posting in the thread "Re: Innobase in MySQL").

Regards,

Heikki

At 12:31 AM 3/17/01 +0000, you wrote:
>Michael Widenius wrote:
>> 
>> Hi!
>> 
>> Try:
>> 
>> mysqldump --tab=directory
>> 
>> This does basically what you want.
>> 
>> After that, it's up to Heikki to fix Innobase to do delayed creation
>> of indexes.
>
>It would be very handy if Innobase (and GEMINI when it comes along)
>were to support mysqldump in the standard way, as I assume it works as
>such, and I and many others would not have to change their backup scripts.
>Delayed index creation is very useful (in saving time) in larger DB
>loads via a mysqldump - Heikki, is this difficult?
>
>Thanks all for your work.
>
>Greg
>
>> 
>> Peter> At least it would be a standard way to quickly back up data and recover
>> Peter> it for all table handlers (backup probably does not work for all table
>> Peter> types yet)
>> 
>> Regards,
>> Monty
>> 
>


---------------------------------------------------------------------
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/           (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
