Hi!
I measured the following import speeds with the different forms of
LOAD DATA INFILE, which is the statement mysqlimport uses.
Since autocommit does not commit the transaction in the middle of
a LOAD DATA INFILE, it makes little difference whether one uses
LOW_PRIORITY, CONCURRENT, or neither.
Regards,
Heikki
http://www.innodb.com
..........
mysql> create table nt1 (a int, b int, c int) type = innodb;
Query OK, 0 rows affected (0.00 sec)
mysql> load data infile '/home/heikki/mysqlt/client/testdata' into table nt1;
Query OK, 100000 rows affected (2.32 sec)
Records: 100000 Deleted: 0 Skipped: 0 Warnings: 0
mysql> drop table nt1;
Query OK, 0 rows affected (0.06 sec)
mysql> create table nt1 (a int, b int, c int) type = innodb;
Query OK, 0 rows affected (0.00 sec)
mysql> load data low_priority infile '/home/heikki/mysqlt/client/testdata' into table nt1;
Query OK, 100000 rows affected (2.54 sec)
Records: 100000 Deleted: 0 Skipped: 0 Warnings: 0
mysql> drop table nt1;
Query OK, 0 rows affected (0.06 sec)
mysql> create table nt1 (a int, b int, c int) type = innodb;
Query OK, 0 rows affected (0.03 sec)
mysql> load data concurrent infile '/home/heikki/mysqlt/client/testdata' into table nt1;
Query OK, 100000 rows affected (2.29 sec)
Records: 100000 Deleted: 0 Skipped: 0 Warnings: 0
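A note on the timings above: because the whole LOAD runs as a single
InnoDB transaction under autocommit, wrapping it in an explicit
transaction should behave the same way. A minimal sketch, reusing the
same table and data file as in the timings above:

mysql> set autocommit = 0;
mysql> load data infile '/home/heikki/mysqlt/client/testdata' into table nt1;
mysql> commit;
mysql> set autocommit = 1;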
ryc writes:
> My application needs to insert between 200-500k rows about once a day...
> When using MyISAM tables I determined the best way to get this done (while
> still allowing users to perform select statements from the table) was to use
> mysqlimport with --low-priority. This way all the inserts get bundled up
> into large groups, and the low priority allows select statements to
> continue. However, I have switched the table type to InnoDB now and I am
> not sure what the best way to insert these rows would be. There is:
>
> a) continue using mysqlimport (without low-priority??)... the question about
> this is: will mysqlimport group the inserts into one large begin/commit
> block, or will each insert have its own block?
> b) create the begin/insert..../commit statements myself
>
> What way would be the fastest and least abrasive on the server?
>
> Another question I have is regarding memory usage... Will InnoDB use any
> of the key-buffer memory MySQL is using for MyISAM tables, or is the only
> memory InnoDB uses defined with innodb_buffer_pool_size?
>
> Thanks!!
>
> ryan
>Hi!
>Innobase uses its own memory and has nothing to do with MySQL's
>key_buffer, which is used for MyISAM only.
>Regarding inserts, the best way to accomplish this is without any
>transactions, but with multi-row inserts and with max_allowed_packet
>and net_buffer_length set to higher values.
>--
>Regards,
> __ ___ ___ ____ __
> / |/ /_ __/ __/ __ \/ / Mr. Sinisa Milivojevic <[EMAIL PROTECTED]>
> / /|_/ / // /\ \/ /_/ / /__ MySQL AB, FullTime Developer
>/_/ /_/\_, /___/\___\_\___/ Larnaca, Cyprus
> <___/ www.mysql.com
>
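As a rough sketch of the multi-row insert approach Sinisa describes
above (the row values and the my.cnf numbers are only illustrative
placeholders):

# in my.cnf, raise the buffers the multi-row statements will need:
# [mysqld]
# max_allowed_packet = 16M
# net_buffer_length  = 1M

mysql> insert into nt1 (a, b, c) values
    ->   (1, 2, 3),
    ->   (4, 5, 6),
    ->   (7, 8, 9);

Each such statement packs many rows into one INSERT, so the server
parses and (under autocommit) commits once per batch instead of once
per row.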