Re: Altering InnoDB tables
John, did you configure innodb_buffer_pool_size and the InnoDB log files big enough? Look at the online manual for the recommended sizes.

If the table is very big, then UNIQUE secondary keys will cause a lot of disk i/o, because a disk read is required to check the constraint for each row, and the insert buffer cannot be used. Hmm... a fix would be to disable UNIQUE checking during the conversion. I could add an option where you can switch the UNIQUE checking off in my.cnf. That option would be used during big conversion procedures, if you already know that the data satisfies the constraint. Ok, I will introduce the my.cnf option innodb_no_unique_checks in 3.23.4x. Thank you for bringing this up :).

About Robert Ricci's question: SHOW TABLE STATUS FROM ... reports FOREIGN KEY constraints in the table comment section.

Best regards,

Heikki Tuuri
Innobase Oy
---
Order technical MySQL/InnoDB support at https://order.mysql.com/
See http://www.innodb.com for the online manual and latest news on InnoDB

John Kemp wrote in message ...
> I am also in the process of altering tables from MyISAM to InnoDB. I have
> one big problem - a very large table (5 million rows) with 5 indexes on it,
> including a couple that are UNIQUE but with NULL values allowed. Creating
> the last of these indexes is taking an absurd amount of time on my test
> system. [snip]

Robert P. Ricci wrote:
> I've recently converted from using MyISAM to InnoDB tables, and am pleased
> with InnoDB so far. However, the inability to use 'ALTER TABLE' with InnoDB
> tables is very problematic, as our software is still under heavy development
> and our database schema changes frequently. [snip]
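Heikki's suggested conversion workflow, once the proposed option exists, might look roughly like this. This is a sketch only: the option name is the one he proposes above for 3.23.4x, and the database/table names are taken from John's schema for illustration.

```sql
-- my.cnf for the duration of the conversion (proposed option):
--   [mysqld]
--   innodb_no_unique_checks
--
-- With UNIQUE checking off, the conversion no longer needs a disk
-- read per row to verify the UNIQUE secondary keys, so the straight
-- ALTER becomes feasible (3.23 syntax; only safe if the data is
-- already known to satisfy the constraints):
ALTER TABLE Subscriber TYPE = InnoDB;

-- Robert's foreign-key question: the constraints appear in the
-- Comment column of:
SHOW TABLE STATUS FROM streetmail LIKE 'Subscriber';
```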
Re: Altering InnoDB tables
Heikki,

I didn't config the log that big, but the buffer pool is using 1GB of memory, and yes, the UNIQUE index is definitely causing a lot of disk I/O! I will be very happy to see your change, and thanks for (as usual) being so quick to make changes to an already very welcome addition to MySQL.

I'll add this - we're running several tables that have 5-10 million rows, plus one logging table with 56 million rows, on a web-based system that is used by 30 people at our company editing our newsletters (which are stored in the database), and by almost 1 million email subscribers, who receive 3 database-driven emails every week and use our similarly data-driven website. So MySQL is dealing with a lot of roughly simultaneous requests. Row-level locking has improved the performance massively, and made my internal and external customers happier, which makes my day better (fewer complaints!)

Thanks again,

John Kemp
Director, Software Development
Streetmail

Heikki Tuuri wrote:
> John, did you configure innodb_buffer_pool_size and the InnoDB log files
> big enough? Look at the online manual for the recommended sizes. If the
> table is very big, then UNIQUE secondary keys will cause a lot of disk
> i/o, because a disk read is required to check the constraint for each
> row, and the insert buffer cannot be used. [snip]
RE: Altering InnoDB tables
Hi,

I also experienced the same problem converting a 2 million+ row table from MyISAM to InnoDB. It took 3 days! I had highlighted the problem to Heikki (thanks Heikki for the response previously :)). In my case, there was no UNIQUE KEY on the table, just an index on one of the varchars. However, common to both schemas in your email and mine was a primary key that happened to be an auto_increment:

  SubscriberID int(11) NOT NULL auto_increment,
  PRIMARY KEY (SubscriberID)

  msgid bigint(20) unsigned NOT NULL auto_increment,
  PRIMARY KEY (msgid)

Could this be a problem?

Anyway, to get around this long conversion process, I performed the conversion on a table on a slave of the master MySQL instance, then let replication update this new InnoDB table. On the next scheduled downtime of that particular service, I switched this replicated InnoDB table over to the master, and then moved the original MyISAM table to be the slave. I'm happy to report that I experienced the same performance improvements with InnoDB and its row-level locking. Needless to say, the users of the service are a lot happier :)

Next thing would be to work on another table, but this one is about 100+ GB so it might take a while :) Any comments?

Cheers,
Geoffrey
__
Geoffrey Soh, Software Architect
Ufinity - http://www.ufinity.com
Leading Enterprise Access Management Software!
9 Scotts Road, Pacific Plaza, #06-01, Singapore 228210
Tel : +65 830-0341  Fax : +65 737-0213
__

-----Original Message-----
From: John Kemp [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 15, 2002 11:58 PM
To: Heikki Tuuri
Cc: [EMAIL PROTECTED]
Subject: Re: Altering InnoDB tables

> Heikki, I didn't config the log that big, but the buffer pool is using
> 1GB of memory, and yes, the UNIQUE index is definitely causing a lot of
> disk I/O! [snip]
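Geoffrey's slave-side conversion can be sketched as follows. The table name and the exact swap mechanics are illustrative assumptions, not from his message; only the overall procedure (convert on the slave, let replication catch up, swap during downtime) is his.

```sql
-- On the SLAVE, while replication is running:
ALTER TABLE maillog TYPE = InnoDB;   -- slow, but the master stays up;
                                     -- replication then keeps the new
                                     -- InnoDB copy current

-- During the scheduled downtime:
-- 1. Stop writes on the master (e.g. FLUSH TABLES WITH READ LOCK).
-- 2. Let the slave catch up, then promote its InnoDB table to the
--    master (dump-and-load, or swap the roles of the two servers).
-- 3. Keep the original MyISAM copy as the new slave.
```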
Altering InnoDB tables
I've recently converted from using MyISAM to InnoDB tables, and am pleased with InnoDB so far. However, the inability to use 'ALTER TABLE' with InnoDB tables is very problematic, as our software is still under heavy development and our database schema changes frequently. I'm working on a way to automate the drop/create/repopulate process to alter tables, but am running into a few problems. First, is there a way to prevent clients from noticing that the table in question is temporarily gone? Something like locking the entire database for writing? Second, mysqldump doesn't give foreign key constraints in its table creation commands. Is there any way to retrieve these?

--
Robert P. Ricci <[EMAIL PROTECTED]> | <[EMAIL PROTECTED]>
Research Associate, University of Utah Flux Group
www.cs.utah.edu/flux | www.emulab.net

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/           (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
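One way to sketch the automated drop/create/repopulate that Robert describes, with writers held off during the copy. This is a rough outline only, under the assumption that every table touched can be write-locked; table names are illustrative, and there is a brief unprotected window between the unlock and the rename on old servers.

```sql
-- Build the replacement first, then lock only for the copy window.
CREATE TABLE mytable_new (id INT NOT NULL PRIMARY KEY) TYPE = InnoDB;  -- new schema here

LOCK TABLES mytable WRITE, mytable_new WRITE;  -- other clients block on
                                               -- the lock instead of
                                               -- seeing a missing table
INSERT INTO mytable_new SELECT * FROM mytable;
UNLOCK TABLES;

-- Swap names (note the brief window between unlock and rename):
ALTER TABLE mytable RENAME TO mytable_old;
ALTER TABLE mytable_new RENAME TO mytable;
```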
Re: Altering InnoDB tables
I am also in the process of altering tables from MyISAM to InnoDB. I have one big problem - a very large table (5 million rows) with 5 indexes on it, including a couple that are UNIQUE but with NULL values allowed. Creating the last of these indexes is taking an absurd amount of time on my test system, which makes it impossible for me to recommend this change for production, as our systems would be down for many days. Here are the steps I took (note that after each failure, I recreated the original data structure):

1) I tried altering the table directly to be InnoDB. I stopped this after 4 days, when all it said in the process list was 'copying to tmp table' - after 4 days!

2) Then I tried mysqldumping the table structure, changing MyISAM -> InnoDB, dropping the table, and running the mysqldump output back into the database. The INSERTs ran horribly slowly - something like 50,000 inserts in 4 hours. This extrapolates to a query that would run for several weeks on 5 million rows, so I stopped it again.

3) Then I tried dropping all the indexes, altering the table to InnoDB, and re-creating the indexes. This worked pretty quickly until the final index creation, which is still running after more than 4 days.

What am I doing wrong? This seems like an unacceptable length of time to convert a valid MyISAM table to InnoDB, yet I REALLY need the row-level locking ASAP! Any help would be appreciated. Table structure is shown below, as is the current mysqladmin processlist.
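The third attempt above can be sketched as follows; the statements are reconstructed from the description and the index names in the schema below (3.23 syntax), not copied from the actual session.

```sql
-- Approach 3: shed the secondary indexes, convert, then rebuild them.
ALTER TABLE Subscriber DROP INDEX AK1Subscriber,
                       DROP INDEX XAK2Subscriber,
                       DROP INDEX xie2Subscriber,
                       DROP INDEX xie3Subscriber;

ALTER TABLE Subscriber TYPE = InnoDB;   -- fast once only the PK remains

ALTER TABLE Subscriber ADD INDEX xie2Subscriber (optin, Gender, DOB),
                       ADD INDEX xie3Subscriber (Zipcode);

-- The UNIQUE indexes are the slow ones: each row needs a disk read to
-- verify the constraint, and the insert buffer cannot be used.
ALTER TABLE Subscriber ADD UNIQUE INDEX AK1Subscriber (Email);
ALTER TABLE Subscriber ADD UNIQUE INDEX XAK2Subscriber (Username);
```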
Thanks,
John

CREATE TABLE Subscriber (
  SubscriberID int(11) NOT NULL auto_increment,
  Email varchar(255) NOT NULL default '',
  Zipcode varchar(5) default NULL,
  InitialContactDate date NOT NULL default '0000-00-00',
  FirstName varchar(20) default NULL,
  LastName varchar(20) default NULL,
  OrigDate date NOT NULL default '0000-00-00',
  AlertWhenAvailable char(1) default NULL,
  Gender char(1) default NULL,
  DOB date default NULL,
  Password varchar(25) default NULL,
  SubscriberTypeID mediumint(9) default NULL,
  Attempts tinyint(4) NOT NULL default '0',
  Username varchar(255) default NULL,
  Address varchar(255) default NULL,
  City varchar(100) default NULL,
  State char(2) default NULL,
  Married char(1) default NULL,
  Age varchar(5) default NULL,
  Income varchar(20) default NULL,
  optin int(11) default NULL,
  Country char(2) default 'US',
  SourcePartnerID int(11) default '0',
  Occupation char(2) default NULL,
  PRIMARY KEY (SubscriberID),
  UNIQUE KEY AK1Subscriber (Email),
  UNIQUE KEY XAK2Subscriber (Username),
  KEY xie2Subscriber (optin,Gender,DOB),
  KEY xie3Subscriber (Zipcode)
) TYPE=MyISAM;

__
+----+-------+-----------+------------+---------+--------+-------------------+-------------------------------------------------------------------+
| Id | User  | Host      | db         | Command | Time   | State             | Info                                                              |
+----+-------+-----------+------------+---------+--------+-------------------+-------------------------------------------------------------------+
| 87 | jkemp | localhost | streetmail | Query   | 250195 | copy to tmp table | alter table Subscriber add unique index XAK2Subscriber (Username) |
| 95 | jkemp | localhost |            | Query   | 0      |                   | show processlist                                                  |
+----+-------+-----------+------------+---------+--------+-------------------+-------------------------------------------------------------------+
__

Robert P. Ricci wrote:
> I've recently converted from using MyISAM to InnoDB tables, and am pleased
> with InnoDB so far. However, the inability to use 'ALTER TABLE' with InnoDB
> tables is very problematic, as our software is still under heavy development
> and our database schema changes frequently. [snip]
Re: Altering InnoDB tables
Thus spake John Kemp on Mon, Jan 14, 2002 at 02:37:07PM -0500:
> 2) Then I tried mysqldumping the table structure, changing MyISAM ->
> InnoDB, dropping the table, and running the mysqldump output back into
> the database. The INSERTs ran horribly slowly - something like 50,000
> inserts in 4 hours. This extrapolates to a query that would run for
> several weeks on 5 million rows, so I stopped it again.

Did you try putting 'set autocommit=0;' at the top of your dump, and 'commit;' at the bottom? That may help, though it sounds like it's the index creation, and not any transaction overhead, that's causing you trouble. Luckily, in my case, our database is sufficiently small (on the order of tens of thousands of rows spread across ~50 tables) that this isn't really an issue.

--
Robert P. Ricci <[EMAIL PROTECTED]> | <[EMAIL PROTECTED]>
Research Associate, University of Utah Flux Group
www.cs.utah.edu/flux | www.emulab.net
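Robert's suggestion amounts to wrapping the entire replay of the dump in one transaction, so InnoDB commits once at the end instead of once per INSERT. A minimal sketch of the edited dump file:

```sql
-- Prepended to the mysqldump output before replaying it:
SET autocommit = 0;

-- ... the dump's CREATE TABLE and INSERT statements run here ...

-- Appended at the end: a single commit instead of one per INSERT.
COMMIT;
```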