Hi all, we are trying to run the following query on a table that contains over 600 million rows:
    ALTER TABLE `typed_strengths`
      CHANGE `entity1_id` `entity1_id` int(10) UNSIGNED DEFAULT NULL FIRST;

The query takes ages to run (it has been going for over 10 hours now). Is this normal?

As a side issue, is MySQL suited to tables this big? I've seen a couple of case studies with MySQL databases of over 1.4 billion rows, but it isn't clear to me whether that figure refers to the whole database or to a single table.

We are running MySQL 4.1.12. The database sits on an HP ProLiant DL585 server with 2 dual-core Opterons and 12 GB of RAM, running Linux Fedora Core 3.

Thanks in advance for any responses,

-Christos Andronis
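
P.S. In case it helps with diagnosis: my understanding (an assumption on my part, not something I've verified against the 4.1 docs) is that this kind of ALTER TABLE rebuilds the table by copying every row into a temporary copy, so the operation can at least be watched from a second session with something like:

    -- In another mysql session: the State column for the ALTER thread
    -- should read something like "copy to tmp table" while the rebuild runs
    SHOW PROCESSLIST;

    -- Approximate row count and data size of the table being rebuilt,
    -- to get a feel for how much copying is involved
    SHOW TABLE STATUS LIKE 'typed_strengths';

If that assumption is right, 600 million rows being copied in full would go some way towards explaining the 10+ hours.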