I have tables with over 7 million records and originally had the same issue.
However, if you create indexes on those tables, on the columns you use in
your queries, it will GREATLY speed your queries up.

I am sure there is a more precise way to describe how you should create
indexes, but the MySQL online docs will help you figure out what is best
for your case.
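As a rough sketch (using the table and column from your ALTER statement
below; the lookup value is just an example):

    -- See what indexes already exist on the table
    SHOW INDEX FROM typed_strengths;

    -- Index a column that appears in your WHERE clauses
    CREATE INDEX idx_entity1_id ON typed_strengths (entity1_id);

    -- Check that the optimizer actually uses the new index
    EXPLAIN SELECT * FROM typed_strengths WHERE entity1_id = 12345;

One caveat: on a table of that size, adding the index will itself take a
while, since MySQL rebuilds the table to do it.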

Chris Hood 
Investigator Verizon Global Security Operations Center 


-----Original Message-----
From: Christos Andronis [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 25, 2005 8:20 AM
To: mysql@lists.mysql.com
Subject: query on a very big table

Hi all,
we are trying to run the following query on a table that contains over 600
million rows: 

'ALTER TABLE `typed_strengths` CHANGE `entity1_id` `entity1_id` int(10)
UNSIGNED DEFAULT NULL FIRST'

The query takes ages to run (has been running for over 10 hours now). Is
this normal?

As a side issue, is MySQL suited to tables this big? I've seen a couple of
case studies of MySQL databases with over 1.4 billion rows, but it isn't
clear to me whether that figure refers to the whole database or to a single
table.

The MySQL distribution we're using is 4.1.12. The database sits on an HP
ProLiant DL585 server with two dual-core Opterons and 12 GB of RAM, running
Linux Fedora Core 3.

Thanks in advance for any responses

-Christos Andronis

