Hi,
I have a 15GB table with 250 million records and just a primary key.
It is a very simple table, but when a report (query) is run it takes hours,
and sometimes the application hangs.
I have been trying to play a little with indexes and tuning (there are not
many useful indexes to add, though), but every time I try an ALTER TABLE to
add indexes it hogs the disk space and spends hours building the indexes in
several passes (.TMD files). It is a real pain because I cannot even kill the
MySQL thread; I ended up having to kill the server, which left the table
corrupted, and then had to stop/start and run REPAIR TABLE.
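
For reference, this is roughly the kind of statement I was running (table and
column names here are just placeholders, not the real schema):

    -- placeholder names; the real table has ~250M rows and only a primary key
    ALTER TABLE report_data
      ADD INDEX idx_report_date (report_date),
      ADD INDEX idx_customer (customer_id);

As far as I can tell, while this runs MySQL copies the table and rebuilds the
indexes in temporary files (the .TMD ones), which is where the disk usage and
the hours of work come from.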
Has anybody else run into problems managing a simple MyISAM table with 250
million records and a primary key?
I also tried duplicating the table, adding the indexes to the copy, and then
inserting into it (also trying InnoDB for the new table), but it takes ages
every time. I also had to move 'tmpdir' to the data partition because it was
filling / to 100%.
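
Roughly what I did for the copy, again with placeholder names:

    -- placeholder names; create an empty copy, add the indexes up front,
    -- then load it from the original table
    CREATE TABLE report_data_new LIKE report_data;
    ALTER TABLE report_data_new
      ENGINE = InnoDB,           -- also tried leaving it as MyISAM
      ADD INDEX idx_report_date (report_date),
      ADD INDEX idx_customer (customer_id);
    INSERT INTO report_data_new SELECT * FROM report_data;

And the tmpdir change in my.cnf (the path is just an example from my box):

    [mysqld]
    tmpdir = /var/lib/mysql/tmp   # moved off / because it was filling the root partition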

MySQL is 5.0.x on 64-bit RHEL 5 with 16GB RAM and NAS storage.

Any hints on how to manage big tables?

Thanks

Claudio Nanni
