Hi,
In our opinion, you can opt for any or all of these:
a) Build indexes, or rebuild indexes with REPAIR TABLE.
b) Take periodic backups (mysqldump) based on the importance of the
data, and clear the current table on a specific condition.
c) If clearing the table affects the transactions that depend on
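As a rough sketch of (a) and (b), a nightly maintenance script could combine all three steps. The database and table names (mydb, logs), the created_at column, and the 30-day retention window are all assumptions for illustration, not details from this thread:

```shell
#!/bin/sh
# Nightly maintenance sketch. All names and the retention window
# below are assumptions; adjust to the actual schema.

DB=mydb
TABLE=logs
BACKUP_DIR=/var/backups/mysql

# (b) Periodic backup of just this table with mysqldump
mysqldump "$DB" "$TABLE" \
    > "$BACKUP_DIR/${TABLE}_$(date +%Y%m%d).sql"

# (b) Clear the table on a specific condition,
#     e.g. rows older than 30 days
mysql "$DB" -e \
    "DELETE FROM $TABLE WHERE created_at < NOW() - INTERVAL 30 DAY;"

# (a) Rebuild the indexes (MyISAM) after the bulk delete
mysql "$DB" -e "REPAIR TABLE $TABLE;"
```

Run it from cron during off-peak hours, since REPAIR TABLE locks the table while it rebuilds.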
Wow, 30GB is a lot of data. Do let us know what kind of hardware / OS
you are using.
In the past I have worked with larger tables than these, but I was
using Objectivity DB running on the 64-bit UltraSPARC architecture.
--
Saqib Ali, CISSP, ISSAP
http://www.full-disk-encryption.net
--
MySQL General
Hi,
Would you like to share your opinion on what design strategy to take if
a table (used for read operations only) is expected to grow by more than
3GB of data per day, with 1000 simultaneous users?
--
Thanks in advance,
Asif
In the last episode (Dec 21), Asif Lodhi said:
> Would you like to share your opinion on what design strategy to
> take if a table (used for read operations only) is expected to grow by
> more than 3GB of data per day, with 1000 simultaneous users?
With that data rate, you'll definitely have to