Howdy Guys and Gals,

We are acquiring background-radiation data in a master-slave server 
environment (RH9.0, MySQL 4.0.24) at a rate of approximately 19,000 
records per day.  The data are insert-only, going into about 25 of the 
31 tables in the database; no updates are ever applied to the data. 
Information from the database is retrieved via SELECT statements for 
graphical display and report generation, among other uses. 

A PHP backup script that calls mysqldump runs as a nightly cron job on a 
third server, which also serves as our intranet webserver. After 1 1/2 
years of operation, the mysqldump file of the entire database is roughly 
760 MB and takes under 2 minutes to create.  Tarred and bzipped, the 
file comes out to 31.7 MB, but that compression step now takes 46-47 
minutes. 
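
For reference, a stripped-down sketch of this sort of nightly job is 
below. It is not our actual script; the hostname, credentials, database 
and path are made up, and it pipes mysqldump straight into bzip2 rather 
than dumping to a file and compressing afterward:

<?php
// Minimal sketch of a nightly full dump (hypothetical host, credentials, paths).
// Piping mysqldump into bzip2 avoids writing the uncompressed dump to disk first.

$host = 'db-master';                 // assumed master hostname
$user = 'backup';                    // assumed backup account
$pass = 'secret';                    // better read from a protected config file
$db   = 'radiation';                 // assumed database name
$dest = '/backups/' . $db . '-' . date('Y-m-d') . '.sql.bz2';

$cmd = sprintf('mysqldump --opt -h%s -u%s -p%s %s | bzip2 -c > %s',
               escapeshellarg($host),
               escapeshellarg($user),
               escapeshellarg($pass),
               escapeshellarg($db),
               escapeshellarg($dest));

system($cmd, $rc);
if ($rc !== 0) {
    error_log("nightly dump pipeline exited with code $rc");
}
?>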

The rate of data acquisition will stay fairly constant, and up to 3 years 
of data will be kept on the live master-slave pair, so simply doubling 
all of these figures seems a realistic expectation for a full backup of 
the database after 3 years.  Data older than 3 years would be deleted 
from the master-slave system.
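
Spelling out the arithmetic behind that expectation (just linear scaling 
of the figures above):

<?php
// Back-of-envelope projection, assuming growth stays roughly linear
// with the observed figures (1.5 years -> 760 MB raw, 31.7 MB compressed).
$scale = 3.0 / 1.5;                                      // target years / observed years = 2
printf("raw dump:        ~%.0f MB\n", 760 * $scale);     // ~1520 MB
printf("compressed dump: ~%.1f MB\n", 31.7 * $scale);    // ~63.4 MB
?>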

How long would it be reasonable to keep doing a full dump of the database, 
versus switching to mysqldump with a WHERE clause, i.e., a daily 
incremental backup of, say, the last 24 hours? 
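
To make that second option concrete, I'm picturing something along these 
lines. The table names and the timestamp column (recorded_at) are made 
up for illustration; mysqldump's --where condition is applied to every 
table named on the command line:

<?php
// Rough sketch of a daily incremental dump via mysqldump --where.
// Table names and the timestamp column `recorded_at` are hypothetical;
// the real schema would supply these.

$host   = 'db-master';
$user   = 'backup';
$pass   = 'secret';
$db     = 'radiation';                              // assumed database name
$tables = 'counts spectra';                         // the insert-only tables to include
$dest   = '/backups/' . $db . '-incr-' . date('Y-m-d') . '.sql.bz2';

// Only rows inserted in the last 24 hours are dumped from each listed table.
$where = 'recorded_at >= DATE_SUB(NOW(), INTERVAL 1 DAY)';

$cmd = sprintf('mysqldump --opt --no-create-info -h%s -u%s -p%s --where=%s %s %s | bzip2 -c > %s',
               escapeshellarg($host),
               escapeshellarg($user),
               escapeshellarg($pass),
               escapeshellarg($where),
               escapeshellarg($db),
               $tables,
               escapeshellarg($dest));

system($cmd, $rc);
?>

Restoring would then mean replaying each day's file in order on top of 
the most recent full dump.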

Also, what are the key mysqldump options and/or server variables to pay 
attention to when working with these large, possibly multi-gigabyte dump 
files?

Thanks,

David
