On a different note, just running a slave will not help when there is an
accidental deletion of data, tables, or databases. Hence, have a slave as
Mikhail mentioned and also run regular backups on the slave, preferably a
tar of your MySQL data directory taken after shutting the server down. Also
ensure you have log-bin and log-slave-updates enabled on your slave. Push
the tar to more than one location and host so that you can survive disk
corruption. This way you can recover from most database corruption
scenarios.
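
Roughly, the routine on the slave could look like this (only a sketch; the
paths and hostname below are placeholders, not something I know about your
setup):

  # my.cnf on the slave
  [mysqld]
  log-bin
  log-slave-updates

  # nightly, on the slave (credentials omitted)
  mysqladmin shutdown
  tar cjf /backups/mysql-$(date +%Y%m%d).tar.bz2 /var/lib/mysql
  scp /backups/mysql-$(date +%Y%m%d).tar.bz2 otherhost:/backups/
  mysqld_safe --user=mysql &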

Thanks
Alex

On 1/30/07, Mikhail Berman <[EMAIL PROTECTED]> wrote:

Hi David,

Is hard-drive space your major concern, or is it the ability to recover
from a crash?

Backups are usually taken in order to be able to recover from a crash,
which in turn means that if there is a way to recover faster, so much the
better. Having a slave that is constantly updated gives you a very quick
way of recovering if the master goes down. Just point your PHP scripts at
the slave and be happy.

If you need additional backups, take them from the "full" slave. Stop it
for a while, do the backups, and then restart the slave.
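
For example, something along these lines on the slave (just a sketch; the
database name and paths are made up):

  mysql -e "STOP SLAVE"
  mysqldump --opt ourdb > /backups/ourdb-$(date +%Y%m%d).sql
  mysql -e "START SLAVE"

The slave simply catches up from the master's binary log once it is
started again.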

For huge backups in our office we use old, and I mean old, Dells with
large 500 GB drives running one of the *nixes.


Regards,

Mikhail Berman

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Monday, January 29, 2007 3:27 PM
To: Mikhail Berman
Cc: mysql
Subject: RE: Mysqldump Files

Hi Mikhail,

I don't think that would save much space in terms of file size. The
tables that are actively getting inserts are large and growing larger
(~750,000 records), and those that have no activity are either currently
empty or have fewer than a hundred records in them. So just dumping the
active tables would still account for, I'd guess, 99% or more of the
database size.

David

----------------------------------------------------------------------

Hi David,

Let me point you in a bit different direction. It seems from your e-mail
that you are already running replication.

So why not just run chained replication from the second server to the
third and use "replicate-do-table = [table_name]" in the my.cnf of the
third server to limit the set of tables available to the web server? Or do
full replication from the first server to another server for a full
backup?
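
As a rough illustration only (the server IDs and table names below are
invented; note the second server would also need log-bin and
log-slave-updates so that the changes it applies as a slave are written to
its own binary log):

  # my.cnf on the second server (the intermediate master)
  [mysqld]
  server-id          = 2
  log-bin
  log-slave-updates

  # my.cnf on the third server
  [mysqld]
  server-id          = 3
  replicate-do-table = yourdb.some_table
  replicate-do-table = yourdb.another_table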

Regards,

Mikhail Berman

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Monday, January 29, 2007 2:33 PM
To: mysql
Subject: Mysqldump Files

Howdy Guys and Gals,

We are acquiring data on background radiation in a master-slave server
environment (RH9.0, MySQL 4.0.24) at the rate of approximately 19,000
records per day. The data are insert-only into about 25 of the 31 tables
in the database -- no updates are ever applied to the data. Information
from the database is used via select statements for graphical display and
report generation, amongst other uses.

A PHP backup script using mysqldump runs as a cron job each night from a
third server which also functions as an intranet webserver. After 1 1/2
years of operation, the mysqldump file of the entire database is roughly
760 MB, and this takes under 2 minutes to create. Once bzipped and tarred,
the resulting archive is 31.7 MB in size, and this part of the backup
process now takes 46-47 minutes.
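
For what it is worth, the job boils down to roughly the following (only a
sketch, not the actual PHP; the host, database name, and paths here are
placeholders):

  # run from cron each night on the web server (credentials omitted)
  mysqldump -h master --opt ourdb > /backups/ourdb-$(date +%Y%m%d).sql
  tar cjf /backups/ourdb-$(date +%Y%m%d).tar.bz2 -C /backups ourdb-$(date +%Y%m%d).sql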

The rate of acquisition of data will be fairly constant, and up to 3
years of data will be kept on the live master-slave, so simply doubling
all these values seems a realistic expectation for a full backup of the
database after 3 years. Data older than 3 years would be deleted from the
master-slave system.

How long would it be reasonable to keep doing a full dump of the
database, versus using mysqldump with a where clause, i.e., doing a daily
incremental backup, say of the last 24 hours?
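
By an incremental backup I mean something roughly like this per active
table (the table and timestamp column names here are invented, and it
assumes each insert-only table carries an insert timestamp):

  mysqldump --where="recorded_at >= DATE_SUB(NOW(), INTERVAL 1 DAY)" \
      ourdb some_active_table > /backups/some_active_table-$(date +%Y%m%d).sql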

Also, what are the key mysqldump and/or server variables to pay attention
to in order to process these large, maybe multi-gigabyte dump files?
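
For example, I am guessing at options such as mysqldump's --opt/--quick on
the dump side and a larger max_allowed_packet when reloading the long
extended INSERTs, though I have not verified any of this on 4.0.24:

  mysqldump --opt ourdb > full_dump.sql
  mysql --max_allowed_packet=32M ourdb < full_dump.sql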

Thanks,

David
