> I'm confused here about the archived logs and the active logs.

In general, there can be multiple logs covering the time between one
backup and the next, and those logs must be applied serially, in the
correct order, to recover the database fully.
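
As a rough sketch of that ordering requirement (the db_restore and
db_apply_log commands and the log naming scheme here are hypothetical
stand-ins for whatever your database actually provides):

    import subprocess
    from pathlib import Path

    BACKUP = Path("/backups/latest")        # hypothetical backup location
    ARCHIVED_LOGS = Path("/backups/archived_logs")

    # Restore the full backup first.
    subprocess.run(["db_restore", str(BACKUP)], check=True)

    # Archived logs are usually named so that sorting the filenames
    # reproduces the order they were written (log0000001.dat,
    # log0000002.dat, ...), which is the order they must be applied in.
    for log in sorted(ARCHIVED_LOGS.glob("log*.dat")):
        subprocess.run(["db_apply_log", str(log)], check=True)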

Once you take that "next" backup, you no longer need the previous
backup and its logs, though it's probably wise to establish a
reasonable holding period depending on your resources (e.g., back up
weekly, keep 3 months of backups and logs, and destroy the oldest set
when you complete the most recent backup).
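
A rough sketch of that kind of rotation, assuming one dated directory
per backup set (the paths, the YYYY-MM-DD naming, and the 90-day
window are all illustrative, not any particular tool's layout):

    import shutil
    from datetime import datetime, timedelta
    from pathlib import Path

    BACKUP_ROOT = Path("/backups")     # one dated subdirectory per set
    RETENTION = timedelta(days=90)     # roughly 3 months

    def prune_old_sets(now: datetime) -> None:
        """Remove backup sets (backup + logs) past the retention window.

        Run this only after the newest backup has completed
        successfully, so you never delete your last good set.
        """
        for set_dir in BACKUP_ROOT.iterdir():
            if not set_dir.is_dir():
                continue
            try:
                taken = datetime.strptime(set_dir.name, "%Y-%m-%d")
            except ValueError:
                continue   # not a dated backup-set directory
            if now - taken > RETENTION:
                shutil.rmtree(set_dir)

    prune_old_sets(datetime.now())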

> I need to protect against a media crash and it is not so important
> to go back to specific points in time for the database.

Perhaps you can rely entirely on lower-level mechanisms, then, such
as RAID or other redundant storage hardware, or a modern filesystem
such as ZFS (http://en.wikipedia.org/wiki/ZFS), which automatically
replicates the underlying data to protect against storage failure.

I think it's still wise to have an application-level backup strategy,
because sometimes logical recovery is necessary (e.g., to recover from
an application bug or an administrative mistake), so I think that the
exercise you're going through of documenting your backup and recovery
strategies is an excellent one.

And don't forget to test those backup/restore practices, since an
untested restore is no better than no restore at all.

I've found that one useful technique is to provision a secondary
machine, which can be MUCH smaller in terms of CPU, memory,
networking, etc.; it just has to have enough disk space. I then
automate things so that every time I take a backup, my scripts copy
the backup to this spare machine, restore it, apply all the logs, and
run a few queries to satisfy myself that the database is correctly
recovered.
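
For what it's worth, the glue on the spare machine can be as simple as
this sketch (the host name, the db_restore / db_apply_logs / db_query
commands, and the sanity query are all placeholders for whatever your
environment uses; rsync and ssh are the only real tools here):

    import subprocess

    SPARE = "spare-db-host"    # hypothetical secondary machine

    # 1. Copy the newest backup set (backup + logs) to the spare box.
    subprocess.run(
        ["rsync", "-a", "/backups/latest/", f"{SPARE}:/restore/latest/"],
        check=True)

    # 2. Restore and roll forward through the logs on the spare
    #    (db_restore / db_apply_logs stand in for your real tools).
    subprocess.run(
        ["ssh", SPARE,
         "db_restore /restore/latest && "
         "db_apply_logs /restore/latest/logs"],
        check=True)

    # 3. A few sanity queries; if these fail, the "backup" was never
    #    really a backup at all.
    out = subprocess.run(
        ["ssh", SPARE, "db_query 'select count(*) from my_table'"],
        check=True, capture_output=True, text=True)
    print("rows after restore:", out.stdout.strip())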

thanks,

bryan
