I would also suggest enabling the InnoDB option innodb_file_per_table, so that instead of everything accumulating in the single shared ibdata1 file, each InnoDB table gets its own (smaller) .ibd data file.
This can make the whole database easier to deal with.
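For what it's worth, a minimal sketch of the my.cnf change (the [mysqld] section is the usual place for it, and the example path is made up):

    [mysqld]
    # New InnoDB tables get their own file under the datadir,
    # e.g. /var/lib/mysql/mydb/mytable.ibd, instead of growing ibdata1.
    innodb_file_per_table = 1

One caveat: only tables created (or rebuilt with ALTER TABLE ... ENGINE=InnoDB) after the option is turned on move into their own files, and the existing ibdata1 never shrinks on its own.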

Cheers
Claudio

Baron Schwartz wrote:
On Fri, Jan 23, 2009 at 4:18 PM, Daevid Vincent <dae...@daevid.com> wrote:
We have some InnoDB tables with over 500,000,000 rows. These
obviously make for some mighty big file sizes:

-rw-rw---- 1 mysql mysql 73,872,179,200 2009-01-22 02:31 ibdata1

Daevid, we have started working on an incremental/differential InnoDB
backup tool.  It is in need of a sponsor though.

I'm betting that you don't change all 70GB of that table every day,
and you'd appreciate being able to keep differentials and only do full
backups every so often.

For big datasets like this, a logical dump becomes impractical or too
expensive at some point.  There are a lot of ways you could do this,
but I'd recommend filesystem snapshots and binary copies.  Unless you
like long dumps and long restores...
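For what it's worth, here is a rough sketch of the snapshot-plus-binary-copy approach, assuming the datadir sits on an LVM volume; the volume, mount point and backup host names below are made up:

    # 1. In a mysql session, block writes and note the binlog position
    #    (keep the session open -- the lock drops when it disconnects):
    #      FLUSH TABLES WITH READ LOCK;
    #      SHOW MASTER STATUS;

    # 2. From a shell, snapshot the volume holding the datadir:
    lvcreate --snapshot --size 5G --name mysql_snap /dev/vg0/mysql

    # 3. Back in the mysql session, release the lock:
    #      UNLOCK TABLES;

    # 4. Mount the snapshot and take the binary copy at leisure:
    mount /dev/vg0/mysql_snap /mnt/mysql_snap
    rsync -a /mnt/mysql_snap/ backuphost:/backups/mysql/$(date +%F)/
    umount /mnt/mysql_snap
    lvremove -f /dev/vg0/mysql_snap

Restoring is just copying the files back; InnoDB runs crash recovery on startup, the same as after a power failure, because the snapshot is a point-in-time image of the files.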

There might also be some higher-level strategies, like archiving and
purging or aggregation, that would benefit you.  These are the kinds
of things I look at pretty often when helping people pick a good
strategy, but it takes a lot of knowledge of your application to give
good advice.
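On the purging side, the usual trick is to delete old rows in small batches rather than in one huge transaction. A hypothetical sketch (the events table, created_at column and 90-day cutoff are made up):

    # Delete in batches of 10,000 so each transaction stays small
    # and replication and the undo log don't fall far behind.
    while true; do
        rows=$(mysql -N -e "DELETE FROM mydb.events
                              WHERE created_at < NOW() - INTERVAL 90 DAY
                              LIMIT 10000;
                            SELECT ROW_COUNT();")
        [ "$rows" -eq 0 ] && break
        sleep 1   # breathe between batches
    done

Note that deleting rows frees space inside InnoDB for reuse but does not shrink the files on disk; that only happens when a table is rebuilt (and lives in its own .ibd file).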


