Here's a little script for the Google cache and email archive engines to
store, in case anyone ever needs this.
We have several servers (dev/test/prod) with replication set up too.
Sometimes we need to restore one server from another; however, we have
different mysql user accounts set up on each for
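(If the concern is that a full restore would overwrite each server's own
accounts, here is a rough sketch of keeping the grant tables out of the copy;
the schema names app1 and app2 are made up:)

# dump only the application schemas, so the destination server's own
# accounts in the `mysql` schema are left untouched
mysqldump --single-transaction --databases app1 app2 > app_dump.sql
mysql < app_dump.sql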
We have some INNODB tables that are over 500,000,000 rows. These
obviously make for some mighty big file sizes:
-rw-rw 1 mysql mysql 73,872,179,200 2009-01-22 02:31 ibdata1
Even just copying the file can take a good amount of time, and a
mysqldump can take hours to export and import.
Attila,
I would like to select only the most recent targettime within 1 minute,
display only the rows that are the latest, and print out all of the
stats as columns on a per-toolname basis:
One way: a three-step approach:
1. There is a formula (see "Group data by time periods" at
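Roughly, the group-and-pick-latest step might look like the sketch below; the
table and column names (stats, toolname, targettime) are guesses from your
description, so adjust to taste:

mysql mydb <<'SQL'
-- bucket rows into one-minute periods per tool, keep the latest row in each
SELECT s.*
FROM stats s
JOIN (SELECT toolname, MAX(targettime) AS latest
      FROM stats
      GROUP BY toolname, FLOOR(UNIX_TIMESTAMP(targettime) / 60)) AS t
  ON t.toolname = s.toolname
 AND t.latest   = s.targettime;
SQL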
On Fri, Jan 23, 2009 at 4:18 PM, Daevid Vincent dae...@daevid.com wrote:
> We have some INNODB tables that are over 500,000,000 rows. These
> obviously make for some mighty big file sizes:
> -rw-rw 1 mysql mysql 73,872,179,200 2009-01-22 02:31 ibdata1
Daevid, we have started working on an
Hi Baron,
Thanks for your message.
After endless nights of trying I could not get it to work, so I finally
created a temporary table and ran a second query to get the percentages.
It works fine for now, but I wonder if it will take too long once
there are thousands of records. Is there an
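For what it's worth, the temporary-table pattern you describe might look
roughly like this (the table and column names stats and toolname are
invented; the total goes into a user variable because MySQL will not let
you reference a TEMPORARY table twice in the same query):

mysql mydb <<'SQL'
CREATE TEMPORARY TABLE tool_counts AS
  SELECT toolname, COUNT(*) AS cnt FROM stats GROUP BY toolname;

-- grab the grand total once, then the second query computes percentages
SELECT @total := SUM(cnt) FROM tool_counts;
SELECT toolname, cnt, 100 * cnt / @total AS pct FROM tool_counts;
SQL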
I would also suggest using the InnoDB storage option
'innodb-file-per-table=ON'
so that the data is split into as many (smaller) datafiles as there are
InnoDB tables.
This could make it easier to deal with the whole database.
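A quick sketch of what that involves (the database and table names are
placeholders); note the setting only affects tables created or rebuilt after
it is enabled, so existing tables stay inside ibdata1 until rebuilt:

# in my.cnf under [mysqld]:
#   innodb_file_per_table
# then restart mysqld and rebuild each table so it moves into its own .ibd file
mysql -e "ALTER TABLE mydb.bigtable ENGINE=InnoDB;"
# note: ibdata1 itself will not shrink; reclaiming that space needs a dump and reload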
Cheers
Claudio
Baron Schwartz wrote, on Fri, Jan 23, 2009:
Something totally ghetto that might work...
If you could convert the files to appear to be text with some kind of
reversible fast translation, rsync might be able to handle the diff part.
You'd sure want to test this out thoroughly...
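If you do experiment with that, the copy step itself could be as simple as
the sketch below (paths and host name invented); the server would need to be
stopped, or the files snapshotted, so the source stays consistent while
rsync reads it:

# --inplace lets rsync rewrite only the changed blocks of the existing copy
rsync -av --inplace --partial /var/lib/mysql/ibdata1 backuphost:/backup/mysql/ibdata1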
We have a very large, multi-terabyte database with individual tables that
are over 100Gig. We have it on a Red Hat Linux system and we set up logical
volumes, take LVM snapshots, then use rsync to move the data over. This
works well and is a lot faster than dumping and certainly restore is
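For anyone who has not set this up before, the snapshot-and-copy cycle is
roughly the sketch below (volume group, snapshot size, and paths are made up;
in practice you would hold FLUSH TABLES WITH READ LOCK in an open mysql
session while the snapshot is created, then release it):

# take a point-in-time snapshot of the logical volume holding the datadir
lvcreate --size 10G --snapshot --name mysql-snap /dev/vg0/mysql
mkdir -p /mnt/mysql-snap
mount /dev/vg0/mysql-snap /mnt/mysql-snap
# copy the frozen files off the snapshot, then clean up
rsync -av /mnt/mysql-snap/ backuphost:/backups/mysql/
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap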
I know how you feel! I think your two best options are these:
1.) Use LVM snapshots per the MPB links you mentioned as a guide. Your
incremental backup would be the binary logs that MySQL writes. You could
copy any of this data off site by mounting the snapshots and using your
remote copy
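A rough sketch of the incremental piece, assuming binary logging is already
enabled with log-bin and the paths are placeholders:

# close the current binary log and start a new one
mysqladmin flush-logs
# ship the closed logs off-site (the glob skips mysql-bin.index)
rsync -av /var/lib/mysql/mysql-bin.[0-9]* backuphost:/backups/binlogs/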