Hi,
--single-transaction makes mysqldump run the whole dump inside a single
transaction (a BEGIN at the start, a COMMIT at the end). However, while
a table is locked for the backup your site may be slow.
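If I remember right, with --master-data plus --single-transaction the
dump starts out roughly like this (worth checking against your own
general query log, since the exact statements vary by version):

  FLUSH TABLES WITH READ LOCK;   -- brief global lock to get binlog position
  SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */;
  SHOW MASTER STATUS;            -- written into the dump as CHANGE MASTER TO
  UNLOCK TABLES;
  -- ...then every table is SELECTed inside that one snapshot...

So InnoDB tables are read from a consistent snapshot without staying
locked, but a non-InnoDB table (like your FULLTEXT one) gets no such
protection.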
--
Praj
Ian P. Christian wrote:
Recently my one and only slave went down, and stupidly I don't have a
dump suitable for reseeding (if that's the right term...) the slave, so
I need to make a snapshot of the master database again. This time I'll
make sure I keep this datafile for future restores should I need to -
you live and learn.
So... I'm doing a database dump:
mysqldump --master-data --single-transaction database > dump.sql
This database I'm dumping has something like 17 million rows; all but
one table (which uses FULLTEXT, and only has 3-4k rows) run InnoDB.
There is only one table of any real size, and it holds all but about
100k of the total rows. My understanding of this command is that the
database should not be locked whilst it is running.
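(For reference, something like this should list any non-InnoDB tables -
information_schema needs MySQL 5.0+, and 'mydb' here is a placeholder
for the schema name:

  SELECT table_name, engine
  FROM information_schema.tables
  WHERE table_schema = 'mydb' AND engine <> 'InnoDB';

on older servers, SHOW TABLE STATUS FROM mydb shows the same Engine
column.)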
However, here's my problem...
When the dump starts to read from the large table, the database just
grinds to a halt - my website running from the database just stops,
and the dump (whose progress I was watching with a primitive `watch ls
-la`) slows down a bit.
Last time I had to do this (for the first 'seeding' of my slave), I
eventually gave up trying to dump from the database whilst the site
remained live, and took the site down for 15 minutes whilst the dump
ran. As I'm sure you'll understand, I'm not too keen on taking the
website down again.
Any suggestions as to why my database is stopping (could it be I/O
related, maybe? it's on a good RAID setup though), and what I could do
about it?
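(If it is I/O, I guess watching the disks while the dump runs would
show it; on Linux something like

  iostat -x 5

from the sysstat package prints per-device utilisation and wait times
every 5 seconds.)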
Many Thanks,