Mark, that's the expected behavior of mysqldump with --opt and
--single-transaction; it locks all databases and all tables for the
duration of the dump, ensuring a consistent snapshot.

With a database this size (100 GB), throwing hardware at the problem
may be your best bet.  I'd suggest one of two approaches:

1) Buy a *really fast* disk array and set it up as striped on a
superfast connection, like Ultra320 SCSI or fibre.  This will lower
the amount of time required to write the mysqldump output (which will
likely exceed the 100 GB data size due to overhead within the dump
file).  You might even look at 2 disk arrays on 2 channels, striping
both across the disks within each array and across the two arrays.
Pros: fairly easy to do, not terribly expensive.  Cons: you still
lock up your main database server for backups, though possibly for
less time than you do now.

2) Buy a second physical server for MySQL and set up replication.
Then use the replication server to do your backups - provided you
never let people connect directly to it, no one will notice when it
locks up for a few hours dumping data.  Once the dump is done,
replication will catch up on its own.  This doesn't even have to be
a very fast box; if it falls behind from time to time, that may be
acceptable, depending on your needs.  Pros: possibly less expensive
than superfast arrays, no lockups of your main server, and a standby
server in case of primary failure.  Cons: requires monitoring of
replication, and still requires a one-time consistent dump as a
starting point for replication (see the sketch below).
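
For what it's worth, here's a rough sketch of what the replication
setup and backup cycle could look like.  The host name "replica1",
the user "repl", and the file names are just placeholders - adjust
them for your environment:

# One-time seed dump, taken on the master.  --master-data=2 writes
# the binlog coordinates as a comment at the top of the file.
mysqldump --all-databases --single-transaction --master-data=2 > seed.sql

# Load the dump on the replica, point it at the master using the
# coordinates recorded in seed.sql, and start replication.
mysql < seed.sql
mysql -e "CHANGE MASTER TO MASTER_HOST='master-host',
          MASTER_USER='repl', MASTER_PASSWORD='...',
          MASTER_LOG_FILE='<file from seed.sql>',
          MASTER_LOG_POS=<position from seed.sql>; START SLAVE;"

# Routine backups then run against the replica instead of the master.
mysqldump -h replica1 --all-databases --single-transaction > backup.sql

# Quick check that the replica isn't falling too far behind.
mysql -h replica1 -e 'SHOW SLAVE STATUS\G' | grep Seconds_Behind_Master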

HTH,
Dan

On 7/10/06, Mark Steele <[EMAIL PROTECTED]> wrote:
Hi folks,

I've recently tried to do a database backup on a database server that
has a fairly high concurrency rate (1000+ queries/sec), and I noticed
that the backup process seemed to deadlock the machine; I had to
resort to extreme measures to get the database back up (killed the
process and had to restart it in recovery mode).

The command:

mysqldump --all-databases --opt --single-transaction --master-data=1
>dump.txt

All my tables use InnoDB, and the database is about 100 gigabytes in
size.

Does anyone have any suggestions for getting consistent database
snapshots?

I tried the InnoDB binary backup tool in the past, but that led to a
corrupted database, and I'm not sure that it would lead to a different
outcome, as both --single-transaction and the binary backup tool use
the same mechanism (versioning). The documentation describes
--single-transaction as taking a short global lock, which I believe
is the root cause of the deadlock I saw.

When the server was deadlocked, all the connections were 'waiting on
table', and the backup process was apparently stuck on 'flushing
tables'.

Cheers,

Mark Steele
Information Systems Manager

Zango

E: [EMAIL PROTECTED]
P: 514.787.4681 |  F: 514.787.4707

www.zango.com

Read our blog at http://blog.zango.com
