I've dumped a lot of databases before using mysqldump, and am now trying to
dump a larger database than usual, about 2.2GB in size. The largest
table has just over 12 million rows. It's dumping over the network to a
tape backup server.

I start the job off:

/usr/local/bin/mysqldump -c -F --host=prv-master1 \
--password=blahblah --port=3306 --user=blahblah --verbose mdb1 > \
/tapesource/MDB1/mdb1.db

It runs for a bit, dumping some of the smaller tables, then gets to the
largest table (the 12-million-row one), runs a while longer, and reports
"Killed".

Dmesg shows:

__alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
VM: killing process mysqldump

Which points to a memory problem, or a lack of memory. The box does have
approx. 500MB of free RAM.

Is mysqldump just eating it all up buffering the result set it gets back
from the server?
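If that's the case, I'm wondering whether adding --quick would help — my understanding (unverified on 3.23, so treat this as a guess) is that without it, mysqldump buffers each table's entire result set in client memory before writing it out, while --quick streams rows from the server one at a time:

    /usr/local/bin/mysqldump -c -F --quick --host=prv-master1 \
    --password=blahblah --port=3306 --user=blahblah --verbose mdb1 > \
    /tapesource/MDB1/mdb1.db

The host, credentials, and output path above are just my original invocation with --quick added.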

Mysqldump on client is Ver 8.22 Distrib 3.23.57
Mysqld on server is 3.23.55-log

Thoughts?


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
