Hi,

Yeah, by default mysqldump buffers the entire result of the "SELECT *
FROM table" query in client memory before writing any SQL statements
(it uses mysql_store_result()). If you use the --opt option (or at
least -q or --quick), it writes each row as it arrives from the server
instead (using mysql_use_result()).
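So in your case, adding -q should stop the client-side buffering.
Untested here, but the command would look like this (host, credentials,
and paths exactly as in your original mail):

  /usr/local/bin/mysqldump -c -F -q --host=prv-master1 \
  --password=blahblah --port=3306 --user=blahblah --verbose \
  mdb1 > /tapesource/MDB1/mdb1.db

If you're curious what the difference looks like at the C API level,
here's a rough sketch (my own illustration, not mysqldump's actual
code; the connected handle and table name are made up):

  #include <stdio.h>
  #include <mysql.h>

  static void dump_rows(MYSQL *conn)
  {
      if (mysql_query(conn, "SELECT * FROM big_table"))
          return;

      /* mysql_store_result() would pull the whole result set into
       * client memory before handing back the first row -- that's
       * the default mysqldump behaviour.  mysql_use_result()
       * initiates a row-by-row fetch instead, so only one row sits
       * on the client at a time -- which is what -q/--quick
       * switches to. */
      MYSQL_RES *res = mysql_use_result(conn);
      if (res == NULL)
          return;

      MYSQL_ROW row;
      while ((row = mysql_fetch_row(res)) != NULL)
          printf("%s\n", row[0] ? row[0] : "NULL");

      mysql_free_result(res);
  }

The trade-off with mysql_use_result() is that the connection stays
tied up until every row has been fetched, but for a dump that's
exactly what you're doing anyway.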


Hope that helps.


Matt


----- Original Message -----
From: <[EMAIL PROTECTED]>
Sent: Thursday, February 19, 2004 1:23 PM
Subject: mysqldump via tcp/ip memory problem


>
> I've dumped a lot of databases before using mysqldump, and am trying to
> dump a larger database than normal, about 2.2GB in size..  The largest
> table is just over 12 million rows...  It's dumping over a network to a
> tape backup server..
>
> I start the job off:
>
> /usr/local/bin/mysqldump -c -F --host=prv-master1 \
> --password=blahblah --port=3306 --user=blahblah --verbose \
> mdb1 > /tapesource/MDB1/mdb1.db
>
> It runs for a bit, dumping some smaller tables, then gets to the largest
> table (12 million rows), runs for a bit, and reports "Killed"
>
> Dmesg shows:
>
> __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
> VM: killing process mysqldump
>
> Which leads to a memory problem, or lack of...  The box does have
> approx. 500MB of free RAM...
>
> Is it just eating it up buffering the network response from the
> server?
>
> Mysqldump on client is Ver 8.22 Distrib 3.23.57
> Mysqld on server is 3.23.55-log
>
> Thoughts?


