On 2011/10/20 11:54 AM, Johan De Meersman wrote:
----- Original Message -----
From: "Alex Schaft" <al...@quicksoftware.co.za>

I'm monitoring a mysqldump via stdout, catching the CREATE TABLE
commands before flushing them to my own text file. Then on the
restore side, I'm trying to feed these to mysql via the C API so I can
monitor progress (number of lines in the dump file vs. number of lines sent to mysql),
but the lines in the text file run as long as 16k, and there are
about 110 of them for one huge INSERT statement.
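
A rough sketch of that capture side, for reference (database and file names here are made up; it just copies mysqldump's stdout to the local text file while counting lines for the progress figure):

    #include <stdio.h>

    int main(void)
    {
        /* read the dump from mysqldump's stdout, line by line */
        FILE *dump = popen("mysqldump mydb", "r");
        FILE *out  = fopen("dump.sql", "w");
        if (!dump || !out) { perror("open"); return 1; }

        char line[65536];          /* dump lines can be long */
        unsigned long n = 0;
        while (fgets(line, sizeof line, dump)) {
            fputs(line, out);      /* copy to the local text file */
            if (++n % 1000 == 0)   /* cheap progress indicator */
                fprintf(stderr, "\r%lu lines dumped", n);
        }
        fprintf(stderr, "\r%lu lines dumped\n", n);
        fclose(out);
        return pclose(dump) == 0 ? 0 : 1;
    }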

What can I pass to mysqldump to get more sane statement lengths?
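For what it's worth, mysqldump does have knobs for this (defaults vary by version, so check mysqldump --help): --net_buffer_length caps the length of each generated multi-row INSERT, and --skip-extended-insert falls back to one row per INSERT, e.g.

    mysqldump --net_buffer_length=16384 mydb    # multi-row INSERTs capped near 16k
    mysqldump --skip-extended-insert mydb       # one row per INSERT, slower to restore
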
That's a pretty sane statement length, actually. It's a lot more efficient to
lock the table once, insert a block of records, update the indices once and
unlock the table than to do all of that for every separate record.
I realize that; I'm just trying to stop the phone calls saying "I started a restore, and my PC just froze...."

I might just read all the single-row INSERT lines and gather a whole lot of VALUES clauses together before passing them on; that gets around the performance issue while still giving some idea of progress.
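
Something like this rough sketch, say. It assumes one statement per line in the intermediate file, and that consecutive single-row INSERT lines target the same table (in a dump that holds, since other statements separate the tables and force a flush here). The connection details and the 16k flush threshold are placeholders:

    #include <mysql.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define FLUSH_AT (16 * 1024)   /* flush a combined INSERT near 16k */

    static void die(MYSQL *db, const char *what)
    {
        fprintf(stderr, "%s: %s\n", what, mysql_error(db));
        exit(EXIT_FAILURE);
    }

    /* send the statement built up so far, if any */
    static void flush_batch(MYSQL *db, char *batch, size_t *len)
    {
        if (*len == 0)
            return;
        if (mysql_real_query(db, batch, (unsigned long)*len) != 0)
            die(db, "batched INSERT failed");
        *len = 0;
    }

    int main(void)
    {
        MYSQL *db = mysql_init(NULL);
        if (db == NULL) {
            fprintf(stderr, "mysql_init failed\n");
            return EXIT_FAILURE;
        }
        if (!mysql_real_connect(db, "localhost", "user", "password",
                                "mydb", 0, NULL, 0))
            die(db, "connect failed");

        char line[65536];                /* one statement per line */
        char batch[FLUSH_AT + 65536];    /* combined INSERT under construction */
        size_t blen = 0;
        unsigned long sent = 0;

        while (fgets(line, sizeof line, stdin)) {
            sent++;

            const char *values = strstr(line, " VALUES ");
            if (strncmp(line, "INSERT INTO ", 12) == 0 && values != NULL) {
                /* keep only the "(...)" tuple; the terminating ';' is the
                   last one on the line, since mysqldump escapes newlines */
                const char *tuple = values + strlen(" VALUES ");
                const char *semi  = strrchr(tuple, ';');
                size_t tlen = semi ? (size_t)(semi - tuple)
                                   : strcspn(tuple, "\n");

                if (blen == 0) {
                    /* new statement: reuse the first line's INSERT prefix */
                    size_t plen = (size_t)(values - line) + strlen(" VALUES ");
                    memcpy(batch, line, plen);
                    blen = plen;
                } else {
                    batch[blen++] = ',';   /* glue tuples together */
                }
                memcpy(batch + blen, tuple, tlen);
                blen += tlen;

                if (blen >= FLUSH_AT)
                    flush_batch(db, batch, &blen);
            } else if (line[0] != '-' && line[0] != '\n') {
                /* CREATE TABLE etc.: flush pending rows, run the line as-is */
                flush_batch(db, batch, &blen);
                size_t llen = strlen(line);
                while (llen > 0 && (line[llen - 1] == '\n' ||
                                    line[llen - 1] == '\r' ||
                                    line[llen - 1] == ';'))
                    llen--;
                if (llen > 0 &&
                    mysql_real_query(db, line, (unsigned long)llen) != 0)
                    die(db, "statement failed");
            }

            if (sent % 1000 == 0)   /* the progress figure for the phone calls */
                fprintf(stderr, "\r%lu lines sent", sent);
        }

        flush_batch(db, batch, &blen);
        fprintf(stderr, "\r%lu lines sent\n", sent);
        mysql_close(db);
        return EXIT_SUCCESS;
    }

Build against libmysqlclient, e.g. cc batch.c $(mysql_config --cflags --libs).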

Alex

