----- Original Message -----
> From: "Alex Schaft" <al...@quicksoftware.co.za>
> 
> I'm monitoring a mysqldump via stdout, catching the CREATE TABLE
> statements before flushing them to my own text file. On the restore
> side, I'm feeding these to mysql via the C API so I can monitor
> progress (number of lines in the dump file vs. number of lines sent
> to mysql), but the lines in the text file are as much as 16k long,
> times about 110 of them for one huge INSERT statement.
> 
> What can I pass to mysqldump to get more sane statement lengths?

That's a pretty sane statement length, actually. It's a lot more efficient to 
lock the table once, insert a block of records, update the indices once, and 
unlock the table than to do all of that for every individual record.
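For a sense of the trade-off, the extended-insert form that mysqldump emits by default batches many rows into one statement, roughly like this (table name and values are made up for illustration):

```sql
-- Default (extended insert): one statement, many rows,
-- one lock/unlock and one index-update pass.
INSERT INTO customers VALUES (1,'Alice'),(2,'Bob'),(3,'Carol');

-- With --skip-extended-insert: one statement per row,
-- repeating that overhead for every record.
INSERT INTO customers VALUES (1,'Alice');
INSERT INTO customers VALUES (2,'Bob');
INSERT INTO customers VALUES (3,'Carol');
```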

If you really want to go to single-record inserts, you can pass 
--skip-extended-insert. I'm not sure you can control the maximum length of a 
statement beyond "one" or "lots".
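If line counting for a progress indicator is the goal, a minimal sketch of the dump invocation would be (database name and output file are placeholders; add your usual credential options):

```shell
# Emit one INSERT per row -- far easier to count lines against for a
# progress bar, at the cost of a noticeably slower restore.
mysqldump --skip-extended-insert mydb > mydb.sql
```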

-- 
Beer with grenadine
Is like mustard with wine
She who drinks it is a prude
He who drinks it is soon a donkey

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql?unsub=arch...@jab.org
