Frank Bax <[EMAIL PROTECTED]> wrote on 04/25/2005 02:27:58 PM:

> At 02:07 PM 4/25/05, Scott Gifford wrote:
> 
> >[EMAIL PROTECTED] writes:
> >
> > > Frank Bax <[EMAIL PROTECTED]> wrote on 04/25/2005 11:47:12 AM:
> > >> Or simply use "split", default is 1000 lines, but can be changed
> > >> via command line.
> >
> >That's a start, but the files each need to be a valid SQL statement,
> >so that's not enough; a little fixup needs to happen at the beginning
> >and end of each file.
> 
> 
> OP did not say that the SQL statements crossed over a newline; but if
> they do, it is a trivial sequence of shell commands to "fix" them so
> that they are one line each.
> 
> 
Odds are, he has one long INSERT statement (extended insert format). 
Mysqldump creates those very well and will make them as big as the source 
data unless you tell it to make smaller chunks. I am working from the 
assumption he has a 9MB INSERT statement and needs to split it into nine 
1MB (or smaller) INSERT statements.
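(For future dumps, if I remember the options correctly, mysqldump's 
--net_buffer_length setting caps how long each extended INSERT statement 
gets, and --skip-extended-insert falls back to one row per INSERT, though 
that form loads much more slowly.)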

If I am right, he does have a delimiter between each of his records (the 
comma between each tuple of values) and you could break up his source file 
on those. However, each new file has to have his original "INSERT ... 
VALUES" header at its head and a terminating semicolon at its end (and 
still stay smaller than 1MB) or it won't work.
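Something along these lines might do it. This is only a rough, untested 
sketch: it assumes the whole dump is a single uppercase "INSERT ... VALUES 
(row),(row),...;" statement, that no quoted value contains the character 
sequence "),(" or the word VALUES, and the file and chunk names are just 
placeholders.

#!/usr/bin/env python
# Rough sketch: split one huge extended INSERT into several smaller
# complete statements, each under MAX_BYTES.

MAX_BYTES = 1000000                 # stay under the 1 MB limit

def write_chunk(prefix, n, head, rows):
    out = open('%s_%03d.sql' % (prefix, n), 'w')
    out.write(head + ',\n'.join(rows) + ';\n')
    out.close()

def split_insert(src='big_insert.sql', prefix='chunk'):
    sql = open(src).read().strip()
    if sql.endswith(';'):
        sql = sql[:-1]
    head, values = sql.split('VALUES', 1)   # keep the INSERT header
    head = head + 'VALUES\n'
    values = values.strip()
    if values.startswith('('):
        values = values[1:]
    if values.endswith(')'):
        values = values[:-1]
    rows = ['(%s)' % r for r in values.split('),(')]

    chunk, size, n = [], len(head), 0
    for row in rows:
        if chunk and size + len(row) + 2 > MAX_BYTES:
            write_chunk(prefix, n, head, chunk)
            n += 1
            chunk, size = [], len(head)
        chunk.append(row)
        size += len(row) + 2            # row plus ",\n" or ";\n"
    if chunk:
        write_chunk(prefix, n, head, chunk)

if __name__ == '__main__':
    split_insert()

Each chunk file then starts with his original INSERT ... VALUES header and 
ends with a semicolon, so they can be fed to mysql one at a time.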

I don't know of a ready-made tool, or a link to one, that I could send him 
to split his big INSERT statement into smaller ones; the sketch above is 
only a starting point. I had also thought he might be able to temporarily 
raise the max_allowed_packet variable on his destination server so that he 
could avoid breaking up his script at all, but I haven't heard whether that 
worked.
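(If he has the SUPER privilege on the destination server, something like 
SET GLOBAL max_allowed_packet = 16777216; should raise the server-side 
limit to 16MB for connections made after the change, without a restart. If 
I recall correctly, the mysql client has its own max_allowed_packet setting 
that may need raising too, e.g. mysql --max_allowed_packet=16M.)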

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
