Thanks Simon, this is exactly what I needed to know.

gert


2014-07-01 16:48 GMT+02:00 Simon Slavin <slav...@bigfraud.org>:

>
> > On 1 Jul 2014, at 12:26pm, Gert Van Assche <ger...@gmail.com> wrote:
> >
> > 1 - Open the BIG db, attach the small files ten by ten, and copy the
> > tables from the attached databases into the big database. I can speed up
> > the import by putting the INSERTs in a transaction.
> >
> > 2 - Export the table from each database to a CSV file one by one, then
> > import all the CSV files into the BIG db. I don't have to worry about
> > grouping the files by ten, but I cannot use a transaction.
> >
> > What would be the best way to do this? One of these 2 ways, or is there a
> > third way to do this?
>
> First, are you doing this just once, or does it have to become part of a
> workflow for regular use?
>
> The results of the two should be exactly equivalent.
>
> (1) will be faster.  You can do things like "INSERT INTO tableB SELECT *
> FROM tableA".  This automatically executes the whole thing as one
> transaction.
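For illustration, approach (1) can be sketched with Python's built-in sqlite3 module; all file and table names below are made up, not from the original thread:

```python
# Sketch of approach (1): merge a small database into the big one with
# ATTACH + "INSERT ... SELECT".  File and table names are hypothetical.
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
big_path = os.path.join(tmp, "big.db")
small_path = os.path.join(tmp, "small.db")

# Build a small source database with a couple of rows.
src = sqlite3.connect(small_path)
src.execute("CREATE TABLE t (id INTEGER, name TEXT)")
src.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])
src.commit()
src.close()

# Attach it to the big database and copy the whole table in one
# statement; the INSERT runs as a single transaction.
big = sqlite3.connect(big_path)
big.execute("CREATE TABLE t (id INTEGER, name TEXT)")
big.execute("ATTACH DATABASE ? AS src", (small_path,))
big.execute("INSERT INTO t SELECT * FROM src.t")
big.commit()
big.execute("DETACH DATABASE src")
```

In a real run you would loop over the small files, attaching up to ten at a time (SQLite's default attached-database limit).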
>
> (2) will be easier to debug.  SQLite doesn't have a built-in function to
> express a row as an INSERT command, but the SQLite shell tool can do this,
> and shell tool scripts can incorporate "BEGIN" and "END" commands.
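A minimal sketch of approach (2), again with made-up names: export a table to CSV, then bulk-load the CSV into the big database in one transaction, which is roughly what a shell-tool script doing .import between BEGIN and END would accomplish:

```python
# Sketch of approach (2): dump a table to CSV, then load the CSV into
# the big database.  File and table names are hypothetical.
import csv
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
csv_path = os.path.join(tmp, "t.csv")
big_path = os.path.join(tmp, "big.db")
small_path = os.path.join(tmp, "small.db")

# Source database with a couple of rows.
src = sqlite3.connect(small_path)
src.execute("CREATE TABLE t (id INTEGER, name TEXT)")
src.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])
src.commit()

# Step 1: export to CSV.
with open(csv_path, "w", newline="") as f:
    csv.writer(f).writerows(src.execute("SELECT id, name FROM t"))
src.close()

# Step 2: import the CSV; executemany batches all the INSERTs in a
# single transaction, committed once at the end.
big = sqlite3.connect(big_path)
big.execute("CREATE TABLE t (id INTEGER, name TEXT)")
with open(csv_path, newline="") as f:
    big.executemany("INSERT INTO t VALUES (?, ?)", csv.reader(f))
big.commit()
```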
>
> If I were doing it just once I'd probably write a script for the shell tool
> that did (2).  If I had to implement it as part of a frequently used
> workflow I'd probably write code which did (1).
>
> Simon.
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
