On Sun, Mar 29, 2009 at 04:21:26PM -0500, Stephen Woodbridge scratched on the 
wall:
> Try wrapping all the inserts in a transaction:
> 
> 
> BEGIN;
> INSERT ...
> INSERT ...
> ....
> COMMIT;
> 
> This should be MUCH faster.

> > I am new to sqlite, and I created a program that reads several million
> > records and puts them into a sqlite db using.

  If you're doing several million records, I wouldn't wrap them *all*
  in a single transaction....   You might try batches of 10,000 or so.

  For example:

  BEGIN;
  INSERT ....
  ...
  ...                           x10,000
  COMMIT;

  BEGIN;
  INSERT ....
  ...
  ...                           x10,000
  COMMIT;

  ...
  ...
  ...


  Also, if possible, things will go much faster if you pre-order the
  data.  If the data is not pre-ordered, bump up the page cache to give
  the index some working room.  The default is 2000 pages, which
  consumes about 3MB of real memory using the default 1K page size.
  Consider bumping that up by a factor of 10x to maybe even 100x,
  depending on how much physical RAM you have available.  You can
  change the size of the page cache with the command
  "PRAGMA cache_size = <#pages>".

  You don't need the extra space for queries, but it helps when doing a
  large number of random inserts into the index.
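  For example, issuing that pragma from Python before the load might
  look like this (the 20,000-page figure is just the 10x bump mentioned
  above, and an in-memory db stands in for the real file):

  ```python
  import sqlite3

  conn = sqlite3.connect(":memory:")
  # Default is 2000 pages; 20000 pages is roughly a 10x bump.
  conn.execute("PRAGMA cache_size = 20000")
  print(conn.execute("PRAGMA cache_size").fetchone()[0])  # -> 20000
  conn.close()
  ```

  The setting is per-connection, so issue it each time you open the
  database, before starting the inserts.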

    -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users