You know, this kind of "smells" like a full-sync, non-transacted,
single-insert problem.  Basically, you open the connection to the
database and start throwing inserts at it without a transaction around
them, so every insert commits on its own.  To make matters worse (for
time, anyway) you've probably got full sync on, and you're not using
prepared statements.  I think this because of the volume of data you're
writing versus the time it takes and the final size of the file.  2 MB
isn't even a snack by today's standards.  Depending on the wrapper
you're using, synchronous could be set to FULL, or it could be set to
NORMAL.  The wrapper I use always turns synchronous off.

http://www.sqlite.org/pragma.html#pragma_fullfsync
http://www.sqlite.org/pragma.html#pragma_synchronous

If this is the production method, since you're rebuilding the data every
time, I think you could pull all the safeties off in this case.  Right
after you open the connection, execute these two PRAGMAs:
PRAGMA synchronous = OFF;
PRAGMA fullfsync = 0;   (only needed if running on a Mac)

When you start inserting data, put the inserts inside a single
transaction.  Personally, I haven't figured out how to do prepared
statements with the wrapper I have, so I basically re-prepare my query
every time.  It IS a performance hit.  I get the concept, but the
methodology seems odd.  *shrugs*.
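As a rough sketch of both ideas together, using Python's built-in
sqlite3 module (the file name, table name, and data below are made up
for illustration; your wrapper will look different):

```python
import sqlite3

conn = sqlite3.connect("bulk.db")   # hypothetical file name
conn.isolation_level = None         # manage transactions by hand

# Pull the safeties off -- only sensible because the data is
# rebuilt from scratch every time anyway.
conn.execute("PRAGMA synchronous = OFF")

conn.execute("DROP TABLE IF EXISTS TempTable")
conn.execute("CREATE TABLE TempTable (Field1 TEXT, Field2 INTEGER)")

rows = [("a", 1), ("b", 2), ("c", 3)]  # stand-in for the real data

# One transaction around the whole batch instead of one per insert.
conn.execute("BEGIN")
for f1, f2 in rows:
    conn.execute("INSERT INTO TempTable (Field1, Field2) VALUES (?, ?)",
                 (f1, f2))
conn.execute("COMMIT")
conn.close()
```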
Basically, from what I've gathered, you write your SQL statement like:

insert into TempTable (Field1, Field2) values (:x,:y)

Then you tell SQLite to bind one value to :x and another to :y.  Then
you loop through the bulk inserts, rebinding only the parameters you
substituted rather than re-parsing the entire query.

The actual METHOD to do this depends on the wrapper and language you're
using, so a bit of homework is going to be needed.
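For what it's worth, here is what that looks like with Python's sqlite3
module (just one wrapper among many; the names below are invented for
illustration).  The statement is prepared once, and only the bound
values change from row to row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TempTable (Field1 TEXT, Field2 INTEGER)")

sql = "INSERT INTO TempTable (Field1, Field2) VALUES (:x, :y)"
rows = [{"x": "a", "y": 1}, {"x": "b", "y": 2}]

# executemany() reuses the prepared statement, rebinding :x and :y
# for each row; the with-block wraps the batch in one transaction.
with conn:
    conn.executemany(sql, rows)
```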

The other thing you could do is create a new database in memory.  You
create a second database connection, but instead of opening a file, you
open ':memory:' (with the colons), and use the backup API SQLite
provides: copy the disk database into memory, work off the data in
memory, then reverse the backup and put it back on disk.  This, of
course, is going to depend entirely on how much memory you have to play
with.  The backup API copies the ENTIRE database to the destination
specified.  So if you have a 12 GB database on a 4 GB machine... ..
well....
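Python's sqlite3 exposes that backup API as Connection.backup (Python
3.7+), so the round trip might be sketched like this (file and table
names are made up):

```python
import sqlite3

disk = sqlite3.connect("work.db")   # hypothetical on-disk database
disk.execute("CREATE TABLE IF NOT EXISTS t (v INTEGER)")
disk.commit()

mem = sqlite3.connect(":memory:")
disk.backup(mem)                    # copy the whole database into RAM

with mem:                           # work entirely in memory
    mem.execute("INSERT INTO t (v) VALUES (42)")

mem.backup(disk)                    # reverse the backup: push back to disk
mem.close()
disk.close()
```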




On Sun, Apr 21, 2013 at 2:31 PM, Paolo Bolzoni <
paolo.bolzoni.br...@gmail.com> wrote:

> FAQ 19 applies also to SAVEPOINT/RELEASE, not only to BEGIN/COMMIT, right?
>
> On Sun, Apr 21, 2013 at 9:35 AM, Kees Nuyt <k.n...@zonnet.nl> wrote:
> > On Sun, 21 Apr 2013 11:15:23 +0800 (CST), ?????? <yunjie....@163.com>
> > wrote:
> >>
> >> OK, I didn't make my meaning clear.  I mean 60 seconds after my
> >> program started, not that it took 60 seconds to load the database file.
> >>
> >> Now I've found the reason for SQLite's abnormal IO writing: it's
> >> the batch insert.  Here is my use case: one table with about 4
> >> columns and 500 rows, the content of every row not exceeding 100
> >> bytes; every time I update the whole table using a batch query.
> >>
> >> It should take about one second and 100 KB of IO writing by my
> >> estimate, BUT it took about 20 seconds and wrote about 2.5 MB.
> >>
> >> Now I've modified the implementation of the batch query; it takes
> >> about one second and 70 KB of IO writing.  So there is indeed
> >> something abnormal in the batch query, in SQLite or the Qt SQL module.
> >
> > Is this still about bulk INSERT or about a SELECT query?
> > In which way did you modify it?
> > Perhaps http://sqlite.org/faq.html#q19 helps?
> >
> > --
> > Groet, Cordialement, Pozdrawiam, Regards,
> >
> > Kees Nuyt
> >
> > _______________________________________________
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
