Simon Slavin-3 wrote
> However, other information in your message suggests that you have a
> resource leak of some type somewhere.  Especially, it should not take 12
> minutes to insert 3.5M rows into a simple table with an index or two
> unless really long strings or blobs are involved.
> 
> Unfortunately, I'm only really familiar with the C and PHP interfaces to
> SQLite.  But in both of those you can check the result code of each API
> call to make sure it is SQLITE_OK.  Are you able to do this with whatever
> interface you're using ?

We use a C# API originally inspired by the System.Data.SQLite library. The
result code of every single call is checked.
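As an illustration only (not our actual C# code), here is a minimal sketch in Python's stdlib sqlite3 wrapper, where the equivalent of checking every result code is catching sqlite3.Error around each call:

```python
import sqlite3

con = sqlite3.connect(":memory:")
try:
    con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, v TEXT)")
    con.execute("INSERT INTO t(v) VALUES (?)", ("hello",))
    con.commit()
except sqlite3.Error as e:
    # In the C API this corresponds to a non-SQLITE_OK result code.
    print("SQLite error:", e)
```

In the C API the same discipline means comparing every return value against SQLITE_OK (or SQLITE_ROW/SQLITE_DONE for sqlite3_step).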
Some of the numbers I reported were obtained with a customized version of
the sqlite3 shell.

These 12 min (730 secs) are the total time of 35 individual commits, and
they are the result of processing 7 GB of data. On the other hand, the
processor load was very low the whole time, which suggests the disk might
be the bottleneck. (Although I did not observe any other slowdown...)


Simon Slavin-3 wrote
>> DB size increased by roughly 17-18K after each commit. This suggests that
>> WAL needs 10x more memory than the DB itself.
> 
> Very variable.  Depends on whether the changes in one transaction change
> many different pages or change fewer different pages multiple times.  At
> least, I think so.

Sure, it is variable. But my goal when I opened this discussion was to
discuss the worst possible case. That is what generates the user
complaints; normal cases where everything runs smoothly are of no interest.
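A small sketch (hypothetical numbers, Python stdlib only) of the mechanism behind that worst case: in WAL mode each commit appends a fresh copy of every page it modified, so repeatedly rewriting even the same row grows the WAL with every commit until a checkpoint runs:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, v INTEGER)")
con.execute("INSERT INTO t VALUES (1, 0)")
con.commit()

wal = path + "-wal"
before = os.path.getsize(wal)

# Update the same row (hence the same page) in 100 separate transactions.
# Each commit appends new page frames rather than overwriting old ones.
for i in range(100):
    con.execute("UPDATE t SET v = ? WHERE id = 1", (i,))
    con.commit()

after = os.path.getsize(wal)
print(before, after)
```

With the default wal_autocheckpoint of 1000 pages these 100 commits never checkpoint, so the WAL keeps growing even though the database content barely changes.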

I consider this very important information that should be present in the
official WAL documentation.


--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Huge-WAL-log-tp79991p80043.html
Sent from the SQLite mailing list archive at Nabble.com.