I understand that the WAL log uses a less space-efficient storage format than
the database itself, but this case was a real surprise to me. Here is a brief
description.

We start from an empty database, create a few tables (each with a few
indexes), then begin a transaction, perform the inserts for the first table,
commit the transaction, and repeat the same for the remaining tables (a rough
sketch follows below).

No other active readers or writers.
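
For concreteness, the loading pattern looks roughly like the following
Python/sqlite3 sketch; the table names, column names, and row counts are
invented here just to show the shape of the code, not our real schema.

import sqlite3

conn = sqlite3.connect("test.db")
conn.execute("PRAGMA journal_mode=WAL")

# A few tables, each with a few indexes (schema invented for the example).
for t in ("t1", "t2", "t3"):
    conn.execute(f"CREATE TABLE {t} (id INTEGER PRIMARY KEY, a TEXT, b INTEGER)")
    conn.execute(f"CREATE INDEX {t}_a ON {t}(a)")
    conn.execute(f"CREATE INDEX {t}_b ON {t}(b)")

def rows_for(table):
    # Stand-in for the real data source (millions of rows for one table).
    for i in range(1000):
        yield (f"{table}-value-{i}", i)

# One transaction per table: BEGIN, insert everything, COMMIT.
for t in ("t1", "t2", "t3"):
    with conn:  # the context manager commits (or rolls back) the transaction
        conn.executemany(f"INSERT INTO {t}(a, b) VALUES (?, ?)", rows_for(t))

conn.close()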

Summary of this process:
- DB size: 1.022 GB
- WAL log size: 7.490 GB
- SHM file size: 57 MB (it covers about 7 million pages, which, at the
1024-byte page size, corresponds to the WAL size)
- Total number of records inserted: 5.207 million
- One table dominates, holding about 80% of all records; the commit for this
table took over 30 minutes.
- The test was done on a rather old Windows 7 notebook. Memory consumption
approached 1 GB (compared to the normal ~100 MB).

I understand that we have to modify the above algorithm by splitting the
transaction into smaller pieces. The official SQLite documentation only says:
"WAL does not work well for very large transactions. For transactions larger
than about 100 megabytes...".
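
The change I have in mind would look roughly like this (Python/sqlite3 again,
with an arbitrary batch size that would still have to be tuned; the explicit
wal_checkpoint between batches is only one option):

import sqlite3
from itertools import islice

BATCH_SIZE = 50_000  # hypothetical value; would need to be tuned by experiment

def load_in_batches(conn, table, rows):
    """Insert rows in moderately sized transactions instead of one huge one."""
    it = iter(rows)
    while True:
        batch = list(islice(it, BATCH_SIZE))
        if not batch:
            break
        with conn:  # each batch gets its own BEGIN/COMMIT
            conn.executemany(f"INSERT INTO {table}(a, b) VALUES (?, ?)", batch)
        # Optionally checkpoint between batches so later writes can reuse the
        # WAL from the start instead of appending to it.
        conn.execute("PRAGMA wal_checkpoint(RESTART)")

With no other readers or writers, each checkpoint should be able to reuse the
WAL from the beginning, so the WAL size would be bounded by the largest batch
rather than by the whole load, if I understand the mechanism correctly.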

Could somebody quantify this statement? The official documentation speaks in
terms of size (megabytes), but my gut feeling is that the record count matters
more. (In our case, large records are rather exceptional.)

Do table indexes play an important role? Wouldn't it be better to create them
after all the inserts are done?
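
I am thinking of something like this, where the indexes are created only once
the table is fully populated (names again invented for illustration):

import sqlite3

def load_then_index(conn, rows):
    # Bare table first; the indexes are created only after the bulk insert,
    # so the inserts do not have to update the index b-trees row by row.
    conn.execute("CREATE TABLE big (id INTEGER PRIMARY KEY, a TEXT, b INTEGER)")
    with conn:
        conn.executemany("INSERT INTO big(a, b) VALUES (?, ?)", rows)
    conn.execute("CREATE INDEX big_a ON big(a)")
    conn.execute("CREATE INDEX big_b ON big(b)")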
