Thanks for the documentation update.

From my point of view, the documentation would benefit from more detail on
the term "large transaction". The role of indexes in particular is
important. (They are often overlooked, e.g. added by an admin after
development is over.)


> Defenses against this failure mode include: ... (2) making sure that the
> cache_size is at least as big as the transaction working set

a) This is not realistic in many cases.
b) It is not even needed. What we need to avoid is multiple spills of the
same page to disk. AFAIK this should not concern the "true" data written
to the database, but only the indexes. Hence the cache size should be
related to the size of the active indexes.

Maybe point (2) could be re-formulated along these lines:
Increasing the cache size reduces the number of cache spills and thus helps
to limit the growth of the WAL file. The more indexes are updated during
the transaction, the larger the cache size should be. In some cases it
might be better to drop the indexes and re-create them after the write
operation is done.

P.S. Another typo:

"This scenario can be avoiding by ensuring" ---> "avoided"

P.P.S.

Does the latest SQLite optimize the case "DELETE FROM table"? Or commands
that merely have the effect of deleting all table rows? The latter was the
case with the SQL command I wrote in my last post:

DELETE FROM discount
 WHERE discounttypeid NOT IN (SELECT discounttypeid FROM discounttype)

What happens in v3.7.15 is that the WAL log grows to a huge size. (The
reason is as before: the indexes.)

This should be possible to avoid.
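One possible workaround sketch from the application side (using the same
discount/discounttype schema as above): test whether the qualified DELETE
would in fact remove every row, and if so issue an unqualified DELETE,
which SQLite can run via its truncate optimization without visiting the
indexes row by row.

  -- Does any row survive the predicate? (1 = yes, 0 = no)
  SELECT EXISTS (
    SELECT 1 FROM discount
     WHERE discounttypeid IN (SELECT discounttypeid FROM discounttype)
  ) AS any_kept;

  -- If any_kept is 0, every row would be deleted anyway:
  DELETE FROM discount;  -- eligible for the truncate optimization

  -- Otherwise, fall back to the original qualified DELETE.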
