Hello,
The Windows desktop application that I maintain uses SQLite for some of its storage. The data volume we must handle has increased dramatically over the past six months, and, as was to be expected, update performance has degraded accordingly. Because of that, I was quick to jump to 3.7 when I read that WAL could be selected with it. The speed improvements when updating data are very noticeable when running the application on my laptop's drive (3x faster), although less so on a fast SSD connected via eSATA (only about 20% faster); I guess the different ratios of improvement were to be expected given the access characteristics of each device. Overall, I have to say that I believe WAL was a great addition.

Unfortunately, I've encountered what could potentially be a big problem. When I run a very large update, the process address space for the application seems to be exhausted. It first manifested itself as a disk I/O error, but because I test the application while running perfmon.exe on Windows XP SP3 (monitoring IO read bytes/sec, IO write bytes/sec, processor time, and virtual bytes), I noticed that the virtual bytes were at the 2 GB maximum process space limit when the disk I/O error occurred.

To rule out the possibility that I was doing something wrong, I decided to test a similar update using the sqlite3.exe CLI. During the update, the application iterates over all the records in a table in a specific order, assigning a pair of integers to two columns (both initially NULL) of each record, based on domain-specific rules. Accordingly, the test with the CLI is the opposite operation: I take a db file that is about 1.5 GB in size, with over 3.7 million records in the table that needs to be updated, and then assign NULL to one of the columns for all records.
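For reference, the pattern I am testing looks roughly like the sketch below (in Python for brevity; the table name "records" and column name "val" are placeholders, since the real schema is proprietary). The point is that a single UPDATE touching every row accumulates the entire change set in the WAL file before the transaction commits:

```python
import os
import sqlite3
import tempfile

def reproduce(n=1000):
    """Sketch of the CLI test: bulk-update every row inside one transaction.

    Builds a small dummy database (the real one is ~1.5 GB / 3.7M rows),
    enables WAL, then sets one column to NULL for all records at once.
    Returns the number of rows that ended up NULL, for verification.
    """
    path = os.path.join(tempfile.mkdtemp(), "repro.db")
    con = sqlite3.connect(path)
    con.execute("PRAGMA journal_mode=WAL")  # select WAL, available since 3.7
    con.execute(
        "CREATE TABLE records (id INTEGER PRIMARY KEY, val INTEGER)"
    )
    con.executemany(
        "INSERT INTO records(val) VALUES (?)", ((i,) for i in range(n))
    )
    con.commit()

    # The problematic operation: one statement rewriting every row, so the
    # whole change set lives in the WAL until the commit completes.
    con.execute("UPDATE records SET val = NULL")
    con.commit()

    # Optionally transfer WAL content back into the main db file so the
    # WAL can be reused rather than keep growing across batches.
    con.execute("PRAGMA wal_checkpoint")

    nulls = con.execute(
        "SELECT COUNT(*) FROM records WHERE val IS NULL"
    ).fetchone()[0]
    con.close()
    return nulls
```

In the real case the single UPDATE makes the WAL grow past 5.5 GB; splitting the work into smaller committed batches, with a `PRAGMA wal_checkpoint` between them, may be a way to bound the WAL size, though I have not yet confirmed that this avoids the address-space exhaustion.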
After some time of working, the virtual bytes (as reported by perfmon) hit the maximum process space and the disk I/O error is reported. At that point, the WAL file is over 5.5 GB in size and the shm file is over 10 MB. My initial guess is that there is a problem memory-mapping files. I wish I could make the db available for testing, but the data in it cannot be disclosed due to an NDA, and the schema is proprietary information of my employer. First I need to finish a workaround for this (it seems that closing and reopening the db connection improves the situation somewhat), and then I will write a small piece of code that creates a dummy database large enough to reproduce the error, so that I can post it in a reply to this email.

Thank you!!!

Victor
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users